diff --git a/dev/404.html b/dev/404.html index 04fa419bcc..b903177afc 100644 --- a/dev/404.html +++ b/dev/404.html @@ -5,11 +5,11 @@ 404 | Lux.jl Docs - + - + @@ -26,7 +26,7 @@
- + \ No newline at end of file diff --git a/dev/api/Accelerator_Support/MLDataDevices.html b/dev/api/Accelerator_Support/MLDataDevices.html index ef134d036b..9ed5563542 100644 --- a/dev/api/Accelerator_Support/MLDataDevices.html +++ b/dev/api/Accelerator_Support/MLDataDevices.html @@ -5,15 +5,15 @@ MLDataDevices | Lux.jl Docs - + - + - - - + + + @@ -31,12 +31,12 @@
Skip to content

MLDataDevices

MLDataDevices.jl is a lightweight package defining rules for transferring data across devices.

Preferences

MLDataDevices.gpu_backend! Function
julia
gpu_backend!() = gpu_backend!("")
 gpu_backend!(backend) = gpu_backend!(string(backend))
 gpu_backend!(backend::AbstractGPUDevice)
-gpu_backend!(backend::String)

Creates a LocalPreferences.toml file with the desired GPU backend.

If backend == "", then the gpu_backend preference is deleted. Otherwise, backend is validated to be one of the possible backends and the preference is set to backend.

If a new backend is successfully set, then the Julia session must be restarted for the change to take effect.

source
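As a quick illustration (a sketch only; the valid backend names are those reported by supported_gpu_backends()), persisting and later clearing the backend preference might look like:

```julia
using MLDataDevices

# Persist the preferred GPU backend to LocalPreferences.toml.
gpu_backend!("CUDA")

# Passing "" deletes the gpu_backend preference again.
gpu_backend!("")

# Either way, restart the Julia session for the change to take effect.
```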

Data Transfer

MLDataDevices.cpu_device Function
julia
cpu_device() -> CPUDevice()

Return a CPUDevice object which can be used to transfer data to CPU.

source

MLDataDevices.gpu_device Function
julia
gpu_device(device_id::Union{Nothing, Integer}=nothing;
-    force::Bool=false) -> AbstractDevice

Selects GPU device based on the following criteria:

  1. If gpu_backend preference is set and the backend is functional on the system, then that device is selected.

  2. Otherwise, an automatic selection algorithm is used. We go over possible device backends in the order specified by supported_gpu_backends() and select the first functional backend.

  3. If no GPU device is functional and force is false, then cpu_device() is invoked.

  4. If nothing works, an error is thrown.

Arguments

  • device_id::Union{Nothing, Integer}: The device id to select. If nothing, then we return the last selected device or if none was selected then we run the autoselection and choose the current device using CUDA.device() or AMDGPU.device() or similar. If Integer, then we select the device with the given id. Note that this is 1-indexed, in contrast to the 0-indexed CUDA.jl. For example, id = 4 corresponds to CUDA.device!(3).

Warning

device_id is only applicable for CUDA and AMDGPU backends. For Metal, oneAPI and CPU backends, device_id is ignored and a warning is printed.

Warning

gpu_device won't select a CUDA device unless both CUDA.jl and cuDNN.jl are loaded. This is to ensure that deep learning operations work correctly. Nonetheless, if cuDNN is not loaded you can still manually create a CUDADevice object and use it (e.g. dev = CUDADevice()).

Keyword Arguments

  • force::Bool: If true, then an error is thrown if no functional GPU device is found.

source
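A hedged usage sketch of the selection rules above; the actual device returned depends on which trigger packages are loaded and which backends are functional on the system:

```julia
using MLDataDevices

dev = gpu_device()            # falls back to CPUDevice() if no functional GPU is found
x = dev(rand(Float32, 3, 4))  # transfer data to the selected device

dev2 = gpu_device(2)          # select device id 2 (1-indexed; CUDA/AMDGPU only)
gpu_device(; force=true)      # errors instead of falling back to the CPU
```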

MLDataDevices.reactant_device Function
julia
reactant_device(;
+gpu_backend!(backend::String)

Creates a LocalPreferences.toml file with the desired GPU backend.

If backend == "", then the gpu_backend preference is deleted. Otherwise, backend is validated to be one of the possible backends and the preference is set to backend.

If a new backend is successfully set, then the Julia session must be restarted for the change to take effect.

source

Data Transfer

MLDataDevices.cpu_device Function
julia
cpu_device() -> CPUDevice()

Return a CPUDevice object which can be used to transfer data to CPU.

source

MLDataDevices.gpu_device Function
julia
gpu_device(device_id::Union{Nothing, Integer}=nothing;
+    force::Bool=false) -> AbstractDevice

Selects GPU device based on the following criteria:

  1. If gpu_backend preference is set and the backend is functional on the system, then that device is selected.

  2. Otherwise, an automatic selection algorithm is used. We go over possible device backends in the order specified by supported_gpu_backends() and select the first functional backend.

  3. If no GPU device is functional and force is false, then cpu_device() is invoked.

  4. If nothing works, an error is thrown.

Arguments

  • device_id::Union{Nothing, Integer}: The device id to select. If nothing, then we return the last selected device or if none was selected then we run the autoselection and choose the current device using CUDA.device() or AMDGPU.device() or similar. If Integer, then we select the device with the given id. Note that this is 1-indexed, in contrast to the 0-indexed CUDA.jl. For example, id = 4 corresponds to CUDA.device!(3).

Warning

device_id is only applicable for CUDA and AMDGPU backends. For Metal, oneAPI and CPU backends, device_id is ignored and a warning is printed.

Warning

gpu_device won't select a CUDA device unless both CUDA.jl and cuDNN.jl are loaded. This is to ensure that deep learning operations work correctly. Nonetheless, if cuDNN is not loaded you can still manually create a CUDADevice object and use it (e.g. dev = CUDADevice()).

Keyword Arguments

  • force::Bool: If true, then an error is thrown if no functional GPU device is found.

source

MLDataDevices.reactant_device Function
julia
reactant_device(;
     force::Bool=false, client=missing, device=missing
-) -> Union{ReactantDevice, CPUDevice}

Return a ReactantDevice object if functional. Otherwise, throw an error if force is true. Falls back to CPUDevice if force is false.

client and device are used to specify the client and particular device to use. If not specified, then the default client and index are used.

Danger

This is an experimental feature and might change without deprecations

source
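For example (assuming Reactant.jl is installed and functional; without it the fallback path is taken):

```julia
using MLDataDevices

dev = reactant_device()              # CPUDevice() when Reactant is not functional
dev = reactant_device(; force=true)  # throws instead of falling back
```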

Miscellaneous

MLDataDevices.reset_gpu_device! Function
julia
reset_gpu_device!()

Resets the selected GPU device. This is useful when automatic GPU selection needs to be run again.

source

MLDataDevices.supported_gpu_backends Function
julia
supported_gpu_backends() -> Tuple{String, ...}

Return a tuple of supported GPU backends.

Warning

This is not the list of functional backends on the system, but rather backends which MLDataDevices.jl supports.

source

MLDataDevices.default_device_rng Function
julia
default_device_rng(::AbstractDevice)

Returns the default RNG for the device. This can be used to directly generate parameters and states on the device using WeightInitializers.jl.

source

MLDataDevices.get_device Function
julia
get_device(x) -> dev::AbstractDevice | Exception | Nothing

If all arrays (on the leaves of the structure) are on the same device, we return that device. Otherwise, we throw an error. If the object is device agnostic, we return nothing.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

  • nothing – denotes that the object is device agnostic. For example, scalar, abstract range, etc.

  • UnknownDevice() – denotes that the device type is unknown.

See also get_device_type for a faster alternative that can be used for dispatch based on device type.

source
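A small sketch of the return values described above (plain Arrays live on the CPU; trigger packages must be loaded for GPU arrays to be detected):

```julia
using MLDataDevices

get_device(rand(Float32, 3))        # CPUDevice()
get_device((a=rand(2), b=rand(2)))  # CPUDevice() – all leaves on the same device
get_device(1:10)                    # nothing – ranges are device agnostic
```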

MLDataDevices.get_device_type Function
julia
get_device_type(x) -> Type{<:AbstractDevice} | Exception | Type{Nothing}

Similar to get_device, but returns the type of the device instead of the device itself. This value is often a compile-time constant and is recommended over get_device wherever dispatches are defined based on the device type.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

  • Nothing – denotes that the object is device agnostic. For example, scalar, abstract range, etc.

  • UnknownDevice – denotes that the device type is unknown.

source

MLDataDevices.loaded Function
julia
loaded(x::AbstractDevice) -> Bool
-loaded(::Type{<:AbstractDevice}) -> Bool

Checks if the trigger package for the device is loaded. Trigger packages are as follows:

  • CUDA.jl and cuDNN.jl (or just LuxCUDA.jl) for NVIDIA CUDA Support.

  • AMDGPU.jl for AMD GPU ROCM Support.

  • Metal.jl for Apple Metal GPU Support.

  • oneAPI.jl for Intel oneAPI GPU Support.

source

MLDataDevices.functional Function
julia
functional(x::AbstractDevice) -> Bool
-functional(::Type{<:AbstractDevice}) -> Bool

Checks if the device is functional. This is used to determine if the device can be used for computation. Note that even if the backend is loaded (as checked via MLDataDevices.loaded), the device may not be functional.

Note that while this function is not exported, it is considered part of the public API.

source

MLDataDevices.isleaf Function
julia
isleaf(x) -> Bool

Returns true if x is a leaf node in the data structure.

Defining MLDataDevices.isleaf(x::T) = true for custom types customizes the data movement behavior when an object with nested structure containing the type is transferred to a device.

Adapt.adapt_structure(::AbstractDevice, x::T) will be called during data movement if isleaf(x::T) returns true.

If MLDataDevices.isleaf(x::T) is not defined, then it will fall back to Functors.isleaf(x).

source
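As an illustration, a hypothetical user type (BoundingBox is not part of any package) can be declared a leaf so that it is handled as a single unit during data movement rather than recursed into:

```julia
using MLDataDevices

# Hypothetical plain-data type that should not be recursed into.
struct BoundingBox
    xmin::Float64
    xmax::Float64
end

MLDataDevices.isleaf(::BoundingBox) = true
```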

Multi-GPU Support

MLDataDevices.set_device! Function
julia
set_device!(T::Type{<:AbstractDevice}, dev_or_id)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractDevice}: The device type to set.

  • dev_or_id: Can be the device from the corresponding package. For example for CUDA it can be a CuDevice. If it is an integer, it is the device id to set. This is 1-indexed.

Danger

This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such please use Lux.DistributedUtils instead of using this function.

source

julia
set_device!(T::Type{<:AbstractDevice}, ::Nothing, rank::Integer)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractDevice}: The device type to set.

  • rank::Integer: Local Rank of the process. This is applicable for distributed training and must be 0-indexed.

Danger

This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such please use Lux.DistributedUtils instead of using this function.

source

Iteration

MLDataDevices.DeviceIterator Type
julia
DeviceIterator(dev::AbstractDevice, iterator)

Create a DeviceIterator that iterates through the provided iterator via iterate. Upon each iteration, the current batch is copied to the device dev, and the previous iteration is marked as freeable from GPU memory (via unsafe_free!; a no-op for a CPU device).

The conversion follows the same semantics as dev(<item from iterator>).

Similarity to CUDA.CuIterator

The design inspiration was taken from CUDA.CuIterator and was generalized to work with other backends and more complex iterators (using Functors).

MLUtils.DataLoader

Calling dev(::MLUtils.DataLoader) will automatically convert the dataloader to use the same semantics as DeviceIterator. This is generally preferred over looping over the dataloader directly and transferring the data to the device.

Examples

The following was run on a computer with an NVIDIA GPU.

julia
julia> using MLDataDevices, MLUtils
+) -> Union{ReactantDevice, CPUDevice}

Return a ReactantDevice object if functional. Otherwise, throw an error if force is true. Falls back to CPUDevice if force is false.

client and device are used to specify the client and particular device to use. If not specified, then the default client and index are used.

Danger

This is an experimental feature and might change without deprecations

source

Miscellaneous

MLDataDevices.reset_gpu_device! Function
julia
reset_gpu_device!()

Resets the selected GPU device. This is useful when automatic GPU selection needs to be run again.

source

MLDataDevices.supported_gpu_backends Function
julia
supported_gpu_backends() -> Tuple{String, ...}

Return a tuple of supported GPU backends.

Warning

This is not the list of functional backends on the system, but rather backends which MLDataDevices.jl supports.

source

MLDataDevices.default_device_rng Function
julia
default_device_rng(::AbstractDevice)

Returns the default RNG for the device. This can be used to directly generate parameters and states on the device using WeightInitializers.jl.

source

MLDataDevices.get_device Function
julia
get_device(x) -> dev::AbstractDevice | Exception | Nothing

If all arrays (on the leaves of the structure) are on the same device, we return that device. Otherwise, we throw an error. If the object is device agnostic, we return nothing.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

  • nothing – denotes that the object is device agnostic. For example, scalar, abstract range, etc.

  • UnknownDevice() – denotes that the device type is unknown.

See also get_device_type for a faster alternative that can be used for dispatch based on device type.

source

MLDataDevices.get_device_type Function
julia
get_device_type(x) -> Type{<:AbstractDevice} | Exception | Type{Nothing}

Similar to get_device, but returns the type of the device instead of the device itself. This value is often a compile-time constant and is recommended over get_device wherever dispatches are defined based on the device type.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

  • Nothing – denotes that the object is device agnostic. For example, scalar, abstract range, etc.

  • UnknownDevice – denotes that the device type is unknown.

source

MLDataDevices.loaded Function
julia
loaded(x::AbstractDevice) -> Bool
+loaded(::Type{<:AbstractDevice}) -> Bool

Checks if the trigger package for the device is loaded. Trigger packages are as follows:

  • CUDA.jl and cuDNN.jl (or just LuxCUDA.jl) for NVIDIA CUDA Support.

  • AMDGPU.jl for AMD GPU ROCM Support.

  • Metal.jl for Apple Metal GPU Support.

  • oneAPI.jl for Intel oneAPI GPU Support.

source

MLDataDevices.functional Function
julia
functional(x::AbstractDevice) -> Bool
+functional(::Type{<:AbstractDevice}) -> Bool

Checks if the device is functional. This is used to determine if the device can be used for computation. Note that even if the backend is loaded (as checked via MLDataDevices.loaded), the device may not be functional.

Note that while this function is not exported, it is considered part of the public API.

source

MLDataDevices.isleaf Function
julia
isleaf(x) -> Bool

Returns true if x is a leaf node in the data structure.

Defining MLDataDevices.isleaf(x::T) = true for custom types customizes the data movement behavior when an object with nested structure containing the type is transferred to a device.

Adapt.adapt_structure(::AbstractDevice, x::T) will be called during data movement if isleaf(x::T) returns true.

If MLDataDevices.isleaf(x::T) is not defined, then it will fall back to Functors.isleaf(x).

source

Multi-GPU Support

MLDataDevices.set_device! Function
julia
set_device!(T::Type{<:AbstractDevice}, dev_or_id)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractDevice}: The device type to set.

  • dev_or_id: Can be the device from the corresponding package. For example for CUDA it can be a CuDevice. If it is an integer, it is the device id to set. This is 1-indexed.

Danger

This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such please use Lux.DistributedUtils instead of using this function.

source

julia
set_device!(T::Type{<:AbstractDevice}, ::Nothing, rank::Integer)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractDevice}: The device type to set.

  • rank::Integer: Local Rank of the process. This is applicable for distributed training and must be 0-indexed.

Danger

This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such please use Lux.DistributedUtils instead of using this function.

source

Iteration

MLDataDevices.DeviceIterator Type
julia
DeviceIterator(dev::AbstractDevice, iterator)

Create a DeviceIterator that iterates through the provided iterator via iterate. Upon each iteration, the current batch is copied to the device dev, and the previous iteration is marked as freeable from GPU memory (via unsafe_free!; a no-op for a CPU device).

The conversion follows the same semantics as dev(<item from iterator>).

Similarity to CUDA.CuIterator

The design inspiration was taken from CUDA.CuIterator and was generalized to work with other backends and more complex iterators (using Functors).

MLUtils.DataLoader

Calling dev(::MLUtils.DataLoader) will automatically convert the dataloader to use the same semantics as DeviceIterator. This is generally preferred over looping over the dataloader directly and transferring the data to the device.

Examples

The following was run on a computer with an NVIDIA GPU.

julia
julia> using MLDataDevices, MLUtils
 
 julia> X = rand(Float64, 3, 33);
 
@@ -54,8 +54,8 @@
        end
 (i, summary(x)) = (1, "3×13 CuArray{Float32, 2, CUDA.DeviceMemory}")
 (i, summary(x)) = (2, "3×13 CuArray{Float32, 2, CUDA.DeviceMemory}")
-(i, summary(x)) = (3, "3×7 CuArray{Float32, 2, CUDA.DeviceMemory}")

source

- +(i, summary(x)) = (3, "3×7 CuArray{Float32, 2, CUDA.DeviceMemory}")

source

+ \ No newline at end of file diff --git a/dev/api/Building_Blocks/LuxCore.html b/dev/api/Building_Blocks/LuxCore.html index cd6c3bb6b0..b6d55ac319 100644 --- a/dev/api/Building_Blocks/LuxCore.html +++ b/dev/api/Building_Blocks/LuxCore.html @@ -5,15 +5,15 @@ LuxCore | Lux.jl Docs - + - + - - - + + + @@ -28,8 +28,8 @@ -
Skip to content

LuxCore

LuxCore.jl defines the abstract layers for Lux. It allows users to be compatible with the entirety of Lux.jl without taking on such a heavy dependency. If you depend on Lux.jl directly, you do not need to depend on LuxCore.jl (all of its functionality is exported via Lux.jl).

Abstract Types

LuxCore.AbstractLuxLayer Type
julia
abstract type AbstractLuxLayer

Abstract Type for all Lux Layers

Users implementing a custom layer must implement

  • initialparameters(rng::AbstractRNG, layer::CustomAbstractLuxLayer) – This returns a NamedTuple containing the trainable parameters for the layer.

  • initialstates(rng::AbstractRNG, layer::CustomAbstractLuxLayer) – This returns a NamedTuple containing the current state for the layer. For most layers this is typically empty. Layers that would potentially contain this include BatchNorm, LSTM, GRU, etc.

Optionally:

  • parameterlength(layer::CustomAbstractLuxLayer) – These can be automatically calculated, but it is recommended that the user defines these.

  • statelength(layer::CustomAbstractLuxLayer) – These can be automatically calculated, but it is recommended that the user defines these.

See also AbstractLuxContainerLayer

source
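A minimal sketch of the interface above, using a hypothetical SimpleDense layer (not part of Lux):

```julia
using LuxCore, Random

struct SimpleDense <: LuxCore.AbstractLuxLayer
    in::Int
    out::Int
end

# Required: trainable parameters and (here, empty) state.
LuxCore.initialparameters(rng::AbstractRNG, l::SimpleDense) =
    (weight=randn(rng, Float32, l.out, l.in), bias=zeros(Float32, l.out))
LuxCore.initialstates(::AbstractRNG, ::SimpleDense) = NamedTuple()

# Optional, but recommended:
LuxCore.parameterlength(l::SimpleDense) = l.out * (l.in + 1)
LuxCore.statelength(::SimpleDense) = 0

# Forward pass: a Lux layer returns (output, new_state).
(l::SimpleDense)(x, ps, st) = (ps.weight * x .+ ps.bias, st)
```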

LuxCore.AbstractLuxWrapperLayer Type
julia
abstract type AbstractLuxWrapperLayer{layer} <: AbstractLuxLayer

See AbstractLuxContainerLayer for detailed documentation. This abstract type is very similar to AbstractLuxContainerLayer except that it allows for a single layer to be wrapped in a container.

Additionally, on calling initialparameters and initialstates, the parameters and states are not wrapped in a NamedTuple with the same name as the field.

As a convenience, we define the fallback call (::AbstractLuxWrapperLayer)(x, ps, st), which calls getfield(x, layer)(x, ps, st).

source

LuxCore.AbstractLuxContainerLayer Type
julia
abstract type AbstractLuxContainerLayer{layers} <: AbstractLuxLayer

Abstract Container Type for certain Lux Layers. layers is a tuple containing fieldnames for the layer, and constructs the parameters and states using those.

Users implementing their custom layer can extend the same functions as in AbstractLuxLayer.

Advanced Structure Manipulation

Advanced structure manipulation of these layers post construction is possible via Functors.fmap. For a more flexible interface, we recommend using Lux.Experimental.@layer_map.

fmap Support

fmap support needs to be explicitly enabled by loading Functors.jl and Setfield.jl.

Changes from Pre-1.0 Behavior

Previously if layers was a singleton tuple, initialparameters and initialstates would return the parameters and states for the single field layers. From v1.0.0 onwards, even for singleton tuples, the parameters/states are wrapped in a NamedTuple with the same name as the field. See AbstractLuxWrapperLayer to replicate the previous behavior of singleton tuples.

source

General

LuxCore.apply Function
julia
apply(model, x, ps, st)

In most cases this function simply calls model(x, ps, st). However, it is still recommended to call apply instead of model(x, ps, st) directly. Some of the reasons for this include:

  1. For certain types of inputs x, we might want to perform preprocessing before calling model. For example, if x is an Array of ReverseDiff.TrackedReals, this can cause significant regressions in model(x, ps, st) (since it won't hit any of the BLAS dispatches). In such cases, we automatically convert x to a ReverseDiff.TrackedArray.

  2. Certain user-defined inputs need to be applied to specific layers, but we want the datatype to propagate through all the layers (even unsupported ones). In these cases, we can unpack the input in apply, pass it to the appropriate layer, and then repack it before returning. See the Lux manual on Custom Input Types for a motivating example.

Tip

apply is integrated with DispatchDoctor.jl, which allows automatic verification of type stability. By default this is set to "disable". For more information, see the documentation.

source
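A hedged end-to-end sketch, where model is assumed to be some AbstractLuxLayer and x a compatible input:

```julia
using LuxCore, Random

rng = Random.default_rng()
ps, st = LuxCore.setup(rng, model)           # initialize parameters and states
y, st_new = LuxCore.apply(model, x, ps, st)  # preferred over model(x, ps, st)
```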

LuxCore.stateless_apply Function
julia
stateless_apply(model, x, ps)

Calls apply and only returns the first argument. This function requires that model has an empty state of NamedTuple(). The behavior of other kinds of models is undefined, and it is the responsibility of the user to ensure that the model has an empty state.

source

LuxCore.check_fmap_condition Function
julia
check_fmap_condition(cond, tmatch::Union{Type, Nothing}, x) -> Bool

fmaps over the structure x and checks whether cond is satisfied for any of the leaf elements.

Arguments

  • cond - A function that takes a single argument and returns a Bool.

  • tmatch - A shortcut to check if x is of type tmatch. Can be disabled by passing nothing.

  • x - The structure to check.

Returns

A Boolean Value

source
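For instance, checking whether any array leaf in a parameter tree contains a NaN might be sketched as follows (ps is an assumed parameter NamedTuple):

```julia
using LuxCore

has_nan = LuxCore.check_fmap_condition(a -> any(isnan, a), AbstractArray, ps)
```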

LuxCore.contains_lux_layer Function
julia
contains_lux_layer(l) -> Bool

Check if the structure l is a Lux AbstractLuxLayer or a container of such a layer.

source

LuxCore.display_name Function
julia
display_name(layer::AbstractLuxLayer)

Printed name of the layer. If the layer has a field name, that is used; otherwise the type name is used.

source

LuxCore.replicate Function
julia
replicate(rng::AbstractRNG)

Creates a copy of the rng state depending on its type.

source

LuxCore.setup Function
julia
setup(rng::AbstractRNG, layer)

Shorthand for getting the parameters and states of the layer. Equivalent to (initialparameters(rng, layer), initialstates(rng, layer)).

Warning

This function is not pure, it mutates rng.

source

Parameters

LuxCore.initialparameters Function
julia
initialparameters(rng::AbstractRNG, layer)

Generate the initial parameters of the layer l.

source

LuxCore.parameterlength Function
julia
parameterlength(layer)

Return the total number of parameters of the layer l.

source

States

LuxCore.initialstates Function
julia
initialstates(rng::AbstractRNG, layer)

Generate the initial states of the layer l.

source

LuxCore.statelength Function
julia
statelength(layer)

Return the total number of states of the layer l.

source

LuxCore.testmode Function
julia
testmode(st::NamedTuple)

Make all occurrences of training in the state st take the value Val(false).

source

LuxCore.trainmode Function
julia
trainmode(st::NamedTuple)

Make all occurrences of training in the state st take the value Val(true).

source

LuxCore.update_state Function
julia
update_state(st::NamedTuple, key::Symbol, value; exclude=Internal.isleaf)

Recursively update all occurrences of the key in the state st with the value. exclude is a function that is passed to Functors.fmap_with_path's exclude keyword.

Needs Functors.jl

This function requires Functors.jl to be loaded.

source
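A sketch of flipping the training flag throughout a state tree (st is an assumed state NamedTuple):

```julia
using LuxCore, Functors  # Functors.jl must be loaded for update_state

st_inference = LuxCore.update_state(st, :training, Val(false))
```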

Layer size

LuxCore.outputsize Function
julia
outputsize(layer, x, rng)

Return the output size of the layer.

The fallback implementation of this function assumes the inputs were batched, i.e., if any of the outputs are arrays A with ndims(A) > 1, it will return size(A)[1:(end - 1)]. If this behavior is undesirable, provide a custom outputsize(layer, x, rng) implementation.

Fallback Implementation

The fallback implementation of this function is defined once Lux.jl is loaded.

Changes from Pre-1.0 Behavior

Previously it was possible to override this function by defining outputsize(layer). However, this can potentially introduce a bug that is hard to bypass. See this PR for more information.

source

- +
Skip to content

LuxCore

LuxCore.jl defines the abstract layers for Lux. It allows users to be compatible with the entirety of Lux.jl without taking on such a heavy dependency. If you depend on Lux.jl directly, you do not need to depend on LuxCore.jl (all of its functionality is exported via Lux.jl).

Abstract Types

LuxCore.AbstractLuxLayer Type
julia
abstract type AbstractLuxLayer

Abstract Type for all Lux Layers

Users implementing a custom layer must implement

  • initialparameters(rng::AbstractRNG, layer::CustomAbstractLuxLayer) – This returns a NamedTuple containing the trainable parameters for the layer.

  • initialstates(rng::AbstractRNG, layer::CustomAbstractLuxLayer) – This returns a NamedTuple containing the current state for the layer. For most layers this is typically empty. Layers that would potentially contain this include BatchNorm, LSTM, GRU, etc.

Optionally:

  • parameterlength(layer::CustomAbstractLuxLayer) – These can be automatically calculated, but it is recommended that the user defines these.

  • statelength(layer::CustomAbstractLuxLayer) – These can be automatically calculated, but it is recommended that the user defines these.

See also AbstractLuxContainerLayer

source

LuxCore.AbstractLuxWrapperLayer Type
julia
abstract type AbstractLuxWrapperLayer{layer} <: AbstractLuxLayer

See AbstractLuxContainerLayer for detailed documentation. This abstract type is very similar to AbstractLuxContainerLayer except that it allows for a single layer to be wrapped in a container.

Additionally, on calling initialparameters and initialstates, the parameters and states are not wrapped in a NamedTuple with the same name as the field.

As a convenience, we define the fallback call (::AbstractLuxWrapperLayer)(x, ps, st), which calls getfield(x, layer)(x, ps, st).

source

LuxCore.AbstractLuxContainerLayer Type
julia
abstract type AbstractLuxContainerLayer{layers} <: AbstractLuxLayer

Abstract Container Type for certain Lux Layers. layers is a tuple containing fieldnames for the layer, and constructs the parameters and states using those.

Users implementing their custom layer can extend the same functions as in AbstractLuxLayer.

Advanced Structure Manipulation

Advanced structure manipulation of these layers post construction is possible via Functors.fmap. For a more flexible interface, we recommend using Lux.Experimental.@layer_map.

fmap Support

fmap support needs to be explicitly enabled by loading Functors.jl and Setfield.jl.

Changes from Pre-1.0 Behavior

Previously if layers was a singleton tuple, initialparameters and initialstates would return the parameters and states for the single field layers. From v1.0.0 onwards, even for singleton tuples, the parameters/states are wrapped in a NamedTuple with the same name as the field. See AbstractLuxWrapperLayer to replicate the previous behavior of singleton tuples.

source

General

LuxCore.apply Function
julia
apply(model, x, ps, st)

In most cases this function simply calls model(x, ps, st). However, it is still recommended to call apply instead of model(x, ps, st) directly. Some of the reasons for this include:

  1. For certain types of inputs x, we might want to perform preprocessing before calling model. For example, if x is an Array of ReverseDiff.TrackedReals, this can cause significant regressions in model(x, ps, st) (since it won't hit any of the BLAS dispatches). In such cases, we automatically convert x to a ReverseDiff.TrackedArray.

  2. Certain user-defined inputs need to be applied to specific layers, but we want the datatype to propagate through all the layers (even unsupported ones). In these cases, we can unpack the input in apply, pass it to the appropriate layer, and then repack it before returning. See the Lux manual on Custom Input Types for a motivating example.

Tip

apply is integrated with DispatchDoctor.jl, which allows automatic verification of type stability. By default this is set to "disable". For more information, see the documentation.

source

LuxCore.stateless_apply Function
julia
stateless_apply(model, x, ps)

Calls apply and only returns the first argument. This function requires that model has an empty state of NamedTuple(). The behavior of other kinds of models is undefined, and it is the responsibility of the user to ensure that the model has an empty state.

source

LuxCore.check_fmap_condition Function
julia
check_fmap_condition(cond, tmatch::Union{Type, Nothing}, x) -> Bool

fmaps over the structure x and checks whether cond is satisfied for any of the leaf elements.

Arguments

  • cond - A function that takes a single argument and returns a Bool.

  • tmatch - A shortcut to check if x is of type tmatch. Can be disabled by passing nothing.

  • x - The structure to check.

Returns

A Boolean Value

source

LuxCore.contains_lux_layer Function
julia
contains_lux_layer(l) -> Bool

Check if the structure l is a Lux AbstractLuxLayer or a container of such a layer.

source

LuxCore.display_name Function
julia
display_name(layer::AbstractLuxLayer)

Printed name of the layer. If the layer has a field name, that is used; otherwise the type name is used.

source

LuxCore.replicate Function
julia
replicate(rng::AbstractRNG)

Creates a copy of the rng state depending on its type.

source

LuxCore.setup Function
julia
setup(rng::AbstractRNG, layer)

Shorthand for getting the parameters and states of the layer. Equivalent to (initialparameters(rng, layer), initialstates(rng, layer)).

Warning

This function is not pure, it mutates rng.

source

Parameters

LuxCore.initialparameters Function
julia
initialparameters(rng::AbstractRNG, layer)

Generate the initial parameters of the layer l.

source

LuxCore.parameterlength Function
julia
parameterlength(layer)

Return the total number of parameters of the layer l.

source

States

LuxCore.initialstates Function
julia
initialstates(rng::AbstractRNG, layer)

Generate the initial states of the layer l.

source

LuxCore.statelength Function
julia
statelength(layer)

Return the total number of states of the layer l.

source

LuxCore.testmode Function
julia
testmode(st::NamedTuple)

Set all occurrences of training in the state st to Val(false).

source

LuxCore.trainmode Function
julia
trainmode(st::NamedTuple)

Set all occurrences of training in the state st to Val(true).

source
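The two helpers above are inverses on the training flag; a sketch with a hand-built state (the nested NamedTuple is an illustrative stand-in for what a Chain of layers would produce):

```julia
using LuxCore

st = (layer_1 = (training = Val(true), momentum = 0.1f0),
      layer_2 = NamedTuple())

st_test  = LuxCore.testmode(st)        # every `training` entry becomes Val(false)
st_train = LuxCore.trainmode(st_test)  # back to Val(true); other entries untouched
```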

LuxCore.update_state Function
julia
update_state(st::NamedTuple, key::Symbol, value; exclude=Internal.isleaf)

Recursively update all occurrences of the key in the state st with the value. exclude is a function that is passed to Functors.fmap_with_path's exclude keyword.

Needs Functors.jl

This function requires Functors.jl to be loaded.

source

Layer size

LuxCore.outputsize Function
julia
outputsize(layer, x, rng)

Return the output size of the layer.

The fallback implementation of this function assumes the inputs were batched, i.e., if any of the outputs are Arrays with ndims(A) > 1, it will return size(A)[1:(end - 1)]. If this behavior is undesirable, provide a custom outputsize(layer, x, rng) implementation.

Fallback Implementation

The fallback implementation of this function is defined once Lux.jl is loaded.

Changes from Pre-1.0 Behavior

Previously it was possible to override this function by defining outputsize(layer). However, this can potentially introduce a bug that is hard to bypass. See this PR for more information.

source
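A sketch of the fallback behavior described above (assuming Lux is loaded; the Chain shape is arbitrary):

```julia
using Lux, LuxCore, Random

rng = Xoshiro(0)
model = Chain(Dense(10 => 32, relu), Dense(32 => 5))
x = randn(rng, Float32, 10, 16)   # 16 samples in the batch

# fallback: runs the model and drops the trailing (batch) dimension
LuxCore.outputsize(model, x, rng)  # (5,)
```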

+ \ No newline at end of file diff --git a/dev/api/Building_Blocks/WeightInitializers.html b/dev/api/Building_Blocks/WeightInitializers.html index c21197a0aa..9bb6169c70 100644 --- a/dev/api/Building_Blocks/WeightInitializers.html +++ b/dev/api/Building_Blocks/WeightInitializers.html @@ -5,15 +5,15 @@ WeightInitializers | Lux.jl Docs - + - + - - - + + + @@ -29,8 +29,8 @@

WeightInitializers

This package is a light dependency providing common weight initialization schemes for deep learning models.

Supported RNG Types

RNG Type / Package | Returned Array Type | Unsupported Functions
Random.jl | Array |
StableRNGs.jl | Array |
CUDA.CURAND.default_rng() | CuArray |
CUDA.default_rng() | CuArray |
GPUArrays.default_rng(CuArray) | CuArray |
AMDGPU.rocrand_rng() | ROCArray |
AMDGPU.gpuarrays_rng() | ROCArray |
GPUArrays.default_rng(ROCArray) | ROCArray |
Metal.gpuarrays_rng() | MtlArray | orthogonal
GPUArrays.default_rng(MtlArray) | MtlArray | orthogonal
oneAPI.gpuarrays_rng() | oneArray | orthogonal, truncated_normal
GPUArrays.default_rng(oneArray) | oneArray | orthogonal, truncated_normal

API Reference

Main Functions

WeightInitializers.glorot_normal Function
julia
glorot_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
    gain = 1) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a normal distribution with standard deviation gain * sqrt(2 / (fan_in + fan_out)). This method is described in [1] and also known as Xavier initialization.

References

[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010.

source
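A sketch checking the stated standard deviation empirically (assuming WeightInitializers is loaded; for a 2D array the fans are taken from the two dimensions):

```julia
using WeightInitializers, Random, Statistics

rng = Xoshiro(0)
fan_out, fan_in = 64, 128
W = glorot_normal(rng, Float32, fan_out, fan_in)

# empirical std should approximate gain * sqrt(2 / (fan_in + fan_out))
std(W)                          # close to sqrt(2 / 192) ≈ 0.102
sqrt(2 / (fan_in + fan_out))
```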

WeightInitializers.glorot_uniform Function
julia
glorot_uniform([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
    gain = 1) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a uniform distribution on the interval [-x, x], where x = gain * sqrt(6 / (fan_in + fan_out)). This method is described in [1] and also known as Xavier initialization.

References

[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010.

source

WeightInitializers.identity_init Function
julia
identity_init([::AbstractRNG=Utils.default_rng()], [T=Float32], size...; gain::Number=1,
     shift::Union{Integer, Tuple{Integer, Integer}}=0) -> AbstractArray{T}

Constructs an array that aims to provide an identity mapping when used as parameters in most layers of a neural network. The identity mapping is scaled by the gain parameter.

Behavior

  • 1D: Returns a Vector of zeros (useful for biases in layers where input_size == output_size).

  • 2D: Returns an identity matrix (useful for fully connected layers with equal input and output sizes).

  • More than 2D: Returns a tensor where the central slice along the last two dimensions is an identity matrix, and the rest are zeros (useful for convolutional layers, simulating an identity convolution).

Caveats

  • Not all layers will result in an identity mapping when using this initializer. Exceptions include recurrent and normalization layers.

  • Layers must have input_size == output_size for a perfect identity mapping. In cases where this condition is not met, the function pads extra dimensions with zeros.

  • For convolutional layers to achieve an identity mapping, kernel sizes must be odd, and appropriate padding must be applied to ensure the output feature maps are the same size as the input feature maps.

Arguments

  • rng::AbstractRNG: An optional random number generator, included for consistency with other initializers but ignored since the output is deterministic.

  • T::Type{<:Number}: The numeric type of the array elements.

  • size...: The dimensions of the array to be initialized.

  • gain::Number=1: A scaling factor applied to the identity mapping.

  • shift::Union{Integer, Tuple{Integer, Integer}}=0: An integer or a tuple specifying the circular shift applied to the output array.

Returns

  • AbstractArray{T}: An array initialized to represent an identity mapping, scaled by gain and optionally shifted by shift.

Examples

julia
julia> identity_init(Xoshiro(123), Float32, 5, 5)
 5×5 Matrix{Float32}:
 1.0  0.0  0.0  0.0  0.0
@@ -44,43 +44,43 @@
 [:, :, 1, 1] =
  0.0  0.0  0.0
  0.0  1.5  0.0
 0.0  0.0  0.0

source

WeightInitializers.kaiming_normal Function
julia
kaiming_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
    gain = √T(2)) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a normal distribution with standard deviation gain / sqrt(fan_in).

References

[1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." Proceedings of the IEEE international conference on computer vision. 2015.

source

WeightInitializers.kaiming_uniform Function
julia
kaiming_uniform([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
    gain = √T(2)) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a uniform distribution on the interval [-x, x], where x = gain * sqrt(3/fan_in).

References

[1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." Proceedings of the IEEE international conference on computer vision. 2015.

source
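A sketch verifying the stated support bound (assuming WeightInitializers is loaded and, for a 2D array of size (n_out, n_in), fan_in is the second dimension):

```julia
using WeightInitializers, Random

rng = Xoshiro(0)
fan_in = 128
W = kaiming_uniform(rng, Float32, 256, fan_in)

bound = sqrt(2f0) * sqrt(3f0 / fan_in)  # gain * sqrt(3 / fan_in), default gain = √2
all(w -> -bound ≤ w ≤ bound, W)         # true
```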

WeightInitializers.sparse_init Function
julia
sparse_init([::AbstractRNG=Utils.default_rng()], [T=Float32], dims::Integer...;
     sparsity::Number, std::Number=0.01) -> AbstractArray{T}

Creates a sparsely initialized weight matrix with a specified proportion of zeroed elements, using random numbers drawn from a normal distribution for the non-zero elements. This method was introduced in [1].

Note

The sparsity parameter controls the proportion of the matrix that will be zeroed. For example, a sparsity of 0.3 means that approximately 30% of the elements will be set to zero. The non-zero elements are distributed according to a normal distribution, scaled by the std parameter.

Arguments

  • rng::AbstractRNG: The random number generator to use.

  • T::Type{<:Number}: The numeric type of the elements in the returned array.

  • dims::Integer...: The dimensions of the weight matrix to be generated.

  • sparsity::Number: The proportion of elements to be zeroed. Must be between 0 and 1.

  • std::Number=0.01: The standard deviation of the normal distribution before applying gain.

Returns

  • AbstractArray{T}: A sparsely initialized weight matrix of dimensions dims and type T.

Examples

julia
julia> y = sparse_init(Xoshiro(123), Float32, 5, 5; sparsity=0.3, std=0.01);
 
 julia> y isa Matrix{Float32}
 true
 
 julia> size(y) == (5, 5)
true

References

[1] Martens, J, "Deep learning via Hessian-free optimization" Proceedings of the 27th International Conference on International Conference on Machine Learning. 2010.

source

WeightInitializers.truncated_normal Function
julia
truncated_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...; mean = 0,
    std = 1, lo = -2, hi = 2) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size where each element is drawn from a truncated normal distribution. The numbers are distributed like filter(x -> lo ≤ x ≤ hi, mean .+ std .* randn(100)).

source
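A sketch confirming that samples stay within the truncation bounds (assuming WeightInitializers is loaded; the sample count is arbitrary):

```julia
using WeightInitializers, Random

rng = Xoshiro(0)
x = truncated_normal(rng, Float32, 10_000; mean=0, std=1, lo=-2, hi=2)

all(v -> -2 ≤ v ≤ 2, x)  # true: every sample is confined to [lo, hi]
```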

WeightInitializers.orthogonal Function
julia
orthogonal([::AbstractRNG=Utils.default_rng()], [T=Float32], dims::Integer...;
    gain = 1)  -> AbstractArray{T, length(dims)}

Return an AbstractArray{T} of the given dimensions (dims) which is a (semi) orthogonal matrix, as described in [1].

The function constructs an orthogonal or semi-orthogonal matrix depending on the specified dimensions. For two dimensions, it returns a matrix where dims = (rows, cols). For more than two dimensions, it computes an orthogonal matrix of size prod(dims[1:(end - 1)]) by dims[end] before reshaping it to the original dimensions.

Cannot construct a vector, i.e., length(dims) == 1 is forbidden.

Arguments

  • rng::AbstractRNG: Random number generator.

  • T::Type{<:Real}: The type of the elements in the array.

  • dims::Integer...: The dimensions of the array.

  • gain::Number: Scaling factor for the elements of the orthogonal matrix.

References

[1] Saxe, McClelland, Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", ICLR 2014, https://arxiv.org/abs/1312.6120

source
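A sketch checking (semi) orthogonality (assuming WeightInitializers is loaded; for a wide matrix the rows are expected to be orthonormal, though which side is orthonormal depends on the shape):

```julia
using WeightInitializers, LinearAlgebra, Random

rng = Xoshiro(0)
W = orthogonal(rng, Float32, 3, 5)

# rows orthonormal for rows < cols, up to Float32 round-off
W * W' ≈ I
```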

Other Convenience Functions

Beware

Unlike the other functions, these do not take a type argument.
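A few representative calls (assuming WeightInitializers is loaded; the element type is fixed by the function name rather than passed in):

```julia
using WeightInitializers

zeros32(2, 3)   # 2×3 Matrix{Float32} of zeros
ones16(4)       # Vector{Float16} of ones
randn64(2, 2)   # 2×2 Matrix{Float64} with standard normal entries
```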

WeightInitializers.zeros16 Function
julia
zeros16([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing an AbstractArray of zeros.

source

WeightInitializers.ones16 Function
julia
ones16([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing an AbstractArray of ones.

source

WeightInitializers.rand16 Function
julia
rand16([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing random numbers from a uniform distribution.

source

WeightInitializers.randn16 Function
julia
randn16([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing random numbers from a standard normal distribution.

source

WeightInitializers.zeros32 Function
julia
zeros32([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing an AbstractArray of zeros.

source

WeightInitializers.ones32 Function
julia
ones32([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing an AbstractArray of ones.

source

WeightInitializers.rand32 Function
julia
rand32([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing random numbers from a uniform distribution.

source

WeightInitializers.randn32 Function
julia
randn32([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing random numbers from a standard normal distribution.

source

WeightInitializers.zeros64 Function
julia
zeros64([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing an AbstractArray of zeros.

source

WeightInitializers.ones64 Function
julia
ones64([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing an AbstractArray of ones.

source

WeightInitializers.rand64 Function
julia
rand64([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing random numbers from a uniform distribution.

source

WeightInitializers.randn64 Function
julia
randn64([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing random numbers from a standard normal distribution.

source

WeightInitializers.zerosC16 Function
julia
zerosC16([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing an AbstractArray of zeros.

source

WeightInitializers.onesC16 Function
julia
onesC16([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing an AbstractArray of ones.

source

WeightInitializers.randC16 Function
julia
randC16([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing random numbers from a uniform distribution.

source

WeightInitializers.randnC16 Function
julia
randnC16([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing random numbers from a standard normal distribution.

source

WeightInitializers.zerosC32 Function
julia
zerosC32([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing an AbstractArray of zeros.

source

WeightInitializers.onesC32 Function
julia
onesC32([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing an AbstractArray of ones.

source

WeightInitializers.randC32 Function
julia
randC32([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing random numbers from a uniform distribution.

source

WeightInitializers.randnC32 Function
julia
randnC32([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing random numbers from a standard normal distribution.

source

WeightInitializers.zerosC64 Function
julia
zerosC64([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing an AbstractArray of zeros.

source

WeightInitializers.onesC64 Function
julia
onesC64([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing an AbstractArray of ones.

source

WeightInitializers.randC64 Function
julia
randC64([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing random numbers from a uniform distribution.

source

WeightInitializers.randnC64 Function
julia
randnC64([::AbstractRNG=Utils.default_rng()], size...;
    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing random numbers from a standard normal distribution.

source


+ \ No newline at end of file diff --git a/dev/api/Lux/autodiff.html b/dev/api/Lux/autodiff.html index ee555bb2c8..45b9b5c62e 100644 --- a/dev/api/Lux/autodiff.html +++ b/dev/api/Lux/autodiff.html @@ -5,15 +5,15 @@ Automatic Differentiation Helpers | Lux.jl Docs - + - + - - - + + + @@ -28,8 +28,8 @@ -

Automatic Differentiation Helpers

JVP & VJP Wrappers

Lux.jacobian_vector_product Function
julia
jacobian_vector_product(f, backend::AbstractADType, x, u)

Compute the Jacobian-Vector Product (∂f/∂x) u. This is a wrapper around AD backends but allows us to compute gradients of Jacobian-vector products efficiently using mixed-mode AD.

Backends & AD Packages

Supported Backends | Packages Needed
AutoForwardDiff |

Warning

Gradient wrt u in the reverse pass is always dropped.

Arguments

  • f: The function to compute the jacobian of.

  • backend: The backend to use for computing the JVP.

  • x: The input to the function.

  • u: An object of the same structure as x.

Returns

  • v: The Jacobian Vector Product.

source
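A sketch of a JVP call (assuming Lux, ADTypes, and ForwardDiff are loaded; f is an arbitrary nonlinear map):

```julia
using Lux, ADTypes, ForwardDiff

f(x) = x .^ 2 .+ sum(x)
x = randn(Float32, 4)
u = randn(Float32, 4)   # tangent with the same structure as x

v = Lux.jacobian_vector_product(f, AutoForwardDiff(), x, u)
size(v) == size(f(x))   # true
```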

Lux.vector_jacobian_product Function
julia
vector_jacobian_product(f, backend::AbstractADType, x, u)

Compute the Vector-Jacobian Product (∂f/∂x)ᵀ u. This is a wrapper around AD backends but allows us to compute gradients of vector-Jacobian products efficiently using mixed-mode AD.

Backends & AD Packages

Supported Backends | Packages Needed
AutoZygote | Zygote.jl

Warning

Gradient wrt u in the reverse pass is always dropped.

Arguments

  • f: The function to compute the jacobian of.

  • backend: The backend to use for computing the VJP.

  • x: The input to the function.

  • u: An object of the same structure as f(x).

Returns

  • v: The Vector Jacobian Product.

source
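The mirror-image sketch for a VJP (assuming Lux, ADTypes, and Zygote are loaded; note u matches the structure of f(x), not of x):

```julia
using Lux, ADTypes, Zygote

f(x) = abs2.(x)
x = randn(Float32, 3, 2)
u = ones(Float32, 3, 2)   # same structure as f(x)

v = Lux.vector_jacobian_product(f, AutoZygote(), x, u)
size(v) == size(x)        # true
```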

Batched AD

Lux.batched_jacobian Function
julia
batched_jacobian(f, backend::AbstractADType, x::AbstractArray)

Computes the Jacobian of a function f with respect to a batch of inputs x. This expects the following properties for y = f(x):

  1. ndims(y) ≥ 2

  2. size(y, ndims(y)) == size(x, ndims(x))

Backends & AD Packages

Supported Backends | Packages Needed
AutoForwardDiff |
AutoZygote | Zygote.jl

Arguments

  • f: The function to compute the jacobian of.

  • backend: The backend to use for computing the jacobian.

  • x: The input to the function. Must have ndims(x) ≥ 2.

Returns

  • J: The Jacobian of f with respect to x. This will be a 3D Array. If the dimensions of x are (N₁, N₂, ..., Nₙ, B) and of y are (M₁, M₂, ..., Mₘ, B), then J will be a ((M₁ × M₂ × ... × Mₘ), (N₁ × N₂ × ... × Nₙ), B) Array.

Danger

f(x) must not mix the batch dimensions; otherwise the result will be incorrect. For example, if f contains operations like batch normalization, the result will be incorrect.

source
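A sketch of the Jacobian shape contract described above (assuming Lux, ADTypes, and Zygote are loaded; f acts elementwise, so it does not mix batch dimensions):

```julia
using Lux, ADTypes, Zygote

f(x) = x .^ 2                 # columnwise; no mixing across the batch dimension
x = randn(Float32, 3, 5)      # 3 features, batch of 5

J = Lux.batched_jacobian(f, AutoZygote(), x)
size(J)   # (3, 3, 5): one 3×3 Jacobian per batch element
```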

Nested 2nd Order AD

Consult the manual page on Nested AD for information on nested automatic differentiation.

- +
Skip to content

Automatic Differentiation Helpers

JVP & VJP Wrappers

Lux.jacobian_vector_product Function
julia
jacobian_vector_product(f, backend::AbstractADType, x, u)

Compute the Jacobian-Vector Product (fx)u. This is a wrapper around AD backends but allows us to compute gradients of jacobian-vector products efficiently using mixed-mode AD.

Backends & AD Packages

Supported BackendsPackages Needed
AutoForwardDiff

Warning

Gradient wrt u in the reverse pass is always dropped.

Arguments

  • f: The function to compute the jacobian of.

  • backend: The backend to use for computing the JVP.

  • x: The input to the function.

  • u: An object of the same structure as x.

Returns

  • v: The Jacobian Vector Product.

source

Lux.vector_jacobian_product Function
julia
vector_jacobian_product(f, backend::AbstractADType, x, u)

Compute the Vector-Jacobian Product (fx)Tu. This is a wrapper around AD backends but allows us to compute gradients of vector-jacobian products efficiently using mixed-mode AD.

Backends & AD Packages

Supported BackendsPackages Needed
AutoZygoteZygote.jl

Warning

Gradient wrt u in the reverse pass is always dropped.

Arguments

  • f: The function to compute the jacobian of.

  • backend: The backend to use for computing the VJP.

  • x: The input to the function.

  • u: An object of the same structure as f(x).

Returns

  • v: The Vector Jacobian Product.

source

Batched AD

Lux.batched_jacobian Function
julia
batched_jacobian(f, backend::AbstractADType, x::AbstractArray)

Computes the Jacobian of a function f with respect to a batch of inputs x. This expects the following properties for y = f(x):

  1. ndims(y) ≥ 2

  2. size(y, ndims(y)) == size(x, ndims(x))

Backends & AD Packages

Supported BackendsPackages Needed
AutoForwardDiff
AutoZygoteZygote.jl

Arguments

  • f: The function to compute the jacobian of.

  • backend: The backend to use for computing the jacobian.

  • x: The input to the function. Must have ndims(x) ≥ 2.

Returns

  • J: The Jacobian of f with respect to x. This will be a 3D Array. If the dimensions of x are (N₁, N₂, ..., Nₙ, B) and of y are (M₁, M₂, ..., Mₘ, B), then J will be a ((M₁ × M₂ × ... × Mₘ), (N₁ × N₂ × ... × Nₙ), B) Array.

Danger

f(x) must not mix entries across the batch dimension, otherwise the result will be incorrect. For example, if f contains operations like batch normalization that couple batch elements, the computed Jacobian will be wrong.

source
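To make the shape contract concrete, here is a small sketch with an elementwise function (so batches don't mix, as the warning above demands). The inputs are illustrative:

```julia
using Lux
using ADTypes: AutoForwardDiff

f(x) = x .^ 2                                  # elementwise, so batches don't mix
x = reshape(Float32[1, 2, 3, 4, 5, 6], 3, 2)   # (features, batch)

J = Lux.batched_jacobian(f, AutoForwardDiff(), x)
@assert size(J) == (3, 3, 2)                   # (M, N, B) with M = N = 3, B = 2
# each slice is the Jacobian for one batch element: Diagonal(2 .* x[:, b])
@assert J[:, :, 1] ≈ Float32[2 0 0; 0 4 0; 0 0 6]
```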

Nested 2nd Order AD

Consult the manual page on Nested AD for information on nested automatic differentiation.

+ \ No newline at end of file diff --git a/dev/api/Lux/contrib.html b/dev/api/Lux/contrib.html index 22729182b3..ac2480d65d 100644 --- a/dev/api/Lux/contrib.html +++ b/dev/api/Lux/contrib.html @@ -5,15 +5,15 @@ Experimental Features | Lux.jl Docs - + - + - - - + + + @@ -29,8 +29,8 @@
Skip to content

Experimental Features

All features listed on this page are experimental which means:

  1. No SemVer Guarantees. We use code here to iterate fast. That said, historically we have never broken any code in this module and have always provided a deprecation period.

  2. Expect edge-cases and report them. It will help us move these features out of experimental sooner.

  3. None of the features are exported.

Parameter Freezing

Lux.Experimental.FrozenLayer Type
julia
FrozenLayer(l::AbstractLuxLayer, which_params::Optional{Tuple})

Freeze the parameters with name which_params of the layer l.

Use Lux.Experimental.freeze instead

It is always recommended to use the Lux.Experimental.freeze function instead of directly using the FrozenLayer constructor.

No checks for which_params

There are no checks for which_params. For example, if the original layer has parameters named (:weight, :bias), and which_params is set to (:myweight,) then none of the parameters are frozen and no error is thrown.

Arguments

  • l: Lux AbstractLuxLayer.

  • which_params: Parameter Names to be Frozen. Can be set to nothing, in which case all parameters are frozen.

Extended Help

Parameters

  • Parameters of the layer l excluding which_params.

States

  • frozen_params: Parameters that are frozen, i.e., which_params.

  • states: The state of the inner layer l.

Note on Internal Layer Implementation

The inner layer should work with NamedTuple parameters. In order to support custom parameter types, users need to implement Lux.Utils.merge(::CustomParamType, ::NamedTuple) or extend Lux.Utils.named_tuple(::CustomParamType) to return a NamedTuple.

Example

julia
julia> Lux.Experimental.FrozenLayer(Dense(2 => 2), (:weight,))
-FrozenLayer(Dense(2 => 2), (:weight,))  # 2 parameters, plus 4 non-trainable

See also Lux.Experimental.freeze, Lux.Experimental.unfreeze.

source

Lux.Experimental.freeze Function
julia
freeze(l::AbstractLuxLayer, which_params::Optional{Tuple} = nothing)

Constructs a version of l with which_params frozen. If which_params is nothing, then all parameters are frozen.

source

julia
freeze(l::AbstractLuxLayer, ps, st::NamedTuple,
-    which_params::Optional{Tuple} = nothing)

Construct a Lux.Experimental.FrozenLayer for l with the current parameters and states. If which_params is nothing, then all parameters are frozen.

source

Lux.Experimental.unfreeze Function
julia
unfreeze(l::FrozenLayer)

Unfreezes the layer l.

source

julia
unfreeze(l::FrozenLayer, ps, st::NamedTuple)

Unwraps a Lux.Experimental.FrozenLayer l with the current parameters and states.

source
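Pulling the freeze/unfreeze docstrings above together, a minimal sketch (the layer and parameter names follow the FrozenLayer docstring; the model itself is made up):

```julia
using Lux, Random

d  = Dense(2 => 2)
fd = Lux.Experimental.freeze(d, (:weight,))   # only :bias stays trainable
ps, st = Lux.setup(Random.default_rng(), fd)

@assert !haskey(ps, :weight) && haskey(ps, :bias)
@assert haskey(st.frozen_params, :weight)     # frozen weight lives in the state
```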

For a detailed usage example, look at the manual page.

Map over Layer

Lux.Experimental.layer_map Function
julia
layer_map(f, l::AbstractLuxLayer, ps, st::NamedTuple)

Map the function f over the model l, with the parameters ps and states st. This is different from Functors.fmap since it zips the layers, parameters, and states and invokes the function on all of them together.

KeyPath provided to the function

The KeyPath depends on the structure of the parameters and states. This is of consequence exclusively for AbstractLuxWrapperLayer, where the structure of the layer doesn't match the structure of the parameters and states. In the example provided below, the KeyPath is (:chain, :dense_1) for the first layer (following the structure in ps), while accessing the same layer in the chain is done with (:chain, :layers, :dense_1).

Call Signature for f

  • Must take 4 inputs – AbstractLuxLayer, Corresponding Parameters, Corresponding States, and the Functors.KeyPath to the layer.

  • Must return a tuple of 3 elements – AbstractLuxLayer, new parameters and the new states.

Extended Help

Example

julia
julia> using Lux, Random
 
 julia> c = Parallel(
            +; chain=Chain(; dense_1=Dense(2 => 3), bn=BatchNorm(3), dense_2=Dense(3 => 5)),
@@ -57,10 +57,10 @@
 julia> all(iszero, (ps_new.chain.dense_1.weight, ps_new.chain.dense_1.bias,
                     ps_new.chain.dense_2.weight, ps_new.chain.dense_2.bias,
                     ps_new.dense_3.weight, ps_new.dense_3.bias))
-true

source

Debugging Functionality

Is your model not working properly? Here are some functionalities to help you debug your Lux model.

Lux.Experimental.@debug_mode Macro
julia
@debug_mode layer kwargs...

Recurses into the layer and replaces the inner most non Container Layers with a Lux.Experimental.DebugLayer.

See Lux.Experimental.DebugLayer for details about the Keyword Arguments.

source

Lux.Experimental.DebugLayer Type
julia
DebugLayer(layer::AbstractLuxLayer;
     nan_check::Union{Symbol, StaticSymbol, Val}=static(:both),
     error_check::Union{StaticBool, Bool, Val{true}, Val{false}}=True(),
-    location::KeyPath=KeyPath())

A wrapper over Lux layers that adds checks for NaNs and errors. This is useful for debugging.

Arguments

  • layer: The layer to be wrapped.

Extended Help

Keyword Arguments

  • nan_check: Whether to check for NaNs in the input, parameters, and states. Can be :both, :forward, :backward, or :none.

  • error_check: Whether to check for errors in the layer. If true, will throw an error if the layer fails.

  • location: The location of the layer. Use Lux.Experimental.@debug_mode to construct this layer to populate this value correctly.

Input / Output

Inputs and outputs are the same as the layer unless one of the nan_check or error_check criteria is met.

If nan_check is enabled and NaNs are detected then a DomainError is thrown. If error_check is enabled, then any errors in the layer are thrown with useful information to track where the error originates.

ChainRules Compatible Reverse Mode AD Tools

nan_check for the backward mode only works with ChainRules Compatible Reverse Mode AD Tools currently.

Disable After Debugging

This layer is only meant to be used for debugging. If used for actual training or inference, it will lead to extremely bad performance.

See Lux.Experimental.@debug_mode to construct this layer.

source
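A hedged sketch of the debugging workflow described above (the model and shapes are made up for illustration):

```julia
using Lux, Random

model = Chain(Dense(2 => 3, tanh), Dense(3 => 1))
debug_model = Lux.Experimental.@debug_mode model
ps, st = Lux.setup(Random.default_rng(), debug_model)

x = randn(Float32, 2, 4)
y, _ = debug_model(x, ps, st)   # logs the location of each inner layer as it runs
@assert size(y) == (1, 4)
```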

Tied Parameters

Lux.Experimental.share_parameters Function
julia
share_parameters(ps, sharing)
 share_parameters(ps, sharing, new_parameters)

Updates the parameters in ps with a common set of parameters new_parameters that are shared between each list in the nested list sharing. (That was kind of a mouthful, the example should make it clear).

Arguments

  • ps: Original parameters.

  • sharing: A nested list of lists of accessors of ps which need to share the parameters (see the example for details). Each list in the list must be disjoint.

  • new_parameters: If passed the length of new_parameters must be equal to the length of sharing. For each vector in sharing the corresponding parameter in new_parameters will be used. (If not passed, the parameters corresponding to the first element of each vector in sharing will be used).

Returns

Updated Parameters having the same structure as ps.

Example

julia
julia> model = Chain(; d1=Dense(2 => 4, tanh),
            d3=Chain(; l1=Dense(4 => 2), l2=Dense(2 => 4)), d2=Dense(4 => 2))
 Chain(
@@ -82,8 +82,8 @@
            ps.d3.l2.bias === ps.d1.bias &&
            ps.d2.weight === ps.d3.l1.weight &&
            ps.d2.bias === ps.d3.l1.bias
-true

ComponentArrays

ComponentArrays doesn't allow sharing parameters. Converting the returned parameters to a ComponentArray will silently cause the parameter sharing to be undone.

source


+ \ No newline at end of file diff --git a/dev/api/Lux/distributed_utils.html b/dev/api/Lux/distributed_utils.html index 0e3c38e43b..6f4482c4e1 100644 --- a/dev/api/Lux/distributed_utils.html +++ b/dev/api/Lux/distributed_utils.html @@ -5,15 +5,15 @@ Distributed Utils | Lux.jl Docs - + - + - - - + + + @@ -28,11 +28,11 @@ -
Skip to content

Distributed Utils

Note

These functionalities are available via the Lux.DistributedUtils module.

Backends

Lux.MPIBackend Type
julia
MPIBackend(comm = nothing)

Create an MPI backend for distributed training. Users should not use this function directly. Instead use DistributedUtils.get_distributed_backend(MPIBackend).

source

Lux.NCCLBackend Type
julia
NCCLBackend(comm = nothing, mpi_backend = nothing)

Create an NCCL backend for distributed training. Users should not use this function directly. Instead use DistributedUtils.get_distributed_backend(NCCLBackend).

source

Initialization

Lux.DistributedUtils.initialize Function
julia
initialize(backend::Type{<:AbstractLuxDistributedBackend}; kwargs...)

Initialize the given backend. Users can supply cuda_devices and amdgpu_devices to initialize the backend with the given devices. These can be set to missing to prevent initialization of the given device type. If set to nothing and the backend is functional, we assign GPUs in a round-robin fashion. Finally, a list of integers can be supplied to initialize the backend with the given devices.

Possible values for backend are:

  • MPIBackend: MPI backend for distributed training. Requires MPI.jl to be installed.

  • NCCLBackend: NCCL backend for CUDA distributed training. Requires CUDA.jl, MPI.jl, and NCCL.jl to be installed. This also wraps MPI backend for non-CUDA communications.

source

Lux.DistributedUtils.initialized Function
julia
initialized(backend::Type{<:AbstractLuxDistributedBackend})

Check if the given backend is initialized.

source

Lux.DistributedUtils.get_distributed_backend Function
julia
get_distributed_backend(backend::Type{<:AbstractLuxDistributedBackend})

Get the distributed backend for the given backend type. Possible values are:

  • MPIBackend: MPI backend for distributed training. Requires MPI.jl to be installed.

  • NCCLBackend: NCCL backend for CUDA distributed training. Requires CUDA.jl, MPI.jl, and NCCL.jl to be installed. This also wraps MPI backend for non-CUDA communications.

Danger

initialize(backend; kwargs...) must be called before calling this function.

source

Helper Functions

Lux.DistributedUtils.local_rank Function
julia
local_rank(backend::AbstractLuxDistributedBackend)

Get the local rank for the given backend.

source

Lux.DistributedUtils.total_workers Function
julia
total_workers(backend::AbstractLuxDistributedBackend)

Get the total number of workers for the given backend.

source

Communication Primitives

Lux.DistributedUtils.allreduce! Function
julia
allreduce!(backend::AbstractLuxDistributedBackend, sendrecvbuf, op)
-allreduce!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf, op)

Backend Agnostic API to perform an allreduce operation on the given buffer sendrecvbuf or sendbuf and store the result in recvbuf.

op allows a special DistributedUtils.avg operation that averages the result across all workers.

source

Lux.DistributedUtils.bcast! Function
julia
bcast!(backend::AbstractLuxDistributedBackend, sendrecvbuf; root::Int=0)
-bcast!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf; root::Int=0)

Backend Agnostic API to broadcast the given buffer sendrecvbuf or sendbuf to all workers into recvbuf. The value at root will be broadcasted to all other workers.

source

Lux.DistributedUtils.reduce! Function
julia
reduce!(backend::AbstractLuxDistributedBackend, sendrecvbuf, op; root::Int=0)
-reduce!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf, op; root::Int=0)

Backend Agnostic API to perform a reduce operation on the given buffer sendrecvbuf or sendbuf and store the result in recvbuf.

op allows a special DistributedUtils.avg operation that averages the result across all workers.

source

Lux.DistributedUtils.synchronize!! Function
julia
synchronize!!(backend::AbstractLuxDistributedBackend, ps; root::Int=0)

Synchronize the given structure ps using the given backend. The value at root will be broadcasted to all other workers.

source

Optimizers.jl Integration

Lux.DistributedUtils.DistributedOptimizer Type
julia
DistributedOptimizer(backend::AbstractLuxDistributedBackend, optimizer)

Wrap the optimizer in a DistributedOptimizer. Before updating the parameters, this averages the gradients across the processes using Allreduce.

Arguments

  • optimizer: An Optimizer compatible with the Optimisers.jl package

source

MLUtils.jl Integration

Lux.DistributedUtils.DistributedDataContainer Type
julia
DistributedDataContainer(backend::AbstractLuxDistributedBackend, data)

data must be compatible with the MLUtils interface. The returned container is compatible with the MLUtils interface and is used to partition the dataset across the available processes.

Load MLUtils.jl

MLUtils.jl must be installed and loaded before using this.

source
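Putting the pieces above together, a hypothetical skeleton of a distributed training setup. This is a sketch, not a runnable script: it assumes MPI.jl and Optimisers.jl are installed, requires an MPI launcher, and the model and learning rate are illustrative:

```julia
# Launch with e.g. `mpiexec -n 4 julia train.jl`
using Lux, MPI, Optimisers, Random
using Lux: DistributedUtils, MPIBackend

DistributedUtils.initialize(MPIBackend)
backend = DistributedUtils.get_distributed_backend(MPIBackend)

rank    = DistributedUtils.local_rank(backend)
workers = DistributedUtils.total_workers(backend)

model  = Dense(2 => 2)
ps, st = Lux.setup(Random.default_rng(), model)

# every worker starts from the parameters held by rank 0
ps = DistributedUtils.synchronize!!(backend, ps; root=0)

# gradients are averaged across workers (Allreduce) before each update
opt = DistributedUtils.DistributedOptimizer(backend, Optimisers.Adam(3.0f-4))
```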

+ \ No newline at end of file diff --git a/dev/api/Lux/interop.html b/dev/api/Lux/interop.html index ec6754ec9f..5d1a7756c8 100644 --- a/dev/api/Lux/interop.html +++ b/dev/api/Lux/interop.html @@ -5,15 +5,15 @@ Interoperability between Lux and other packages | Lux.jl Docs - + - + - - - + + + @@ -28,7 +28,7 @@ -
Skip to content

Interoperability between Lux and other packages

Switching from older frameworks

Flux Models to Lux Models

Flux.jl has been around in the Julia ecosystem for a long time and has a large userbase, hence we provide a way to convert Flux models to Lux models.

Tip

Accessing these functions requires manually loading Flux, i.e., using Flux must be present somewhere in the code for these to be used.

Adapt.adapt Method
julia
Adapt.adapt(from::FromFluxAdaptor, L)

Adapt a Flux model L to Lux model. See FromFluxAdaptor for more details.

source

Lux.FromFluxAdaptor Type
julia
FromFluxAdaptor(preserve_ps_st::Bool=false, force_preserve::Bool=false)

Convert a Flux Model to Lux Model.

active field

This always ignores the active field of some of the Flux layers. This is almost never going to be supported.

Keyword Arguments

  • preserve_ps_st: Set to true to preserve the states and parameters of the layer. This attempts the best possible way to preserve the original model. But it might fail. If you need to override possible failures, set force_preserve to true.

  • force_preserve: Some of the transformations with state and parameters preservation haven't been implemented yet. In these cases, if force_preserve is false, a warning will be printed and a core Lux layer will be returned. Else, it will create a FluxLayer.

Example

julia
julia> import Flux
 
 julia> using Adapt, Lux, Random
 
@@ -41,7 +41,7 @@
 julia> ps, st = Lux.setup(Random.default_rng(), m2);
 
 julia> size(first(m2(x, ps, st)))
-(2, 32)

source

Lux.FluxLayer Type
julia
FluxLayer(layer)

Serves as a compatibility layer between Flux and Lux. This uses Optimisers.destructure API internally.

Warning

Lux was written to overcome the limitations of destructure + Flux. It is recommended to rewrite your layer in Lux instead of using this layer.

Warning

Introducing this Layer in your model will lead to type instabilities, given the way Optimisers.destructure works.

Arguments

  • layer: Flux layer

Parameters

  • p: Flattened parameters of the layer

source

Using a different backend for Lux

Lux Models to Simple Chains

SimpleChains.jl provides a way to train Small Neural Networks really fast on CPUs. See this blog post for more details. This section describes how to convert Lux models to SimpleChains models while preserving the layer interface.

Tip

Accessing these functions requires manually loading SimpleChains, i.e., using SimpleChains must be present somewhere in the code for these to be used.

Adapt.adapt Method
julia
Adapt.adapt(from::ToSimpleChainsAdaptor, L::AbstractLuxLayer)

Adapt a Lux model to a SimpleChains model. See ToSimpleChainsAdaptor for more details.

source

Lux.ToSimpleChainsAdaptor Type
julia
ToSimpleChainsAdaptor(input_dims, convert_to_array::Bool=false)

Adaptor for converting a Lux Model to SimpleChains. The returned model is still a Lux model, and satisfies the AbstractLuxLayer interface, but all internal calculations are performed using SimpleChains.

Warning

There is no way to preserve trained parameters and states when converting to SimpleChains.jl.

Warning

Any kind of initialization function is not preserved when converting to SimpleChains.jl.

Arguments

  • input_dims: Tuple of input dimensions excluding the batch dimension. These must be of static type as SimpleChains expects.

  • convert_to_array: SimpleChains.jl by default outputs StrideArraysCore.StrideArray, but this might not compose well with other packages. If convert_to_array is set to true, the output will be converted to a regular Array.

Example

julia
julia> import SimpleChains
 
 julia> using Adapt, Lux, Random
 
@@ -58,9 +58,9 @@
 julia> x = randn(Float32, 28, 28, 1, 1);
 
 julia> size(first(simple_chains_model(x, ps, st)))
-(10, 1)

source

Lux.SimpleChainsLayer Type
julia
SimpleChainsLayer(layer, to_array::Union{Bool, Val}=Val(false))
-SimpleChainsLayer(layer, lux_layer, to_array)

Wraps a SimpleChains layer into a Lux layer. All operations are performed using SimpleChains but the layer satisfies the AbstractLuxLayer interface.

to_array is a boolean flag that determines whether the output should be converted to a regular Array or not. Default is false.

Arguments

  • layer: SimpleChains layer

  • lux_layer: Potentially equivalent Lux layer that is used for printing

source

+ \ No newline at end of file diff --git a/dev/api/Lux/layers.html b/dev/api/Lux/layers.html index 1ff7d7490e..b3d213894c 100644 --- a/dev/api/Lux/layers.html +++ b/dev/api/Lux/layers.html @@ -5,15 +5,15 @@ Built-In Layers | Lux.jl Docs - + - + - - - + + + @@ -35,7 +35,7 @@ layer_2 = NoOpLayer(), layer_3 = NoOpLayer(), ) # Total: 0 parameters, - # plus 0 states.

source

Lux.Chain Type
julia
Chain(layers...; name=nothing)
 Chain(; layers..., name=nothing)

Collects multiple layers / functions to be called in sequence on a given input.

Arguments

Extended Help

Inputs

Input x is passed sequentially to each layer, and must conform to the input requirements of the internal layers.

Returns

Parameters

States

Miscellaneous Properties

Example

julia
julia> Chain(Dense(2, 3, relu), BatchNorm(3), Dense(3, 2))
 Chain(
     layer_1 = Dense(2 => 3, relu),      # 9 parameters
@@ -50,7 +50,7 @@
     layer_2 = BatchNorm(3, affine=true, track_stats=true),  # 6 parameters, plus 7
     layer_3 = Dense(3 => 2),            # 8 parameters
 )         # Total: 23 parameters,
          #        plus 7 states.

source
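As a quick usage sketch (the layer sizes here are arbitrary and not part of the docstring above):

```julia
using Lux, Random

# Layers in a Chain are applied in sequence: x -> layer_1 -> layer_2 -> ...
model = Chain(Dense(2 => 16, relu), Dense(16 => 1))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)      # initialize parameters and states

x = randn(rng, Float32, 2, 8)       # 2 features, batch of 8
y, st_new = model(x, ps, st)        # every Lux layer returns (output, new_state)
size(y)                             # (1, 8)
```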

Lux.PairwiseFusion Type
julia
PairwiseFusion(connection, layers...; name=nothing)
 PairwiseFusion(connection; name=nothing, layers...)
 PairwiseFusion(; connection, layers..., name=nothing)
x1 → layer1 → y1 ↘
                   connection → layer2 → y2 ↘
Arguments

Extended Help

Inputs

  1. If the input x is a tuple of length N + 1, then the layers must be a tuple of length N. The computation is

julia
y = x[1]
for i in 1:N
    y = connection(x[i + 1], layers[i](y))
end
  2. Any other kind of input
julia
y = x
 for i in 1:N
     y = connection(x, layers[i](y))
end

Returns

Parameters

States

source
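A minimal sketch of the second input mode (a plain array rather than a tuple); the sizes and the `+` connection are illustrative, not from the docstring:

```julia
using Lux, Random

# With a non-tuple input, each step computes y = connection(x, layers[i](y)).
model = PairwiseFusion(+, Dense(3 => 3), Dense(3 => 3))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 3, 4)
y, st_new = model(x, ps, st)
```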

Lux.Parallel Type
julia
Parallel(connection, layers...; name=nothing)
 Parallel(connection; name=nothing, layers...)
 Parallel(; connection, layers..., name=nothing)

Create a layer which passes an input to each path in layers, before reducing the output with connection.

Arguments

Extended Help

Inputs

Returns

Parameters

States

See also SkipConnection which is Parallel with one identity.

Example

julia
julia> model = Parallel(nothing, Dense(2, 1), Dense(2, 1))
Parallel(
    layer_1 = Dense(2 => 1),            # 3 parameters
    layer_2 = Dense(2 => 1),            # 3 parameters
)         # Total: 6 parameters,
          #        plus 0 states.

julia> rng = Random.default_rng();
       ps, st = Lux.setup(rng, model);
       x1 = randn(rng, Float32, 2);
       x2 = randn(rng, Float32, 2);
 
 julia> size.(first(model((x1, x2), ps, st)))
((1,), (1,))

source

Lux.SkipConnection Type
julia
SkipConnection(layers, connection; name=nothing)
SkipConnection(; layers, connection, name=nothing)

Create a skip connection which consists of a layer or Chain of consecutive layers and a shortcut connection linking the block's input to the output through a user-supplied 2-argument callable. The first argument to the callable will be propagated through the given layer while the second is the unchanged, "skipped" input.

The simplest "ResNet"-type connection is just SkipConnection(layer, +).

Arguments

Extended Help

Inputs

Returns

Parameters

States

See Parallel for a more general implementation.

source
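An illustrative ResNet-style block (sizes are arbitrary):

```julia
using Lux, Random

# The `+` connection adds the block input back onto the layer output.
model = SkipConnection(Dense(4 => 4, relu), +)

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 4, 3)
y, st_new = model(x, ps, st)
size(y)                             # (4, 3), same shape as the input
```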

Lux.RepeatedLayer Type
julia
RepeatedLayer(model; repeats::Val = Val(10), input_injection::Val = Val(false))

Iteratively applies model for repeats number of times. The initial input is passed into the model repeatedly if input_injection = Val(true). This layer unrolls the computation; semantically, it is the same as:

julia
res = x
 for i in 1:repeats
     res, st = model(res, ps, st)
 end
julia
res = x
 for i in 1:repeats
     res, st = model((res, x), ps, st)
end

It is expected that repeats will be a reasonable number below 20; beyond that, compile times for gradients might be unreasonably high.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

Parameters

States

source
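A small sketch of the unrolled application (layer and repeat count are arbitrary):

```julia
using Lux, Random

# Applies the wrapped Dense layer 3 times with shared parameters.
model = RepeatedLayer(Dense(4 => 4, tanh); repeats=Val(3))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 4, 2)
y, st_new = model(x, ps, st)        # same as calling the Dense layer 3 times in a row
```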

Convolutional Layers

Lux.Conv Type
julia
Conv(k::NTuple{N,Integer}, (in_chs => out_chs)::Pair{<:Integer,<:Integer},
      activation=identity; init_weight=nothing, init_bias=nothing, stride=1,
     pad=0, dilation=1, groups=1, use_bias=True(), cross_correlation=False())

Standard convolutional layer.

Conv 2D

Image data should be stored in WHCN order (width, height, channels, batch). In other words, a 100 x 100 RGB image would be a 100 x 100 x 3 x 1 array, and a batch of 50 would be a 100 x 100 x 3 x 50 array. This has N = 2 spatial dimensions, and needs a kernel size like (5, 5), a 2-tuple of integers. To take convolutions along N feature dimensions, this layer expects as input an array with ndims(x) == N + 2, where size(x, N + 1) == in_chs is the number of input channels, and size(x, ndims(x)) is the number of observations in a batch.

Warning

Frameworks like PyTorch perform cross-correlation in their convolution layers. Pass cross_correlation=true to use cross-correlation instead.

Arguments

Extended Help

Keyword Arguments

Inputs

Returns

$$O_i = \left\lfloor \frac{I_i + p_i + p_{(i + N) \bmod |p|} - d_i \times (k_i - 1) - 1}{s_i} \right\rfloor + 1$$

Parameters

source
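A sketch tying the constructor to the WHCN input layout and the output-size formula (the sizes here are arbitrary):

```julia
using Lux, Random

# 5×5 kernel, 3 input channels -> 8 output channels; pad=2 preserves the
# spatial size at stride 1: (32 + 2 + 2 - 1*(5 - 1) - 1) ÷ 1 + 1 == 32
layer = Conv((5, 5), 3 => 8, relu; pad=2)

rng = Random.default_rng()
ps, st = Lux.setup(rng, layer)

x = randn(rng, Float32, 32, 32, 3, 4)   # WHCN: 32×32 images, 3 channels, batch of 4
y, st_new = layer(x, ps, st)
size(y)                                  # (32, 32, 8, 4)
```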

Lux.ConvTranspose Type
julia
ConvTranspose(k::NTuple{N,Integer}, (in_chs => out_chs)::Pair{<:Integer,<:Integer},
               activation=identity; init_weight=glorot_uniform, init_bias=zeros32,
               stride=1, pad=0, outpad=0, dilation=1, groups=1, use_bias=True(),
              cross_correlation=False())

Standard convolutional transpose layer.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

Parameters

source
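An illustrative upsampling use of the transpose convolution (sizes are arbitrary):

```julia
using Lux, Random

# A 2×2 kernel with stride 2 doubles the spatial resolution.
layer = ConvTranspose((2, 2), 8 => 3; stride=2)

rng = Random.default_rng()
ps, st = Lux.setup(rng, layer)

x = randn(rng, Float32, 16, 16, 8, 1)
y, st_new = layer(x, ps, st)
size(y)                                  # (32, 32, 3, 1)
```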

Dropout Layers

Lux.AlphaDropout Type
julia
AlphaDropout(p::Real)

AlphaDropout layer.

Arguments

Inputs

Returns

States

Call Lux.testmode to switch to test mode.

See also Dropout, VariationalHiddenDropout

source

Lux.Dropout Type
julia
Dropout(p; dims=:)

Dropout layer.

Arguments

Keyword Arguments

Inputs

Returns

States

Call Lux.testmode to switch to test mode.

See also AlphaDropout, VariationalHiddenDropout

source
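A sketch of switching dropout between training and inference via Lux.testmode (model is arbitrary):

```julia
using Lux, Random

model = Chain(Dense(10 => 10, relu), Dropout(0.5f0))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 10, 4)
y_train, _ = model(x, ps, st)            # dropout active during training
st_test = Lux.testmode(st)               # switch the state to inference mode
y_test, _ = model(x, ps, st_test)        # dropout is now disabled
```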

Lux.VariationalHiddenDropout Type
julia
VariationalHiddenDropout(p; dims=:)

VariationalHiddenDropout layer. The only difference from Dropout is that the mask is retained until Lux.update_state(l, :update_mask, Val(true)) is called.

Arguments

Keyword Arguments

Inputs

Returns

States

Call Lux.testmode to switch to test mode.

See also AlphaDropout, Dropout

source

Pooling Layers

Lux.AdaptiveLPPool Type
julia
AdaptiveLPPool(output_size; p=2)

Adaptive LP Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

Arguments

GPU Support

This layer is currently only supported on CPU.

Inputs

Returns

source

Lux.AdaptiveMaxPool Type
julia
AdaptiveMaxPool(output_size)

Adaptive Max Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

Arguments

Inputs

Returns

source

Lux.AdaptiveMeanPool Type
julia
AdaptiveMeanPool(output_size)

Adaptive Mean Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

Arguments

Inputs

Returns

source
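A sketch combining fixed-window and adaptive pooling (sizes are arbitrary):

```julia
using Lux, Random

# MaxPool halves the spatial size; AdaptiveMeanPool then forces a fixed 4×4 output.
model = Chain(Conv((3, 3), 1 => 4, relu), MaxPool((2, 2)), AdaptiveMeanPool((4, 4)))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 28, 28, 1, 2)    # 28×28 -> 26×26 (conv) -> 13×13 (pool) -> 4×4
y, st_new = model(x, ps, st)
size(y)                                  # (4, 4, 4, 2)
```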

Lux.GlobalLPPool Type
julia
GlobalLPPool(; p=2)

Global LP Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing LP pooling on the complete (w, h)-shaped feature maps.

GPU Support

This layer is currently only supported on CPU.

Inputs

Returns

source

Lux.GlobalMaxPool Type
julia
GlobalMaxPool()

Global Max Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing max pooling on the complete (w, h)-shaped feature maps.

Inputs

Returns

source

Lux.GlobalMeanPool Type
julia
GlobalMeanPool()

Global Mean Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing mean pooling on the complete (w, h)-shaped feature maps.

Inputs

Returns

source

Lux.LPPool Type
julia
LPPool(window; stride=window, pad=0, dilation=1, p=2)

LP Pooling layer, which replaces all pixels in a block of size window with the reduction operation: lp.

Arguments

Keyword Arguments

GPU Support

This layer is currently only supported on CPU.

Extended Help

Inputs

Returns

$$O_i = \left\lfloor \frac{I_i + p_i + p_{(i + N) \bmod |p|} - d_i \times (k_i - 1) - 1}{s_i} \right\rfloor + 1$$

source

Lux.MaxPool Type
julia
MaxPool(window; stride=window, pad=0, dilation=1)

Max Pooling layer, which replaces all pixels in a block of size window with the reduction operation: max.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

$$O_i = \left\lfloor \frac{I_i + p_i + p_{(i + N) \bmod |p|} - d_i \times (k_i - 1) - 1}{s_i} \right\rfloor + 1$$

source

Lux.MeanPool Type
julia
MeanPool(window; stride=window, pad=0, dilation=1)

Mean Pooling layer, which replaces all pixels in a block of size window with the reduction operation: mean.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

$$O_i = \left\lfloor \frac{I_i + p_i + p_{(i + N) \bmod |p|} - d_i \times (k_i - 1) - 1}{s_i} \right\rfloor + 1$$

source

Recurrent Layers

Lux.GRUCell Type
julia
GRUCell((in_dims, out_dims)::Pair{<:Int,<:Int}; use_bias=true, train_state::Bool=false,
        init_weight=nothing, init_bias=nothing, init_state=zeros32)

Gated Recurrent Unit (GRU) Cell

$$\begin{aligned}
r &= \sigma(W_{ir} \times x + b_{ir} + W_{hr} \times h_{prev} + b_{hr}) \\
z &= \sigma(W_{iz} \times x + b_{iz} + W_{hz} \times h_{prev} + b_{hz}) \\
n &= \tanh(W_{in} \times x + b_{in} + r \odot (W_{hn} \times h_{prev} + b_{hn})) \\
h_{new} &= (1 - z) \odot n + z \odot h_{prev}
\end{aligned}$$

Arguments

Inputs

Returns

Parameters

States

source

Lux.LSTMCell Type
julia
LSTMCell(in_dims => out_dims; use_bias::Bool=true, train_state::Bool=false,
          train_memory::Bool=false, init_weight=nothing, init_bias=nothing,
         init_state=zeros32, init_memory=zeros32)

Long Short-Term (LSTM) Cell

$$\begin{aligned}
i &= \sigma(W_{ii} \times x + W_{hi} \times h_{prev} + b_{i}) \\
f &= \sigma(W_{if} \times x + W_{hf} \times h_{prev} + b_{f}) \\
g &= \tanh(W_{ig} \times x + W_{hg} \times h_{prev} + b_{g}) \\
o &= \sigma(W_{io} \times x + W_{ho} \times h_{prev} + b_{o}) \\
c_{new} &= f \odot c_{prev} + i \odot g \\
h_{new} &= o \odot \tanh(c_{new})
\end{aligned}$$

Arguments

Inputs

Returns

Parameters

States

source

Lux.RNNCell Type
julia
RNNCell(in_dims => out_dims, activation=tanh; use_bias=True(), train_state=False(),
    init_bias=nothing, init_weight=nothing, init_state=zeros32)

An Elman RNN cell with activation (typically set to tanh or relu).

$$h_{new} = activation(weight_{ih} \times x + bias_{ih} + weight_{hh} \times h_{prev} + bias_{hh})$$

Arguments

Inputs

Returns

Parameters

States

source

Lux.Recurrence Type
julia
Recurrence(cell;
     ordering::AbstractTimeSeriesDataBatchOrdering=BatchLastIndex(),
     return_sequence::Bool=false)

Wraps a recurrent cell (like RNNCell, LSTMCell, GRUCell) to automatically operate over a sequence of inputs.

Relation to Flux.Recur

This is completely distinct from Flux.Recur. It doesn't make the cell stateful; rather, it allows operating on an entire sequence of inputs at once. See StatefulRecurrentCell for functionality similar to Flux.Recur.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

Tip

Frameworks like TensorFlow have special implementations of StackedRNNCells to handle sequentially composed RNN cells. In Lux, one can simply stack multiple Recurrence blocks in a Chain to achieve the same.

Chain(
     Recurrence(RNNCell(inputsize => latentsize); return_sequence=true),
     Recurrence(RNNCell(latentsize => latentsize); return_sequence=true),
    ⋮
     x -> stack(x; dims=2)
)

For some discussion on this topic, see https://github.com/LuxDL/Lux.jl/issues/472.

source
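An illustrative sequence pass (cell choice and sizes are arbitrary, not from the docstring):

```julia
using Lux, Random

# Run an LSTM over a whole sequence; with the default BatchLastIndex ordering
# the input layout is (features, timesteps, batch).
model = Recurrence(LSTMCell(4 => 8))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 4, 16, 12)       # 16 timesteps, batch of 12
y, st_new = model(x, ps, st)             # hidden state after the last timestep
```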

Lux.StatefulRecurrentCell Type
julia
StatefulRecurrentCell(cell)

Wraps a recurrent cell (like RNNCell, LSTMCell, GRUCell) and makes it stateful.

To avoid undefined behavior, once the processing of a single sequence of data is complete, update the state with Lux.update_state(st, :carry, nothing).

Arguments

Inputs

Returns

States

source

Lux.BidirectionalRNN Type
julia
BidirectionalRNN(cell::AbstractRecurrentCell,
     backward_cell::Union{AbstractRecurrentCell, Nothing}=nothing;
     merge_mode::Union{Function, Nothing}=vcat,
    ordering::AbstractTimeSeriesDataBatchOrdering=BatchLastIndex())

Bidirectional RNN wrapper.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

Parameters

States

source

Linear Layers

Lux.Bilinear Type
julia
Bilinear((in1_dims, in2_dims) => out, activation=identity; init_weight=nothing,
          init_bias=nothing, use_bias=True())
 Bilinear(in12_dims => out, activation=identity; init_weight=nothing,
         init_bias=nothing, use_bias=True())

Create a fully connected layer between two inputs and an output, and otherwise similar to Dense. Its output, given vectors x & y, is another vector z with, for all i in 1:out:

z[i] = activation(x' * W[i, :, :] * y + bias[i])

If x and y are matrices, then each column of the output z = B(x, y) is of this form, with B the Bilinear layer.

Arguments

Keyword Arguments

Input

Returns

Parameters

source

Lux.Dense Type
julia
Dense(in_dims => out_dims, activation=identity; init_weight=nothing,
      init_bias=nothing, use_bias=True())

Create a traditional fully connected layer, whose forward pass is given by: y = activation.(weight * x .+ bias)

Arguments

Keyword Arguments

Input

Returns

Parameters

source
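A small sketch of the forward pass stated above (sizes are arbitrary):

```julia
using Lux, Random

layer = Dense(3 => 2, tanh)

rng = Random.default_rng()
ps, st = Lux.setup(rng, layer)           # ps has `weight` and `bias` fields

x = randn(rng, Float32, 3, 5)
y, st_new = layer(x, ps, st)             # y == tanh.(ps.weight * x .+ ps.bias)
size(y)                                  # (2, 5)
```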

Lux.Embedding Type
julia
Embedding(in_dims => out_dims; init_weight=rand32)

A lookup table that stores embeddings of dimension out_dims for a vocabulary of size in_dims. When the vocabulary is multi-dimensional, the input is expected to be a tuple of Cartesian indices.

This layer is often used to store word embeddings and retrieve them using indices.

Arguments

Keyword Arguments

Input

Returns

source
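An illustrative lookup (vocabulary and embedding sizes are arbitrary):

```julia
using Lux, Random

emb = Embedding(100 => 16)               # vocabulary of 100, embeddings of length 16

rng = Random.default_rng()
ps, st = Lux.setup(rng, emb)

y, st_new = emb([5, 17, 42], ps, st)     # look up three token indices
size(y)                                  # (16, 3)
```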

Lux.Scale Type
julia
Scale(dims, activation=identity; init_weight=ones32, init_bias=zeros32, use_bias=True())

Create a sparsely connected layer with a very specific structure (only the diagonal elements are non-zero). The forward pass is given by: y = activation.(weight .* x .+ bias)

Arguments

Keyword Arguments

Input

Returns

Parameters

source

Misc. Helper Layers

Lux.FlattenLayer Type
julia
FlattenLayer(; N = nothing)

Flattens the passed array into a matrix.

Keyword Arguments

Inputs

Returns

Example

julia
julia> model = FlattenLayer()
 FlattenLayer{Nothing}(nothing)
 
 julia> rng = Random.default_rng();
       ps, st = Lux.setup(rng, model);
       x = randn(rng, Float32, (2, 2, 2, 2));
 
 julia> y, st_new = model(x, ps, st);
        size(y)
(8, 2)

source

Lux.Maxout Type
julia
Maxout(layers...)
 Maxout(; layers...)
Maxout(f::Function, n_alts::Int)

This contains a number of internal layers, each of which receives the same input. Its output is the elementwise maximum of the the internal layers' outputs.

Maxout over linear dense layers satisfies the universal approximation theorem. See [1].

See also Parallel to reduce with other operators.

Arguments

Extended Help

Inputs

Returns

Parameters

States

References

[1] Goodfellow, Warde-Farley, Mirza, Courville & Bengio "Maxout Networks" https://arxiv.org/abs/1302.4389

source
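A sketch of the elementwise-maximum reduction over parallel branches (sizes are arbitrary):

```julia
using Lux, Random

# Two parallel Dense branches; the output is their elementwise maximum.
model = Maxout(Dense(4 => 8), Dense(4 => 8))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 4, 2)
y, st_new = model(x, ps, st)
size(y)                                  # (8, 2)
```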

Lux.NoOpLayer Type
julia
NoOpLayer()

As the name suggests, this layer does nothing but allows pretty printing of layers. Whatever input is passed in is returned unchanged.

Example

julia
julia> model = NoOpLayer()
 NoOpLayer()
 
 julia> rng = Random.default_rng();
       ps, st = Lux.setup(rng, model);

julia> x = 1
 1
 
 julia> y, st_new = model(x, ps, st)
(1, NamedTuple())

source

Lux.ReshapeLayer Type
julia
ReshapeLayer(dims)

Reshapes the passed array to have a size of (dims..., :)

Arguments

Inputs

Returns

Example

julia
julia> model = ReshapeLayer((2, 2))
 ReshapeLayer(output_dims = (2, 2, :))
 
 julia> rng = Random.default_rng();
       ps, st = Lux.setup(rng, model);
       x = randn(rng, Float32, (4, 3));
 
 julia> y, st_new = model(x, ps, st);
        size(y)
(2, 2, 3)

source

Lux.SelectDim Type
julia
SelectDim(dim, i)

Return a view of all the data of the input x where the index for dimension dim equals i. Equivalent to view(x, :, :, ..., i, :, :, ...) where i is in position dim.

Arguments

Inputs

Returns

source

Lux.WrappedFunction Type
julia
WrappedFunction(f)

Wraps a stateless, parameterless function. Useful when adding a plain function to a Chain: Chain(x -> relu.(x)) would not work; the correct form is Chain((x, ps, st) -> (relu.(x), st)). An easier alternative is Chain(WrappedFunction(Base.Fix1(broadcast, relu)))

Arguments

Inputs

Returns

source

Lux.ReverseSequence Type
julia
ReverseSequence(dim = nothing)

Reverse the specified dimension dim of the passed array.

Arguments

Inputs

Returns

Example

julia
julia> model = ReverseSequence()
 ReverseSequence{Nothing}(nothing)
 
 julia> rng = Random.default_rng();
       ps, st = Lux.setup(rng, model);
        x = [1.0, 2.0, 3.0];
 
 julia> y, st_new = model(x, ps, st)
([3.0, 2.0, 1.0], NamedTuple())

source

Normalization Layers

Lux.BatchNorm Type
julia
BatchNorm(chs::Integer, activation=identity; init_bias=zeros32, init_scale=ones32,
           affine=True(), track_stats=True(), epsilon=1f-5, momentum=0.1f0)

Batch Normalization layer.

BatchNorm computes the mean and variance for each $D_1 \times \dots \times D_{N-2} \times 1 \times D_N$ input slice and normalises the input accordingly.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

Parameters

States

Use Lux.testmode during inference.

Example

julia
julia> Chain(Dense(784 => 64), BatchNorm(64, relu), Dense(64 => 10), BatchNorm(10))
 Chain(
     layer_1 = Dense(784 => 64),         # 50_240 parameters
    layer_2 = BatchNorm(64, relu, affine=true, track_stats=true),  # 128 parameters, plus 129
     layer_3 = Dense(64 => 10),          # 650 parameters
     layer_4 = BatchNorm(10, affine=true, track_stats=true),  # 20 parameters, plus 21
 )         # Total: 51_038 parameters,
          #        plus 150 states.

Warning

Passing a batch size of 1 during training will result in an error.

See also BatchNorm, InstanceNorm, LayerNorm, WeightNorm

source
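A sketch of the train/inference distinction noted above (model is arbitrary):

```julia
using Lux, Random

model = Chain(Dense(4 => 4), BatchNorm(4, relu))

rng = Random.default_rng()
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 4, 8)            # batch size must be > 1 during training
y, st_new = model(x, ps, st)             # updates the running statistics in the state
st_inf = Lux.testmode(st_new)            # use the stored statistics at inference time
y_inf, _ = model(x, ps, st_inf)
```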

Lux.GroupNorm Type
julia
GroupNorm(chs::Integer, groups::Integer, activation=identity; init_bias=zeros32,
           init_scale=ones32, affine=true, epsilon=1f-5)

Group Normalization layer.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

Parameters

States

Use Lux.testmode during inference.

Example

julia
julia> Chain(Dense(784 => 64), GroupNorm(64, 4, relu), Dense(64 => 10), GroupNorm(10, 5))
 Chain(
     layer_1 = Dense(784 => 64),         # 50_240 parameters
    layer_2 = GroupNorm(64, 4, relu, affine=true),  # 128 parameters
     layer_3 = Dense(64 => 10),          # 650 parameters
     layer_4 = GroupNorm(10, 5, affine=true),  # 20 parameters
 )         # Total: 51_038 parameters,
          #        plus 0 states.

See also GroupNorm, InstanceNorm, LayerNorm, WeightNorm

source

Lux.InstanceNorm Type
julia
InstanceNorm(chs::Integer, activation=identity; init_bias=zeros32, init_scale=ones32,
              affine=False(), track_stats=False(), epsilon=1f-5, momentum=0.1f0)

Instance Normalization. For details see [1].

Instance Normalization computes the mean and variance for each $D_1 \times \dots \times D_{N-2} \times 1 \times 1$ input slice and normalises the input accordingly.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

Parameters

States

Use Lux.testmode during inference.

Example

julia
julia> Chain(Dense(784 => 64), InstanceNorm(64, relu; affine=true), Dense(64 => 10),
            InstanceNorm(10, relu; affine=true))
 Chain(
    layer_1 = Dense(784 => 64),         # 50_240 parameters
    layer_2 = InstanceNorm(64, relu, affine=true, track_stats=false),  # 128 parameters, plus 1
     layer_3 = Dense(64 => 10),          # 650 parameters
     layer_4 = InstanceNorm(10, relu, affine=true, track_stats=false),  # 20 parameters, plus 1
 )         # Total: 51_038 parameters,
          #        plus 2 states.

References

[1] Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. "Instance normalization: The missing ingredient for fast stylization." arXiv preprint arXiv:1607.08022 (2016).

See also BatchNorm, GroupNorm, LayerNorm, WeightNorm

source

Lux.LayerNorm Type
julia
LayerNorm(shape::NTuple{N, Int}, activation=identity; epsilon=1f-5, dims=Colon(),
          affine=true, init_bias=zeros32, init_scale=ones32)

Computes mean and standard deviation over the whole input array, and uses these to normalize the whole array. Optionally applies an elementwise affine transformation afterwards.

Given an input array x, this layer computes

y = (x - E[x]) / √(Var[x] + ϵ) * γ + β

where γ & β are trainable parameters if affine=true.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

Parameters

source
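The formula above can be checked with a dependency-free sketch (layernorm_demo is an illustrative helper, not the Lux layer; the affine γ/β transform is omitted):

```julia
using Statistics

# Normalize the whole array by its own mean and (uncorrected) variance,
# i.e. y = (x - E[x]) / √(Var[x] + ϵ) with γ = 1 and β = 0.
function layernorm_demo(x::AbstractArray; ϵ=1f-5)
    μ = mean(x)
    σ² = var(x; corrected=false)
    return (x .- μ) ./ sqrt(σ² + ϵ)
end

y = layernorm_demo(Float32[1, 2, 3, 4])
```

The result has approximately zero mean and unit variance.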

Lux.WeightNorm Type
julia
WeightNorm(layer::AbstractLuxLayer, which_params::NTuple{N, Symbol},
           dims::Union{Tuple, Nothing}=nothing)

Applies weight normalization to a parameter in the given layer.

w = g · v / ‖v‖

Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This updates the parameters in which_params (e.g. weight) using two parameters: one specifying the magnitude (e.g. weight_g) and one specifying the direction (e.g. weight_v).

Arguments

Inputs

Returns

Parameters

States

source
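The reparameterization w = g · v / ‖v‖ is small enough to verify directly (weightnorm_demo is an illustrative helper, not the Lux layer):

```julia
using LinearAlgebra

# Direction comes from v, magnitude from g: the result always has norm g.
weightnorm_demo(g::Real, v::AbstractVector) = g .* v ./ norm(v)

w = weightnorm_demo(2.0, [3.0, 4.0])
```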

Upsampling

Lux.PixelShuffle Type
julia
PixelShuffle(r::Int)

Pixel shuffling layer with upscale factor r. Usually used for generating higher resolution images while upscaling them.

See NNlib.pixel_shuffle for more details.

Arguments

Inputs

Returns

source
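The underlying (w, h, c·r², n) → (w·r, h·r, c, n) rearrangement can be sketched without NNlib (pixel_shuffle_demo is an illustrative reimplementation for 4-D arrays):

```julia
# Move r×r groups of channels into the spatial dimensions (upscale factor r).
function pixel_shuffle_demo(x::AbstractArray{T,4}, r::Integer) where {T}
    w, h, c, n = size(x)
    @assert c % r^2 == 0 "channel count must be divisible by r^2"
    cout = c ÷ r^2
    y = reshape(x, w, h, r, r, cout, n)
    y = permutedims(y, (3, 1, 4, 2, 5, 6))
    return reshape(y, w * r, h * r, cout, n)
end

x = reshape(Float32.(1:4), 1, 1, 4, 1)  # 1×1 spatial, 4 channels
y = pixel_shuffle_demo(x, 2)            # 2×2 spatial, 1 channel
```

Each group of r² channels becomes one r×r spatial block.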

Lux.Upsample Type
julia
Upsample(mode = :nearest; [scale, size, align_corners=false])
Upsample(scale, mode = :nearest)

Upsampling Layer.

Layer Construction

Option 1

Exactly one of two keywords must be specified:

Option 2

Currently supported upsampling modes and corresponding NNlib's methods are:

Extended Help

Other Keyword Arguments

Inputs

Returns

source
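For :nearest mode with an integer scale, upsampling is just an inner repeat; a minimal single-channel 2-D sketch (upsample_nearest_demo is illustrative, not the Lux layer):

```julia
# Every pixel becomes an r×r block (:nearest mode, integer scale).
upsample_nearest_demo(x::AbstractMatrix, r::Integer) = repeat(x; inner=(r, r))

y = upsample_nearest_demo([1 2; 3 4], 2)
```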


Utilities

Training API

Helper Functions making it easier to train Lux.jl models.

Training is meant to be simple and provide extremely basic functionality. We provide basic building blocks which can be seamlessly composed to create complex training pipelines.

Lux.Training.TrainState Type
julia
TrainState

Training State containing:

  • model: Lux model.

  • parameters: Trainable Variables of the model.

  • states: Non-trainable Variables of the model.

  • optimizer: Optimizer from Optimisers.jl.

  • optimizer_state: Optimizer State.

  • step: Number of updates of the parameters made.

Internal fields:

  • cache: Cached values. Implementations are free to use this for whatever they want.

  • objective_function: Objective function might be cached.

Warning

Constructing this object directly shouldn't be considered a stable API. Use the version with the Optimisers API.

source

Lux.Training.TrainState Method
julia
TrainState(model::Lux.AbstractLuxLayer, ps, st, optimizer::Optimisers.AbstractRule)

Constructor for TrainState.

Arguments

  • ps: Parameters of the model.

  • st: States of the model.

  • model: Lux model.

  • optimizer: Optimizer from Optimisers.jl.

Returns

TrainState object.

source

Lux.Training.compute_gradients Function
julia
compute_gradients(ad::AbstractADType, objective_function::Function, data,
    ts::TrainState)

Compute the gradients of the objective function wrt parameters stored in ts.

Backends & AD Packages

Supported BackendsPackages Needed
AutoZygoteZygote.jl
AutoReverseDiff(; compile)ReverseDiff.jl
AutoTrackerTracker.jl
AutoEnzymeEnzyme.jl

Arguments

  • ad: Backend (from ADTypes.jl) used to compute the gradients.

  • objective_function: Objective function. The function must take 4 inputs – model, parameters, states and data. The function must return 3 values – loss, updated_state, and any computed statistics.

  • data: Data used to compute the gradients.

  • ts: Current Training State. See TrainState.

Return

A 4-Tuple containing:

  • grads: Computed Gradients.

  • loss: Loss from the objective function.

  • stats: Any computed statistics from the objective function.

  • ts: Updated Training State.

Known Limitations

  • AutoReverseDiff(; compile=true) is not supported for Lux models with non-empty state st. Additionally the returned stats must be empty (NamedTuple()). We catch these issues in most cases and throw an error.

Aliased Gradients

grads returned by this function might be aliased by the implementation of the gradient backend. For example, if you cache the grads from step i, the new gradients returned in step i + 1 might be aliased by the old gradients. If you want to prevent this, simply use copy(grads) or deepcopy(grads) to make a copy of the gradients.

source

Lux.Training.apply_gradients Function
julia
apply_gradients(ts::TrainState, grads)

Update the parameters stored in ts using the gradients grads.

Arguments

  • ts: TrainState object.

  • grads: Gradients of the loss function wrt ts.params.

Returns

Updated TrainState object.

source

Lux.Training.apply_gradients! Function
julia
apply_gradients!(ts::TrainState, grads)

Update the parameters stored in ts using the gradients grads. This is an in-place version of apply_gradients.

Arguments

  • ts: TrainState object.

  • grads: Gradients of the loss function wrt ts.params.

Returns

Updated TrainState object.

source

Lux.Training.single_train_step Function
julia
single_train_step(backend, obj_fn::F, data, ts::TrainState; return_gradients=True())

Perform a single training step. Computes the gradients using compute_gradients and updates the parameters using apply_gradients. All backends supported via compute_gradients are supported here.

In most cases you should use single_train_step! instead of this function.

Keyword Arguments

  • return_gradients: If True(), the gradients are returned. If False(), the returned gradients are nothing. Defaults to True(). This is only used for Reactant Backend.

Return

Returned values are the same as single_train_step!.

source

Lux.Training.single_train_step! Function
julia
single_train_step!(backend, obj_fn::F, data, ts::TrainState; return_gradients=True())

Perform a single training step. Computes the gradients using compute_gradients and updates the parameters using apply_gradients!. All backends supported via compute_gradients are supported here.

Keyword Arguments

  • return_gradients: If True(), the gradients are returned. If False(), the returned gradients are nothing. Defaults to True(). This is only used for Reactant Backend.

Return

Returned values are the same as compute_gradients. Note that despite the !, only the parameters in ts are updated in place. Users should use the returned ts object for further training steps; otherwise caching is lost and performance will be suboptimal (and absolutely terrible for backends like AutoReactant).

source
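The compute_gradients → apply_gradients! round-trip described above can be mimicked in a dependency-free sketch. Everything here is an illustrative stand-in: ToyTrainState for TrainState, a central-difference gradient for the AD backend, and plain gradient descent for Optimisers.jl:

```julia
# Toy stand-in for TrainState: parameters, a learning rate, and a step counter.
mutable struct ToyTrainState
    params::Vector{Float64}
    lr::Float64
    step::Int
end

# Objective takes (params, data) and returns a scalar loss.
loss_fn(p, data) = sum(abs2, p .- data)

# Central-difference stand-in for compute_gradients; returns (grads, loss, ts).
function toy_compute_gradients(f, data, ts::ToyTrainState; h=1e-6)
    g = similar(ts.params)
    for i in eachindex(ts.params)
        hi = copy(ts.params); hi[i] += h
        lo = copy(ts.params); lo[i] -= h
        g[i] = (f(hi, data) - f(lo, data)) / (2h)
    end
    return g, f(ts.params, data), ts
end

# Stand-in for apply_gradients!: update parameters in place and bump the step.
function toy_apply_gradients!(ts::ToyTrainState, g)
    ts.params .-= ts.lr .* g
    ts.step += 1
    return ts
end

ts = ToyTrainState([0.0, 0.0], 0.1, 0)
data = [1.0, 2.0]
for _ in 1:100
    g, _, _ = toy_compute_gradients(loss_fn, data, ts)
    toy_apply_gradients!(ts, g)
end
```

With Lux loaded, the same loop would construct a TrainState and call single_train_step!(AutoZygote(), loss, data, ts) instead.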

Loss Functions

Loss function objects accept two forms of inputs:

  1. ŷ and y, where ŷ is the predicted output and y is the target output.

  2. model, ps, st, (x, y) where model is the model, ps are the parameters, st are the states and (x, y) are the input and target pair. Then it returns the loss, updated states, and an empty named tuple. This makes them compatible with the Training API.

Warning

When using ChainRules.jl compatible AD (like Zygote), we only compute the gradients wrt the inputs and drop any gradients wrt the targets.

Lux.GenericLossFunction Type
julia
GenericLossFunction(loss_fn; agg = mean)

Takes any function loss_fn that maps 2 number inputs to a single number output. Additionally, array inputs are efficiently broadcasted and aggregated using agg.

julia
julia> mseloss = GenericLossFunction((ŷ, y) -> abs2(ŷ - y));
 
 julia> y_model = [1.1, 1.9, 3.1];
 
 julia> mseloss(y_model, 1:3)  0.01
true

Special Note

This function takes any of the LossFunctions.jl public functions into the Lux Losses API with efficient aggregation.

source

Lux.BinaryCrossEntropyLoss Type
julia
BinaryCrossEntropyLoss(; agg = mean, epsilon = nothing,
     label_smoothing::Union{Nothing, Real}=nothing,
     logits::Union{Bool, Val}=Val(false))

Binary Cross Entropy Loss with optional label smoothing and fused logit computation.

Returns the binary cross entropy loss computed as:

  • If logits is either false or Val(false):

    agg(-ỹ · log(ŷ + ϵ) - (1 - ỹ) · log(1 - ŷ + ϵ))
  • If logits is true or Val(true):

    agg((1 - ỹ) · ŷ - logσ(ŷ))

The value of ỹ is computed using label smoothing. If label_smoothing is nothing, then no label smoothing is applied. If label_smoothing is a real number in [0, 1], then the value of ỹ is:

ỹ = (1 - α) · y + α · 0.5

where α is the value of label_smoothing.

Extended Help

Example

julia
julia> bce = BinaryCrossEntropyLoss();
 
 julia> logitbce_ls = BinaryCrossEntropyLoss(label_smoothing=0.1, logits=Val(true));
 
 julia> logitbce_ls(y_model, y_bin) > logitbce(y_model, y_bin)
true

source

Lux.BinaryFocalLoss Type
julia
BinaryFocalLoss(; gamma = 2, agg = mean, epsilon = nothing)

Return the binary focal loss [1]. The model input, ŷ, is expected to be normalized (i.e. softmax output).

For γ=0 this is equivalent to BinaryCrossEntropyLoss.

Example

julia
julia> y = [0  1  0
             1  0  1];
 
 julia> ŷ = [0.268941  0.5  0.268941
 true
 
 julia> BinaryFocalLoss(gamma=0)(ŷ, y)  BinaryCrossEntropyLoss()(ŷ, y)
true

References

[1] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE international conference on computer vision. 2017.

source

Lux.CrossEntropyLoss Type
julia
CrossEntropyLoss(;
     agg=mean, epsilon=nothing, dims=1, logits::Union{Bool, Val}=Val(false),
     label_smoothing::Union{Nothing, Real}=nothing
 )

Return the cross entropy loss which is used in multi-class classification tasks. The input, ŷ, is expected to be normalized (i.e. softmax output) if logits is false or Val(false).

The loss is calculated as:

agg(-ỹ · log(ŷ + ϵ))

where ϵ is added for numerical stability. The value of ỹ is computed using label smoothing. If label_smoothing is nothing, then no label smoothing is applied. If label_smoothing is a real number in [0, 1], then the value of ỹ is calculated as:

ỹ = (1 - α) · y + α / (size along dim)

where α is the value of label_smoothing.

Extended Help

Example

julia
julia> y = [1  0  0  0  1
 true
 
 julia> CrossEntropyLoss(label_smoothing=0.15)(y_model, y)  1.5776052f0
true

source

Lux.DiceCoeffLoss Type
julia
DiceCoeffLoss(; smooth = true, agg = mean)

Return the Dice coefficient loss [1], which is used in segmentation tasks. The Dice coefficient is similar to the F1 score. The loss is calculated as:

agg(1 - (2 · Σ(y · ŷ) + α) / (Σ(y²) + Σ(ŷ²) + α))

where α is the smoothing factor (smooth).

Example

julia
julia> y_pred = [1.1, 2.1, 3.1];
 
 julia> DiceCoeffLoss()(y_pred, 1:3)   0.000992391663909964
 true
 true
 
 julia> DiceCoeffLoss()(reshape(y_pred, 3, 1), reshape(1:3, 3, 1))  0.000992391663909964
true

References

[1] Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. "V-net: Fully convolutional neural networks for volumetric medical image segmentation." 2016 fourth international conference on 3D vision (3DV). Ieee, 2016.

source

Lux.FocalLoss Type
julia
FocalLoss(; gamma = 2, dims = 1, agg = mean, epsilon = nothing)

Return the focal loss [1] which can be used in classification tasks with highly imbalanced classes. It down-weights well-classified examples and focuses on hard examples. The input, ŷ, is expected to be normalized (i.e. softmax output).

The modulating factor γ controls the down-weighting strength. For γ=0 this is equivalent to CrossEntropyLoss.

Example

julia
julia> y = [1  0  0  0  1
             0  1  0  1  0
             0  0  1  0  0]
 3×5 Matrix{Int64}:
  0.665241   0.665241   0.665241   0.665241   0.665241
 
 julia> FocalLoss()(ŷ, y)  1.1277556f0
true

References

[1] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE international conference on computer vision. 2017.

source

Lux.HingeLoss Function
julia
HingeLoss(; agg = mean)

Return the hinge loss given the prediction ŷ and true labels y (containing 1 or -1), calculated as:

agg(max(0, 1 - y · ŷ))

Usually used with classifiers like Support Vector Machines.

Example

julia
julia> loss = HingeLoss();
 
 julia> y_true = [1, -1, 1, 1];
 
 julia> y_pred = [0.1, 0.3, 1, 1.5];
 
 julia> loss(y_pred, y_true)  0.55
true

source

Lux.HuberLoss Function
julia
HuberLoss(; delta = 1, agg = mean)

Returns the Huber loss, calculated as:

L = 0.5 · |y - ŷ|²             if |y - ŷ| ≤ δ
    δ · (|y - ŷ| - 0.5 · δ)    otherwise

where δ is the delta parameter.

Example

julia
julia> y_model = [1.1, 2.1, 3.1];
 
 julia> HuberLoss()(y_model, 1:3)  0.005000000000000009
 true
 
 julia> HuberLoss(delta=0.05)(y_model, 1:3)  0.003750000000000005
true

source

Lux.KLDivergenceLoss Type
julia
KLDivergenceLoss(; dims = 1, agg = mean, epsilon = nothing, label_smoothing = nothing)

Return the Kullback-Leibler divergence loss between the predicted distribution ŷ and the true distribution y.

The KL divergence is a measure of how much one probability distribution is different from the other. It is always non-negative, and zero only when both the distributions are equal.

For epsilon and label_smoothing, see CrossEntropyLoss.

Example

julia
julia> p1 = [1 0; 0 1]
 2×2 Matrix{Int64}:
  1  0
  0  1
 0.0
 
 julia> KLDivergenceLoss(; epsilon=0)(p1, p2)
Inf

source

Lux.MAELoss Function
julia
MAELoss(; agg = mean)

Returns the loss corresponding to mean absolute error:

agg(|ŷ - y|)

Example

julia
julia> loss = MAELoss();
 
 julia> y_model = [1.1, 1.9, 3.1];
 
 julia> loss(y_model, 1:3)  0.1
true

source

Lux.MSELoss Function
julia
MSELoss(; agg = mean)

Returns the loss corresponding to mean squared error:

agg((ŷ - y)²)

Example

julia
julia> loss = MSELoss();
 
 julia> y_model = [1.1, 1.9, 3.1];
 
 julia> loss(y_model, 1:3)  0.01
true

source

Lux.MSLELoss Function
julia
MSLELoss(; agg = mean, epsilon = nothing)

Returns the loss corresponding to mean squared logarithmic error:

agg((log(ŷ + ϵ) - log(y + ϵ))²)

epsilon is added to both y and ŷ to prevent taking the logarithm of zero. If epsilon is nothing, then we set it to eps(<type of y and ŷ>).

Example

julia
julia> loss = MSLELoss();
 
 julia> loss(Float32[1.1, 2.2, 3.3], 1:3)  0.009084041f0
 true
 
 julia> loss(Float32[0.9, 1.8, 2.7], 1:3)  0.011100831f0
true

source

Lux.PoissonLoss Function
julia
PoissonLoss(; agg = mean, epsilon = nothing)

Return how much the predicted distribution ŷ diverges from the expected Poisson distribution y, calculated as:

agg(ŷ - y · log(ŷ))

Example

julia
julia> y_model = [1, 3, 3];  # data should only take integral values
 
 julia> PoissonLoss()(y_model, 1:3)  0.502312852219817
true

source

Lux.SiameseContrastiveLoss Function
julia
SiameseContrastiveLoss(; margin = true, agg = mean)

Return the contrastive loss [1] which can be useful for training Siamese Networks. It is given by:

agg((1 - y) · ŷ² + y · max(0, margin - ŷ)²)

Specify margin to set the baseline for distance at which pairs are dissimilar.

Example

julia
julia> ŷ = [0.5, 1.5, 2.5];
 
 julia> SiameseContrastiveLoss()(ŷ, 1:3)  -4.833333333333333
 true
 
 julia> SiameseContrastiveLoss(margin=2)(ŷ, 1:3)  -4.0
true

References

[1] Hadsell, Raia, Sumit Chopra, and Yann LeCun. "Dimensionality reduction by learning an invariant mapping." 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR'06). Vol. 2. IEEE, 2006.

source

Lux.SquaredHingeLoss Function
julia
SquaredHingeLoss(; agg = mean)

Return the squared hinge loss given the prediction ŷ and true labels y (containing 1 or -1), calculated as:

agg(max(0, 1 - y · ŷ)²)

Usually used with classifiers like Support Vector Machines.

Example

julia
julia> loss = SquaredHingeLoss();
 
 julia> y_true = [1, -1, 1, 1];
 
 julia> y_pred = [0.1, 0.3, 1, 1.5];
 
 julia> loss(y_pred, y_true)  0.625
true

source

LuxOps Module

Lux.LuxOps Module
julia
LuxOps

This module is a part of Lux.jl. It contains operations that are useful in a deep learning context. Additionally, certain operations here alias Base functions to behave more sensibly with GPUArrays.

source

Lux.LuxOps.eachslice Function
julia
eachslice(x, dims::Val)

Same as Base.eachslice but doesn't produce a SubArray for the slices if x is a GPUArray.

Additional dispatches for RNN helpers are also provided for TimeLastIndex and BatchLastIndex.

source

Lux.LuxOps.foldl_init Function
julia
foldl_init(op, x)
foldl_init(op, x, init)

Exactly the same as foldl(op, x; init) in the forward pass, but gives gradients wrt init in the backward pass.

source

Lux.LuxOps.getproperty Function
julia
getproperty(x, ::Val{v})
getproperty(x, ::StaticSymbol{v})

Similar to Base.getproperty but requires a Val (or Static.StaticSymbol). Additionally, if v is not present in x, then nothing is returned.

source

Lux.LuxOps.xlogx Function
julia
xlogx(x::Number)

Return x * log(x) for x ≥ 0, handling x == 0 by taking the limit from above, to get zero.

source

Lux.LuxOps.xlogy Function
julia
xlogy(x::Number, y::Number)

Return x * log(y) for y > 0, and zero when x == 0.

source
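The zero-handling of both helpers can be sketched with illustrative reimplementations (xlogx_demo / xlogy_demo are not the Lux definitions):

```julia
# x * log(x); at x == 0 the limit from above is 0, so avoid 0 * (-Inf) = NaN.
function xlogx_demo(x::Number)
    result = x * log(x)
    return iszero(x) ? zero(result) : result
end

# x * log(y); returns zero when x == 0 even if log(y) is infinite.
function xlogy_demo(x::Number, y::Number)
    result = x * log(y)
    return iszero(x) ? zero(result) : result
end
```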

Lux.LuxOps.istraining Function
julia
istraining(::Val{training})
 istraining(::StaticBool)
 istraining(::Bool)
istraining(st::NamedTuple)

Returns true if training is true or if st contains a training field with value true; otherwise returns false.

source

Lux.LuxOps.multigate Function
julia
multigate(x::AbstractArray, ::Val{N})

Split up x into N equally sized chunks (along dimension 1).

source
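The chunking along dimension 1 can be sketched as follows (multigate_demo is an illustrative reimplementation, not the Lux definition):

```julia
# Split x into N equal views along dimension 1 (the size must divide evenly).
function multigate_demo(x::AbstractArray, ::Val{N}) where {N}
    len = size(x, 1) ÷ N
    return ntuple(i -> selectdim(x, 1, ((i - 1) * len + 1):(i * len)), N)
end

a, b, c = multigate_demo(collect(1.0:6.0), Val(3))
```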

Recursive Operations

Lux.recursive_map Function
julia
recursive_map(f, x, args...)

Similar to fmap(f, args...) but with restricted support for the notion of "leaf" types. However, this allows for more efficient and type stable implementations of recursive operations.

Deprecation Warning

Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

How does this work?

For the following types it directly defines recursion rules:

  1. AbstractArray: If eltype is isbitstype, then f is applied to the array, else we recurse on the array.

  2. Tuple/NamedTuple: We recurse on the values.

  3. Number/Val/Nothing: We directly apply f.

  4. For all other types, we recurse on the fields using Functors.fmap.

Note

In most cases, users should gravitate towards Functors.fmap if it is being used outside of hot loops. Even for other cases, it is always recommended to verify the correctness of this implementation for specific use cases.

source

Lux.recursive_add!! Function
julia
recursive_add!!(x, y)

Recursively add the leaves of two nested structures x and y. In Functor language, this is equivalent to doing fmap(+, x, y), but this implementation uses type stable code for common cases.

Any leaves of x that are arrays and allow in-place addition will be modified in place.

Deprecation Warning

Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

source

Lux.recursive_copyto! Function
julia
recursive_copyto!(x, y)

Recursively copy the leaves of two nested structures x and y. In Functor language, this is equivalent to doing fmap(copyto!, x, y), but this implementation uses type stable code for common cases. Note that any immutable leaf will lead to an error.

Deprecation Warning

Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

source

Lux.recursive_eltype Function
julia
recursive_eltype(x, unwrap_ad_types = Val(false))

Recursively determine the element type of a nested structure x. This is equivalent to doing fmap(Lux.Utils.eltype, x), but this implementation uses type stable code for common cases.

For ambiguous inputs like nothing and Val types we return Bool as the eltype.

If unwrap_ad_types is set to Val(true) then for tracing and operator overloading based ADs (ForwardDiff, ReverseDiff, Tracker), this function will return the eltype of the unwrapped value.

source

Lux.recursive_make_zero Function
julia
recursive_make_zero(x)

Recursively create a zero value for a nested structure x. This is equivalent to doing fmap(zero, x), but this implementation uses type stable code for common cases.

See also Lux.recursive_make_zero!!.

Deprecation Warning

Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

source

Lux.recursive_make_zero!! Function
julia
recursive_make_zero!!(x)

Recursively create a zero value for a nested structure x. Leaves that can be mutated with in-place zeroing will be modified in place.

See also Lux.recursive_make_zero for fully out-of-place version.

Deprecation Warning

Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

source

Updating Floating Point Precision

By default, Lux uses Float32 for all parameters and states. To update the precision simply pass the parameters / states / arrays into one of the following functions.

Lux.f16 Function
julia
f16(m)

Converts the eltype of m floating point values to Float16. To avoid recursion into structs mark them with Functors.@leaf.

source

Lux.f32 Function
julia
f32(m)

Converts the eltype of m floating point values to Float32. To avoid recursion into structs mark them with Functors.@leaf.

source

Lux.f64 Function
julia
f64(m)

Converts the eltype of m floating point values to Float64. To avoid recursion into structs mark them with Functors.@leaf.

source

Lux.bf16 Function
julia
bf16(m)

Converts the eltype of m floating point values to BFloat16. To avoid recursion into structs mark them with Functors.@leaf.

Warning

BFloat16s.jl needs to be loaded before using this function.

Support for BFloat16

Most Lux operations aren't optimized for BFloat16 yet. Instead this is meant to be used together with Reactant.@compile.

source

Element Type Matching

Lux.match_eltype Function
julia
match_eltype(layer, ps, st, args...)

Helper function to "maybe" (see below) match the element type of args... with the element type of the layer's parameters and states. This is useful for debugging purposes, to track down accidental type-promotions inside Lux layers.

Extended Help

Controlling the Behavior via Preferences

Behavior of this function is controlled via the eltype_mismatch_handling preference. The following options are supported:

  • "none": This is the default behavior. In this case, this function is a no-op, i.e., it simply returns args....

  • "warn": This option will issue a warning if the element type of args... does not match the element type of the layer's parameters and states. The warning will contain information about the layer and the element type mismatch.

  • "convert": This option is same as "warn", but it will also convert the element type of args... to match the element type of the layer's parameters and states (for the cases listed below).

  • "error": Same as "warn", but instead of issuing a warning, it will throw an error.

Warning

We print the warning for type-mismatch only once.

Element Type Conversions

For "convert" only the following conversions are done:

Element Type of parameters/statesElement Type of args...Converted to
Float64IntegerFloat64
Float32Float64Float32
Float32IntegerFloat32
Float16Float64Float16
Float16Float32Float16
Float16IntegerFloat16

source

Stateful Layer

Lux.StatefulLuxLayer Type
julia
StatefulLuxLayer{FT}(model, ps, st)

Warning

This is not a Lux.AbstractLuxLayer

A convenience wrapper over Lux layers which stores the parameters and states internally. This is meant to be used in internal implementation of layers.

Usecases

  • Internal implementation of @compact heavily uses this layer.

  • In SciML codebases where propagating state might involving Boxing. For a motivating example, see the Neural ODE tutorial.

  • Facilitates Nested AD support in Lux. For more details on this feature, see the Nested AD Manual Page.

Static Parameters

  • If FT = true then the type of the state is fixed, i.e., typeof(last(model(x, ps, st))) == st.

  • If FT = false then type of the state might change. Note that while this works in all cases, it will introduce type instability.

Arguments

  • model: A Lux layer

  • ps: The parameters of the layer. This can be set to nothing, if the user provides the parameters on function call

  • st: The state of the layer

Inputs

  • x: The input to the layer

  • ps: The parameters of the layer. Optional, defaults to s.ps

Outputs

  • y: The output of the layer

source

Compact Layer

Lux.@compact Macro
julia
@compact(kw...) do x
Lux.LuxOps.istraining Function
julia
istraining(st::NamedTuple)

Returns true if training is true or if st contains a training field with value true. Else returns false.

source

Lux.LuxOps.multigate Function
julia
multigate(x::AbstractArray, ::Val{N})

Split up x into N equally sized chunks (along dimension 1).

source
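A quick sketch of splitting along the first dimension (the input array here is illustrative, not from the docstring):

julia
using Lux

x = reshape(Float32.(1:6), 6, 1)          # 6×1 array
a, b = Lux.LuxOps.multigate(x, Val(2))    # two 3×1 chunks along dimension 1
# a covers rows 1:3 of x, b covers rows 4:6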

Recursive Operations

Lux.recursive_map Function
julia
recursive_map(f, x, args...)

Similar to fmap(f, args...) but with restricted support for the notion of "leaf" types. However, this allows for more efficient and type stable implementations of recursive operations.

Deprecation Warning

Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

How does this work?

For the following types it directly defines recursion rules:

  1. AbstractArray: If eltype is isbitstype, then f is applied to the array, else we recurse on the array.

  2. Tuple/NamedTuple: We recurse on the values.

  3. Number/Val/Nothing: We directly apply f.

  4. For all other types, we recurse on the fields using Functors.fmap.

Note

In most cases, users should gravitate towards Functors.fmap if it is being used outside of hot loops. Even in other cases, it is recommended to verify the correctness of this implementation for your specific use case.

source

Lux.recursive_add!! Function
julia
recursive_add!!(x, y)

Recursively add the leaves of two nested structures x and y. In Functor language, this is equivalent to doing fmap(+, x, y), but this implementation uses type stable code for common cases.

Any leaves of x that are arrays and allow in-place addition will be modified in place.

Deprecation Warning

Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

source

Lux.recursive_copyto! Function
julia
recursive_copyto!(x, y)

Recursively copy the leaves of two nested structures x and y. In Functor language, this is equivalent to doing fmap(copyto!, x, y), but this implementation uses type stable code for common cases. Note that any immutable leaf will lead to an error.

Deprecation Warning

Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

source

Lux.recursive_eltype Function
julia
recursive_eltype(x, unwrap_ad_types = Val(false))

Recursively determine the element type of a nested structure x. This is equivalent to doing fmap(Lux.Utils.eltype, x), but this implementation uses type stable code for common cases.

For ambiguous inputs like nothing and Val types we return Bool as the eltype.

If unwrap_ad_types is set to Val(true) then for tracing and operator overloading based ADs (ForwardDiff, ReverseDiff, Tracker), this function will return the eltype of the unwrapped value.

source

Lux.recursive_make_zero Function
julia
recursive_make_zero(x)

Recursively create a zero value for a nested structure x. This is equivalent to doing fmap(zero, x), but this implementation uses type stable code for common cases.

See also Lux.recursive_make_zero!!.

Deprecation Warning

Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

source

Lux.recursive_make_zero!! Function
julia
recursive_make_zero!!(x)

Recursively create a zero value for a nested structure x. Leaves that can be mutated with in-place zeroing will be modified in place.

See also Lux.recursive_make_zero for fully out-of-place version.

Deprecation Warning

Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

source
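Since all of these recursive helpers are deprecated, migrating to Functors.fmap is mostly mechanical. A sketch (the parameter NamedTuple is illustrative; `fmap` accepts multiple trees for the binary case):

julia
using Functors

ps = (weight = ones(Float32, 3, 2), bias = zeros(Float32, 3))

zeroed  = fmap(zero, ps)    # out-of-place analogue of recursive_make_zero
doubled = fmap(+, ps, ps)   # out-of-place analogue of recursive_add!!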

Updating Floating Point Precision

By default, Lux uses Float32 for all parameters and states. To update the precision, simply pass the parameters / states / arrays into one of the following functions.

Lux.f16 Function
julia
f16(m)

Converts the floating point values of m to Float16. To avoid recursion into structs, mark them with Functors.@leaf.

source

Lux.f32 Function
julia
f32(m)

Converts the floating point values of m to Float32. To avoid recursion into structs, mark them with Functors.@leaf.

source

Lux.f64 Function
julia
f64(m)

Converts the floating point values of m to Float64. To avoid recursion into structs, mark them with Functors.@leaf.

source

Lux.bf16 Function
julia
bf16(m)

Converts the floating point values of m to BFloat16. To avoid recursion into structs, mark them with Functors.@leaf.

Warning

BFloat16s.jl needs to be loaded before using this function.

Support for BFloat16

Most Lux operations aren't optimized for BFloat16 yet. Instead this is meant to be used together with Reactant.@compile.

source
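For example, converting a parameter NamedTuple (values are illustrative; only floating point leaves are converted, so integer leaves pass through unchanged):

julia
using Lux

ps = (weight = rand(Float32, 3, 3), bias = zeros(Float32, 3), epochs = 10)
ps64 = Lux.f64(ps)   # weight/bias become Float64; epochs stays an Int
ps16 = Lux.f16(ps)   # weight/bias become Float16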

Element Type Matching

Lux.match_eltype Function
julia
match_eltype(layer, ps, st, args...)

Helper function to "maybe" (see below) match the element type of args... with the element type of the layer's parameters and states. This is useful for debugging purposes, to track down accidental type-promotions inside Lux layers.

Extended Help

Controlling the Behavior via Preferences

Behavior of this function is controlled via the eltype_mismatch_handling preference. The following options are supported:

  • "none": This is the default behavior. In this case, this function is a no-op, i.e., it simply returns args....

  • "warn": This option will issue a warning if the element type of args... does not match the element type of the layer's parameters and states. The warning will contain information about the layer and the element type mismatch.

  • "convert": This option is the same as "warn", but it will also convert the element type of args... to match the element type of the layer's parameters and states (for the cases listed below).

  • "error": Same as "warn", but instead of issuing a warning, it will throw an error.

Warning

We print the warning for type-mismatch only once.

Element Type Conversions

For "convert" only the following conversions are done:

Element Type of parameters/states | Element Type of args... | Converted to
----------------------------------|-------------------------|-------------
Float64                           | Integer                 | Float64
Float32                           | Float64                 | Float32
Float32                           | Integer                 | Float32
Float16                           | Float64                 | Float16
Float16                           | Float32                 | Float16
Float16                           | Integer                 | Float16

source
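A sketch of how a mismatch would be handled (the Dense layer and shapes are illustrative; with the default "none" preference this call is a no-op):

julia
using Lux, Random

model = Dense(2 => 3)                        # Float32 parameters by default
ps, st = Lux.setup(Random.default_rng(), model)

x = rand(Float64, 2, 4)                      # accidental Float64 input
x2 = Lux.match_eltype(model, ps, st, x)      # under "convert", x2 would be Float32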

Stateful Layer

Lux.StatefulLuxLayer Type
julia
StatefulLuxLayer{FT}(model, ps, st)

Warning

This is not a Lux.AbstractLuxLayer

A convenience wrapper over Lux layers which stores the parameters and states internally. This is meant to be used in internal implementation of layers.

Usecases

  • Internal implementation of @compact heavily uses this layer.

  • In SciML codebases where propagating state might involve boxing. For a motivating example, see the Neural ODE tutorial.

  • Facilitates Nested AD support in Lux. For more details on this feature, see the Nested AD Manual Page.

Static Parameters

  • If FT = true then the type of the state is fixed, i.e., typeof(last(model(x, ps, st))) == typeof(st).

  • If FT = false then the type of the state might change. Note that while this works in all cases, it will introduce type instability.

Arguments

  • model: A Lux layer

  • ps: The parameters of the layer. This can be set to nothing, if the user provides the parameters on function call

  • st: The state of the layer

Inputs

  • x: The input to the layer

  • ps: The parameters of the layer. Optional, defaults to s.ps

Outputs

  • y: The output of the layer

source
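A minimal usage sketch (layer and shapes are illustrative):

julia
using Lux, Random

model = Dense(2 => 3, tanh)
ps, st = Lux.setup(Random.default_rng(), model)

smodel = StatefulLuxLayer{true}(model, ps, st)   # FT = true: state type is fixed
y = smodel(rand(Float32, 2, 4))                  # only the output is returned;
                                                 # the updated state is kept inside smodel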

Compact Layer

Lux.@compact Macro
julia
@compact(kw...) do x
     ...
     @return y # optional (but recommended for best performance)
 end
 true

You may also specify a name for the model; it will be used instead of the default printout, which gives a verbatim representation of the code used to construct the model:

julia
julia> model = @compact(w=rand(3), name="Linear(3 => 1)") do x
            @return sum(w .* x)
        end
Linear(3 => 1)      # 3 parameters

This can be useful when using @compact to hierarchically construct complex models to be used inside a Chain.

Type Stability

If your input function f is type-stable but the generated model is not type stable, it should be treated as a bug. We will appreciate issues if you find such cases.

Parameter Count

Array parameters don't print the number of parameters on the side. However, they do account for the total number of parameters printed at the bottom.

source

Lux.@init_fn Macro
julia
@init_fn(fn, kind::Symbol = :parameter)

Create an initializer function for a parameter or state to be used in a Compact Lux Layer created using @compact.

Arguments

  • fn: The function to be used for initializing the parameter or state. This only takes a single argument rng.

  • kind: If set to :parameter, the initializer function will be used to initialize the parameters of the layer. If set to :state, the initializer function will be used to initialize the states of the layer.

Examples

julia
julia> using Lux, Random
 
 julia> r = @compact(w=@init_fn(rng->randn32(rng, 3, 2)),
            b=@init_fn(rng->randn32(rng, 3), :state)) do x
 (3,)
 
 julia> size(r([1, 2], ps, st)[1])
(3,)

source

Lux.@non_trainable Macro
julia
@non_trainable(x)

Mark a value as non-trainable. This bypasses the regular checks and places the value into the state of the layer.

Arguments

  • x: The value to be marked as non-trainable.

Examples

julia
julia> using Lux, Random
 
 julia> r = @compact(w=ones(3), w_fixed=@non_trainable(rand(3))) do x
            @return sum(w .* x .+ w_fixed)
 true
 
 julia> res isa Number
true

source

Miscellaneous

Lux.set_dispatch_doctor_preferences! Function
julia
set_dispatch_doctor_preferences!(mode::String)
set_dispatch_doctor_preferences!(; luxcore::String="disable", luxlib::String="disable")

Set the dispatch doctor preference for LuxCore and LuxLib packages.

mode can be "disable", "warn", or "error". For details on the different modes, see the DispatchDoctor.jl documentation.

If the preferences are already set, then no action is taken. Otherwise the preference is set. For changes to take effect, the Julia session must be restarted.

source
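For example, to opt into warnings for both packages (a session restart is then required for the preference to take effect):

julia
using Lux

Lux.set_dispatch_doctor_preferences!("warn")
# or, per package:
Lux.set_dispatch_doctor_preferences!(; luxcore="warn", luxlib="disable")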


LuxLib

Backend for Lux.jl

Apply Activation

LuxLib.API.fast_activation Function
julia
fast_activation(σ::F, x::AbstractArray) where {F}

Compute σ.(x) with the best possible implementation available. On CPUs we unroll the loop and use LoopVectorization.jl to vectorize the computation. On GPUs we simply use broadcasting.

Note

This function doesn't replace σ with NNlib.fast_act(σ, ...), that needs to be done by the user if needed.

Arguments

  • σ: Activation function

  • x: Input array

Returns

  • Output Array with the same size as x

source

LuxLib.API.fast_activation!! Function
julia
fast_activation!!(σ::F, x::AbstractArray) where {F}

Compute σ.(x) with the best possible implementation available. If it is possible to rewrite x in-place, it does so. If x is an immutable array, it falls back to the generic implementation.

Note

This function doesn't replace σ with NNlib.fast_act(σ, ...), that needs to be done by the user if needed.

Load SLEEFPirates.jl to get faster activations

Certain activation functions are replaced with specialized implementations from SLEEFPirates.jl for FP32. This might lead to faster performance but can cause a slight decrease in accuracy (in the floating point limit).

Arguments

  • σ: Activation function

  • x: Input array

Returns

  • Output Array with the same size as x

source
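A usage sketch for both variants (relu comes from NNlib; shapes are illustrative):

julia
using LuxLib, NNlib

x = randn(Float32, 4, 8)
y = fast_activation(relu, x)            # out-of-place; same size as x
z = fast_activation!!(relu, copy(x))    # may overwrite its (mutable) input buffer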

Batched Operations

LuxLib.API.batched_matmul Function
julia
batched_matmul(x, y)

Computes the batched matrix multiplication of x and y. For more details see the NNlib documentation on NNlib.batched_mul. This function is mostly a wrapper around batched_mul but attempts to be faster on CPUs.

Load LoopVectorization.jl to get faster batched matrix multiplication

On CPUs loading LoopVectorization adds faster implementations of batched matrix multiplication.

source
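A shape-level sketch (sizes are illustrative):

julia
using LuxLib

x = randn(Float32, 3, 4, 8)    # batch of eight 3×4 matrices
y = randn(Float32, 4, 5, 8)    # batch of eight 4×5 matrices
z = batched_matmul(x, y)       # size (3, 5, 8)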

Bias Activation

LuxLib.API.bias_activation Function
julia
bias_activation(σ, x, bias)

Applies the activation function σ elementwise to the result of broadcasted addition of x and bias along the penultimate dimension. A vector x is treated as a matrix with a single last dimension.

Arguments

  • σ: Activation function

  • x: Input to be transformed

  • bias: Bias to be added. Can be nothing.

See also bias_activation!!, fast_activation.

source

LuxLib.API.bias_activation!! Function
julia
bias_activation!!(σ, x, bias)

Same as bias_activation but might update x in-place if possible. Users should not rely on x being mutated, it is recommended to use it like y = bias_activation!!(σ, x, bias). If x is updated in-place, y aliases x.

See also bias_activation, fast_activation!!.

source
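For a matrix input the penultimate dimension is the first one, so the bias length must match size(x, 1). A sketch (relu from NNlib; shapes are illustrative):

julia
using LuxLib, NNlib

x = randn(Float32, 3, 4)
bias = randn(Float32, 3)                      # matches size(x, 1)
y = bias_activation(relu, x, bias)            # relu.(x .+ bias)
y2 = bias_activation!!(relu, copy(x), bias)   # may reuse the input buffer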

Convolutional Layers

LuxLib.API.fused_conv_bias_activation Function
julia
fused_conv_bias_activation(σ::F, weight::AbstractArray, x::AbstractArray,
    b::Optional{<:AbstractVector}, cdims::ConvDims) where {F}

Computes σ.(conv(x, weight, cdims) .+ b) (b is not exactly broadcasted like this, rather it is reshaped and broadcasted to the penultimate dimension) with the best possible implementation available. This operation fuses operations into a single kernel if possible, and minimizes reallocations by reusing the output buffer for multiple operations.

Arguments

  • σ: Activation function

  • weight: Weight tensor

  • x: Input tensor

  • b: Bias tensor (can be nothing)

  • cdims: ConvDims object

Notes on implementation

  • For CUDA Arrays, this uses fused CUDNN kernels when the activation is identity or relu. For other activations, it tries to fuse the operations on the Julia side.

  • If any of the inputs don't support setindexing (i.e., immutable arrays), we fall back to the generic non-mutating implementation.

  • Maximum memory reuse and operation fusion is guaranteed for ChainRules compatible AD backends or backends that support mutation. Backends like Tracker and ReverseDiff fall back to the generic implementation.

  • For Mixed-Precision Inputs on GPU, we type promote the inputs to the highest precision, with a warning.

source

Dropout

LuxLib.API.alpha_dropout Function
julia
alpha_dropout(rng::AbstractRNG, x, p, training)
alpha_dropout(rng::AbstractRNG, x, p, training, α, A, B)

Alpha Dropout: Dropout ensuring that the mean and variance of the output remain the same as the input. For details see [1]. Use the second call signature to avoid recomputing the constants for a fixed dropout probability.

Arguments

  • rng: Random number generator

  • x: Input Array

  • p: Probability of an element to be dropped out

  • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

  • α: -1.7580993408473766. Computed in the limit as x tends to -∞, where selu(x) = -λβ = α

  • A: Scaling factor for the mean

  • B: Scaling factor for the variance

Returns

  • Output Array after applying alpha dropout

  • Updated state for the random number generator

References

[1] Klambauer, Günter, et al. "Self-normalizing neural networks." Advances in neural information processing systems 30 (2017).

source
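A sketch of the first call signature in training mode (shapes are illustrative):

julia
using LuxLib, Random

rng = Random.default_rng()
x = randn(Float32, 4, 8)
y, rng_new = alpha_dropout(rng, x, 0.5f0, Val(true))   # output and updated RNG state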

LuxLib.API.dropout Function
julia
dropout(rng::AbstractRNG, x, p, training, invp, dims)
 dropout(rng::AbstractRNG, x, mask, p, training, update_mask::Union{Val, StaticBool},
    invp, dims)

Dropout: a simple way to prevent neural networks from overfitting. For details see [1].

Arguments

  • rng: Random number generator

  • x: Input Array

  • mask: Dropout Mask. If not used then it is constructed automatically

  • p: Probability of an element to be dropped out

  • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

  • update_mask: If Val(true) or True() then the mask is generated and used. Else, the mask provided is directly used

  • invp: The inverse keep probability, multiplied with the mask. Calculated as invp = 1 / (1 - p).

Returns

  • Output Array after applying dropout

  • Dropout Mask (if training == false, the returned value is meaningless)

  • Updated state for the random number generator

References

[1] Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." The journal of machine learning research 15.1 (2014): 1929-1958.

source
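A sketch of the first call signature (dims = : drops elements independently; shapes are illustrative):

julia
using LuxLib, Random

rng = Random.default_rng()
x = randn(Float32, 4, 8)
p = 0.5f0
y, mask, rng_new = dropout(rng, x, p, Val(true), 1 / (1 - p), :)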

Fully Connected Layers

LuxLib.API.fused_dense_bias_activation Function
julia
fused_dense_bias_activation(σ::F, weight::AbstractMatrix, x::AbstractMatrix,
     b::Optional{<:AbstractVector}) where {F}

Compute σ.(weight * x .+ b) with the best possible implementation available. Currently this implementation attempts to minimize reallocations by reusing the output buffer for multiple operations.

Arguments

  • σ: Activation function

  • weight: Weight matrix

  • x: Input matrix

  • b: Bias vector (can be nothing)

Notes on implementation

  • If any of the inputs don't support `setindex!` (i.e. immutable arrays), we fall back to the generic non-mutating implementation.

  • Maximum memory reuse and operation fusion is guaranteed for ChainRules compatible AD backends or backends that support mutation. Backends like Tracker and ReverseDiff fallback to the generic implementation.

  • For CUDA Arrays, this uses a special fused implementation via cuBLASLt.

  • For small CPU Arrays, we use LoopVectorization.jl. On x86_64 we use Octavian for medium sized matrices. This is overridden if special BLAS implementations are loaded (currently MKL, AppleAccelerate, and BLISBLAS).

Tip: Load Octavian.jl

Loading Octavian.jl enables a polyalgorithm that uses different backends based on the input sizes.
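The contract above can be checked against a plain, unfused reference. This sketch (ordinary Julia, no LuxLib) computes σ.(weight * x .+ b) directly, which is what the fused kernel is expected to match; σ = tanh is just an example choice.

```julia
# Unfused reference for σ.(weight * x .+ b), with σ = tanh as an example.
σ = tanh
weight = [1.0 2.0; 3.0 4.0]          # 2×2 weight matrix
x = reshape([1.0, 1.0], 2, 1)        # 2×1 input matrix (one sample)
b = [0.5, -0.5]                      # bias vector
y = σ.(weight * x .+ b)
@assert size(y) == (2, 1)
@assert y[1, 1] ≈ tanh(3.5) && y[2, 1] ≈ tanh(6.5)
```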

source

Normalization

LuxLib.API.batchnorm Function
julia
batchnorm(x, scale, bias, running_mean, running_var, training,
    σ=identity, momentum = 0.1f0, epsilon = eps(eltype(x)) ^ (5 // 7))

Batch Normalization. For details see [1].

Batch Normalization computes the mean and variance for each D_1 × ⋯ × D_{N-2} × 1 × D_N input slice and normalises the input accordingly.

Arguments

  • x: Input to be Normalized

  • scale: Scale factor (γ) (can be nothing)

  • bias: Bias factor (β) (can be nothing)

  • running_mean: Running mean (can be nothing)

  • running_var: Running variance (can be nothing)

  • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

  • σ: Activation function (default: identity)

  • momentum: Momentum for updating running mean and variance (default: 0.1f0)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

Returns

Normalized Array of same size as x. And a Named Tuple containing the updated running mean and variance.
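As a shape-level illustration (not the LuxLib kernel), inference-mode batch normalization with running statistics reduces to a per-channel affine rescaling. The sketch below assumes a 3-D (features, channels, batch) input with channels on dimension N - 1; `batchnorm_sketch` is an illustrative name.

```julia
using Statistics

# Inference-mode batchnorm sketch: normalize each channel with running
# statistics (μ, σ²), then apply scale γ and bias β, all broadcast per channel.
function batchnorm_sketch(x, γ, β, μ, σ², ϵ)
    shape = (1, length(μ), 1)
    x̂ = (x .- reshape(μ, shape)) ./ sqrt.(reshape(σ², shape) .+ ϵ)
    return x̂ .* reshape(γ, shape) .+ reshape(β, shape)
end

x = randn(4, 3, 8)
y = batchnorm_sketch(x, ones(3), zeros(3), zeros(3), ones(3), 1e-5)
@assert size(y) == size(x)
```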

References

[1] Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." International conference on machine learning. PMLR, 2015.

source

LuxLib.API.groupnorm Function
julia
groupnorm(x, scale, bias, groups::Int, σ::F=identity,
    epsilon=eps(eltype(x)) ^ (5 // 7))

Group Normalization. For details see [1].

This op is similar to batch normalization, but statistics are shared across equally-sized groups of channels and not shared across batch dimension. Thus, group normalization does not depend on the batch composition and does not require maintaining internal state for storing statistics.

Arguments

  • x: Input to be Normalized

  • scale: Scale factor (γ) (can be nothing)

  • bias: Bias factor (β) (can be nothing)

  • groups: Number of groups

  • σ: Activation function (default: identity)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

Returns

The normalized array is returned.
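A minimal sketch of the grouping described above, for a 3-D input with channels on dimension 2 (plain Julia, illustrative only, not the LuxLib kernel):

```julia
using Statistics

# Split C channels into `groups`, then normalize over the spatial and
# within-group channel dimensions for each (group, batch-element) pair.
function groupnorm_sketch(x::AbstractArray{<:Real,3}, groups::Int, ϵ)
    W, C, N = size(x)
    @assert C % groups == 0
    xr = reshape(x, W, C ÷ groups, groups, N)
    μ = mean(xr; dims=(1, 2))
    σ² = var(xr; dims=(1, 2), corrected=false)
    return reshape((xr .- μ) ./ sqrt.(σ² .+ ϵ), W, C, N)
end

y = groupnorm_sketch(randn(5, 6, 2), 3, 1e-5)
@assert size(y) == (5, 6, 2)
```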

References

[1] Wu, Yuxin, and Kaiming He. "Group normalization." Proceedings of the European conference on computer vision (ECCV). 2018.

source

LuxLib.API.instancenorm Function
julia
instancenorm(x, scale, bias, training, act, epsilon = eps(eltype(x)) ^ (5 // 7))
 instancenorm(x, scale, bias, running_mean, running_var, training, act, momentum,
    epsilon = eps(eltype(x)) ^ (5 // 7))

Instance Normalization. For details see [1].

Instance Normalization computes the mean and variance for each D_1 × ⋯ × D_{N-2} × 1 × 1 input slice and normalises the input accordingly.

Arguments

  • x: Input to be Normalized (must be at least 3D)

  • scale: Scale factor (γ) (can be nothing)

  • bias: Bias factor (β) (can be nothing)

  • running_mean: Running mean (can be nothing)

  • running_var: Running variance (can be nothing)

  • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

  • σ: Activation function (default: identity)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

  • momentum: Momentum for updating running mean and variance (default: 0.1f0)

Returns

Normalized Array of same size as x. And a Named Tuple containing the updated running mean and variance.
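Per the slice description above, instance normalization of a 3-D (width, channels, batch) array normalizes over the spatial dimension only, separately for each channel and batch element. A plain-Julia sketch (illustrative; no running statistics or affine parameters):

```julia
using Statistics

# Normalize over dim 1 (spatial) for each (channel, batch-element) slice.
function instancenorm_sketch(x::AbstractArray{<:Real,3}, ϵ)
    μ = mean(x; dims=1)
    σ² = var(x; dims=1, corrected=false)
    return (x .- μ) ./ sqrt.(σ² .+ ϵ)
end

y = instancenorm_sketch(randn(7, 3, 2), 1e-5)
@assert size(y) == (7, 3, 2)
@assert all(abs.(mean(y; dims=1)) .< 1e-8)   # each slice has ~zero mean
```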

References

[1] Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. "Instance normalization: The missing ingredient for fast stylization." arXiv preprint arXiv:1607.08022 (2016).

source

LuxLib.API.layernorm Function
julia
layernorm(x::AbstractArray{xT, N}, scale, bias, σ = identity, dims=1:(N - 1),
    epsilon = eps(eltype(x)) ^ (5 / 7)) where {xT, N}

Layer Normalization. For details see [1].

Given an input array x, this layer computes

y = (x − E[x]) / √(Var[x] + ϵ) * γ + β

and applies the activation function σ elementwise to y.

Arguments

  • x: Input to be Normalized

  • scale: Scale factor (γ) (can be nothing)

  • bias: Bias factor (β) (can be nothing)

  • σ: Activation function (default: identity)

  • dims: Dimensions along which the mean and std of x is computed. If nothing is passed, the dims are inferred based on the dimensions of scale and bias. For example, if x is N dimensional and scale and bias are M dimensional, then the dims will be 1:(N - M).

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

Returns

Normalized Array of same size as x.
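The formula above can be written out directly. This sketch (plain Julia with the Statistics stdlib, not the LuxLib kernel) normalizes over `dims` and then applies γ and β; `layernorm_sketch` is an illustrative name.

```julia
using Statistics

# Normalize over `dims`, then scale by γ and shift by β (broadcasted).
function layernorm_sketch(x, γ, β; dims=1, ϵ=1e-5)
    μ = mean(x; dims=dims)
    σ² = var(x; dims=dims, corrected=false)
    return (x .- μ) ./ sqrt.(σ² .+ ϵ) .* γ .+ β
end

x = randn(4, 8)                       # (features, batch)
y = layernorm_sketch(x, ones(4), zeros(4))
@assert size(y) == size(x)
@assert all(abs.(mean(y; dims=1)) .< 1e-8)   # per-sample zero mean
```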

References

[1] Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer normalization." arXiv preprint arXiv:1607.06450 (2016).

source

Helper Functions

LuxLib.internal_operation_mode Function
julia
internal_operation_mode(xs::Tuple)
internal_operation_mode(x::AbstractArray)

Returns the internal operation mode for the given array(s). This is useful to define custom implementations using different backends like simple Julia broadcasting, Kernel Abstractions, Loop Vectorization, etc.

Currently supported modes are:

  • GenericBroadcastOp: This is the fallback for most types. For the following types this is the preferred mode:

    • Arrays with fast_scalar_indexing set to false.

    • Static Arrays

    • ReverseDiff Arrays

    • Tracker Arrays

    • ForwardDiff.Dual Arrays

    • Complex Arrays

  • GPUBroadcastOp{dev}: GPU arrays, where dev is obtained from get_device_type(xs). Dispatches on this option should preferably use KernelAbstractions or specialized vendor implementations.

  • LoopedArrayOp: CPU arrays that can be optimized using SIMD Loops, ideally using LoopVectorization.jl or Polyester.jl.
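These mode types are intended to be dispatched on. The pattern looks roughly like the following, where the mode structs mirror the names above and `fast_sum` is a hypothetical user function, not part of LuxLib:

```julia
# Dispatch a custom kernel on an operation-mode type (illustrative).
abstract type OpMode end
struct GenericBroadcastOpSketch <: OpMode end
struct LoopedArrayOpSketch <: OpMode end

fast_sum(::GenericBroadcastOpSketch, x) = sum(x)   # safe generic fallback
function fast_sum(::LoopedArrayOpSketch, x)        # SIMD-friendly CPU loop
    s = zero(eltype(x))
    @simd for v in x
        s += v
    end
    return s
end

@assert fast_sum(LoopedArrayOpSketch(), collect(1.0:4.0)) == 10.0
@assert fast_sum(GenericBroadcastOpSketch(), collect(1.0:4.0)) == 10.0
```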

source


NNlib.logaddexp Function
julia
logaddexp(a, b)

Adds a and b in log-space, such that the result equals log(exp(a) + exp(b)).
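A numerically stable way to realize this identity is to factor out the larger argument; this is a sketch of the standard trick, independent of NNlib's actual implementation:

```julia
# log(exp(a) + exp(b)) without overflow: pull out max(a, b).
logaddexp_sketch(a, b) = max(a, b) + log1p(exp(-abs(a - b)))

@assert logaddexp_sketch(log(2.0), log(3.0)) ≈ log(5.0)
@assert logaddexp_sketch(1000.0, 1000.0) ≈ 1000.0 + log(2.0)  # no overflow
```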

source

NNlib.depthwiseconv_direct! Function
julia
depthwiseconv_direct!(y, x, w, cdims; alpha=1, beta=0)

Direct depthwise convolution implementation; used for debugging, tests, and mixing/matching of strange datatypes within a single convolution. Uses a naive nested for-loop implementation and does not attempt to optimize performance. Rather, this implementation is intended to be maximally understandable and debuggable, to aid in testing other, more performant implementations. We also explicitly support mixing and matching of strange datatypes, so that if the user really wants to convolve an image of UInt8s with a Float16 kernel, storing the result in a Float32 output, there is at least a function call for that madness.

One subtlety about depthwise convolutions: the shape of a depthwise convolutional kernel is (spatial_dims..., C_mult, C_in), so the axis that must match the number of channels in x is the last one, not the second-to-last as in a normal dense convolution.
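The kernel-shape convention can be made concrete with a quick check (the sizes here are illustrative):

```julia
# Depthwise kernel layout: (spatial_w, spatial_h, C_mult, C_in).
c_in, c_mult = 4, 2
w = randn(3, 3, c_mult, c_in)
# The LAST axis matches the input channels (unlike dense conv kernels):
@assert size(w, ndims(w)) == c_in
# A depthwise conv produces C_mult * C_in output channels:
@assert size(w, 3) * size(w, 4) == 8
```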

See the docstring for conv_direct!() for more on the optional parameters.

source

NNlib.im2col! Function
julia
im2col!(col, x, cdims)

Converts a 3d image x into a matrix col for usage with GEMM-calculated convolution. Patches of x of size (kernel_w, kernel_h, kernel_d, C_in) will be extracted and laid out along the rows of col, one for each output pixel. This routine is used by all im2col-based convolutions, just with extra singleton dimensions added in the case of 2d or 1d images.
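The patch layout is easiest to see in one dimension. This sketch (not NNlib's `im2col!`, which handles 3-D images and writes in place) extracts sliding windows of length k into the rows of a matrix:

```julia
# 1-D im2col sketch: row i of `col` holds the patch x[i:i+k-1].
function im2col_1d(x::AbstractVector, k::Int)
    n = length(x) - k + 1
    col = similar(x, n, k)
    for i in 1:n
        col[i, :] .= @view x[i:i+k-1]
    end
    return col
end

@assert im2col_1d([1.0, 2.0, 3.0, 4.0], 2) == [1.0 2.0; 2.0 3.0; 3.0 4.0]
```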

source

NNlib.predilate Function
julia
predilate(x, dilation::Tuple)

Places elements of x within a lattice of zeros, used in expressing a transposed convolution in terms of normal convolution. Note that while we call this "predilation" for aesthetic reasons, you are typically passing a "stride" value into here. Yes, transposed convolution is confusing.
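In one dimension, predilation simply interleaves zeros between the existing elements. A sketch of the idea (illustrative, not NNlib's implementation, which handles full tuples of dilations):

```julia
# Place elements of x on a stride-d lattice of zeros (1-D case).
function predilate_1d(x::AbstractVector, d::Int)
    y = zeros(eltype(x), d * (length(x) - 1) + 1)
    y[1:d:end] .= x
    return y
end

@assert predilate_1d([1.0, 2.0, 3.0], 2) == [1.0, 0.0, 2.0, 0.0, 3.0]
```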

source

NNlib.safe_div Function
julia
safe_div(x, y)

Returns x/y unless y==0, in which case it just returns x. (Used internally by scatter.)
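The stated contract is a one-liner; a sketch matching it (`safe_div_sketch` is an illustrative name):

```julia
# x/y, except that a zero denominator passes x through unchanged.
safe_div_sketch(x, y) = iszero(y) ? x : x / y

@assert safe_div_sketch(6, 3) == 2.0
@assert safe_div_sketch(5, 0) == 5   # y == 0 ⇒ return x unchanged
```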

source

julia
julia> @jet sum(1, 1) target_modules=(Base, Core) opt_broken=true call_broken=true
Test Broken
  Expression: #= REPL[21]:1 =# JET.@test_opt target_modules = (Base, Core) sum(1, 1)

source


LuxTestUtils.jet_target_modules! Function
julia
jet_target_modules!(list::Vector{String}; force::Bool=false)

This sets target_modules for all JET tests when using @jet.

source

Gradient Correctness

LuxTestUtils.test_gradients Function
julia
test_gradients(f, args...; skip_backends=[], broken_backends=[], kwargs...)

Test the gradients of f with respect to args using the specified backends.

Backend        | ADType            | Notes
-------------- | ----------------- | -----------------
Zygote.jl      | AutoZygote()      |
Tracker.jl     | AutoTracker()     |
ReverseDiff.jl | AutoReverseDiff() |
ForwardDiff.jl | AutoForwardDiff() | len ≤ 100
FiniteDiff.jl  | AutoFiniteDiff()  | len ≤ 100
Enzyme.jl      | AutoEnzyme()      | Only Reverse Mode

Arguments

Keyword Arguments

Example

julia
julia> f(x, y, z) = x .+ sum(abs2, y.t) + sum(y.x.z)
 
 julia> x = (; t=rand(10), x=(z=[2.0],))
 
julia> test_gradients(f, 1.0, x, nothing)

source

LuxTestUtils.@test_gradients Macro
julia
@test_gradients(f, args...; kwargs...)

See the documentation of test_gradients for more details. This macro provides correct line information for the failing tests.

source

Extensions to @test

LuxTestUtils.@test_softfail Macro
julia
@test_softfail expr

Evaluate expr and record a test result. If expr throws an exception, the test result will be recorded as an error. If expr returns a value, and it is not a boolean, the test result will be recorded as an error.

If the test result is false then the test will be recorded as a broken test, else it will be recorded as a pass.

source


MLDataDevices

MLDataDevices.jl is a lightweight package defining rules for transferring data across devices.

Preferences

MLDataDevices.gpu_backend! Function
julia
gpu_backend!() = gpu_backend!("")
gpu_backend!(backend) = gpu_backend!(string(backend))
gpu_backend!(backend::AbstractGPUDevice)
gpu_backend!(backend::String)

Creates a LocalPreferences.toml file with the desired GPU backend.

If backend == "", then the gpu_backend preference is deleted. Otherwise, backend is validated to be one of the possible backends and the preference is set to backend.

If a new backend is successfully set, then the Julia session must be restarted for the change to take effect.

source

Data Transfer

MLDataDevices.cpu_device Function
julia
cpu_device() -> CPUDevice()

Return a CPUDevice object which can be used to transfer data to CPU.

source

MLDataDevices.gpu_device Function
julia
gpu_device(device_id::Union{Nothing, Integer}=nothing;
    force::Bool=false) -> AbstractDevice

Selects GPU device based on the following criteria:

  1. If gpu_backend preference is set and the backend is functional on the system, then that device is selected.

  2. Otherwise, an automatic selection algorithm is used. We go over possible device backends in the order specified by supported_gpu_backends() and select the first functional backend.

  3. If no GPU device is functional and force is false, then cpu_device() is invoked.

  4. If nothing works, an error is thrown.

Arguments

Warning

device_id is only applicable for CUDA and AMDGPU backends. For Metal, oneAPI and CPU backends, device_id is ignored and a warning is printed.

Warning

gpu_device won't select a CUDA device unless both CUDA.jl and cuDNN.jl are loaded. This is to ensure that deep learning operations work correctly. Nonetheless, if cuDNN is not loaded you can still manually create a CUDADevice object and use it (e.g. dev = CUDADevice()).

Keyword Arguments

source

MLDataDevices.reactant_device Function
julia
reactant_device(;
    force::Bool=false, client=missing, device=missing
) -> Union{ReactantDevice, CPUDevice}

Return a ReactantDevice object if functional. Otherwise, throw an error if force is true. Falls back to CPUDevice if force is false.

client and device are used to specify the client and particular device to use. If not specified, then the default client and index are used.

Danger

This is an experimental feature and might change without deprecations

source

Miscellaneous

MLDataDevices.reset_gpu_device! Function
julia
reset_gpu_device!()

Resets the selected GPU device. This is useful when automatic GPU selection needs to be run again.

source

MLDataDevices.supported_gpu_backends Function
julia
supported_gpu_backends() -> Tuple{String, ...}

Return a tuple of supported GPU backends.

Warning

This is not the list of functional backends on the system, but rather backends which MLDataDevices.jl supports.

source

MLDataDevices.default_device_rng Function
julia
default_device_rng(::AbstractDevice)

Returns the default RNG for the device. This can be used to directly generate parameters and states on the device using WeightInitializers.jl.

source

MLDataDevices.get_device Function
julia
get_device(x) -> dev::AbstractDevice | Exception | Nothing

If all arrays (on the leaves of the structure) are on the same device, we return that device. Otherwise, we throw an error. If the object is device agnostic, we return nothing.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

See also get_device_type for a faster alternative that can be used for dispatch based on device type.

source

MLDataDevices.get_device_type Function
julia
get_device_type(x) -> Type{<:AbstractDevice} | Exception | Type{Nothing}

Similar to get_device but returns the type of the device instead of the device itself. This value is often a compile-time constant and is recommended over get_device when defining dispatches based on the device type.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

source

MLDataDevices.loaded Function
julia
loaded(x::AbstractDevice) -> Bool
loaded(::Type{<:AbstractDevice}) -> Bool

Checks if the trigger package for the device is loaded. Trigger packages are as follows:

source

MLDataDevices.functional Function
julia
functional(x::AbstractDevice) -> Bool
functional(::Type{<:AbstractDevice}) -> Bool

Checks if the device is functional. This is used to determine if the device can be used for computation. Note that even if the backend is loaded (as checked via MLDataDevices.loaded), the device may not be functional.

Note that while this function is not exported, it is considered part of the public API.

source

MLDataDevices.isleaf Function
julia
isleaf(x) -> Bool

Returns true if x is a leaf node in the data structure.

Defining MLDataDevices.isleaf(x::T) = true for a custom type customizes the data movement behavior when an object with nested structure containing that type is transferred to a device.

Adapt.adapt_structure(::AbstractDevice, x::T) will be called during data movement if isleaf(x::T) returns true.

If MLDataDevices.isleaf(x::T) is not defined, then it will fall back to Functors.isleaf(x).

source

Multi-GPU Support

MLDataDevices.set_device! Function
julia
set_device!(T::Type{<:AbstractDevice}, dev_or_id)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

Danger

This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such please use Lux.DistributedUtils instead of using this function.

source

julia
set_device!(T::Type{<:AbstractDevice}, ::Nothing, rank::Integer)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

Danger

This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such please use Lux.DistributedUtils instead of using this function.

source

Iteration

MLDataDevices.DeviceIterator Type
julia
DeviceIterator(dev::AbstractDevice, iterator)

Create a DeviceIterator that iterates through the provided iterator via iterate. Upon each iteration, the current batch is copied to the device dev, and the previous iteration is marked as freeable from GPU memory (via unsafe_free!) (no-op for a CPU device).

The conversion follows the same semantics as dev(<item from iterator>).

Similarity to CUDA.CuIterator

The design inspiration was taken from CUDA.CuIterator and was generalized to work with other backends and more complex iterators (using Functors).

MLUtils.DataLoader

Calling dev(::MLUtils.DataLoader) will automatically convert the dataloader to use the same semantics as DeviceIterator. This is generally preferred over looping over the dataloader directly and transferring the data to the device.

Examples

The following was run on a computer with an NVIDIA GPU.

julia
julia> using MLDataDevices, MLUtils
julia> X = rand(Float64, 3, 33);

julia> dataloader = DataLoader(X; batchsize=13, shuffle=false);

julia> for (i, x) in enumerate(dataloader)
           @show i, summary(x)
       end
(i, summary(x)) = (1, "3×13 Matrix{Float64}")
(i, summary(x)) = (2, "3×13 Matrix{Float64}")
(i, summary(x)) = (3, "3×7 Matrix{Float64}")

julia> for (i, x) in enumerate(CUDADevice()(dataloader))
           @show i, summary(x)
       end
(i, summary(x)) = (1, "3×13 CuArray{Float32, 2, CUDA.DeviceMemory}")
(i, summary(x)) = (2, "3×13 CuArray{Float32, 2, CUDA.DeviceMemory}")
(i, summary(x)) = (3, "3×7 CuArray{Float32, 2, CUDA.DeviceMemory}")

source

`,9))])])}const w=l(o,[["render",L]]);export{T as __pageData,w as default}; diff --git a/dev/assets/api_Accelerator_Support_MLDataDevices.md.B7-0S35X.js b/dev/assets/api_Accelerator_Support_MLDataDevices.md.BHPxWmuW.js similarity index 95% rename from dev/assets/api_Accelerator_Support_MLDataDevices.md.B7-0S35X.js rename to dev/assets/api_Accelerator_Support_MLDataDevices.md.BHPxWmuW.js index bb1d05b44a..0e86923183 100644 --- a/dev/assets/api_Accelerator_Support_MLDataDevices.md.B7-0S35X.js +++ b/dev/assets/api_Accelerator_Support_MLDataDevices.md.BHPxWmuW.js @@ -1,12 +1,12 @@ -import{_ as l,c as p,j as i,a as e,G as t,a2 as n,B as d,o as h}from"./chunks/framework.I-x9Gl6h.js";const T=JSON.parse('{"title":"MLDataDevices","description":"","frontmatter":{},"headers":[],"relativePath":"api/Accelerator_Support/MLDataDevices.md","filePath":"api/Accelerator_Support/MLDataDevices.md","lastUpdated":null}'),o={name:"api/Accelerator_Support/MLDataDevices.md"},r={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"};function L(j,s,A,x,M,B){const a=d("Badge");return h(),p("div",null,[s[42]||(s[42]=i("h1",{id:"MLDataDevices-API",tabindex:"-1"},[e("MLDataDevices "),i("a",{class:"header-anchor",href:"#MLDataDevices-API","aria-label":'Permalink to "MLDataDevices {#MLDataDevices-API}"'},"​")],-1)),s[43]||(s[43]=i("p",null,[i("code",null,"MLDataDevices.jl"),e(" is a lightweight package defining rules for transferring data across devices.")],-1)),s[44]||(s[44]=i("h2",{id:"preferences",tabindex:"-1"},[e("Preferences 
"),i("a",{class:"header-anchor",href:"#preferences","aria-label":'Permalink to "Preferences"'},"​")],-1)),i("details",r,[i("summary",null,[s[0]||(s[0]=i("a",{id:"MLDataDevices.gpu_backend!",href:"#MLDataDevices.gpu_backend!"},[i("span",{class:"jlbinding"},"MLDataDevices.gpu_backend!")],-1)),s[1]||(s[1]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[2]||(s[2]=n(`
julia
gpu_backend!() = gpu_backend!("")
+import{_ as l,c as p,j as i,a as e,G as t,a2 as n,B as d,o as h}from"./chunks/framework.BetCMmtc.js";const B=JSON.parse('{"title":"MLDataDevices","description":"","frontmatter":{},"headers":[],"relativePath":"api/Accelerator_Support/MLDataDevices.md","filePath":"api/Accelerator_Support/MLDataDevices.md","lastUpdated":null}'),o={name:"api/Accelerator_Support/MLDataDevices.md"},r={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"};function m(A,s,L,j,T,x){const a=d("Badge");return h(),p("div",null,[s[42]||(s[42]=i("h1",{id:"MLDataDevices-API",tabindex:"-1"},[e("MLDataDevices "),i("a",{class:"header-anchor",href:"#MLDataDevices-API","aria-label":'Permalink to "MLDataDevices {#MLDataDevices-API}"'},"​")],-1)),s[43]||(s[43]=i("p",null,[i("code",null,"MLDataDevices.jl"),e(" is a lightweight package defining rules for transferring data across devices.")],-1)),s[44]||(s[44]=i("h2",{id:"preferences",tabindex:"-1"},[e("Preferences "),i("a",{class:"header-anchor",href:"#preferences","aria-label":'Permalink to "Preferences"'},"​")],-1)),i("details",r,[i("summary",null,[s[0]||(s[0]=i("a",{id:"MLDataDevices.gpu_backend!",href:"#MLDataDevices.gpu_backend!"},[i("span",{class:"jlbinding"},"MLDataDevices.gpu_backend!")],-1)),s[1]||(s[1]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[2]||(s[2]=n(`
julia
gpu_backend!() = gpu_backend!("")
 gpu_backend!(backend) = gpu_backend!(string(backend))
 gpu_backend!(backend::AbstractGPUDevice)
-gpu_backend!(backend::String)

Creates a LocalPreferences.toml file with the desired GPU backend.

If backend == "", then the gpu_backend preference is deleted. Otherwise, backend is validated to be one of the possible backends and the preference is set to backend.

If a new backend is successfully set, then the Julia session must be restarted for the change to take effect.

source

`,5))]),s[45]||(s[45]=i("h2",{id:"Data-Transfer",tabindex:"-1"},[e("Data Transfer "),i("a",{class:"header-anchor",href:"#Data-Transfer","aria-label":'Permalink to "Data Transfer {#Data-Transfer}"'},"​")],-1)),i("details",c,[i("summary",null,[s[3]||(s[3]=i("a",{id:"MLDataDevices.cpu_device",href:"#MLDataDevices.cpu_device"},[i("span",{class:"jlbinding"},"MLDataDevices.cpu_device")],-1)),s[4]||(s[4]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=n('
julia
cpu_device() -> CPUDevice()

Return a CPUDevice object which can be used to transfer data to CPU.

source

',3))]),i("details",k,[i("summary",null,[s[6]||(s[6]=i("a",{id:"MLDataDevices.gpu_device",href:"#MLDataDevices.gpu_device"},[i("span",{class:"jlbinding"},"MLDataDevices.gpu_device")],-1)),s[7]||(s[7]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=n(`
julia
gpu_device(device_id::Union{Nothing, Integer}=nothing;
-    force::Bool=false) -> AbstractDevice

Selects GPU device based on the following criteria:

  1. If gpu_backend preference is set and the backend is functional on the system, then that device is selected.

  2. Otherwise, an automatic selection algorithm is used. We go over possible device backends in the order specified by supported_gpu_backends() and select the first functional backend.

  3. If no GPU device is functional and force is false, then cpu_device() is invoked.

  4. If nothing works, an error is thrown.

Arguments

  • device_id::Union{Nothing, Integer}: The device id to select. If nothing, then we return the last selected device or if none was selected then we run the autoselection and choose the current device using CUDA.device() or AMDGPU.device() or similar. If Integer, then we select the device with the given id. Note that this is 1-indexed, in contrast to the 0-indexed CUDA.jl. For example, id = 4 corresponds to CUDA.device!(3).

Warning

device_id is only applicable for CUDA and AMDGPU backends. For Metal, oneAPI and CPU backends, device_id is ignored and a warning is printed.

Warning

gpu_device won't select a CUDA device unless both CUDA.jl and cuDNN.jl are loaded. This is to ensure that deep learning operations work correctly. Nonetheless, if cuDNN is not loaded you can still manually create a CUDADevice object and use it (e.g. dev = CUDADevice()).

Keyword Arguments

  • force::Bool: If true, then an error is thrown if no functional GPU device is found.

source

`,10))]),i("details",g,[i("summary",null,[s[9]||(s[9]=i("a",{id:"MLDataDevices.reactant_device",href:"#MLDataDevices.reactant_device"},[i("span",{class:"jlbinding"},"MLDataDevices.reactant_device")],-1)),s[10]||(s[10]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=n(`
julia
reactant_device(;
+gpu_backend!(backend::String)

Creates a LocalPreferences.toml file with the desired GPU backend.

If backend == "", then the gpu_backend preference is deleted. Otherwise, backend is validated to be one of the possible backends and the preference is set to backend.

If a new backend is successfully set, then the Julia session must be restarted for the change to take effect.

source

`,5))]),s[45]||(s[45]=i("h2",{id:"Data-Transfer",tabindex:"-1"},[e("Data Transfer "),i("a",{class:"header-anchor",href:"#Data-Transfer","aria-label":'Permalink to "Data Transfer {#Data-Transfer}"'},"​")],-1)),i("details",c,[i("summary",null,[s[3]||(s[3]=i("a",{id:"MLDataDevices.cpu_device",href:"#MLDataDevices.cpu_device"},[i("span",{class:"jlbinding"},"MLDataDevices.cpu_device")],-1)),s[4]||(s[4]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=n('
julia
cpu_device() -> CPUDevice()

Return a CPUDevice object which can be used to transfer data to CPU.

source

',3))]),i("details",k,[i("summary",null,[s[6]||(s[6]=i("a",{id:"MLDataDevices.gpu_device",href:"#MLDataDevices.gpu_device"},[i("span",{class:"jlbinding"},"MLDataDevices.gpu_device")],-1)),s[7]||(s[7]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=n(`
julia
gpu_device(device_id::Union{Nothing, Integer}=nothing;
+    force::Bool=false) -> AbstractDevice

Selects GPU device based on the following criteria:

  1. If gpu_backend preference is set and the backend is functional on the system, then that device is selected.

  2. Otherwise, an automatic selection algorithm is used. We go over possible device backends in the order specified by supported_gpu_backends() and select the first functional backend.

  3. If no GPU device is functional and force is false, then cpu_device() is invoked.

  4. If nothing works, an error is thrown.

Arguments

  • device_id::Union{Nothing, Integer}: The device id to select. If nothing, then we return the last selected device or if none was selected then we run the autoselection and choose the current device using CUDA.device() or AMDGPU.device() or similar. If Integer, then we select the device with the given id. Note that this is 1-indexed, in contrast to the 0-indexed CUDA.jl. For example, id = 4 corresponds to CUDA.device!(3).

Warning

device_id is only applicable for CUDA and AMDGPU backends. For Metal, oneAPI and CPU backends, device_id is ignored and a warning is printed.

Warning

gpu_device won't select a CUDA device unless both CUDA.jl and cuDNN.jl are loaded. This is to ensure that deep learning operations work correctly. Nonetheless, if cuDNN is not loaded you can still manually create a CUDADevice object and use it (e.g. dev = CUDADevice()).

Keyword Arguments

  • force::Bool: If true, then an error is thrown if no functional GPU device is found.

source

`,10))]),i("details",g,[i("summary",null,[s[9]||(s[9]=i("a",{id:"MLDataDevices.reactant_device",href:"#MLDataDevices.reactant_device"},[i("span",{class:"jlbinding"},"MLDataDevices.reactant_device")],-1)),s[10]||(s[10]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=n(`
julia
reactant_device(;
     force::Bool=false, client=missing, device=missing
-) -> Union{ReactantDevice, CPUDevice}

Return a ReactantDevice object if functional. Otherwise, throw an error if force is true. Falls back to CPUDevice if force is false.

client and device are used to specify the client and particular device to use. If not specified, then the default client and index are used.

Danger

This is an experimental feature and might change without deprecations

source

`,5))]),s[46]||(s[46]=i("h2",{id:"miscellaneous",tabindex:"-1"},[e("Miscellaneous "),i("a",{class:"header-anchor",href:"#miscellaneous","aria-label":'Permalink to "Miscellaneous"'},"​")],-1)),i("details",u,[i("summary",null,[s[12]||(s[12]=i("a",{id:"MLDataDevices.reset_gpu_device!",href:"#MLDataDevices.reset_gpu_device!"},[i("span",{class:"jlbinding"},"MLDataDevices.reset_gpu_device!")],-1)),s[13]||(s[13]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[14]||(s[14]=n('
julia
reset_gpu_device!()

Resets the selected GPU device. This is useful when automatic GPU selection needs to be run again.

source

',3))]),i("details",E,[i("summary",null,[s[15]||(s[15]=i("a",{id:"MLDataDevices.supported_gpu_backends",href:"#MLDataDevices.supported_gpu_backends"},[i("span",{class:"jlbinding"},"MLDataDevices.supported_gpu_backends")],-1)),s[16]||(s[16]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[17]||(s[17]=n('
julia
supported_gpu_backends() -> Tuple{String, ...}

Return a tuple of supported GPU backends.

Warning

This is not the list of functional backends on the system, but rather backends which MLDataDevices.jl supports.

source

',4))]),i("details",y,[i("summary",null,[s[18]||(s[18]=i("a",{id:"MLDataDevices.default_device_rng",href:"#MLDataDevices.default_device_rng"},[i("span",{class:"jlbinding"},"MLDataDevices.default_device_rng")],-1)),s[19]||(s[19]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[20]||(s[20]=n('
julia
default_device_rng(::AbstractDevice)

Returns the default RNG for the device. This can be used to directly generate parameters and states on the device using WeightInitializers.jl.

source

',3))]),i("details",b,[i("summary",null,[s[21]||(s[21]=i("a",{id:"MLDataDevices.get_device",href:"#MLDataDevices.get_device"},[i("span",{class:"jlbinding"},"MLDataDevices.get_device")],-1)),s[22]||(s[22]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[23]||(s[23]=n('
julia
get_device(x) -> dev::AbstractDevice | Exception | Nothing

If all arrays (on the leaves of the structure) are on the same device, we return that device. Otherwise, we throw an error. If the object is device agnostic, we return nothing.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

  • nothing – denotes that the object is device agnostic. For example, scalar, abstract range, etc.

  • UnknownDevice() – denotes that the device type is unknown.

See also get_device_type for a faster alternative that can be used for dispatch based on device type.

source

',7))]),i("details",v,[i("summary",null,[s[24]||(s[24]=i("a",{id:"MLDataDevices.get_device_type",href:"#MLDataDevices.get_device_type"},[i("span",{class:"jlbinding"},"MLDataDevices.get_device_type")],-1)),s[25]||(s[25]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[26]||(s[26]=n('
julia
get_device_type(x) -> Type{<:AbstractDevice} | Exception | Type{Nothing}

Similar to get_device but returns the type of the device instead of the device itself. This value is often a compile-time constant and is recommended over get_device when defining dispatches based on the device type.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

  • Nothing – denotes that the object is device agnostic. For example, scalar, abstract range, etc.

  • UnknownDevice – denotes that the device type is unknown.

source

',6))]),i("details",D,[i("summary",null,[s[27]||(s[27]=i("a",{id:"MLDataDevices.loaded",href:"#MLDataDevices.loaded"},[i("span",{class:"jlbinding"},"MLDataDevices.loaded")],-1)),s[28]||(s[28]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[29]||(s[29]=n(`
julia
loaded(x::AbstractDevice) -> Bool
-loaded(::Type{<:AbstractDevice}) -> Bool

Checks if the trigger package for the device is loaded. Trigger packages are as follows:

  • CUDA.jl and cuDNN.jl (or just LuxCUDA.jl) for NVIDIA CUDA Support.

  • AMDGPU.jl for AMD GPU ROCM Support.

  • Metal.jl for Apple Metal GPU Support.

  • oneAPI.jl for Intel oneAPI GPU Support.

source

`,4))]),i("details",F,[i("summary",null,[s[30]||(s[30]=i("a",{id:"MLDataDevices.functional",href:"#MLDataDevices.functional"},[i("span",{class:"jlbinding"},"MLDataDevices.functional")],-1)),s[31]||(s[31]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[32]||(s[32]=n(`
julia
functional(x::AbstractDevice) -> Bool
-functional(::Type{<:AbstractDevice}) -> Bool

Checks if the device is functional. This is used to determine if the device can be used for computation. Note that even if the backend is loaded (as checked via MLDataDevices.loaded), the device may not be functional.

Note that while this function is not exported, it is considered part of the public API.

source

`,4))]),i("details",f,[i("summary",null,[s[33]||(s[33]=i("a",{id:"MLDataDevices.isleaf",href:"#MLDataDevices.isleaf"},[i("span",{class:"jlbinding"},"MLDataDevices.isleaf")],-1)),s[34]||(s[34]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[35]||(s[35]=n('
julia
isleaf(x) -> Bool

Returns true if x is a leaf node in the data structure.

Defining MLDataDevices.isleaf(x::T) = true for custom types can be used to customize the data movement behavior when an object with a nested structure containing the type is transferred to a device.

Adapt.adapt_structure(::AbstractDevice, x::T) will be called during data movement if isleaf(x::T) returns true.

If MLDataDevices.isleaf(x::T) is not defined, then it will fall back to Functors.isleaf(x).

source

',6))]),s[47]||(s[47]=i("h2",{id:"Multi-GPU-Support",tabindex:"-1"},[e("Multi-GPU Support "),i("a",{class:"header-anchor",href:"#Multi-GPU-Support","aria-label":'Permalink to "Multi-GPU Support {#Multi-GPU-Support}"'},"​")],-1)),i("details",C,[i("summary",null,[s[36]||(s[36]=i("a",{id:"MLDataDevices.set_device!",href:"#MLDataDevices.set_device!"},[i("span",{class:"jlbinding"},"MLDataDevices.set_device!")],-1)),s[37]||(s[37]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[38]||(s[38]=n('
julia
set_device!(T::Type{<:AbstractDevice}, dev_or_id)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractDevice}: The device type to set.

  • dev_or_id: Can be the device from the corresponding package. For example for CUDA it can be a CuDevice. If it is an integer, it is the device id to set. This is 1-indexed.

Danger

This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such please use Lux.DistributedUtils instead of using this function.

source

julia
set_device!(T::Type{<:AbstractDevice}, ::Nothing, rank::Integer)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractDevice}: The device type to set.

  • rank::Integer: Local Rank of the process. This is applicable for distributed training and must be 0-indexed.

Danger

This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such please use Lux.DistributedUtils instead of using this function.

source

',14))]),s[48]||(s[48]=i("h2",{id:"iteration",tabindex:"-1"},[e("Iteration "),i("a",{class:"header-anchor",href:"#iteration","aria-label":'Permalink to "Iteration"'},"​")],-1)),i("details",m,[i("summary",null,[s[39]||(s[39]=i("a",{id:"MLDataDevices.DeviceIterator",href:"#MLDataDevices.DeviceIterator"},[i("span",{class:"jlbinding"},"MLDataDevices.DeviceIterator")],-1)),s[40]||(s[40]=e()),t(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[41]||(s[41]=n(`
julia
DeviceIterator(dev::AbstractDevice, iterator)

Create a DeviceIterator that iterates through the provided iterator via iterate. Upon each iteration, the current batch is copied to the device dev, and the previous iteration is marked as freeable from GPU memory (via unsafe_free!) (no-op for a CPU device).

The conversion follows the same semantics as dev(<item from iterator>).

Similarity to CUDA.CuIterator

The design inspiration was taken from CUDA.CuIterator and was generalized to work with other backends and more complex iterators (using Functors).

MLUtils.DataLoader

Calling dev(::MLUtils.DataLoader) will automatically convert the dataloader to use the same semantics as DeviceIterator. This is generally preferred over looping over the dataloader directly and transferring the data to the device.

Examples

The following was run on a computer with an NVIDIA GPU.

julia
julia> using MLDataDevices, MLUtils
+) -> Union{ReactantDevice, CPUDevice}

Return a ReactantDevice object if functional. Otherwise, throw an error if force is true. Falls back to CPUDevice if force is false.

client and device are used to specify the client and particular device to use. If not specified, then the default client and index are used.

Danger

This is an experimental feature and might change without deprecations

source

`,5))]),s[46]||(s[46]=i("h2",{id:"miscellaneous",tabindex:"-1"},[e("Miscellaneous "),i("a",{class:"header-anchor",href:"#miscellaneous","aria-label":'Permalink to "Miscellaneous"'},"​")],-1)),i("details",u,[i("summary",null,[s[12]||(s[12]=i("a",{id:"MLDataDevices.reset_gpu_device!",href:"#MLDataDevices.reset_gpu_device!"},[i("span",{class:"jlbinding"},"MLDataDevices.reset_gpu_device!")],-1)),s[13]||(s[13]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[14]||(s[14]=n('
julia
reset_gpu_device!()

Resets the selected GPU device. This is useful when automatic GPU selection needs to be run again.

source

',3))]),i("details",E,[i("summary",null,[s[15]||(s[15]=i("a",{id:"MLDataDevices.supported_gpu_backends",href:"#MLDataDevices.supported_gpu_backends"},[i("span",{class:"jlbinding"},"MLDataDevices.supported_gpu_backends")],-1)),s[16]||(s[16]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[17]||(s[17]=n('
julia
supported_gpu_backends() -> Tuple{String, ...}

Return a tuple of supported GPU backends.

Warning

This is not the list of functional backends on the system, but rather backends which MLDataDevices.jl supports.

source

',4))]),i("details",y,[i("summary",null,[s[18]||(s[18]=i("a",{id:"MLDataDevices.default_device_rng",href:"#MLDataDevices.default_device_rng"},[i("span",{class:"jlbinding"},"MLDataDevices.default_device_rng")],-1)),s[19]||(s[19]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[20]||(s[20]=n('
julia
default_device_rng(::AbstractDevice)

Returns the default RNG for the device. This can be used to directly generate parameters and states on the device using WeightInitializers.jl.

source

',3))]),i("details",b,[i("summary",null,[s[21]||(s[21]=i("a",{id:"MLDataDevices.get_device",href:"#MLDataDevices.get_device"},[i("span",{class:"jlbinding"},"MLDataDevices.get_device")],-1)),s[22]||(s[22]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[23]||(s[23]=n('
julia
get_device(x) -> dev::AbstractDevice | Exception | Nothing

If all arrays (on the leaves of the structure) are on the same device, we return that device. Otherwise, we throw an error. If the object is device agnostic, we return nothing.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

  • nothing – denotes that the object is device agnostic. For example, scalar, abstract range, etc.

  • UnknownDevice() – denotes that the device type is unknown.

See also get_device_type for a faster alternative that can be used for dispatch based on device type.

source

',7))]),i("details",v,[i("summary",null,[s[24]||(s[24]=i("a",{id:"MLDataDevices.get_device_type",href:"#MLDataDevices.get_device_type"},[i("span",{class:"jlbinding"},"MLDataDevices.get_device_type")],-1)),s[25]||(s[25]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[26]||(s[26]=n('
julia
get_device_type(x) -> Type{<:AbstractDevice} | Exception | Type{Nothing}

Similar to get_device but returns the type of the device instead of the device itself. This value is often a compile-time constant and is recommended over get_device when defining dispatches based on the device type.

Note

Trigger Packages must be loaded for this to return the correct device.

Special Returned Values

  • Nothing – denotes that the object is device agnostic. For example, scalar, abstract range, etc.

  • UnknownDevice – denotes that the device type is unknown.

source

',6))]),i("details",D,[i("summary",null,[s[27]||(s[27]=i("a",{id:"MLDataDevices.loaded",href:"#MLDataDevices.loaded"},[i("span",{class:"jlbinding"},"MLDataDevices.loaded")],-1)),s[28]||(s[28]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[29]||(s[29]=n(`
julia
loaded(x::AbstractDevice) -> Bool
+loaded(::Type{<:AbstractDevice}) -> Bool

Checks if the trigger package for the device is loaded. Trigger packages are as follows:

  • CUDA.jl and cuDNN.jl (or just LuxCUDA.jl) for NVIDIA CUDA Support.

  • AMDGPU.jl for AMD GPU ROCM Support.

  • Metal.jl for Apple Metal GPU Support.

  • oneAPI.jl for Intel oneAPI GPU Support.

source

`,4))]),i("details",F,[i("summary",null,[s[30]||(s[30]=i("a",{id:"MLDataDevices.functional",href:"#MLDataDevices.functional"},[i("span",{class:"jlbinding"},"MLDataDevices.functional")],-1)),s[31]||(s[31]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[32]||(s[32]=n(`
julia
functional(x::AbstractDevice) -> Bool
+functional(::Type{<:AbstractDevice}) -> Bool

Checks if the device is functional. This is used to determine if the device can be used for computation. Note that even if the backend is loaded (as checked via MLDataDevices.loaded), the device may not be functional.

Note that while this function is not exported, it is considered part of the public API.

source

`,4))]),i("details",C,[i("summary",null,[s[33]||(s[33]=i("a",{id:"MLDataDevices.isleaf",href:"#MLDataDevices.isleaf"},[i("span",{class:"jlbinding"},"MLDataDevices.isleaf")],-1)),s[34]||(s[34]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[35]||(s[35]=n('
julia
isleaf(x) -> Bool

Returns true if x is a leaf node in the data structure.

Defining MLDataDevices.isleaf(x::T) = true for custom types can be used to customize the data movement behavior when an object with a nested structure containing the type is transferred to a device.

Adapt.adapt_structure(::AbstractDevice, x::T) will be called during data movement if isleaf(x::T) returns true.

If MLDataDevices.isleaf(x::T) is not defined, then it will fall back to Functors.isleaf(x).

source

',6))]),s[47]||(s[47]=i("h2",{id:"Multi-GPU-Support",tabindex:"-1"},[e("Multi-GPU Support "),i("a",{class:"header-anchor",href:"#Multi-GPU-Support","aria-label":'Permalink to "Multi-GPU Support {#Multi-GPU-Support}"'},"​")],-1)),i("details",f,[i("summary",null,[s[36]||(s[36]=i("a",{id:"MLDataDevices.set_device!",href:"#MLDataDevices.set_device!"},[i("span",{class:"jlbinding"},"MLDataDevices.set_device!")],-1)),s[37]||(s[37]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[38]||(s[38]=n('
julia
set_device!(T::Type{<:AbstractDevice}, dev_or_id)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractDevice}: The device type to set.

  • dev_or_id: Can be the device from the corresponding package. For example for CUDA it can be a CuDevice. If it is an integer, it is the device id to set. This is 1-indexed.

Danger

This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such please use Lux.DistributedUtils instead of using this function.

source

julia
set_device!(T::Type{<:AbstractDevice}, ::Nothing, rank::Integer)

Set the device for the given type. This is a no-op for CPUDevice. For CUDADevice and AMDGPUDevice, it prints a warning if the corresponding trigger package is not loaded.

Currently, MetalDevice and oneAPIDevice don't support setting the device.

Arguments

  • T::Type{<:AbstractDevice}: The device type to set.

  • rank::Integer: Local Rank of the process. This is applicable for distributed training and must be 0-indexed.

Danger

This specific function should be considered experimental at this point and is currently provided to support distributed training in Lux. As such please use Lux.DistributedUtils instead of using this function.

source

',14))]),s[48]||(s[48]=i("h2",{id:"iteration",tabindex:"-1"},[e("Iteration "),i("a",{class:"header-anchor",href:"#iteration","aria-label":'Permalink to "Iteration"'},"​")],-1)),i("details",_,[i("summary",null,[s[39]||(s[39]=i("a",{id:"MLDataDevices.DeviceIterator",href:"#MLDataDevices.DeviceIterator"},[i("span",{class:"jlbinding"},"MLDataDevices.DeviceIterator")],-1)),s[40]||(s[40]=e()),t(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[41]||(s[41]=n(`
julia
DeviceIterator(dev::AbstractDevice, iterator)

Create a DeviceIterator that iterates through the provided iterator via iterate. Upon each iteration, the current batch is copied to the device dev, and the previous iteration is marked as freeable from GPU memory (via unsafe_free!) (no-op for a CPU device).

The conversion follows the same semantics as dev(<item from iterator>).

Similarity to CUDA.CuIterator

The design inspiration was taken from CUDA.CuIterator and was generalized to work with other backends and more complex iterators (using Functors).

MLUtils.DataLoader

Calling dev(::MLUtils.DataLoader) will automatically convert the dataloader to use the same semantics as DeviceIterator. This is generally preferred over looping over the dataloader directly and transferring the data to the device.

Examples

The following was run on a computer with an NVIDIA GPU.

julia
julia> using MLDataDevices, MLUtils
 
 julia> X = rand(Float64, 3, 33);
 
@@ -24,4 +24,4 @@ import{_ as l,c as p,j as i,a as e,G as t,a2 as n,B as d,o as h}from"./chunks/fr
        end
 (i, summary(x)) = (1, "3×13 CuArray{Float32, 2, CUDA.DeviceMemory}")
 (i, summary(x)) = (2, "3×13 CuArray{Float32, 2, CUDA.DeviceMemory}")
-(i, summary(x)) = (3, "3×7 CuArray{Float32, 2, CUDA.DeviceMemory}")

source

`,9))])])}const w=l(o,[["render",L]]);export{T as __pageData,w as default}; +(i, summary(x)) = (3, "3×7 CuArray{Float32, 2, CUDA.DeviceMemory}")

source

`,9))])])}const P=l(o,[["render",m]]);export{B as __pageData,P as default}; diff --git a/dev/assets/api_Accelerator_Support_MLDataDevices.md.BHPxWmuW.lean.js b/dev/assets/api_Accelerator_Support_MLDataDevices.md.BHPxWmuW.lean.js new file mode 100644 index 0000000000..dfba35602d --- /dev/null +++ b/dev/assets/api_Accelerator_Support_MLDataDevices.md.BHPxWmuW.lean.js @@ -0,0 +1 @@ +import{_ as l,c as p,j as i,a as e,G as t,a2 as n,B as d,o as h}from"./chunks/framework.BetCMmtc.js";const B=JSON.parse('{"title":"MLDataDevices","description":"","frontmatter":{},"headers":[],"relativePath":"api/Accelerator_Support/MLDataDevices.md","filePath":"api/Accelerator_Support/MLDataDevices.md","lastUpdated":null}'),o={name:"api/Accelerator_Support/MLDataDevices.md"},r={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"};function m(A,s,L,j,T,x){const a=d("Badge");return h(),p("div",null,[s[42]||(s[42]=i("h1",{id:"MLDataDevices-API",tabindex:"-1"},[e("MLDataDevices "),i("a",{class:"header-anchor",href:"#MLDataDevices-API","aria-label":'Permalink to "MLDataDevices {#MLDataDevices-API}"'},"​")],-1)),s[43]||(s[43]=i("p",null,[i("code",null,"MLDataDevices.jl"),e(" is a lightweight package defining rules for transferring data across devices.")],-1)),s[44]||(s[44]=i("h2",{id:"preferences",tabindex:"-1"},[e("Preferences "),i("a",{class:"header-anchor",href:"#preferences","aria-label":'Permalink to 
"Preferences"'},"​")],-1)),i("details",r,[i("summary",null,[s[0]||(s[0]=i("a",{id:"MLDataDevices.gpu_backend!",href:"#MLDataDevices.gpu_backend!"},[i("span",{class:"jlbinding"},"MLDataDevices.gpu_backend!")],-1)),s[1]||(s[1]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[2]||(s[2]=n("",5))]),s[45]||(s[45]=i("h2",{id:"Data-Transfer",tabindex:"-1"},[e("Data Transfer "),i("a",{class:"header-anchor",href:"#Data-Transfer","aria-label":'Permalink to "Data Transfer {#Data-Transfer}"'},"​")],-1)),i("details",c,[i("summary",null,[s[3]||(s[3]=i("a",{id:"MLDataDevices.cpu_device",href:"#MLDataDevices.cpu_device"},[i("span",{class:"jlbinding"},"MLDataDevices.cpu_device")],-1)),s[4]||(s[4]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=n("",3))]),i("details",k,[i("summary",null,[s[6]||(s[6]=i("a",{id:"MLDataDevices.gpu_device",href:"#MLDataDevices.gpu_device"},[i("span",{class:"jlbinding"},"MLDataDevices.gpu_device")],-1)),s[7]||(s[7]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=n("",10))]),i("details",g,[i("summary",null,[s[9]||(s[9]=i("a",{id:"MLDataDevices.reactant_device",href:"#MLDataDevices.reactant_device"},[i("span",{class:"jlbinding"},"MLDataDevices.reactant_device")],-1)),s[10]||(s[10]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=n("",5))]),s[46]||(s[46]=i("h2",{id:"miscellaneous",tabindex:"-1"},[e("Miscellaneous "),i("a",{class:"header-anchor",href:"#miscellaneous","aria-label":'Permalink to "Miscellaneous"'},"​")],-1)),i("details",u,[i("summary",null,[s[12]||(s[12]=i("a",{id:"MLDataDevices.reset_gpu_device!",href:"#MLDataDevices.reset_gpu_device!"},[i("span",{class:"jlbinding"},"MLDataDevices.reset_gpu_device!")],-1)),s[13]||(s[13]=e()),t(a,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[14]||(s[14]=n("",3))]),i("details",E,[i("summary",null,[s[15]||(s[15]=i("a",{id:"MLDataDevices.supported_gpu_backends",href:"#MLDataDevices.supported_gpu_backends"},[i("span",{class:"jlbinding"},"MLDataDevices.supported_gpu_backends")],-1)),s[16]||(s[16]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[17]||(s[17]=n("",4))]),i("details",y,[i("summary",null,[s[18]||(s[18]=i("a",{id:"MLDataDevices.default_device_rng",href:"#MLDataDevices.default_device_rng"},[i("span",{class:"jlbinding"},"MLDataDevices.default_device_rng")],-1)),s[19]||(s[19]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[20]||(s[20]=n("",3))]),i("details",b,[i("summary",null,[s[21]||(s[21]=i("a",{id:"MLDataDevices.get_device",href:"#MLDataDevices.get_device"},[i("span",{class:"jlbinding"},"MLDataDevices.get_device")],-1)),s[22]||(s[22]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[23]||(s[23]=n("",7))]),i("details",v,[i("summary",null,[s[24]||(s[24]=i("a",{id:"MLDataDevices.get_device_type",href:"#MLDataDevices.get_device_type"},[i("span",{class:"jlbinding"},"MLDataDevices.get_device_type")],-1)),s[25]||(s[25]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[26]||(s[26]=n("",6))]),i("details",D,[i("summary",null,[s[27]||(s[27]=i("a",{id:"MLDataDevices.loaded",href:"#MLDataDevices.loaded"},[i("span",{class:"jlbinding"},"MLDataDevices.loaded")],-1)),s[28]||(s[28]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[29]||(s[29]=n("",4))]),i("details",F,[i("summary",null,[s[30]||(s[30]=i("a",{id:"MLDataDevices.functional",href:"#MLDataDevices.functional"},[i("span",{class:"jlbinding"},"MLDataDevices.functional")],-1)),s[31]||(s[31]=e()),t(a,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[32]||(s[32]=n("",4))]),i("details",C,[i("summary",null,[s[33]||(s[33]=i("a",{id:"MLDataDevices.isleaf",href:"#MLDataDevices.isleaf"},[i("span",{class:"jlbinding"},"MLDataDevices.isleaf")],-1)),s[34]||(s[34]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[35]||(s[35]=n("",6))]),s[47]||(s[47]=i("h2",{id:"Multi-GPU-Support",tabindex:"-1"},[e("Multi-GPU Support "),i("a",{class:"header-anchor",href:"#Multi-GPU-Support","aria-label":'Permalink to "Multi-GPU Support {#Multi-GPU-Support}"'},"​")],-1)),i("details",f,[i("summary",null,[s[36]||(s[36]=i("a",{id:"MLDataDevices.set_device!",href:"#MLDataDevices.set_device!"},[i("span",{class:"jlbinding"},"MLDataDevices.set_device!")],-1)),s[37]||(s[37]=e()),t(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[38]||(s[38]=n("",14))]),s[48]||(s[48]=i("h2",{id:"iteration",tabindex:"-1"},[e("Iteration "),i("a",{class:"header-anchor",href:"#iteration","aria-label":'Permalink to "Iteration"'},"​")],-1)),i("details",_,[i("summary",null,[s[39]||(s[39]=i("a",{id:"MLDataDevices.DeviceIterator",href:"#MLDataDevices.DeviceIterator"},[i("span",{class:"jlbinding"},"MLDataDevices.DeviceIterator")],-1)),s[40]||(s[40]=e()),t(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[41]||(s[41]=n("",9))])])}const P=l(o,[["render",m]]);export{B as __pageData,P as default}; diff --git a/dev/assets/api_Building_Blocks_LuxCore.md.BGgzNXbc.lean.js b/dev/assets/api_Building_Blocks_LuxCore.md.BGgzNXbc.lean.js deleted file mode 100644 index e07ccadc1f..0000000000 --- a/dev/assets/api_Building_Blocks_LuxCore.md.BGgzNXbc.lean.js +++ /dev/null @@ -1 +0,0 @@ -import{_ as o,c as n,a2 as a,j as s,a as t,G as l,B as r,o as d}from"./chunks/framework.I-x9Gl6h.js";const 
R=JSON.parse('{"title":"LuxCore","description":"","frontmatter":{},"headers":[],"relativePath":"api/Building_Blocks/LuxCore.md","filePath":"api/Building_Blocks/LuxCore.md","lastUpdated":null}'),p={name:"api/Building_Blocks/LuxCore.md"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"};function T(D,e,w,N,O,S){const i=r("Badge");return d(),n("div",null,[e[54]||(e[54]=a('

LuxCore

LuxCore.jl defines the abstract layers for Lux, allowing users to be compatible with the entirety of Lux.jl without taking on such a heavy dependency. If you are depending on Lux.jl directly, you do not need to depend on LuxCore.jl (all the functionality is exported via Lux.jl).

Abstract Types

',3)),s("details",c,[s("summary",null,[e[0]||(e[0]=s("a",{id:"LuxCore.AbstractLuxLayer",href:"#LuxCore.AbstractLuxLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxLayer")],-1)),e[1]||(e[1]=t()),l(i,{type:"info",class:"jlObjectType jlType",text:"Type"})]),e[2]||(e[2]=a('
julia
abstract type AbstractLuxLayer

Abstract Type for all Lux Layers

Users implementing their custom layer must implement

Optionally:

See also AbstractLuxContainerLayer

source

',8))]),s("details",u,[s("summary",null,[e[3]||(e[3]=s("a",{id:"LuxCore.AbstractLuxWrapperLayer",href:"#LuxCore.AbstractLuxWrapperLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxWrapperLayer")],-1)),e[4]||(e[4]=t()),l(i,{type:"info",class:"jlObjectType jlType",text:"Type"})]),e[5]||(e[5]=a('
julia
abstract type AbstractLuxWrapperLayer{layer} <: AbstractLuxLayer

See AbstractLuxContainerLayer for detailed documentation. This abstract type is very similar to AbstractLuxContainerLayer except that it allows for a single layer to be wrapped in a container.

Additionally, on calling initialparameters and initialstates, the parameters and states are not wrapped in a NamedTuple with the same name as the field.

As a convenience, we define the fallback call (::AbstractLuxWrapperLayer)(x, ps, st), which calls getfield(x, layer)(x, ps, st).

source

',5))]),s("details",h,[s("summary",null,[e[6]||(e[6]=s("a",{id:"LuxCore.AbstractLuxContainerLayer",href:"#LuxCore.AbstractLuxContainerLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxContainerLayer")],-1)),e[7]||(e[7]=t()),l(i,{type:"info",class:"jlObjectType jlType",text:"Type"})]),e[8]||(e[8]=a('
julia
abstract type AbstractLuxContainerLayer{layers} <: AbstractLuxLayer

Abstract Container Type for certain Lux Layers. layers is a tuple containing the fieldnames of the layer; the parameters and states are constructed from those fields.

Users implementing their custom layer can extend the same functions as in AbstractLuxLayer.

Advanced Structure Manipulation

Advanced structure manipulation of these layers post construction is possible via Functors.fmap. For a more flexible interface, we recommend using Lux.Experimental.@layer_map.

fmap Support

fmap support needs to be explicitly enabled by loading Functors.jl and Setfield.jl.

Changes from Pre-1.0 Behavior

Previously if layers was a singleton tuple, initialparameters and initialstates would return the parameters and states for the single field layers. From v1.0.0 onwards, even for singleton tuples, the parameters/states are wrapped in a NamedTuple with the same name as the field. See AbstractLuxWrapperLayer to replicate the previous behavior of singleton tuples.

source

',7))]),e[55]||(e[55]=s("h2",{id:"general",tabindex:"-1"},[t("General "),s("a",{class:"header-anchor",href:"#general","aria-label":'Permalink to "General"'},"​")],-1)),s("details",b,[s("summary",null,[e[9]||(e[9]=s("a",{id:"LuxCore.apply",href:"#LuxCore.apply"},[s("span",{class:"jlbinding"},"LuxCore.apply")],-1)),e[10]||(e[10]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[11]||(e[11]=a('
julia
apply(model, x, ps, st)

In most cases this function simply calls model(x, ps, st). However, it is still recommended to call apply instead of model(x, ps, st) directly. Some of the reasons for this include:

  1. For certain types of inputs x, we might want to perform preprocessing before calling model. For example, if x is an Array of ReverseDiff.TrackedReals, this can cause significant regressions in model(x, ps, st) (since it won't hit any of the BLAS dispatches). In those cases, we would automatically convert x to a ReverseDiff.TrackedArray.

  2. Certain user-defined inputs need to be applied to specific layers, but we want the datatype to propagate through all the layers (even unsupported ones). In these cases, we can unpack the input in apply, pass it to the appropriate layer, and then repack it before returning. See the Lux manual on Custom Input Types for a motivating example.

Tip

apply is integrated with DispatchDoctor.jl, which allows automatic verification of type stability. By default this is "disable"d. For more information, see the documentation.

source
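
A minimal sketch of calling a layer through apply instead of invoking model(x, ps, st) directly (assumes Lux.jl is installed; the layer sizes and batch shape here are illustrative, not from the original text):

```julia
# Hedged sketch: apply a simple Dense layer via Lux.apply.
using Lux, Random

rng = Random.default_rng()
model = Dense(3 => 2, tanh)        # 3 inputs -> 2 outputs
ps, st = Lux.setup(rng, model)     # initial parameters and states
x = rand(rng, Float32, 3, 4)       # 3 features, batch of 4

y, st_new = Lux.apply(model, x, ps, st)
size(y)  # (2, 4)
```

Note that apply always returns the output together with the (possibly updated) state, even for stateless layers.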

',5))]),s("details",k,[s("summary",null,[e[12]||(e[12]=s("a",{id:"LuxCore.stateless_apply",href:"#LuxCore.stateless_apply"},[s("span",{class:"jlbinding"},"LuxCore.stateless_apply")],-1)),e[13]||(e[13]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[14]||(e[14]=a('
julia
stateless_apply(model, x, ps)

Calls apply and only returns the first argument. This function requires that model has an empty state of NamedTuple(). Behavior of other kinds of models is undefined, and it is the responsibility of the user to ensure that the model has an empty state.

source

',3))]),s("details",g,[s("summary",null,[e[15]||(e[15]=s("a",{id:"LuxCore.check_fmap_condition",href:"#LuxCore.check_fmap_condition"},[s("span",{class:"jlbinding"},"LuxCore.check_fmap_condition")],-1)),e[16]||(e[16]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[17]||(e[17]=a('
julia
check_fmap_condition(cond, tmatch::Union{Type, Nothing}, x) -> Bool

fmaps into the structure x and checks if cond is satisfied for any of the leaf elements.

Arguments

Returns

A Boolean Value

source

',7))]),s("details",y,[s("summary",null,[e[18]||(e[18]=s("a",{id:"LuxCore.contains_lux_layer",href:"#LuxCore.contains_lux_layer"},[s("span",{class:"jlbinding"},"LuxCore.contains_lux_layer")],-1)),e[19]||(e[19]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[20]||(e[20]=a('
julia
contains_lux_layer(l) -> Bool

Check if the structure l is a Lux AbstractLuxLayer or a container of such a layer.

source

',3))]),s("details",m,[s("summary",null,[e[21]||(e[21]=s("a",{id:"LuxCore.display_name",href:"#LuxCore.display_name"},[s("span",{class:"jlbinding"},"LuxCore.display_name")],-1)),e[22]||(e[22]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[23]||(e[23]=a('
julia
display_name(layer::AbstractLuxLayer)

Printed name of the layer. If the layer has a field name, that is used; otherwise the type name is used.

source

',3))]),s("details",L,[s("summary",null,[e[24]||(e[24]=s("a",{id:"LuxCore.replicate",href:"#LuxCore.replicate"},[s("span",{class:"jlbinding"},"LuxCore.replicate")],-1)),e[25]||(e[25]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[26]||(e[26]=a('
julia
replicate(rng::AbstractRNG)

Creates a copy of the rng state depending on its type.

source

',3))]),s("details",x,[s("summary",null,[e[27]||(e[27]=s("a",{id:"LuxCore.setup",href:"#LuxCore.setup"},[s("span",{class:"jlbinding"},"LuxCore.setup")],-1)),e[28]||(e[28]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[29]||(e[29]=a('
julia
setup(rng::AbstractRNG, layer)

Shorthand for getting the parameters and states of the layer l. Equivalent to (initialparameters(rng, l), initialstates(rng, l)).

Warning

This function is not pure; it mutates rng.

source
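
The equivalence above can be sketched as follows (a hedged example assuming Lux.jl is installed; the layer used is illustrative):

```julia
# Hedged sketch: setup is shorthand for initialparameters + initialstates.
using Lux, Random

rng = Random.default_rng()
layer = Dense(4 => 3)

ps, st = Lux.setup(rng, layer)
# ...which is equivalent to:
# ps = LuxCore.initialparameters(rng, layer)
# st = LuxCore.initialstates(rng, layer)
```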

',4))]),e[56]||(e[56]=s("h2",{id:"parameters",tabindex:"-1"},[t("Parameters "),s("a",{class:"header-anchor",href:"#parameters","aria-label":'Permalink to "Parameters"'},"​")],-1)),s("details",f,[s("summary",null,[e[30]||(e[30]=s("a",{id:"LuxCore.initialparameters",href:"#LuxCore.initialparameters"},[s("span",{class:"jlbinding"},"LuxCore.initialparameters")],-1)),e[31]||(e[31]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[32]||(e[32]=a('
julia
initialparameters(rng::AbstractRNG, layer)

Generate the initial parameters of the layer l.

source

',3))]),s("details",C,[s("summary",null,[e[33]||(e[33]=s("a",{id:"LuxCore.parameterlength",href:"#LuxCore.parameterlength"},[s("span",{class:"jlbinding"},"LuxCore.parameterlength")],-1)),e[34]||(e[34]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[35]||(e[35]=a('
julia
parameterlength(layer)

Return the total number of parameters of the layer l.

source

',3))]),e[57]||(e[57]=s("h2",{id:"states",tabindex:"-1"},[t("States "),s("a",{class:"header-anchor",href:"#states","aria-label":'Permalink to "States"'},"​")],-1)),s("details",j,[s("summary",null,[e[36]||(e[36]=s("a",{id:"LuxCore.initialstates",href:"#LuxCore.initialstates"},[s("span",{class:"jlbinding"},"LuxCore.initialstates")],-1)),e[37]||(e[37]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[38]||(e[38]=a('
julia
initialstates(rng::AbstractRNG, layer)

Generate the initial states of the layer l.

source

',3))]),s("details",v,[s("summary",null,[e[39]||(e[39]=s("a",{id:"LuxCore.statelength",href:"#LuxCore.statelength"},[s("span",{class:"jlbinding"},"LuxCore.statelength")],-1)),e[40]||(e[40]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[41]||(e[41]=a('
julia
statelength(layer)

Return the total number of states of the layer l.

source

',3))]),s("details",E,[s("summary",null,[e[42]||(e[42]=s("a",{id:"LuxCore.testmode",href:"#LuxCore.testmode"},[s("span",{class:"jlbinding"},"LuxCore.testmode")],-1)),e[43]||(e[43]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[44]||(e[44]=a('
julia
testmode(st::NamedTuple)

Make all occurrences of training in state st take the value Val(false).

source
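
A hedged sketch of toggling between test and train mode on a model with a Dropout layer (assumes Lux.jl is installed; the model here is illustrative):

```julia
# Hedged sketch: flip the `training` entries in the state with
# testmode / trainmode.
using Lux, Random

model = Chain(Dense(4 => 4), Dropout(0.5))
ps, st = Lux.setup(Random.default_rng(), model)

st_test  = Lux.testmode(st)        # all `training` entries -> Val(false)
st_train = Lux.trainmode(st_test)  # all `training` entries -> Val(true)
```

This is the standard way to disable stochastic layers such as Dropout at inference time without mutating the model itself.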

',3))]),s("details",F,[s("summary",null,[e[45]||(e[45]=s("a",{id:"LuxCore.trainmode",href:"#LuxCore.trainmode"},[s("span",{class:"jlbinding"},"LuxCore.trainmode")],-1)),e[46]||(e[46]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[47]||(e[47]=a('
julia
trainmode(st::NamedTuple)

Make all occurrences of training in state st take the value Val(true).

source

',3))]),s("details",A,[s("summary",null,[e[48]||(e[48]=s("a",{id:"LuxCore.update_state",href:"#LuxCore.update_state"},[s("span",{class:"jlbinding"},"LuxCore.update_state")],-1)),e[49]||(e[49]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[50]||(e[50]=a('
julia
update_state(st::NamedTuple, key::Symbol, value; exclude=Internal.isleaf)

Recursively update all occurrences of the key in the state st with the value. exclude is a function that is passed to Functors.fmap_with_path's exclude keyword.

Needs Functors.jl

This function requires Functors.jl to be loaded.

source

',4))]),e[58]||(e[58]=s("h2",{id:"Layer-size",tabindex:"-1"},[t("Layer size "),s("a",{class:"header-anchor",href:"#Layer-size","aria-label":'Permalink to "Layer size {#Layer-size}"'},"​")],-1)),s("details",B,[s("summary",null,[e[51]||(e[51]=s("a",{id:"LuxCore.outputsize",href:"#LuxCore.outputsize"},[s("span",{class:"jlbinding"},"LuxCore.outputsize")],-1)),e[52]||(e[52]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[53]||(e[53]=a('
julia
outputsize(layer, x, rng)

Return the output size of the layer.

The fallback implementation of this function assumes the inputs were batched, i.e., if any of the outputs are Arrays with ndims(A) > 1, it will return size(A)[1:(end - 1)]. If this behavior is undesirable, provide a custom outputsize(layer, x, rng) implementation.

Fallback Implementation

The fallback implementation of this function is defined once Lux.jl is loaded.

Changes from Pre-1.0 Behavior

Previously it was possible to override this function by defining outputsize(layer). However, this can potentially introduce a bug that is hard to bypass. See this PR for more information.

source

',6))])])}const z=o(p,[["render",T]]);export{R as __pageData,z as default}; diff --git a/dev/assets/api_Building_Blocks_LuxCore.md.BGgzNXbc.js b/dev/assets/api_Building_Blocks_LuxCore.md.CsmzM99Z.js similarity index 89% rename from dev/assets/api_Building_Blocks_LuxCore.md.BGgzNXbc.js rename to dev/assets/api_Building_Blocks_LuxCore.md.CsmzM99Z.js index e07ccadc1f..4728496d6f 100644 --- a/dev/assets/api_Building_Blocks_LuxCore.md.BGgzNXbc.js +++ b/dev/assets/api_Building_Blocks_LuxCore.md.CsmzM99Z.js @@ -1 +1 @@ -import{_ as o,c as n,a2 as a,j as s,a as t,G as l,B as r,o as d}from"./chunks/framework.I-x9Gl6h.js";const R=JSON.parse('{"title":"LuxCore","description":"","frontmatter":{},"headers":[],"relativePath":"api/Building_Blocks/LuxCore.md","filePath":"api/Building_Blocks/LuxCore.md","lastUpdated":null}'),p={name:"api/Building_Blocks/LuxCore.md"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"};function T(D,e,w,N,O,S){const i=r("Badge");return d(),n("div",null,[e[54]||(e[54]=a('

LuxCore

LuxCore.jl defines the abstract layers for Lux, allowing users to be compatible with the entirety of Lux.jl without taking on such a heavy dependency. If you are depending on Lux.jl directly, you do not need to depend on LuxCore.jl (all the functionality is exported via Lux.jl).

Abstract Types

',3)),s("details",c,[s("summary",null,[e[0]||(e[0]=s("a",{id:"LuxCore.AbstractLuxLayer",href:"#LuxCore.AbstractLuxLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxLayer")],-1)),e[1]||(e[1]=t()),l(i,{type:"info",class:"jlObjectType jlType",text:"Type"})]),e[2]||(e[2]=a('
julia
abstract type AbstractLuxLayer

Abstract Type for all Lux Layers

Users implementing their custom layer must implement

Optionally:

See also AbstractLuxContainerLayer

source

',8))]),s("details",u,[s("summary",null,[e[3]||(e[3]=s("a",{id:"LuxCore.AbstractLuxWrapperLayer",href:"#LuxCore.AbstractLuxWrapperLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxWrapperLayer")],-1)),e[4]||(e[4]=t()),l(i,{type:"info",class:"jlObjectType jlType",text:"Type"})]),e[5]||(e[5]=a('
julia
abstract type AbstractLuxWrapperLayer{layer} <: AbstractLuxLayer

See AbstractLuxContainerLayer for detailed documentation. This abstract type is very similar to AbstractLuxContainerLayer except that it allows for a single layer to be wrapped in a container.

Additionally, on calling initialparameters and initialstates, the parameters and states are not wrapped in a NamedTuple with the same name as the field.

As a convenience, we define the fallback call (::AbstractLuxWrapperLayer)(x, ps, st), which calls getfield(x, layer)(x, ps, st).

source

',5))]),s("details",h,[s("summary",null,[e[6]||(e[6]=s("a",{id:"LuxCore.AbstractLuxContainerLayer",href:"#LuxCore.AbstractLuxContainerLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxContainerLayer")],-1)),e[7]||(e[7]=t()),l(i,{type:"info",class:"jlObjectType jlType",text:"Type"})]),e[8]||(e[8]=a('
julia
abstract type AbstractLuxContainerLayer{layers} <: AbstractLuxLayer

Abstract Container Type for certain Lux Layers. layers is a tuple containing the fieldnames of the layer; the parameters and states are constructed from those fields.

Users implementing their custom layer can extend the same functions as in AbstractLuxLayer.

Advanced Structure Manipulation

Advanced structure manipulation of these layers post construction is possible via Functors.fmap. For a more flexible interface, we recommend using Lux.Experimental.@layer_map.

fmap Support

fmap support needs to be explicitly enabled by loading Functors.jl and Setfield.jl.

Changes from Pre-1.0 Behavior

Previously if layers was a singleton tuple, initialparameters and initialstates would return the parameters and states for the single field layers. From v1.0.0 onwards, even for singleton tuples, the parameters/states are wrapped in a NamedTuple with the same name as the field. See AbstractLuxWrapperLayer to replicate the previous behavior of singleton tuples.

source

',7))]),e[55]||(e[55]=s("h2",{id:"general",tabindex:"-1"},[t("General "),s("a",{class:"header-anchor",href:"#general","aria-label":'Permalink to "General"'},"​")],-1)),s("details",b,[s("summary",null,[e[9]||(e[9]=s("a",{id:"LuxCore.apply",href:"#LuxCore.apply"},[s("span",{class:"jlbinding"},"LuxCore.apply")],-1)),e[10]||(e[10]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[11]||(e[11]=a('
julia
apply(model, x, ps, st)

In most cases this function simply calls model(x, ps, st). However, it is still recommended to call apply instead of model(x, ps, st) directly. Some of the reasons for this include:

  1. For certain types of inputs x, we might want to perform preprocessing before calling model. For example, if x is an Array of ReverseDiff.TrackedReals, this can cause significant regressions in model(x, ps, st) (since it won't hit any of the BLAS dispatches). In those cases, we would automatically convert x to a ReverseDiff.TrackedArray.

  2. Certain user-defined inputs need to be applied to specific layers, but we want the datatype to propagate through all the layers (even unsupported ones). In these cases, we can unpack the input in apply, pass it to the appropriate layer, and then repack it before returning. See the Lux manual on Custom Input Types for a motivating example.

Tip

apply is integrated with DispatchDoctor.jl, which allows automatic verification of type stability. By default this is "disable"d. For more information, see the documentation.

source

',5))]),s("details",k,[s("summary",null,[e[12]||(e[12]=s("a",{id:"LuxCore.stateless_apply",href:"#LuxCore.stateless_apply"},[s("span",{class:"jlbinding"},"LuxCore.stateless_apply")],-1)),e[13]||(e[13]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[14]||(e[14]=a('
julia
stateless_apply(model, x, ps)

Calls apply and only returns the first argument. This function requires that model has an empty state of NamedTuple(). Behavior of other kinds of models is undefined, and it is the responsibility of the user to ensure that the model has an empty state.

source

',3))]),s("details",g,[s("summary",null,[e[15]||(e[15]=s("a",{id:"LuxCore.check_fmap_condition",href:"#LuxCore.check_fmap_condition"},[s("span",{class:"jlbinding"},"LuxCore.check_fmap_condition")],-1)),e[16]||(e[16]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[17]||(e[17]=a('
julia
check_fmap_condition(cond, tmatch::Union{Type, Nothing}, x) -> Bool

fmaps into the structure x and checks if cond is satisfied for any of the leaf elements.

Arguments

Returns

A Boolean Value

source

',7))]),s("details",y,[s("summary",null,[e[18]||(e[18]=s("a",{id:"LuxCore.contains_lux_layer",href:"#LuxCore.contains_lux_layer"},[s("span",{class:"jlbinding"},"LuxCore.contains_lux_layer")],-1)),e[19]||(e[19]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[20]||(e[20]=a('
julia
contains_lux_layer(l) -> Bool

Check if the structure l is a Lux AbstractLuxLayer or a container of such a layer.

source

',3))]),s("details",m,[s("summary",null,[e[21]||(e[21]=s("a",{id:"LuxCore.display_name",href:"#LuxCore.display_name"},[s("span",{class:"jlbinding"},"LuxCore.display_name")],-1)),e[22]||(e[22]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[23]||(e[23]=a('
julia
display_name(layer::AbstractLuxLayer)

Printed name of the layer. If the layer has a field name, that is used; otherwise the type name is used.

source

',3))]),s("details",L,[s("summary",null,[e[24]||(e[24]=s("a",{id:"LuxCore.replicate",href:"#LuxCore.replicate"},[s("span",{class:"jlbinding"},"LuxCore.replicate")],-1)),e[25]||(e[25]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[26]||(e[26]=a('
julia
replicate(rng::AbstractRNG)

Creates a copy of the rng state depending on its type.

source

',3))]),s("details",x,[s("summary",null,[e[27]||(e[27]=s("a",{id:"LuxCore.setup",href:"#LuxCore.setup"},[s("span",{class:"jlbinding"},"LuxCore.setup")],-1)),e[28]||(e[28]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[29]||(e[29]=a('
julia
setup(rng::AbstractRNG, layer)

Shorthand for getting the parameters and states of the layer l. Equivalent to (initialparameters(rng, l), initialstates(rng, l)).

Warning

This function is not pure; it mutates rng.

source

',4))]),e[56]||(e[56]=s("h2",{id:"parameters",tabindex:"-1"},[t("Parameters "),s("a",{class:"header-anchor",href:"#parameters","aria-label":'Permalink to "Parameters"'},"​")],-1)),s("details",f,[s("summary",null,[e[30]||(e[30]=s("a",{id:"LuxCore.initialparameters",href:"#LuxCore.initialparameters"},[s("span",{class:"jlbinding"},"LuxCore.initialparameters")],-1)),e[31]||(e[31]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[32]||(e[32]=a('
julia
initialparameters(rng::AbstractRNG, layer)

Generate the initial parameters of the layer l.

source

',3))]),s("details",C,[s("summary",null,[e[33]||(e[33]=s("a",{id:"LuxCore.parameterlength",href:"#LuxCore.parameterlength"},[s("span",{class:"jlbinding"},"LuxCore.parameterlength")],-1)),e[34]||(e[34]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[35]||(e[35]=a('
julia
parameterlength(layer)

Return the total number of parameters of the layer l.

source

',3))]),e[57]||(e[57]=s("h2",{id:"states",tabindex:"-1"},[t("States "),s("a",{class:"header-anchor",href:"#states","aria-label":'Permalink to "States"'},"​")],-1)),s("details",j,[s("summary",null,[e[36]||(e[36]=s("a",{id:"LuxCore.initialstates",href:"#LuxCore.initialstates"},[s("span",{class:"jlbinding"},"LuxCore.initialstates")],-1)),e[37]||(e[37]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[38]||(e[38]=a('
julia
initialstates(rng::AbstractRNG, layer)

Generate the initial states of the layer l.

source

',3))]),s("details",v,[s("summary",null,[e[39]||(e[39]=s("a",{id:"LuxCore.statelength",href:"#LuxCore.statelength"},[s("span",{class:"jlbinding"},"LuxCore.statelength")],-1)),e[40]||(e[40]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[41]||(e[41]=a('
julia
statelength(layer)

Return the total number of states of the layer l.

source

',3))]),s("details",E,[s("summary",null,[e[42]||(e[42]=s("a",{id:"LuxCore.testmode",href:"#LuxCore.testmode"},[s("span",{class:"jlbinding"},"LuxCore.testmode")],-1)),e[43]||(e[43]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[44]||(e[44]=a('
julia
testmode(st::NamedTuple)

Make all occurrences of training in state st take the value Val(false).

source

',3))]),s("details",F,[s("summary",null,[e[45]||(e[45]=s("a",{id:"LuxCore.trainmode",href:"#LuxCore.trainmode"},[s("span",{class:"jlbinding"},"LuxCore.trainmode")],-1)),e[46]||(e[46]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[47]||(e[47]=a('
julia
trainmode(st::NamedTuple)

Make all occurrences of training in state st take the value Val(true).

source

',3))]),s("details",A,[s("summary",null,[e[48]||(e[48]=s("a",{id:"LuxCore.update_state",href:"#LuxCore.update_state"},[s("span",{class:"jlbinding"},"LuxCore.update_state")],-1)),e[49]||(e[49]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[50]||(e[50]=a('
julia
update_state(st::NamedTuple, key::Symbol, value; exclude=Internal.isleaf)

Recursively update all occurrences of the key in the state st with the value. exclude is a function that is passed to Functors.fmap_with_path's exclude keyword.

Needs Functors.jl

This function requires Functors.jl to be loaded.

source

',4))]),e[58]||(e[58]=s("h2",{id:"Layer-size",tabindex:"-1"},[t("Layer size "),s("a",{class:"header-anchor",href:"#Layer-size","aria-label":'Permalink to "Layer size {#Layer-size}"'},"​")],-1)),s("details",B,[s("summary",null,[e[51]||(e[51]=s("a",{id:"LuxCore.outputsize",href:"#LuxCore.outputsize"},[s("span",{class:"jlbinding"},"LuxCore.outputsize")],-1)),e[52]||(e[52]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[53]||(e[53]=a('
julia
outputsize(layer, x, rng)

Return the output size of the layer.

The fallback implementation of this function assumes the inputs were batched, i.e., if any of the outputs are Arrays with ndims(A) > 1, it will return size(A)[1:(end - 1)]. If this behavior is undesirable, provide a custom outputsize(layer, x, rng) implementation.

Fallback Implementation

The fallback implementation of this function is defined once Lux.jl is loaded.

Changes from Pre-1.0 Behavior

Previously it was possible to override this function by defining outputsize(layer). However, this can potentially introduce a bug that is hard to bypass. See this PR for more information.

source

',6))])])}const z=o(p,[["render",T]]);export{R as __pageData,z as default}; +import{_ as o,c as n,a2 as a,j as s,a as t,G as l,B as r,o as p}from"./chunks/framework.BetCMmtc.js";const N=JSON.parse('{"title":"LuxCore","description":"","frontmatter":{},"headers":[],"relativePath":"api/Building_Blocks/LuxCore.md","filePath":"api/Building_Blocks/LuxCore.md","lastUpdated":null}'),d={name:"api/Building_Blocks/LuxCore.md"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"};function F(S,e,B,D,P,I){const i=r("Badge");return p(),n("div",null,[e[54]||(e[54]=a('

LuxCore

LuxCore.jl defines the abstract layers for Lux. It allows users to be compatible with the entirety of Lux.jl without taking on such a heavy dependency. If you are depending on Lux.jl directly, you do not need to depend on LuxCore.jl (all the functionality is exported via Lux.jl).

Abstract Types

',3)),s("details",c,[s("summary",null,[e[0]||(e[0]=s("a",{id:"LuxCore.AbstractLuxLayer",href:"#LuxCore.AbstractLuxLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxLayer")],-1)),e[1]||(e[1]=t()),l(i,{type:"info",class:"jlObjectType jlType",text:"Type"})]),e[2]||(e[2]=a('
julia
abstract type AbstractLuxLayer

Abstract Type for all Lux Layers

Users implementing their custom layer must implement

Optionally:

See also AbstractLuxContainerLayer

source

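As a minimal sketch of this interface (assuming LuxCore and Random are available; the layer name and fields are invented for illustration), a custom layer defines its structure, its parameter/state initializers, and a forward pass:

```julia
using LuxCore, Random

# A hypothetical minimal dense-like layer; `in_dims`/`out_dims` are illustrative names.
struct MyDense <: LuxCore.AbstractLuxLayer
    in_dims::Int
    out_dims::Int
end

# Parameters are returned as a NamedTuple; the RNG controls initialization.
function LuxCore.initialparameters(rng::AbstractRNG, l::MyDense)
    return (weight=randn(rng, Float32, l.out_dims, l.in_dims),)
end

# This layer is stateless, so its states are an empty NamedTuple.
LuxCore.initialstates(::AbstractRNG, ::MyDense) = NamedTuple()

# The forward pass returns (output, new_state).
(l::MyDense)(x, ps, st) = (ps.weight * x, st)
```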
',8))]),s("details",u,[s("summary",null,[e[3]||(e[3]=s("a",{id:"LuxCore.AbstractLuxWrapperLayer",href:"#LuxCore.AbstractLuxWrapperLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxWrapperLayer")],-1)),e[4]||(e[4]=t()),l(i,{type:"info",class:"jlObjectType jlType",text:"Type"})]),e[5]||(e[5]=a('
julia
abstract type AbstractLuxWrapperLayer{layer} <: AbstractLuxLayer

See AbstractLuxContainerLayer for detailed documentation. This abstract type is very similar to AbstractLuxContainerLayer except that it allows for a single layer to be wrapped in a container.

Additionally, on calling initialparameters and initialstates, the parameters and states are not wrapped in a NamedTuple with the same name as the field.

As a convenience, we define the fallback call (::AbstractLuxWrapperLayer)(x, ps, st), which calls getfield(model, layer)(x, ps, st).

source

',5))]),s("details",h,[s("summary",null,[e[6]||(e[6]=s("a",{id:"LuxCore.AbstractLuxContainerLayer",href:"#LuxCore.AbstractLuxContainerLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxContainerLayer")],-1)),e[7]||(e[7]=t()),l(i,{type:"info",class:"jlObjectType jlType",text:"Type"})]),e[8]||(e[8]=a('
julia
abstract type AbstractLuxContainerLayer{layers} <: AbstractLuxLayer

Abstract Container Type for certain Lux Layers. layers is a tuple containing the fieldnames of the layer; the parameters and states are constructed from those fields.

Users implementing their custom layer can extend the same functions as in AbstractLuxLayer.

Advanced Structure Manipulation

Advanced structure manipulation of these layers post construction is possible via Functors.fmap. For a more flexible interface, we recommend using Lux.Experimental.@layer_map.

fmap Support

fmap support needs to be explicitly enabled by loading Functors.jl and Setfield.jl.

Changes from Pre-1.0 Behavior

Previously, if layers was a singleton tuple, initialparameters and initialstates would return the parameters and states for the single field in layers. From v1.0.0 onwards, even for singleton tuples, the parameters/states are wrapped in a NamedTuple with the same name as the field. See AbstractLuxWrapperLayer to replicate the previous behavior of singleton tuples.

source

',7))]),e[55]||(e[55]=s("h2",{id:"general",tabindex:"-1"},[t("General "),s("a",{class:"header-anchor",href:"#general","aria-label":'Permalink to "General"'},"​")],-1)),s("details",b,[s("summary",null,[e[9]||(e[9]=s("a",{id:"LuxCore.apply",href:"#LuxCore.apply"},[s("span",{class:"jlbinding"},"LuxCore.apply")],-1)),e[10]||(e[10]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[11]||(e[11]=a('
julia
apply(model, x, ps, st)

In most cases this function simply calls model(x, ps, st). However, it is still recommended to call apply instead of model(x, ps, st) directly. Some of the reasons for this include:

  1. For certain types of inputs x, we might want to perform preprocessing before calling model. For example, if x is an Array of ReverseDiff.TrackedReals, this can cause significant regressions in model(x, ps, st) (since it won't hit any of the BLAS dispatches). In those cases, we would automatically convert x to a ReverseDiff.TrackedArray.

  2. Certain user-defined inputs need to be applied to specific layers, but we want the datatype to propagate through all the layers (even unsupported ones). In these cases, we can unpack the input in apply, pass it to the appropriate layer, and then repack it before returning. See the Lux manual on Custom Input Types for a motivating example.

Tip

apply is integrated with DispatchDoctor.jl, which allows automatic verification of type stability. By default, this is disabled. For more information, see the documentation.

source

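A short sketch of the recommended call pattern (the trivial layer below is defined purely for illustration and assumes LuxCore and Random are loaded):

```julia
using LuxCore, Random

# A trivial stateless layer, defined here only so the example is self-contained.
struct PassThrough <: LuxCore.AbstractLuxLayer end
(::PassThrough)(x, ps, st) = (x, st)

model = PassThrough()
ps, st = LuxCore.setup(Xoshiro(0), model)    # empty NamedTuples for this layer
x = ones(Float32, 3)
y, st_new = LuxCore.apply(model, x, ps, st)  # preferred over model(x, ps, st)
```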
',5))]),s("details",k,[s("summary",null,[e[12]||(e[12]=s("a",{id:"LuxCore.stateless_apply",href:"#LuxCore.stateless_apply"},[s("span",{class:"jlbinding"},"LuxCore.stateless_apply")],-1)),e[13]||(e[13]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[14]||(e[14]=a('
julia
stateless_apply(model, x, ps)

Calls apply and only returns the first argument. This function requires that model has an empty state of NamedTuple(). The behavior of other kinds of models is undefined, and it is the responsibility of the user to ensure that the model has an empty state.

source

',3))]),s("details",g,[s("summary",null,[e[15]||(e[15]=s("a",{id:"LuxCore.check_fmap_condition",href:"#LuxCore.check_fmap_condition"},[s("span",{class:"jlbinding"},"LuxCore.check_fmap_condition")],-1)),e[16]||(e[16]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[17]||(e[17]=a('
julia
check_fmap_condition(cond, tmatch::Union{Type, Nothing}, x) -> Bool

fmaps into the structure x and checks whether cond is satisfied for any of the leaf elements.

Arguments

Returns

A Boolean Value

source

',7))]),s("details",y,[s("summary",null,[e[18]||(e[18]=s("a",{id:"LuxCore.contains_lux_layer",href:"#LuxCore.contains_lux_layer"},[s("span",{class:"jlbinding"},"LuxCore.contains_lux_layer")],-1)),e[19]||(e[19]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[20]||(e[20]=a('
julia
contains_lux_layer(l) -> Bool

Check if the structure l is a Lux AbstractLuxLayer or a container of such a layer.

source

',3))]),s("details",m,[s("summary",null,[e[21]||(e[21]=s("a",{id:"LuxCore.display_name",href:"#LuxCore.display_name"},[s("span",{class:"jlbinding"},"LuxCore.display_name")],-1)),e[22]||(e[22]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[23]||(e[23]=a('
julia
display_name(layer::AbstractLuxLayer)

Printed name of the layer. If the layer has a field name, that is used; otherwise the type name is used.

source

',3))]),s("details",L,[s("summary",null,[e[24]||(e[24]=s("a",{id:"LuxCore.replicate",href:"#LuxCore.replicate"},[s("span",{class:"jlbinding"},"LuxCore.replicate")],-1)),e[25]||(e[25]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[26]||(e[26]=a('
julia
replicate(rng::AbstractRNG)

Creates a copy of the rng state depending on its type.

source

',3))]),s("details",x,[s("summary",null,[e[27]||(e[27]=s("a",{id:"LuxCore.setup",href:"#LuxCore.setup"},[s("span",{class:"jlbinding"},"LuxCore.setup")],-1)),e[28]||(e[28]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[29]||(e[29]=a('
julia
setup(rng::AbstractRNG, layer)

Shorthand for getting the parameters and states of the layer l. Equivalent to (initialparameters(rng, l), initialstates(rng, l)).

Warning

This function is not pure; it mutates rng.

source

',4))]),e[56]||(e[56]=s("h2",{id:"parameters",tabindex:"-1"},[t("Parameters "),s("a",{class:"header-anchor",href:"#parameters","aria-label":'Permalink to "Parameters"'},"​")],-1)),s("details",C,[s("summary",null,[e[30]||(e[30]=s("a",{id:"LuxCore.initialparameters",href:"#LuxCore.initialparameters"},[s("span",{class:"jlbinding"},"LuxCore.initialparameters")],-1)),e[31]||(e[31]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[32]||(e[32]=a('
julia
initialparameters(rng::AbstractRNG, layer)

Generate the initial parameters of the layer l.

source

',3))]),s("details",_,[s("summary",null,[e[33]||(e[33]=s("a",{id:"LuxCore.parameterlength",href:"#LuxCore.parameterlength"},[s("span",{class:"jlbinding"},"LuxCore.parameterlength")],-1)),e[34]||(e[34]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[35]||(e[35]=a('
julia
parameterlength(layer)

Return the total number of parameters of the layer l.

source

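For example (a sketch, assuming the NamedTuple/Array methods LuxCore defines for counting parameters):

```julia
using LuxCore

# parameterlength on a parameter container counts the total number of scalar entries.
ps = (weight=zeros(Float32, 4, 3), bias=zeros(Float32, 4))
n = LuxCore.parameterlength(ps)  # 12 weight entries + 4 bias entries
```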
',3))]),e[57]||(e[57]=s("h2",{id:"states",tabindex:"-1"},[t("States "),s("a",{class:"header-anchor",href:"#states","aria-label":'Permalink to "States"'},"​")],-1)),s("details",f,[s("summary",null,[e[36]||(e[36]=s("a",{id:"LuxCore.initialstates",href:"#LuxCore.initialstates"},[s("span",{class:"jlbinding"},"LuxCore.initialstates")],-1)),e[37]||(e[37]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[38]||(e[38]=a('
julia
initialstates(rng::AbstractRNG, layer)

Generate the initial states of the layer l.

source

',3))]),s("details",j,[s("summary",null,[e[39]||(e[39]=s("a",{id:"LuxCore.statelength",href:"#LuxCore.statelength"},[s("span",{class:"jlbinding"},"LuxCore.statelength")],-1)),e[40]||(e[40]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[41]||(e[41]=a('
julia
statelength(layer)

Return the total number of states of the layer l.

source

',3))]),s("details",T,[s("summary",null,[e[42]||(e[42]=s("a",{id:"LuxCore.testmode",href:"#LuxCore.testmode"},[s("span",{class:"jlbinding"},"LuxCore.testmode")],-1)),e[43]||(e[43]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[44]||(e[44]=a('
julia
testmode(st::NamedTuple)

Set all occurrences of training in the state st to Val(false).

source

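A minimal sketch of flipping a state tree into test mode (the state layout below is hypothetical, modeled on what layers like Dropout produce; Functors.jl is loaded because update_state, which testmode uses, requires it):

```julia
using LuxCore
using Functors  # required by update_state, which testmode/trainmode build on

# A hypothetical state tree with a `training` flag.
st = (dropout=(training=Val(true), p=0.5f0),)

st_test = LuxCore.testmode(st)  # every `training` entry becomes Val(false)
```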
',3))]),s("details",v,[s("summary",null,[e[45]||(e[45]=s("a",{id:"LuxCore.trainmode",href:"#LuxCore.trainmode"},[s("span",{class:"jlbinding"},"LuxCore.trainmode")],-1)),e[46]||(e[46]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[47]||(e[47]=a('
julia
trainmode(st::NamedTuple)

Set all occurrences of training in the state st to Val(true).

source

',3))]),s("details",E,[s("summary",null,[e[48]||(e[48]=s("a",{id:"LuxCore.update_state",href:"#LuxCore.update_state"},[s("span",{class:"jlbinding"},"LuxCore.update_state")],-1)),e[49]||(e[49]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[50]||(e[50]=a('
julia
update_state(st::NamedTuple, key::Symbol, value; exclude=Internal.isleaf)

Recursively update all occurrences of the key in the state st with the value. exclude is a function that is passed to Functors.fmap_with_path's exclude keyword.

Needs Functors.jl

This function requires Functors.jl to be loaded.

source

',4))]),e[58]||(e[58]=s("h2",{id:"Layer-size",tabindex:"-1"},[t("Layer size "),s("a",{class:"header-anchor",href:"#Layer-size","aria-label":'Permalink to "Layer size {#Layer-size}"'},"​")],-1)),s("details",A,[s("summary",null,[e[51]||(e[51]=s("a",{id:"LuxCore.outputsize",href:"#LuxCore.outputsize"},[s("span",{class:"jlbinding"},"LuxCore.outputsize")],-1)),e[52]||(e[52]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[53]||(e[53]=a('
julia
outputsize(layer, x, rng)

Return the output size of the layer.

The fallback implementation of this function assumes the inputs are batched, i.e., if any of the outputs is an Array A with ndims(A) > 1, it returns size(A)[1:(end - 1)]. If this behavior is undesirable, provide a custom outputsize(layer, x, rng) implementation.

Fallback Implementation

The fallback implementation of this function is defined once Lux.jl is loaded.

Changes from Pre-1.0 Behavior

Previously it was possible to override this function by defining outputsize(layer). However, this can potentially introduce a bug that is hard to bypass. See this PR for more information.

source

',6))])])}const w=o(d,[["render",F]]);export{N as __pageData,w as default}; diff --git a/dev/assets/api_Building_Blocks_LuxCore.md.CsmzM99Z.lean.js b/dev/assets/api_Building_Blocks_LuxCore.md.CsmzM99Z.lean.js new file mode 100644 index 0000000000..eb4cfd1c08 --- /dev/null +++ b/dev/assets/api_Building_Blocks_LuxCore.md.CsmzM99Z.lean.js @@ -0,0 +1 @@ +import{_ as o,c as n,a2 as a,j as s,a as t,G as l,B as r,o as p}from"./chunks/framework.BetCMmtc.js";const N=JSON.parse('{"title":"LuxCore","description":"","frontmatter":{},"headers":[],"relativePath":"api/Building_Blocks/LuxCore.md","filePath":"api/Building_Blocks/LuxCore.md","lastUpdated":null}'),d={name:"api/Building_Blocks/LuxCore.md"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"};function F(S,e,B,D,P,I){const i=r("Badge");return p(),n("div",null,[e[54]||(e[54]=a("",3)),s("details",c,[s("summary",null,[e[0]||(e[0]=s("a",{id:"LuxCore.AbstractLuxLayer",href:"#LuxCore.AbstractLuxLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxLayer")],-1)),e[1]||(e[1]=t()),l(i,{type:"info",class:"jlObjectType jlType",text:"Type"})]),e[2]||(e[2]=a("",8))]),s("details",u,[s("summary",null,[e[3]||(e[3]=s("a",{id:"LuxCore.AbstractLuxWrapperLayer",href:"#LuxCore.AbstractLuxWrapperLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxWrapperLayer")],-1)),e[4]||(e[4]=t()),l(i,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),e[5]||(e[5]=a("",5))]),s("details",h,[s("summary",null,[e[6]||(e[6]=s("a",{id:"LuxCore.AbstractLuxContainerLayer",href:"#LuxCore.AbstractLuxContainerLayer"},[s("span",{class:"jlbinding"},"LuxCore.AbstractLuxContainerLayer")],-1)),e[7]||(e[7]=t()),l(i,{type:"info",class:"jlObjectType jlType",text:"Type"})]),e[8]||(e[8]=a("",7))]),e[55]||(e[55]=s("h2",{id:"general",tabindex:"-1"},[t("General "),s("a",{class:"header-anchor",href:"#general","aria-label":'Permalink to "General"'},"​")],-1)),s("details",b,[s("summary",null,[e[9]||(e[9]=s("a",{id:"LuxCore.apply",href:"#LuxCore.apply"},[s("span",{class:"jlbinding"},"LuxCore.apply")],-1)),e[10]||(e[10]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[11]||(e[11]=a("",5))]),s("details",k,[s("summary",null,[e[12]||(e[12]=s("a",{id:"LuxCore.stateless_apply",href:"#LuxCore.stateless_apply"},[s("span",{class:"jlbinding"},"LuxCore.stateless_apply")],-1)),e[13]||(e[13]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[14]||(e[14]=a("",3))]),s("details",g,[s("summary",null,[e[15]||(e[15]=s("a",{id:"LuxCore.check_fmap_condition",href:"#LuxCore.check_fmap_condition"},[s("span",{class:"jlbinding"},"LuxCore.check_fmap_condition")],-1)),e[16]||(e[16]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[17]||(e[17]=a("",7))]),s("details",y,[s("summary",null,[e[18]||(e[18]=s("a",{id:"LuxCore.contains_lux_layer",href:"#LuxCore.contains_lux_layer"},[s("span",{class:"jlbinding"},"LuxCore.contains_lux_layer")],-1)),e[19]||(e[19]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[20]||(e[20]=a("",3))]),s("details",m,[s("summary",null,[e[21]||(e[21]=s("a",{id:"LuxCore.display_name",href:"#LuxCore.display_name"},[s("span",{class:"jlbinding"},"LuxCore.display_name")],-1)),e[22]||(e[22]=t()),l(i,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),e[23]||(e[23]=a("",3))]),s("details",L,[s("summary",null,[e[24]||(e[24]=s("a",{id:"LuxCore.replicate",href:"#LuxCore.replicate"},[s("span",{class:"jlbinding"},"LuxCore.replicate")],-1)),e[25]||(e[25]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[26]||(e[26]=a("",3))]),s("details",x,[s("summary",null,[e[27]||(e[27]=s("a",{id:"LuxCore.setup",href:"#LuxCore.setup"},[s("span",{class:"jlbinding"},"LuxCore.setup")],-1)),e[28]||(e[28]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[29]||(e[29]=a("",4))]),e[56]||(e[56]=s("h2",{id:"parameters",tabindex:"-1"},[t("Parameters "),s("a",{class:"header-anchor",href:"#parameters","aria-label":'Permalink to "Parameters"'},"​")],-1)),s("details",C,[s("summary",null,[e[30]||(e[30]=s("a",{id:"LuxCore.initialparameters",href:"#LuxCore.initialparameters"},[s("span",{class:"jlbinding"},"LuxCore.initialparameters")],-1)),e[31]||(e[31]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[32]||(e[32]=a("",3))]),s("details",_,[s("summary",null,[e[33]||(e[33]=s("a",{id:"LuxCore.parameterlength",href:"#LuxCore.parameterlength"},[s("span",{class:"jlbinding"},"LuxCore.parameterlength")],-1)),e[34]||(e[34]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[35]||(e[35]=a("",3))]),e[57]||(e[57]=s("h2",{id:"states",tabindex:"-1"},[t("States "),s("a",{class:"header-anchor",href:"#states","aria-label":'Permalink to "States"'},"​")],-1)),s("details",f,[s("summary",null,[e[36]||(e[36]=s("a",{id:"LuxCore.initialstates",href:"#LuxCore.initialstates"},[s("span",{class:"jlbinding"},"LuxCore.initialstates")],-1)),e[37]||(e[37]=t()),l(i,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),e[38]||(e[38]=a("",3))]),s("details",j,[s("summary",null,[e[39]||(e[39]=s("a",{id:"LuxCore.statelength",href:"#LuxCore.statelength"},[s("span",{class:"jlbinding"},"LuxCore.statelength")],-1)),e[40]||(e[40]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[41]||(e[41]=a("",3))]),s("details",T,[s("summary",null,[e[42]||(e[42]=s("a",{id:"LuxCore.testmode",href:"#LuxCore.testmode"},[s("span",{class:"jlbinding"},"LuxCore.testmode")],-1)),e[43]||(e[43]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[44]||(e[44]=a("",3))]),s("details",v,[s("summary",null,[e[45]||(e[45]=s("a",{id:"LuxCore.trainmode",href:"#LuxCore.trainmode"},[s("span",{class:"jlbinding"},"LuxCore.trainmode")],-1)),e[46]||(e[46]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[47]||(e[47]=a("",3))]),s("details",E,[s("summary",null,[e[48]||(e[48]=s("a",{id:"LuxCore.update_state",href:"#LuxCore.update_state"},[s("span",{class:"jlbinding"},"LuxCore.update_state")],-1)),e[49]||(e[49]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[50]||(e[50]=a("",4))]),e[58]||(e[58]=s("h2",{id:"Layer-size",tabindex:"-1"},[t("Layer size "),s("a",{class:"header-anchor",href:"#Layer-size","aria-label":'Permalink to "Layer size {#Layer-size}"'},"​")],-1)),s("details",A,[s("summary",null,[e[51]||(e[51]=s("a",{id:"LuxCore.outputsize",href:"#LuxCore.outputsize"},[s("span",{class:"jlbinding"},"LuxCore.outputsize")],-1)),e[52]||(e[52]=t()),l(i,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[53]||(e[53]=a("",6))])])}const w=o(d,[["render",F]]);export{N as __pageData,w as default}; diff --git a/dev/assets/api_Building_Blocks_WeightInitializers.md.CUbJYQgk.lean.js b/dev/assets/api_Building_Blocks_WeightInitializers.md.CUbJYQgk.lean.js deleted file mode 100644 index c6d9d0b839..0000000000 --- a/dev/assets/api_Building_Blocks_WeightInitializers.md.CUbJYQgk.lean.js +++ /dev/null @@ 
-1,52 +0,0 @@ -import{_ as r,c as l,a2 as t,j as s,a,G as e,B as p,o as h}from"./chunks/framework.I-x9Gl6h.js";const Y=JSON.parse('{"title":"WeightInitializers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Building_Blocks/WeightInitializers.md","filePath":"api/Building_Blocks/WeightInitializers.md","lastUpdated":null}'),k={name:"api/Building_Blocks/WeightInitializers.md"},d={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.612ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2922.7 1000","aria-hidden":"true"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},z={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},I={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},W={class:"jldocstring custom-block"},R={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},N={class:"jldocstring custom-block"},G={class:"jldocstring custom-block"},U={class:"jldocstring custom-block"},O={class:"jldocstring custom-block"},M={class:"jldocstring custom-block"},P={class:"jldocstring custom-block"},q={class:"jldocstring custom-block"},H={class:"jldocstring custom-block"};function V(S,i,X,Z,$,J){const n=p("Badge");return h(),l("div",null,[i[109]||(i[109]=t('

WeightInitializers

This package is a light dependency providing common weight initialization schemes for deep learning models.

Supported RNG Types

RNG Type / Package | Returned Array Type | Unsupported Functions
Random.jl | Array |
StableRNGs.jl | Array |
CUDA.CURAND.default_rng() | CuArray |
CUDA.default_rng() | CuArray |
GPUArrays.default_rng(CuArray) | CuArray |
AMDGPU.rocrand_rng() | ROCArray |
AMDGPU.gpuarrays_rng() | ROCArray |
GPUArrays.default_rng(ROCArray) | ROCArray |
Metal.gpuarrays_rng() | MtlArray | orthogonal
GPUArrays.default_rng(MtlArray) | MtlArray | orthogonal
oneAPI.gpuarrays_rng() | oneArray | orthogonal, truncated_normal
GPUArrays.default_rng(oneArray) | oneArray | orthogonal, truncated_normal

API Reference

Main Functions

',6)),s("details",d,[s("summary",null,[i[0]||(i[0]=s("a",{id:"WeightInitializers.glorot_normal",href:"#WeightInitializers.glorot_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.glorot_normal")],-1)),i[1]||(i[1]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[2]||(i[2]=t(`
julia
glorot_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
-    gain = 1) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a normal distribution with standard deviation gain * sqrt(2 / (fan_in + fan_out)). This method is described in [1] and also known as Xavier initialization.

References

[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010.

source

`,5))]),s("details",o,[s("summary",null,[i[3]||(i[3]=s("a",{id:"WeightInitializers.glorot_uniform",href:"#WeightInitializers.glorot_uniform"},[s("span",{class:"jlbinding"},"WeightInitializers.glorot_uniform")],-1)),i[4]||(i[4]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[15]||(i[15]=t(`
julia
glorot_uniform([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
-    gain = 1) -> AbstractArray{T, length(size)}
`,1)),s("p",null,[i[7]||(i[7]=a("Return an ")),i[8]||(i[8]=s("code",null,"AbstractArray{T}",-1)),i[9]||(i[9]=a(" of the given ")),i[10]||(i[10]=s("code",null,"size",-1)),i[11]||(i[11]=a(" containing random numbers drawn from a uniform distribution on the interval ")),s("mjx-container",g,[(h(),l("svg",E,i[5]||(i[5]=[t('',1)]))),i[6]||(i[6]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{stretchy:"false"},"["),s("mo",null,"−"),s("mi",null,"x"),s("mo",null,","),s("mi",null,"x"),s("mo",{stretchy:"false"},"]")])],-1))]),i[12]||(i[12]=a(", where ")),i[13]||(i[13]=s("code",null,"x = gain * sqrt(6 / (fan_in + fan_out))",-1)),i[14]||(i[14]=a(". This method is described in [1] and also known as Xavier initialization."))]),i[16]||(i[16]=s("p",null,[s("strong",null,"References")],-1)),i[17]||(i[17]=s("p",null,[a('[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." '),s("em",null,"Proceedings of the thirteenth international conference on artificial intelligence and statistics"),a(". 2010.")],-1)),i[18]||(i[18]=s("p",null,[s("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/lib/WeightInitializers/src/initializers.jl#L13-L27",target:"_blank",rel:"noreferrer"},"source")],-1))]),s("details",y,[s("summary",null,[i[19]||(i[19]=s("a",{id:"WeightInitializers.identity_init",href:"#WeightInitializers.identity_init"},[s("span",{class:"jlbinding"},"WeightInitializers.identity_init")],-1)),i[20]||(i[20]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[21]||(i[21]=t(`
julia
identity_init([::AbstractRNG=Utils.default_rng()], [T=Float32], size...; gain::Number=1,
-    shift::Union{Integer, Tuple{Integer, Integer}}=0) -> AbstractArray{T}

Constructs an array that aims to provide an identity mapping when used as parameters in most layers of a neural network. The identity mapping is scaled by the gain parameter.

Behavior

Caveats

Arguments

Returns

Examples

julia
julia> identity_init(Xoshiro(123), Float32, 5, 5)
-5×5 Matrix{Float32}:
- 1.0  1.0  1.0  1.0  1.0
- 1.0  1.0  1.0  1.0  1.0
- 1.0  1.0  1.0  1.0  1.0
- 1.0  1.0  1.0  1.0  1.0
- 1.0  1.0  1.0  1.0  1.0
-
-julia> identity_init(Xoshiro(123), Float32, 3, 3, 1, 1; gain=1.5)
-3×3×1×1 Array{Float32, 4}:
-[:, :, 1, 1] =
- 0.0  0.0  0.0
- 0.0  1.5  0.0
- 0.0  0.0  0.0

source

`,13))]),s("details",c,[s("summary",null,[i[22]||(i[22]=s("a",{id:"WeightInitializers.kaiming_normal",href:"#WeightInitializers.kaiming_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.kaiming_normal")],-1)),i[23]||(i[23]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[24]||(i[24]=t(`
julia
kaiming_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
-    gain =T(2)) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a normal distribution with standard deviation gain / sqrt(fan_in)

References

[1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." Proceedings of the IEEE international conference on computer vision. 2015.

source

`,5))]),s("details",u,[s("summary",null,[i[25]||(i[25]=s("a",{id:"WeightInitializers.kaiming_uniform",href:"#WeightInitializers.kaiming_uniform"},[s("span",{class:"jlbinding"},"WeightInitializers.kaiming_uniform")],-1)),i[26]||(i[26]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[27]||(i[27]=t(`
julia
kaiming_uniform([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
-    gain =T(2)) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a uniform distribution on the interval [-x, x], where x = gain * sqrt(3/fan_in).

References

[1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." Proceedings of the IEEE international conference on computer vision. 2015.

source

`,5))]),s("details",F,[s("summary",null,[i[28]||(i[28]=s("a",{id:"WeightInitializers.sparse_init",href:"#WeightInitializers.sparse_init"},[s("span",{class:"jlbinding"},"WeightInitializers.sparse_init")],-1)),i[29]||(i[29]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[30]||(i[30]=t(`
julia
sparse_init([::AbstractRNG=Utils.default_rng()], [T=Float32], dims::Integer...;
-    sparsity::Number, std::Number=0.01) -> AbstractArray{T}

Creates a sparsely initialized weight matrix with a specified proportion of zeroed elements, using random numbers drawn from a normal distribution for the non-zero elements. This method was introduced in [1].

Note

The sparsity parameter controls the proportion of the matrix that will be zeroed. For example, a sparsity of 0.3 means that approximately 30% of the elements will be set to zero. The non-zero elements are distributed according to a normal distribution, scaled by the std parameter.

Arguments

Returns

Examples

julia
julia> y = sparse_init(Xoshiro(123), Float32, 5, 5; sparsity=0.3, std=0.01);
-
-julia> y isa Matrix{Float32}
-true
-
-julia> size(y) == (5, 5)
-true

References

[1] Martens, J, "Deep learning via Hessian-free optimization" Proceedings of the 27th International Conference on International Conference on Machine Learning. 2010.

source

`,12))]),s("details",b,[s("summary",null,[i[31]||(i[31]=s("a",{id:"WeightInitializers.truncated_normal",href:"#WeightInitializers.truncated_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.truncated_normal")],-1)),i[32]||(i[32]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[33]||(i[33]=t(`
julia
truncated_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...; mean = 0,
-    std = 1, lo = -2, hi = 2) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size where each element is drawn from a truncated normal distribution. The numbers are distributed like filter(x -> lo ≤ x ≤ hi, mean .+ std .* randn(100)).

source

`,3))]),s("details",C,[s("summary",null,[i[34]||(i[34]=s("a",{id:"WeightInitializers.orthogonal",href:"#WeightInitializers.orthogonal"},[s("span",{class:"jlbinding"},"WeightInitializers.orthogonal")],-1)),i[35]||(i[35]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[36]||(i[36]=t(`
julia
orthogonal([::AbstractRNG=Utils.default_rng()], [T=Float32], dims::Integer...;
-    gain = 1)  -> AbstractArray{T, length(dims)}

Return an AbstractArray{T} of the given dimensions (dims) which is a (semi) orthogonal matrix, as described in [1].

The function constructs an orthogonal or semi-orthogonal matrix depending on the specified dimensions. For two dimensions, it returns a matrix where dims = (rows, cols). For more than two dimensions, it computes an orthogonal matrix of size prod(dims[1:(end - 1)]) by dims[end] before reshaping it to the original dimensions.

Cannot construct a vector, i.e., length(dims) == 1 is forbidden.

Arguments

References

[1] Saxe, McClelland, Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", ICLR 2014, https://arxiv.org/abs/1312.6120

source

`,9))]),i[110]||(i[110]=s("h3",{id:"Other-Convenience-Functions",tabindex:"-1"},[a("Other Convenience Functions "),s("a",{class:"header-anchor",href:"#Other-Convenience-Functions","aria-label":'Permalink to "Other Convenience Functions {#Other-Convenience-Functions}"'},"​")],-1)),i[111]||(i[111]=s("div",{class:"warning custom-block"},[s("p",{class:"custom-block-title"},"Beware"),s("p",null,"Unlike the other functions these ones don't take a type argument.")],-1)),s("details",m,[s("summary",null,[i[37]||(i[37]=s("a",{id:"WeightInitializers.zeros16",href:"#WeightInitializers.zeros16"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros16")],-1)),i[38]||(i[38]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[39]||(i[39]=t(`
julia
zeros16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",A,[s("summary",null,[i[40]||(i[40]=s("a",{id:"WeightInitializers.ones16",href:"#WeightInitializers.ones16"},[s("span",{class:"jlbinding"},"WeightInitializers.ones16")],-1)),i[41]||(i[41]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[42]||(i[42]=t(`
julia
ones16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing ones.

source

`,3))]),s("details",f,[s("summary",null,[i[43]||(i[43]=s("a",{id:"WeightInitializers.rand16",href:"#WeightInitializers.rand16"},[s("span",{class:"jlbinding"},"WeightInitializers.rand16")],-1)),i[44]||(i[44]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[45]||(i[45]=t(`
julia
rand16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",z,[s("summary",null,[i[46]||(i[46]=s("a",{id:"WeightInitializers.randn16",href:"#WeightInitializers.randn16"},[s("span",{class:"jlbinding"},"WeightInitializers.randn16")],-1)),i[47]||(i[47]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[48]||(i[48]=t(`
julia
randn16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",j,[s("summary",null,[i[49]||(i[49]=s("a",{id:"WeightInitializers.zeros32",href:"#WeightInitializers.zeros32"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros32")],-1)),i[50]||(i[50]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[51]||(i[51]=t(`
julia
zeros32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing zeros.

source

`,3))]),s("details",D,[s("summary",null,[i[52]||(i[52]=s("a",{id:"WeightInitializers.ones32",href:"#WeightInitializers.ones32"},[s("span",{class:"jlbinding"},"WeightInitializers.ones32")],-1)),i[53]||(i[53]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[54]||(i[54]=t(`
julia
ones32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing ones.

source

`,3))]),s("details",x,[s("summary",null,[i[55]||(i[55]=s("a",{id:"WeightInitializers.rand32",href:"#WeightInitializers.rand32"},[s("span",{class:"jlbinding"},"WeightInitializers.rand32")],-1)),i[56]||(i[56]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[57]||(i[57]=t(`
julia
rand32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",v,[s("summary",null,[i[58]||(i[58]=s("a",{id:"WeightInitializers.randn32",href:"#WeightInitializers.randn32"},[s("span",{class:"jlbinding"},"WeightInitializers.randn32")],-1)),i[59]||(i[59]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[60]||(i[60]=t(`
julia
randn32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",B,[s("summary",null,[i[61]||(i[61]=s("a",{id:"WeightInitializers.zeros64",href:"#WeightInitializers.zeros64"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros64")],-1)),i[62]||(i[62]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[63]||(i[63]=t(`
julia
zeros64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing zeros.

source

`,3))]),s("details",I,[s("summary",null,[i[64]||(i[64]=s("a",{id:"WeightInitializers.ones64",href:"#WeightInitializers.ones64"},[s("span",{class:"jlbinding"},"WeightInitializers.ones64")],-1)),i[65]||(i[65]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[66]||(i[66]=t(`
julia
ones64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing ones.

source

`,3))]),s("details",L,[s("summary",null,[i[67]||(i[67]=s("a",{id:"WeightInitializers.rand64",href:"#WeightInitializers.rand64"},[s("span",{class:"jlbinding"},"WeightInitializers.rand64")],-1)),i[68]||(i[68]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[69]||(i[69]=t(`
julia
rand64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",T,[s("summary",null,[i[70]||(i[70]=s("a",{id:"WeightInitializers.randn64",href:"#WeightInitializers.randn64"},[s("span",{class:"jlbinding"},"WeightInitializers.randn64")],-1)),i[71]||(i[71]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[72]||(i[72]=t(`
julia
randn64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",W,[s("summary",null,[i[73]||(i[73]=s("a",{id:"WeightInitializers.zerosC16",href:"#WeightInitializers.zerosC16"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC16")],-1)),i[74]||(i[74]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[75]||(i[75]=t(`
julia
zerosC16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing zeros.

source

`,3))]),s("details",R,[s("summary",null,[i[76]||(i[76]=s("a",{id:"WeightInitializers.onesC16",href:"#WeightInitializers.onesC16"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC16")],-1)),i[77]||(i[77]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[78]||(i[78]=t(`
julia
onesC16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing ones.

source

`,3))]),s("details",w,[s("summary",null,[i[79]||(i[79]=s("a",{id:"WeightInitializers.randC16",href:"#WeightInitializers.randC16"},[s("span",{class:"jlbinding"},"WeightInitializers.randC16")],-1)),i[80]||(i[80]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[81]||(i[81]=t(`
julia
randC16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",Q,[s("summary",null,[i[82]||(i[82]=s("a",{id:"WeightInitializers.randnC16",href:"#WeightInitializers.randnC16"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC16")],-1)),i[83]||(i[83]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[84]||(i[84]=t(`
julia
randnC16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",N,[s("summary",null,[i[85]||(i[85]=s("a",{id:"WeightInitializers.zerosC32",href:"#WeightInitializers.zerosC32"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC32")],-1)),i[86]||(i[86]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[87]||(i[87]=t(`
julia
zerosC32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing zeros.

source

`,3))]),s("details",G,[s("summary",null,[i[88]||(i[88]=s("a",{id:"WeightInitializers.onesC32",href:"#WeightInitializers.onesC32"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC32")],-1)),i[89]||(i[89]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[90]||(i[90]=t(`
julia
onesC32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing ones.

source

`,3))]),s("details",U,[s("summary",null,[i[91]||(i[91]=s("a",{id:"WeightInitializers.randC32",href:"#WeightInitializers.randC32"},[s("span",{class:"jlbinding"},"WeightInitializers.randC32")],-1)),i[92]||(i[92]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[93]||(i[93]=t(`
julia
randC32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",O,[s("summary",null,[i[94]||(i[94]=s("a",{id:"WeightInitializers.randnC32",href:"#WeightInitializers.randnC32"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC32")],-1)),i[95]||(i[95]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[96]||(i[96]=t(`
julia
randnC32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",M,[s("summary",null,[i[97]||(i[97]=s("a",{id:"WeightInitializers.zerosC64",href:"#WeightInitializers.zerosC64"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC64")],-1)),i[98]||(i[98]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[99]||(i[99]=t(`
julia
zerosC64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing zeros.

source

`,3))]),s("details",P,[s("summary",null,[i[100]||(i[100]=s("a",{id:"WeightInitializers.onesC64",href:"#WeightInitializers.onesC64"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC64")],-1)),i[101]||(i[101]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[102]||(i[102]=t(`
julia
onesC64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing ones.

source

`,3))]),s("details",q,[s("summary",null,[i[103]||(i[103]=s("a",{id:"WeightInitializers.randC64",href:"#WeightInitializers.randC64"},[s("span",{class:"jlbinding"},"WeightInitializers.randC64")],-1)),i[104]||(i[104]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[105]||(i[105]=t(`
julia
randC64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",H,[s("summary",null,[i[106]||(i[106]=s("a",{id:"WeightInitializers.randnC64",href:"#WeightInitializers.randnC64"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC64")],-1)),i[107]||(i[107]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[108]||(i[108]=t(`
julia
randnC64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing random numbers from a standard normal distribution.

source

`,3))])])}const _=r(k,[["render",V]]);export{Y as __pageData,_ as default}; diff --git a/dev/assets/api_Building_Blocks_WeightInitializers.md.CUbJYQgk.js b/dev/assets/api_Building_Blocks_WeightInitializers.md.ZsEkwReB.js similarity index 91% rename from dev/assets/api_Building_Blocks_WeightInitializers.md.CUbJYQgk.js rename to dev/assets/api_Building_Blocks_WeightInitializers.md.ZsEkwReB.js index c6d9d0b839..fd1259748d 100644 --- a/dev/assets/api_Building_Blocks_WeightInitializers.md.CUbJYQgk.js +++ b/dev/assets/api_Building_Blocks_WeightInitializers.md.ZsEkwReB.js @@ -1,6 +1,6 @@ -import{_ as r,c as l,a2 as t,j as s,a,G as e,B as p,o as h}from"./chunks/framework.I-x9Gl6h.js";const Y=JSON.parse('{"title":"WeightInitializers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Building_Blocks/WeightInitializers.md","filePath":"api/Building_Blocks/WeightInitializers.md","lastUpdated":null}'),k={name:"api/Building_Blocks/WeightInitializers.md"},d={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.612ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2922.7 1000","aria-hidden":"true"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},z={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},I={class:"jldocstring custom-block"},L={class:"jldocstring 
custom-block"},T={class:"jldocstring custom-block"},W={class:"jldocstring custom-block"},R={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},N={class:"jldocstring custom-block"},G={class:"jldocstring custom-block"},U={class:"jldocstring custom-block"},O={class:"jldocstring custom-block"},M={class:"jldocstring custom-block"},P={class:"jldocstring custom-block"},q={class:"jldocstring custom-block"},H={class:"jldocstring custom-block"};function V(S,i,X,Z,$,J){const n=p("Badge");return h(),l("div",null,[i[109]||(i[109]=t('

WeightInitializers

This package is a lightweight dependency providing common weight initialization schemes for deep learning models.

Supported RNG Types

RNG Type / PackageReturned Array TypeUnsupported Functions
Random.jlArray
StableRNGs.jlArray
CUDA.CURAND.default_rng()CuArray
CUDA.default_rng()CuArray
GPUArrays.default_rng(CuArray)CuArray
AMDGPU.rocrand_rng()ROCArray
AMDGPU.gpuarrays_rng()ROCArray
GPUArrays.default_rng(ROCArray)ROCArray
Metal.gpuarrays_rng()MtlArrayorthogonal
GPUArrays.default_rng(MtlArray)MtlArrayorthogonal
oneAPI.gpuarrays_rng()oneArrayorthogonal, truncated_normal
GPUArrays.default_rng(oneArray)oneArrayorthogonal, truncated_normal

API Reference

Main Functions

',6)),s("details",d,[s("summary",null,[i[0]||(i[0]=s("a",{id:"WeightInitializers.glorot_normal",href:"#WeightInitializers.glorot_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.glorot_normal")],-1)),i[1]||(i[1]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[2]||(i[2]=t(`
julia
glorot_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
-    gain = 1) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a normal distribution with standard deviation gain * sqrt(2 / (fan_in + fan_out)). This method is described in [1] and also known as Xavier initialization.

References

[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010.

source

`,5))]),s("details",o,[s("summary",null,[i[3]||(i[3]=s("a",{id:"WeightInitializers.glorot_uniform",href:"#WeightInitializers.glorot_uniform"},[s("span",{class:"jlbinding"},"WeightInitializers.glorot_uniform")],-1)),i[4]||(i[4]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[15]||(i[15]=t(`
julia
glorot_uniform([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
-    gain = 1) -> AbstractArray{T, length(size)}
`,1)),s("p",null,[i[7]||(i[7]=a("Return an ")),i[8]||(i[8]=s("code",null,"AbstractArray{T}",-1)),i[9]||(i[9]=a(" of the given ")),i[10]||(i[10]=s("code",null,"size",-1)),i[11]||(i[11]=a(" containing random numbers drawn from a uniform distribution on the interval ")),s("mjx-container",g,[(h(),l("svg",E,i[5]||(i[5]=[t('',1)]))),i[6]||(i[6]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{stretchy:"false"},"["),s("mo",null,"−"),s("mi",null,"x"),s("mo",null,","),s("mi",null,"x"),s("mo",{stretchy:"false"},"]")])],-1))]),i[12]||(i[12]=a(", where ")),i[13]||(i[13]=s("code",null,"x = gain * sqrt(6 / (fan_in + fan_out))",-1)),i[14]||(i[14]=a(". This method is described in [1] and also known as Xavier initialization."))]),i[16]||(i[16]=s("p",null,[s("strong",null,"References")],-1)),i[17]||(i[17]=s("p",null,[a('[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." '),s("em",null,"Proceedings of the thirteenth international conference on artificial intelligence and statistics"),a(". 2010.")],-1)),i[18]||(i[18]=s("p",null,[s("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/lib/WeightInitializers/src/initializers.jl#L13-L27",target:"_blank",rel:"noreferrer"},"source")],-1))]),s("details",y,[s("summary",null,[i[19]||(i[19]=s("a",{id:"WeightInitializers.identity_init",href:"#WeightInitializers.identity_init"},[s("span",{class:"jlbinding"},"WeightInitializers.identity_init")],-1)),i[20]||(i[20]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[21]||(i[21]=t(`
julia
identity_init([::AbstractRNG=Utils.default_rng()], [T=Float32], size...; gain::Number=1,
+import{_ as r,c as e,a2 as t,j as s,a,G as l,B as p,o as h}from"./chunks/framework.BetCMmtc.js";const K=JSON.parse('{"title":"WeightInitializers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Building_Blocks/WeightInitializers.md","filePath":"api/Building_Blocks/WeightInitializers.md","lastUpdated":null}'),k={name:"api/Building_Blocks/WeightInitializers.md"},d={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.612ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2922.7 1000","aria-hidden":"true"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},z={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},I={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},W={class:"jldocstring custom-block"},R={class:"jldocstring custom-block"},S={class:"jldocstring custom-block"},P={class:"jldocstring custom-block"},N={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},V={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},G={class:"jldocstring custom-block"},U={class:"jldocstring custom-block"},O={class:"jldocstring custom-block"};function M(q,i,H,X,Z,$){const n=p("Badge");return h(),e("div",null,[i[109]||(i[109]=t('

WeightInitializers

This package is a lightweight dependency providing common weight initialization schemes for deep learning models.

Supported RNG Types

RNG Type / PackageReturned Array TypeUnsupported Functions
Random.jlArray
StableRNGs.jlArray
CUDA.CURAND.default_rng()CuArray
CUDA.default_rng()CuArray
GPUArrays.default_rng(CuArray)CuArray
AMDGPU.rocrand_rng()ROCArray
AMDGPU.gpuarrays_rng()ROCArray
GPUArrays.default_rng(ROCArray)ROCArray
Metal.gpuarrays_rng()MtlArrayorthogonal
GPUArrays.default_rng(MtlArray)MtlArrayorthogonal
oneAPI.gpuarrays_rng()oneArrayorthogonal, truncated_normal
GPUArrays.default_rng(oneArray)oneArrayorthogonal, truncated_normal

API Reference

Main Functions

',6)),s("details",d,[s("summary",null,[i[0]||(i[0]=s("a",{id:"WeightInitializers.glorot_normal",href:"#WeightInitializers.glorot_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.glorot_normal")],-1)),i[1]||(i[1]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[2]||(i[2]=t(`
julia
glorot_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
+    gain = 1) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a normal distribution with standard deviation gain * sqrt(2 / (fan_in + fan_out)). This method is described in [1] and also known as Xavier initialization.

References

[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." Proceedings of the thirteenth international conference on artificial intelligence and statistics. 2010.

source

`,5))]),s("details",o,[s("summary",null,[i[3]||(i[3]=s("a",{id:"WeightInitializers.glorot_uniform",href:"#WeightInitializers.glorot_uniform"},[s("span",{class:"jlbinding"},"WeightInitializers.glorot_uniform")],-1)),i[4]||(i[4]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[15]||(i[15]=t(`
julia
glorot_uniform([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
+    gain = 1) -> AbstractArray{T, length(size)}
`,1)),s("p",null,[i[7]||(i[7]=a("Return an ")),i[8]||(i[8]=s("code",null,"AbstractArray{T}",-1)),i[9]||(i[9]=a(" of the given ")),i[10]||(i[10]=s("code",null,"size",-1)),i[11]||(i[11]=a(" containing random numbers drawn from a uniform distribution on the interval ")),s("mjx-container",g,[(h(),e("svg",E,i[5]||(i[5]=[t('',1)]))),i[6]||(i[6]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{stretchy:"false"},"["),s("mo",null,"−"),s("mi",null,"x"),s("mo",null,","),s("mi",null,"x"),s("mo",{stretchy:"false"},"]")])],-1))]),i[12]||(i[12]=a(", where ")),i[13]||(i[13]=s("code",null,"x = gain * sqrt(6 / (fan_in + fan_out))",-1)),i[14]||(i[14]=a(". This method is described in [1] and also known as Xavier initialization."))]),i[16]||(i[16]=s("p",null,[s("strong",null,"References")],-1)),i[17]||(i[17]=s("p",null,[a('[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." '),s("em",null,"Proceedings of the thirteenth international conference on artificial intelligence and statistics"),a(". 2010.")],-1)),i[18]||(i[18]=s("p",null,[s("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/lib/WeightInitializers/src/initializers.jl#L13-L27",target:"_blank",rel:"noreferrer"},"source")],-1))]),s("details",y,[s("summary",null,[i[19]||(i[19]=s("a",{id:"WeightInitializers.identity_init",href:"#WeightInitializers.identity_init"},[s("span",{class:"jlbinding"},"WeightInitializers.identity_init")],-1)),i[20]||(i[20]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[21]||(i[21]=t(`
julia
identity_init([::AbstractRNG=Utils.default_rng()], [T=Float32], size...; gain::Number=1,
     shift::Union{Integer, Tuple{Integer, Integer}}=0) -> AbstractArray{T}

Constructs an array that aims to provide an identity mapping when used as parameters in most layers of a neural network. The identity mapping is scaled by the gain parameter.

Behavior

  • 1D: Returns a Vector of zeros (useful for biases in layers where input_size == output_size).

  • 2D: Returns an identity matrix (useful for fully connected layers with equal input and output sizes).

  • More than 2D: Returns a tensor where the central slice along the last two dimensions is an identity matrix, and the rest are zeros (useful for convolutional layers, simulating an identity convolution).

Caveats

  • Not all layers will result in an identity mapping when using this initializer. Exceptions include recurrent and normalization layers.

  • Layers must have input_size == output_size for a perfect identity mapping. In cases where this condition is not met, the function pads extra dimensions with zeros.

  • For convolutional layers to achieve an identity mapping, kernel sizes must be odd, and appropriate padding must be applied to ensure the output feature maps are the same size as the input feature maps.

Arguments

  • rng::AbstractRNG: An optional random number generator, included for consistency with other initializers but ignored since the output is deterministic.

  • T::Type{<:Number}: The numeric type of the array elements.

  • size...: The dimensions of the array to be initialized.

  • gain::Number=1: A scaling factor applied to the identity mapping.

  • shift::Union{Integer, Tuple{Integer, Integer}}=0: An integer or a tuple specifying the circular shift applied to the output array.

Returns

  • AbstractArray{T}: An array initialized to represent an identity mapping, scaled by gain and optionally shifted by shift.

Examples

julia
julia> identity_init(Xoshiro(123), Float32, 5, 5)
 5×5 Matrix{Float32}:
  1.0  1.0  1.0  1.0  1.0
@@ -14,39 +14,39 @@ import{_ as r,c as l,a2 as t,j as s,a,G as e,B as p,o as h}from"./chunks/framewo
 [:, :, 1, 1] =
  0.0  0.0  0.0
  0.0  1.5  0.0
- 0.0  0.0  0.0

source

`,13))]),s("details",c,[s("summary",null,[i[22]||(i[22]=s("a",{id:"WeightInitializers.kaiming_normal",href:"#WeightInitializers.kaiming_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.kaiming_normal")],-1)),i[23]||(i[23]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[24]||(i[24]=t(`
julia
kaiming_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
-    gain =T(2)) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a normal distribution with standard deviation gain / sqrt(fan_in).

References

[1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." Proceedings of the IEEE international conference on computer vision. 2015.

source

`,5))]),s("details",u,[s("summary",null,[i[25]||(i[25]=s("a",{id:"WeightInitializers.kaiming_uniform",href:"#WeightInitializers.kaiming_uniform"},[s("span",{class:"jlbinding"},"WeightInitializers.kaiming_uniform")],-1)),i[26]||(i[26]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[27]||(i[27]=t(`
julia
kaiming_uniform([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
-    gain =T(2)) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a uniform distribution on the interval [-x, x], where x = gain * sqrt(3/fan_in).

References

[1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." Proceedings of the IEEE international conference on computer vision. 2015.

source

`,5))]),s("details",F,[s("summary",null,[i[28]||(i[28]=s("a",{id:"WeightInitializers.sparse_init",href:"#WeightInitializers.sparse_init"},[s("span",{class:"jlbinding"},"WeightInitializers.sparse_init")],-1)),i[29]||(i[29]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[30]||(i[30]=t(`
julia
sparse_init([::AbstractRNG=Utils.default_rng()], [T=Float32], dims::Integer...;
+ 0.0  0.0  0.0

source

`,13))]),s("details",c,[s("summary",null,[i[22]||(i[22]=s("a",{id:"WeightInitializers.kaiming_normal",href:"#WeightInitializers.kaiming_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.kaiming_normal")],-1)),i[23]||(i[23]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[24]||(i[24]=t(`
julia
kaiming_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
+    gain =T(2)) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a normal distribution with standard deviation gain / sqrt(fan_in).

References

[1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." Proceedings of the IEEE international conference on computer vision. 2015.

source

`,5))]),s("details",u,[s("summary",null,[i[25]||(i[25]=s("a",{id:"WeightInitializers.kaiming_uniform",href:"#WeightInitializers.kaiming_uniform"},[s("span",{class:"jlbinding"},"WeightInitializers.kaiming_uniform")],-1)),i[26]||(i[26]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[27]||(i[27]=t(`
julia
kaiming_uniform([::AbstractRNG=Utils.default_rng()], [T=Float32], size...;
+    gain =T(2)) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size containing random numbers drawn from a uniform distribution on the interval [-x, x], where x = gain * sqrt(3/fan_in).

References

[1] He, Kaiming, et al. "Delving deep into rectifiers: Surpassing human-level performance on imagenet classification." Proceedings of the IEEE international conference on computer vision. 2015.

source

`,5))]),s("details",F,[s("summary",null,[i[28]||(i[28]=s("a",{id:"WeightInitializers.sparse_init",href:"#WeightInitializers.sparse_init"},[s("span",{class:"jlbinding"},"WeightInitializers.sparse_init")],-1)),i[29]||(i[29]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[30]||(i[30]=t(`
julia
sparse_init([::AbstractRNG=Utils.default_rng()], [T=Float32], dims::Integer...;
     sparsity::Number, std::Number=0.01) -> AbstractArray{T}

Creates a sparsely initialized weight matrix with a specified proportion of zeroed elements, using random numbers drawn from a normal distribution for the non-zero elements. This method was introduced in [1].

Note

The sparsity parameter controls the proportion of the matrix that will be zeroed. For example, a sparsity of 0.3 means that approximately 30% of the elements will be set to zero. The non-zero elements are distributed according to a normal distribution, scaled by the std parameter.

Arguments

  • rng::AbstractRNG: The random number generator to use.

  • T::Type{<:Number}: The numeric type of the elements in the returned array.

  • dims::Integer...: The dimensions of the weight matrix to be generated.

  • sparsity::Number: The proportion of elements to be zeroed. Must be between 0 and 1.

  • std::Number=0.01: The standard deviation of the normal distribution before applying gain.

Returns

  • AbstractArray{T}: A sparsely initialized weight matrix of dimensions dims and type T.

Examples

julia
julia> y = sparse_init(Xoshiro(123), Float32, 5, 5; sparsity=0.3, std=0.01);
 
 julia> y isa Matrix{Float32}
 true
 
 julia> size(y) == (5, 5)
-true

References

[1] Martens, J, "Deep learning via Hessian-free optimization" Proceedings of the 27th International Conference on International Conference on Machine Learning. 2010.

source

`,12))]),s("details",b,[s("summary",null,[i[31]||(i[31]=s("a",{id:"WeightInitializers.truncated_normal",href:"#WeightInitializers.truncated_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.truncated_normal")],-1)),i[32]||(i[32]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[33]||(i[33]=t(`
julia
truncated_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...; mean = 0,
-    std = 1, lo = -2, hi = 2) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size where each element is drawn from a truncated normal distribution. The numbers are distributed like filter(x -> lo ≤ x ≤ hi, mean .+ std .* randn(100)).

source

`,3))]),s("details",C,[s("summary",null,[i[34]||(i[34]=s("a",{id:"WeightInitializers.orthogonal",href:"#WeightInitializers.orthogonal"},[s("span",{class:"jlbinding"},"WeightInitializers.orthogonal")],-1)),i[35]||(i[35]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[36]||(i[36]=t(`
julia
orthogonal([::AbstractRNG=Utils.default_rng()], [T=Float32], dims::Integer...;
-    gain = 1)  -> AbstractArray{T, length(dims)}

Return an AbstractArray{T} of the given dimensions (dims) which is a (semi) orthogonal matrix, as described in [1].

The function constructs an orthogonal or semi-orthogonal matrix depending on the specified dimensions. For two dimensions, it returns a matrix where dims = (rows, cols). For more than two dimensions, it computes an orthogonal matrix of size prod(dims[1:(end - 1)]) by dims[end] before reshaping it to the original dimensions.

Cannot construct a vector, i.e., length(dims) == 1 is forbidden.

Arguments

  • rng::AbstractRNG: Random number generator.

  • T::Type{<:Real}: The type of the elements in the array.

  • dims::Integer...: The dimensions of the array.

  • gain::Number: Scaling factor for the elements of the orthogonal matrix.

References

[1] Saxe, McClelland, Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", ICLR 2014, https://arxiv.org/abs/1312.6120

source

`,9))]),i[110]||(i[110]=s("h3",{id:"Other-Convenience-Functions",tabindex:"-1"},[a("Other Convenience Functions "),s("a",{class:"header-anchor",href:"#Other-Convenience-Functions","aria-label":'Permalink to "Other Convenience Functions {#Other-Convenience-Functions}"'},"​")],-1)),i[111]||(i[111]=s("div",{class:"warning custom-block"},[s("p",{class:"custom-block-title"},"Beware"),s("p",null,"Unlike the other functions these ones don't take a type argument.")],-1)),s("details",m,[s("summary",null,[i[37]||(i[37]=s("a",{id:"WeightInitializers.zeros16",href:"#WeightInitializers.zeros16"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros16")],-1)),i[38]||(i[38]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[39]||(i[39]=t(`
julia
zeros16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",A,[s("summary",null,[i[40]||(i[40]=s("a",{id:"WeightInitializers.ones16",href:"#WeightInitializers.ones16"},[s("span",{class:"jlbinding"},"WeightInitializers.ones16")],-1)),i[41]||(i[41]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[42]||(i[42]=t(`
julia
ones16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",f,[s("summary",null,[i[43]||(i[43]=s("a",{id:"WeightInitializers.rand16",href:"#WeightInitializers.rand16"},[s("span",{class:"jlbinding"},"WeightInitializers.rand16")],-1)),i[44]||(i[44]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[45]||(i[45]=t(`
julia
rand16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",z,[s("summary",null,[i[46]||(i[46]=s("a",{id:"WeightInitializers.randn16",href:"#WeightInitializers.randn16"},[s("span",{class:"jlbinding"},"WeightInitializers.randn16")],-1)),i[47]||(i[47]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[48]||(i[48]=t(`
julia
randn16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",j,[s("summary",null,[i[49]||(i[49]=s("a",{id:"WeightInitializers.zeros32",href:"#WeightInitializers.zeros32"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros32")],-1)),i[50]||(i[50]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[51]||(i[51]=t(`
julia
zeros32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",D,[s("summary",null,[i[52]||(i[52]=s("a",{id:"WeightInitializers.ones32",href:"#WeightInitializers.ones32"},[s("span",{class:"jlbinding"},"WeightInitializers.ones32")],-1)),i[53]||(i[53]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[54]||(i[54]=t(`
julia
ones32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",x,[s("summary",null,[i[55]||(i[55]=s("a",{id:"WeightInitializers.rand32",href:"#WeightInitializers.rand32"},[s("span",{class:"jlbinding"},"WeightInitializers.rand32")],-1)),i[56]||(i[56]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[57]||(i[57]=t(`
julia
rand32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",v,[s("summary",null,[i[58]||(i[58]=s("a",{id:"WeightInitializers.randn32",href:"#WeightInitializers.randn32"},[s("span",{class:"jlbinding"},"WeightInitializers.randn32")],-1)),i[59]||(i[59]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[60]||(i[60]=t(`
julia
randn32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",B,[s("summary",null,[i[61]||(i[61]=s("a",{id:"WeightInitializers.zeros64",href:"#WeightInitializers.zeros64"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros64")],-1)),i[62]||(i[62]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[63]||(i[63]=t(`
julia
zeros64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",I,[s("summary",null,[i[64]||(i[64]=s("a",{id:"WeightInitializers.ones64",href:"#WeightInitializers.ones64"},[s("span",{class:"jlbinding"},"WeightInitializers.ones64")],-1)),i[65]||(i[65]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[66]||(i[66]=t(`
julia
ones64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",L,[s("summary",null,[i[67]||(i[67]=s("a",{id:"WeightInitializers.rand64",href:"#WeightInitializers.rand64"},[s("span",{class:"jlbinding"},"WeightInitializers.rand64")],-1)),i[68]||(i[68]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[69]||(i[69]=t(`
julia
rand64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",T,[s("summary",null,[i[70]||(i[70]=s("a",{id:"WeightInitializers.randn64",href:"#WeightInitializers.randn64"},[s("span",{class:"jlbinding"},"WeightInitializers.randn64")],-1)),i[71]||(i[71]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[72]||(i[72]=t(`
julia
randn64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",W,[s("summary",null,[i[73]||(i[73]=s("a",{id:"WeightInitializers.zerosC16",href:"#WeightInitializers.zerosC16"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC16")],-1)),i[74]||(i[74]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[75]||(i[75]=t(`
julia
zerosC16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",R,[s("summary",null,[i[76]||(i[76]=s("a",{id:"WeightInitializers.onesC16",href:"#WeightInitializers.onesC16"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC16")],-1)),i[77]||(i[77]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[78]||(i[78]=t(`
julia
onesC16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",w,[s("summary",null,[i[79]||(i[79]=s("a",{id:"WeightInitializers.randC16",href:"#WeightInitializers.randC16"},[s("span",{class:"jlbinding"},"WeightInitializers.randC16")],-1)),i[80]||(i[80]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[81]||(i[81]=t(`
julia
randC16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",Q,[s("summary",null,[i[82]||(i[82]=s("a",{id:"WeightInitializers.randnC16",href:"#WeightInitializers.randnC16"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC16")],-1)),i[83]||(i[83]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[84]||(i[84]=t(`
julia
randnC16([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",N,[s("summary",null,[i[85]||(i[85]=s("a",{id:"WeightInitializers.zerosC32",href:"#WeightInitializers.zerosC32"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC32")],-1)),i[86]||(i[86]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[87]||(i[87]=t(`
julia
zerosC32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",G,[s("summary",null,[i[88]||(i[88]=s("a",{id:"WeightInitializers.onesC32",href:"#WeightInitializers.onesC32"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC32")],-1)),i[89]||(i[89]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[90]||(i[90]=t(`
julia
onesC32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",U,[s("summary",null,[i[91]||(i[91]=s("a",{id:"WeightInitializers.randC32",href:"#WeightInitializers.randC32"},[s("span",{class:"jlbinding"},"WeightInitializers.randC32")],-1)),i[92]||(i[92]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[93]||(i[93]=t(`
julia
randC32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",O,[s("summary",null,[i[94]||(i[94]=s("a",{id:"WeightInitializers.randnC32",href:"#WeightInitializers.randnC32"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC32")],-1)),i[95]||(i[95]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[96]||(i[96]=t(`
julia
randnC32([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",M,[s("summary",null,[i[97]||(i[97]=s("a",{id:"WeightInitializers.zerosC64",href:"#WeightInitializers.zerosC64"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC64")],-1)),i[98]||(i[98]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[99]||(i[99]=t(`
julia
zerosC64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",P,[s("summary",null,[i[100]||(i[100]=s("a",{id:"WeightInitializers.onesC64",href:"#WeightInitializers.onesC64"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC64")],-1)),i[101]||(i[101]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[102]||(i[102]=t(`
julia
onesC64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",q,[s("summary",null,[i[103]||(i[103]=s("a",{id:"WeightInitializers.randC64",href:"#WeightInitializers.randC64"},[s("span",{class:"jlbinding"},"WeightInitializers.randC64")],-1)),i[104]||(i[104]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[105]||(i[105]=t(`
julia
randC64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",H,[s("summary",null,[i[106]||(i[106]=s("a",{id:"WeightInitializers.randnC64",href:"#WeightInitializers.randnC64"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC64")],-1)),i[107]||(i[107]=a()),e(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[108]||(i[108]=t(`
julia
randnC64([::AbstractRNG=Utils.default_rng()], size...;
-    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing random numbers from a standard normal distribution.

source

`,3))])])}const _=r(k,[["render",V]]);export{Y as __pageData,_ as default}; +true

References

[1] Martens, J, "Deep learning via Hessian-free optimization" Proceedings of the 27th International Conference on International Conference on Machine Learning. 2010.

source

`,12))]),s("details",b,[s("summary",null,[i[31]||(i[31]=s("a",{id:"WeightInitializers.truncated_normal",href:"#WeightInitializers.truncated_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.truncated_normal")],-1)),i[32]||(i[32]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[33]||(i[33]=t(`
julia
truncated_normal([::AbstractRNG=Utils.default_rng()], [T=Float32], size...; mean = 0,
+    std = 1, lo = -2, hi = 2) -> AbstractArray{T, length(size)}

Return an AbstractArray{T} of the given size where each element is drawn from a truncated normal distribution. The numbers are distributed like filter(x -> lo ≤ x ≤ hi, mean .+ std .* randn(100)).

source

`,3))]),s("details",C,[s("summary",null,[i[34]||(i[34]=s("a",{id:"WeightInitializers.orthogonal",href:"#WeightInitializers.orthogonal"},[s("span",{class:"jlbinding"},"WeightInitializers.orthogonal")],-1)),i[35]||(i[35]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[36]||(i[36]=t(`
julia
orthogonal([::AbstractRNG=Utils.default_rng()], [T=Float32], dims::Integer...;
+    gain = 1)  -> AbstractArray{T, length(dims)}

Return an AbstractArray{T} of the given dimensions (dims) which is a (semi) orthogonal matrix, as described in [1].

The function constructs an orthogonal or semi-orthogonal matrix depending on the specified dimensions. For two dimensions, it returns a matrix where dims = (rows, cols). For more than two dimensions, it computes an orthogonal matrix of size prod(dims[1:(end - 1)]) by dims[end] before reshaping it to the original dimensions.

Cannot construct a vector, i.e., length(dims) == 1 is forbidden.

Arguments

References

[1] Saxe, McClelland, Ganguli. "Exact solutions to the nonlinear dynamics of learning in deep linear neural networks", ICLR 2014, https://arxiv.org/abs/1312.6120

source

`,9))]),i[110]||(i[110]=s("h3",{id:"Other-Convenience-Functions",tabindex:"-1"},[a("Other Convenience Functions "),s("a",{class:"header-anchor",href:"#Other-Convenience-Functions","aria-label":'Permalink to "Other Convenience Functions {#Other-Convenience-Functions}"'},"​")],-1)),i[111]||(i[111]=s("div",{class:"warning custom-block"},[s("p",{class:"custom-block-title"},"Beware"),s("p",null,"Unlike the other functions these ones don't take a type argument.")],-1)),s("details",A,[s("summary",null,[i[37]||(i[37]=s("a",{id:"WeightInitializers.zeros16",href:"#WeightInitializers.zeros16"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros16")],-1)),i[38]||(i[38]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[39]||(i[39]=t(`
julia
zeros16([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",m,[s("summary",null,[i[40]||(i[40]=s("a",{id:"WeightInitializers.ones16",href:"#WeightInitializers.ones16"},[s("span",{class:"jlbinding"},"WeightInitializers.ones16")],-1)),i[41]||(i[41]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[42]||(i[42]=t(`
julia
ones16([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",_,[s("summary",null,[i[43]||(i[43]=s("a",{id:"WeightInitializers.rand16",href:"#WeightInitializers.rand16"},[s("span",{class:"jlbinding"},"WeightInitializers.rand16")],-1)),i[44]||(i[44]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[45]||(i[45]=t(`
julia
rand16([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",f,[s("summary",null,[i[46]||(i[46]=s("a",{id:"WeightInitializers.randn16",href:"#WeightInitializers.randn16"},[s("span",{class:"jlbinding"},"WeightInitializers.randn16")],-1)),i[47]||(i[47]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[48]||(i[48]=t(`
julia
randn16([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float16, length(size)}

Return an AbstractArray{Float16} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",T,[s("summary",null,[i[49]||(i[49]=s("a",{id:"WeightInitializers.zeros32",href:"#WeightInitializers.zeros32"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros32")],-1)),i[50]||(i[50]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[51]||(i[51]=t(`
julia
zeros32([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",z,[s("summary",null,[i[52]||(i[52]=s("a",{id:"WeightInitializers.ones32",href:"#WeightInitializers.ones32"},[s("span",{class:"jlbinding"},"WeightInitializers.ones32")],-1)),i[53]||(i[53]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[54]||(i[54]=t(`
julia
ones32([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",D,[s("summary",null,[i[55]||(i[55]=s("a",{id:"WeightInitializers.rand32",href:"#WeightInitializers.rand32"},[s("span",{class:"jlbinding"},"WeightInitializers.rand32")],-1)),i[56]||(i[56]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[57]||(i[57]=t(`
julia
rand32([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",j,[s("summary",null,[i[58]||(i[58]=s("a",{id:"WeightInitializers.randn32",href:"#WeightInitializers.randn32"},[s("span",{class:"jlbinding"},"WeightInitializers.randn32")],-1)),i[59]||(i[59]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[60]||(i[60]=t(`
julia
randn32([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float32, length(size)}

Return an AbstractArray{Float32} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",x,[s("summary",null,[i[61]||(i[61]=s("a",{id:"WeightInitializers.zeros64",href:"#WeightInitializers.zeros64"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros64")],-1)),i[62]||(i[62]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[63]||(i[63]=t(`
julia
zeros64([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",v,[s("summary",null,[i[64]||(i[64]=s("a",{id:"WeightInitializers.ones64",href:"#WeightInitializers.ones64"},[s("span",{class:"jlbinding"},"WeightInitializers.ones64")],-1)),i[65]||(i[65]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[66]||(i[66]=t(`
julia
ones64([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",I,[s("summary",null,[i[67]||(i[67]=s("a",{id:"WeightInitializers.rand64",href:"#WeightInitializers.rand64"},[s("span",{class:"jlbinding"},"WeightInitializers.rand64")],-1)),i[68]||(i[68]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[69]||(i[69]=t(`
julia
rand64([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",B,[s("summary",null,[i[70]||(i[70]=s("a",{id:"WeightInitializers.randn64",href:"#WeightInitializers.randn64"},[s("span",{class:"jlbinding"},"WeightInitializers.randn64")],-1)),i[71]||(i[71]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[72]||(i[72]=t(`
julia
randn64([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{Float64, length(size)}

Return an AbstractArray{Float64} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",L,[s("summary",null,[i[73]||(i[73]=s("a",{id:"WeightInitializers.zerosC16",href:"#WeightInitializers.zerosC16"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC16")],-1)),i[74]||(i[74]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[75]||(i[75]=t(`
julia
zerosC16([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",W,[s("summary",null,[i[76]||(i[76]=s("a",{id:"WeightInitializers.onesC16",href:"#WeightInitializers.onesC16"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC16")],-1)),i[77]||(i[77]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[78]||(i[78]=t(`
julia
onesC16([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",R,[s("summary",null,[i[79]||(i[79]=s("a",{id:"WeightInitializers.randC16",href:"#WeightInitializers.randC16"},[s("span",{class:"jlbinding"},"WeightInitializers.randC16")],-1)),i[80]||(i[80]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[81]||(i[81]=t(`
julia
randC16([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",S,[s("summary",null,[i[82]||(i[82]=s("a",{id:"WeightInitializers.randnC16",href:"#WeightInitializers.randnC16"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC16")],-1)),i[83]||(i[83]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[84]||(i[84]=t(`
julia
randnC16([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF16, length(size)}

Return an AbstractArray{ComplexF16} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",P,[s("summary",null,[i[85]||(i[85]=s("a",{id:"WeightInitializers.zerosC32",href:"#WeightInitializers.zerosC32"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC32")],-1)),i[86]||(i[86]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[87]||(i[87]=t(`
julia
zerosC32([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",N,[s("summary",null,[i[88]||(i[88]=s("a",{id:"WeightInitializers.onesC32",href:"#WeightInitializers.onesC32"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC32")],-1)),i[89]||(i[89]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[90]||(i[90]=t(`
julia
onesC32([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",w,[s("summary",null,[i[91]||(i[91]=s("a",{id:"WeightInitializers.randC32",href:"#WeightInitializers.randC32"},[s("span",{class:"jlbinding"},"WeightInitializers.randC32")],-1)),i[92]||(i[92]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[93]||(i[93]=t(`
julia
randC32([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",V,[s("summary",null,[i[94]||(i[94]=s("a",{id:"WeightInitializers.randnC32",href:"#WeightInitializers.randnC32"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC32")],-1)),i[95]||(i[95]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[96]||(i[96]=t(`
julia
randnC32([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF32, length(size)}

Return an AbstractArray{ComplexF32} of the given size containing random numbers from a standard normal distribution.

source

`,3))]),s("details",Q,[s("summary",null,[i[97]||(i[97]=s("a",{id:"WeightInitializers.zerosC64",href:"#WeightInitializers.zerosC64"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC64")],-1)),i[98]||(i[98]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[99]||(i[99]=t(`
julia
zerosC64([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing an AbstractArray of zeros.

source

`,3))]),s("details",G,[s("summary",null,[i[100]||(i[100]=s("a",{id:"WeightInitializers.onesC64",href:"#WeightInitializers.onesC64"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC64")],-1)),i[101]||(i[101]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[102]||(i[102]=t(`
julia
onesC64([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing an AbstractArray of ones.

source

`,3))]),s("details",U,[s("summary",null,[i[103]||(i[103]=s("a",{id:"WeightInitializers.randC64",href:"#WeightInitializers.randC64"},[s("span",{class:"jlbinding"},"WeightInitializers.randC64")],-1)),i[104]||(i[104]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[105]||(i[105]=t(`
julia
randC64([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing random numbers from a uniform distribution.

source

`,3))]),s("details",O,[s("summary",null,[i[106]||(i[106]=s("a",{id:"WeightInitializers.randnC64",href:"#WeightInitializers.randnC64"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC64")],-1)),i[107]||(i[107]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[108]||(i[108]=t(`
julia
randnC64([::AbstractRNG=Utils.default_rng()], size...;
+    kwargs...) -> AbstractArray{ComplexF64, length(size)}

Return an AbstractArray{ComplexF64} of the given size containing random numbers from a standard normal distribution.

source

`,3))])])}const Y=r(k,[["render",M]]);export{K as __pageData,Y as default}; diff --git a/dev/assets/api_Building_Blocks_WeightInitializers.md.ZsEkwReB.lean.js b/dev/assets/api_Building_Blocks_WeightInitializers.md.ZsEkwReB.lean.js new file mode 100644 index 0000000000..149f00051f --- /dev/null +++ b/dev/assets/api_Building_Blocks_WeightInitializers.md.ZsEkwReB.lean.js @@ -0,0 +1 @@ +import{_ as r,c as e,a2 as t,j as s,a,G as l,B as p,o as h}from"./chunks/framework.BetCMmtc.js";const K=JSON.parse('{"title":"WeightInitializers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Building_Blocks/WeightInitializers.md","filePath":"api/Building_Blocks/WeightInitializers.md","lastUpdated":null}'),k={name:"api/Building_Blocks/WeightInitializers.md"},d={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.612ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2922.7 1000","aria-hidden":"true"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},z={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},I={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},W={class:"jldocstring custom-block"},R={class:"jldocstring custom-block"},S={class:"jldocstring custom-block"},P={class:"jldocstring 
custom-block"},N={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},V={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},G={class:"jldocstring custom-block"},U={class:"jldocstring custom-block"},O={class:"jldocstring custom-block"};function M(q,i,H,X,Z,$){const n=p("Badge");return h(),e("div",null,[i[109]||(i[109]=t("",6)),s("details",d,[s("summary",null,[i[0]||(i[0]=s("a",{id:"WeightInitializers.glorot_normal",href:"#WeightInitializers.glorot_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.glorot_normal")],-1)),i[1]||(i[1]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[2]||(i[2]=t("",5))]),s("details",o,[s("summary",null,[i[3]||(i[3]=s("a",{id:"WeightInitializers.glorot_uniform",href:"#WeightInitializers.glorot_uniform"},[s("span",{class:"jlbinding"},"WeightInitializers.glorot_uniform")],-1)),i[4]||(i[4]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[15]||(i[15]=t("",1)),s("p",null,[i[7]||(i[7]=a("Return an ")),i[8]||(i[8]=s("code",null,"AbstractArray{T}",-1)),i[9]||(i[9]=a(" of the given ")),i[10]||(i[10]=s("code",null,"size",-1)),i[11]||(i[11]=a(" containing random numbers drawn from a uniform distribution on the interval ")),s("mjx-container",g,[(h(),e("svg",E,i[5]||(i[5]=[t("",1)]))),i[6]||(i[6]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{stretchy:"false"},"["),s("mo",null,"−"),s("mi",null,"x"),s("mo",null,","),s("mi",null,"x"),s("mo",{stretchy:"false"},"]")])],-1))]),i[12]||(i[12]=a(", where ")),i[13]||(i[13]=s("code",null,"x = gain * sqrt(6 / (fan_in 
+ fan_out))",-1)),i[14]||(i[14]=a(". This method is described in [1] and also known as Xavier initialization."))]),i[16]||(i[16]=s("p",null,[s("strong",null,"References")],-1)),i[17]||(i[17]=s("p",null,[a('[1] Glorot, Xavier, and Yoshua Bengio. "Understanding the difficulty of training deep feedforward neural networks." '),s("em",null,"Proceedings of the thirteenth international conference on artificial intelligence and statistics"),a(". 2010.")],-1)),i[18]||(i[18]=s("p",null,[s("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/lib/WeightInitializers/src/initializers.jl#L13-L27",target:"_blank",rel:"noreferrer"},"source")],-1))]),s("details",y,[s("summary",null,[i[19]||(i[19]=s("a",{id:"WeightInitializers.identity_init",href:"#WeightInitializers.identity_init"},[s("span",{class:"jlbinding"},"WeightInitializers.identity_init")],-1)),i[20]||(i[20]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[21]||(i[21]=t("",13))]),s("details",c,[s("summary",null,[i[22]||(i[22]=s("a",{id:"WeightInitializers.kaiming_normal",href:"#WeightInitializers.kaiming_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.kaiming_normal")],-1)),i[23]||(i[23]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[24]||(i[24]=t("",5))]),s("details",u,[s("summary",null,[i[25]||(i[25]=s("a",{id:"WeightInitializers.kaiming_uniform",href:"#WeightInitializers.kaiming_uniform"},[s("span",{class:"jlbinding"},"WeightInitializers.kaiming_uniform")],-1)),i[26]||(i[26]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[27]||(i[27]=t("",5))]),s("details",F,[s("summary",null,[i[28]||(i[28]=s("a",{id:"WeightInitializers.sparse_init",href:"#WeightInitializers.sparse_init"},[s("span",{class:"jlbinding"},"WeightInitializers.sparse_init")],-1)),i[29]||(i[29]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),i[30]||(i[30]=t("",12))]),s("details",b,[s("summary",null,[i[31]||(i[31]=s("a",{id:"WeightInitializers.truncated_normal",href:"#WeightInitializers.truncated_normal"},[s("span",{class:"jlbinding"},"WeightInitializers.truncated_normal")],-1)),i[32]||(i[32]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[33]||(i[33]=t("",3))]),s("details",C,[s("summary",null,[i[34]||(i[34]=s("a",{id:"WeightInitializers.orthogonal",href:"#WeightInitializers.orthogonal"},[s("span",{class:"jlbinding"},"WeightInitializers.orthogonal")],-1)),i[35]||(i[35]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[36]||(i[36]=t("",9))]),i[110]||(i[110]=s("h3",{id:"Other-Convenience-Functions",tabindex:"-1"},[a("Other Convenience Functions "),s("a",{class:"header-anchor",href:"#Other-Convenience-Functions","aria-label":'Permalink to "Other Convenience Functions {#Other-Convenience-Functions}"'},"​")],-1)),i[111]||(i[111]=s("div",{class:"warning custom-block"},[s("p",{class:"custom-block-title"},"Beware"),s("p",null,"Unlike the other functions these ones don't take a type argument.")],-1)),s("details",A,[s("summary",null,[i[37]||(i[37]=s("a",{id:"WeightInitializers.zeros16",href:"#WeightInitializers.zeros16"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros16")],-1)),i[38]||(i[38]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[39]||(i[39]=t("",3))]),s("details",m,[s("summary",null,[i[40]||(i[40]=s("a",{id:"WeightInitializers.ones16",href:"#WeightInitializers.ones16"},[s("span",{class:"jlbinding"},"WeightInitializers.ones16")],-1)),i[41]||(i[41]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[42]||(i[42]=t("",3))]),s("details",_,[s("summary",null,[i[43]||(i[43]=s("a",{id:"WeightInitializers.rand16",href:"#WeightInitializers.rand16"},[s("span",{class:"jlbinding"},"WeightInitializers.rand16")],-1)),i[44]||(i[44]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),i[45]||(i[45]=t("",3))]),s("details",f,[s("summary",null,[i[46]||(i[46]=s("a",{id:"WeightInitializers.randn16",href:"#WeightInitializers.randn16"},[s("span",{class:"jlbinding"},"WeightInitializers.randn16")],-1)),i[47]||(i[47]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[48]||(i[48]=t("",3))]),s("details",T,[s("summary",null,[i[49]||(i[49]=s("a",{id:"WeightInitializers.zeros32",href:"#WeightInitializers.zeros32"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros32")],-1)),i[50]||(i[50]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[51]||(i[51]=t("",3))]),s("details",z,[s("summary",null,[i[52]||(i[52]=s("a",{id:"WeightInitializers.ones32",href:"#WeightInitializers.ones32"},[s("span",{class:"jlbinding"},"WeightInitializers.ones32")],-1)),i[53]||(i[53]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[54]||(i[54]=t("",3))]),s("details",D,[s("summary",null,[i[55]||(i[55]=s("a",{id:"WeightInitializers.rand32",href:"#WeightInitializers.rand32"},[s("span",{class:"jlbinding"},"WeightInitializers.rand32")],-1)),i[56]||(i[56]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[57]||(i[57]=t("",3))]),s("details",j,[s("summary",null,[i[58]||(i[58]=s("a",{id:"WeightInitializers.randn32",href:"#WeightInitializers.randn32"},[s("span",{class:"jlbinding"},"WeightInitializers.randn32")],-1)),i[59]||(i[59]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[60]||(i[60]=t("",3))]),s("details",x,[s("summary",null,[i[61]||(i[61]=s("a",{id:"WeightInitializers.zeros64",href:"#WeightInitializers.zeros64"},[s("span",{class:"jlbinding"},"WeightInitializers.zeros64")],-1)),i[62]||(i[62]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),i[63]||(i[63]=t("",3))]),s("details",v,[s("summary",null,[i[64]||(i[64]=s("a",{id:"WeightInitializers.ones64",href:"#WeightInitializers.ones64"},[s("span",{class:"jlbinding"},"WeightInitializers.ones64")],-1)),i[65]||(i[65]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[66]||(i[66]=t("",3))]),s("details",I,[s("summary",null,[i[67]||(i[67]=s("a",{id:"WeightInitializers.rand64",href:"#WeightInitializers.rand64"},[s("span",{class:"jlbinding"},"WeightInitializers.rand64")],-1)),i[68]||(i[68]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[69]||(i[69]=t("",3))]),s("details",B,[s("summary",null,[i[70]||(i[70]=s("a",{id:"WeightInitializers.randn64",href:"#WeightInitializers.randn64"},[s("span",{class:"jlbinding"},"WeightInitializers.randn64")],-1)),i[71]||(i[71]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[72]||(i[72]=t("",3))]),s("details",L,[s("summary",null,[i[73]||(i[73]=s("a",{id:"WeightInitializers.zerosC16",href:"#WeightInitializers.zerosC16"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC16")],-1)),i[74]||(i[74]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[75]||(i[75]=t("",3))]),s("details",W,[s("summary",null,[i[76]||(i[76]=s("a",{id:"WeightInitializers.onesC16",href:"#WeightInitializers.onesC16"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC16")],-1)),i[77]||(i[77]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[78]||(i[78]=t("",3))]),s("details",R,[s("summary",null,[i[79]||(i[79]=s("a",{id:"WeightInitializers.randC16",href:"#WeightInitializers.randC16"},[s("span",{class:"jlbinding"},"WeightInitializers.randC16")],-1)),i[80]||(i[80]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),i[81]||(i[81]=t("",3))]),s("details",S,[s("summary",null,[i[82]||(i[82]=s("a",{id:"WeightInitializers.randnC16",href:"#WeightInitializers.randnC16"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC16")],-1)),i[83]||(i[83]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[84]||(i[84]=t("",3))]),s("details",P,[s("summary",null,[i[85]||(i[85]=s("a",{id:"WeightInitializers.zerosC32",href:"#WeightInitializers.zerosC32"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC32")],-1)),i[86]||(i[86]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[87]||(i[87]=t("",3))]),s("details",N,[s("summary",null,[i[88]||(i[88]=s("a",{id:"WeightInitializers.onesC32",href:"#WeightInitializers.onesC32"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC32")],-1)),i[89]||(i[89]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[90]||(i[90]=t("",3))]),s("details",w,[s("summary",null,[i[91]||(i[91]=s("a",{id:"WeightInitializers.randC32",href:"#WeightInitializers.randC32"},[s("span",{class:"jlbinding"},"WeightInitializers.randC32")],-1)),i[92]||(i[92]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[93]||(i[93]=t("",3))]),s("details",V,[s("summary",null,[i[94]||(i[94]=s("a",{id:"WeightInitializers.randnC32",href:"#WeightInitializers.randnC32"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC32")],-1)),i[95]||(i[95]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[96]||(i[96]=t("",3))]),s("details",Q,[s("summary",null,[i[97]||(i[97]=s("a",{id:"WeightInitializers.zerosC64",href:"#WeightInitializers.zerosC64"},[s("span",{class:"jlbinding"},"WeightInitializers.zerosC64")],-1)),i[98]||(i[98]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),i[99]||(i[99]=t("",3))]),s("details",G,[s("summary",null,[i[100]||(i[100]=s("a",{id:"WeightInitializers.onesC64",href:"#WeightInitializers.onesC64"},[s("span",{class:"jlbinding"},"WeightInitializers.onesC64")],-1)),i[101]||(i[101]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[102]||(i[102]=t("",3))]),s("details",U,[s("summary",null,[i[103]||(i[103]=s("a",{id:"WeightInitializers.randC64",href:"#WeightInitializers.randC64"},[s("span",{class:"jlbinding"},"WeightInitializers.randC64")],-1)),i[104]||(i[104]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[105]||(i[105]=t("",3))]),s("details",O,[s("summary",null,[i[106]||(i[106]=s("a",{id:"WeightInitializers.randnC64",href:"#WeightInitializers.randnC64"},[s("span",{class:"jlbinding"},"WeightInitializers.randnC64")],-1)),i[107]||(i[107]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[108]||(i[108]=t("",3))])])}const Y=r(k,[["render",M]]);export{K as __pageData,Y as default}; diff --git a/dev/assets/api_Lux_autodiff.md.Wvm0sUp0.js b/dev/assets/api_Lux_autodiff.md.CvcnLNnS.js similarity index 95% rename from dev/assets/api_Lux_autodiff.md.Wvm0sUp0.js rename to dev/assets/api_Lux_autodiff.md.CvcnLNnS.js index 6bfb191e84..961ec7d1a8 100644 --- a/dev/assets/api_Lux_autodiff.md.Wvm0sUp0.js +++ b/dev/assets/api_Lux_autodiff.md.CvcnLNnS.js @@ -1 +1 @@ -import{_ as n,c as i,j as t,a,G as l,a2 as s,B as r,o as d}from"./chunks/framework.I-x9Gl6h.js";const v=JSON.parse('{"title":"Automatic Differentiation Helpers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/autodiff.md","filePath":"api/Lux/autodiff.md","lastUpdated":null}'),Q={name:"api/Lux/autodiff.md"},p={class:"jldocstring 
custom-block"},c={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},T={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.812ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 3010.7 1799","aria-hidden":"true"},m={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.126ex",height:"4.536ex",role:"img",focusable:"false",viewBox:"0 -1355.3 3591.5 2004.8","aria-hidden":"true"},g={class:"jldocstring custom-block"};function f(b,e,x,k,y,w){const o=r("Badge");return d(),i("div",null,[e[19]||(e[19]=t("h1",{id:"autodiff-lux-helpers",tabindex:"-1"},[a("Automatic Differentiation Helpers "),t("a",{class:"header-anchor",href:"#autodiff-lux-helpers","aria-label":'Permalink to "Automatic Differentiation Helpers {#autodiff-lux-helpers}"'},"​")],-1)),e[20]||(e[20]=t("h2",{id:"JVP-and-VJP-Wrappers",tabindex:"-1"},[a("JVP & VJP Wrappers "),t("a",{class:"header-anchor",href:"#JVP-and-VJP-Wrappers","aria-label":'Permalink to "JVP & VJP Wrappers {#JVP-and-VJP-Wrappers}"'},"​")],-1)),t("details",p,[t("summary",null,[e[0]||(e[0]=t("a",{id:"Lux.jacobian_vector_product",href:"#Lux.jacobian_vector_product"},[t("span",{class:"jlbinding"},"Lux.jacobian_vector_product")],-1)),e[1]||(e[1]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[6]||(e[6]=s('
julia
jacobian_vector_product(f, backend::AbstractADType, x, u)
',1)),t("p",null,[e[4]||(e[4]=a("Compute the Jacobian-Vector Product ")),t("mjx-container",c,[(d(),i("svg",T,e[2]||(e[2]=[s('',1)]))),e[3]||(e[3]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"("),t("mfrac",null,[t("mrow",null,[t("mi",null,"∂"),t("mi",null,"f")]),t("mrow",null,[t("mi",null,"∂"),t("mi",null,"x")])]),t("mo",{"data-mjx-texclass":"CLOSE"},")")]),t("mi",null,"u")])],-1))]),e[5]||(e[5]=a(". This is a wrapper around AD backends but allows us to compute gradients of jacobian-vector products efficiently using mixed-mode AD."))]),e[7]||(e[7]=s('

Backends & AD Packages

Supported BackendsPackages Needed
AutoForwardDiff

Warning

Gradient wrt u in the reverse pass is always dropped.

Arguments

Returns

source

',8))]),t("details",m,[t("summary",null,[e[8]||(e[8]=t("a",{id:"Lux.vector_jacobian_product",href:"#Lux.vector_jacobian_product"},[t("span",{class:"jlbinding"},"Lux.vector_jacobian_product")],-1)),e[9]||(e[9]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[14]||(e[14]=s('
julia
vector_jacobian_product(f, backend::AbstractADType, x, u)
',1)),t("p",null,[e[12]||(e[12]=a("Compute the Vector-Jacobian Product ")),t("mjx-container",u,[(d(),i("svg",h,e[10]||(e[10]=[s('',1)]))),e[11]||(e[11]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msup",null,[t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"("),t("mfrac",null,[t("mrow",null,[t("mi",null,"∂"),t("mi",null,"f")]),t("mrow",null,[t("mi",null,"∂"),t("mi",null,"x")])]),t("mo",{"data-mjx-texclass":"CLOSE"},")")]),t("mi",null,"T")]),t("mi",null,"u")])],-1))]),e[13]||(e[13]=a(". This is a wrapper around AD backends but allows us to compute gradients of vector-jacobian products efficiently using mixed-mode AD."))]),e[15]||(e[15]=s('

Backends & AD Packages

Supported BackendsPackages Needed
AutoZygoteZygote.jl

Warning

Gradient wrt u in the reverse pass is always dropped.

Arguments

Returns

source

',8))]),e[21]||(e[21]=t("h2",{id:"Batched-AD",tabindex:"-1"},[a("Batched AD "),t("a",{class:"header-anchor",href:"#Batched-AD","aria-label":'Permalink to "Batched AD {#Batched-AD}"'},"​")],-1)),t("details",g,[t("summary",null,[e[16]||(e[16]=t("a",{id:"Lux.batched_jacobian",href:"#Lux.batched_jacobian"},[t("span",{class:"jlbinding"},"Lux.batched_jacobian")],-1)),e[17]||(e[17]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[18]||(e[18]=s('
julia
batched_jacobian(f, backend::AbstractADType, x::AbstractArray)

Computes the Jacobian of a function f with respect to a batch of inputs x. This expects the following properties for y = f(x):

  1. ndims(y) ≥ 2

  2. size(y, ndims(y)) == size(x, ndims(x))

Backends & AD Packages

Supported BackendsPackages Needed
AutoForwardDiff
AutoZygoteZygote.jl

Arguments

Returns

Danger

f(x) must not be inter-mixing the batch dimensions, else the result will be incorrect. For example, if f contains operations like batch normalization, then the result will be incorrect.

source

',11))]),e[22]||(e[22]=t("h2",{id:"Nested-2nd-Order-AD",tabindex:"-1"},[a("Nested 2nd Order AD "),t("a",{class:"header-anchor",href:"#Nested-2nd-Order-AD","aria-label":'Permalink to "Nested 2nd Order AD {#Nested-2nd-Order-AD}"'},"​")],-1)),e[23]||(e[23]=t("p",null,[a("Consult the "),t("a",{href:"/dev/manual/nested_autodiff#nested_autodiff"},"manual page on Nested AD"),a(" for information on nested automatic differentiation.")],-1))])}const L=n(Q,[["render",f]]);export{v as __pageData,L as default}; +import{_ as d,c as i,j as t,a,G as l,a2 as s,B as r,o as n}from"./chunks/framework.BetCMmtc.js";const A=JSON.parse('{"title":"Automatic Differentiation Helpers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/autodiff.md","filePath":"api/Lux/autodiff.md","lastUpdated":null}'),T={name:"api/Lux/autodiff.md"},Q={class:"jldocstring custom-block"},p={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.812ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 3010.7 1799","aria-hidden":"true"},m={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.126ex",height:"4.536ex",role:"img",focusable:"false",viewBox:"0 -1355.3 3591.5 2004.8","aria-hidden":"true"},g={class:"jldocstring custom-block"};function f(b,e,x,k,y,_){const o=r("Badge");return n(),i("div",null,[e[19]||(e[19]=t("h1",{id:"autodiff-lux-helpers",tabindex:"-1"},[a("Automatic Differentiation Helpers "),t("a",{class:"header-anchor",href:"#autodiff-lux-helpers","aria-label":'Permalink to "Automatic Differentiation Helpers {#autodiff-lux-helpers}"'},"​")],-1)),e[20]||(e[20]=t("h2",{id:"JVP-and-VJP-Wrappers",tabindex:"-1"},[a("JVP & VJP 
Wrappers "),t("a",{class:"header-anchor",href:"#JVP-and-VJP-Wrappers","aria-label":'Permalink to "JVP & VJP Wrappers {#JVP-and-VJP-Wrappers}"'},"​")],-1)),t("details",Q,[t("summary",null,[e[0]||(e[0]=t("a",{id:"Lux.jacobian_vector_product",href:"#Lux.jacobian_vector_product"},[t("span",{class:"jlbinding"},"Lux.jacobian_vector_product")],-1)),e[1]||(e[1]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[6]||(e[6]=s('
julia
jacobian_vector_product(f, backend::AbstractADType, x, u)
',1)),t("p",null,[e[4]||(e[4]=a("Compute the Jacobian-Vector Product ")),t("mjx-container",p,[(n(),i("svg",c,e[2]||(e[2]=[s('',1)]))),e[3]||(e[3]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"("),t("mfrac",null,[t("mrow",null,[t("mi",null,"∂"),t("mi",null,"f")]),t("mrow",null,[t("mi",null,"∂"),t("mi",null,"x")])]),t("mo",{"data-mjx-texclass":"CLOSE"},")")]),t("mi",null,"u")])],-1))]),e[5]||(e[5]=a(". This is a wrapper around AD backends but allows us to compute gradients of jacobian-vector products efficiently using mixed-mode AD."))]),e[7]||(e[7]=s('

Backends & AD Packages

Supported BackendsPackages Needed
AutoForwardDiff

Warning

Gradient wrt u in the reverse pass is always dropped.

Arguments

Returns

source

',8))]),t("details",m,[t("summary",null,[e[8]||(e[8]=t("a",{id:"Lux.vector_jacobian_product",href:"#Lux.vector_jacobian_product"},[t("span",{class:"jlbinding"},"Lux.vector_jacobian_product")],-1)),e[9]||(e[9]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[14]||(e[14]=s('
julia
vector_jacobian_product(f, backend::AbstractADType, x, u)
',1)),t("p",null,[e[12]||(e[12]=a("Compute the Vector-Jacobian Product ")),t("mjx-container",u,[(n(),i("svg",h,e[10]||(e[10]=[s('',1)]))),e[11]||(e[11]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msup",null,[t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"("),t("mfrac",null,[t("mrow",null,[t("mi",null,"∂"),t("mi",null,"f")]),t("mrow",null,[t("mi",null,"∂"),t("mi",null,"x")])]),t("mo",{"data-mjx-texclass":"CLOSE"},")")]),t("mi",null,"T")]),t("mi",null,"u")])],-1))]),e[13]||(e[13]=a(". This is a wrapper around AD backends but allows us to compute gradients of vector-jacobian products efficiently using mixed-mode AD."))]),e[15]||(e[15]=s('

Backends & AD Packages

Supported BackendsPackages Needed
AutoZygoteZygote.jl

Warning

Gradient wrt u in the reverse pass is always dropped.

Arguments

Returns

source

',8))]),e[21]||(e[21]=t("h2",{id:"Batched-AD",tabindex:"-1"},[a("Batched AD "),t("a",{class:"header-anchor",href:"#Batched-AD","aria-label":'Permalink to "Batched AD {#Batched-AD}"'},"​")],-1)),t("details",g,[t("summary",null,[e[16]||(e[16]=t("a",{id:"Lux.batched_jacobian",href:"#Lux.batched_jacobian"},[t("span",{class:"jlbinding"},"Lux.batched_jacobian")],-1)),e[17]||(e[17]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[18]||(e[18]=s('
julia
batched_jacobian(f, backend::AbstractADType, x::AbstractArray)

Computes the Jacobian of a function f with respect to a batch of inputs x. This expects the following properties for y = f(x):

  1. ndims(y) ≥ 2

  2. size(y, ndims(y)) == size(x, ndims(x))

Backends & AD Packages

Supported BackendsPackages Needed
AutoForwardDiff
AutoZygoteZygote.jl

Arguments

Returns

Danger

f(x) must not be inter-mixing the batch dimensions, else the result will be incorrect. For example, if f contains operations like batch normalization, then the result will be incorrect.

source

',11))]),e[22]||(e[22]=t("h2",{id:"Nested-2nd-Order-AD",tabindex:"-1"},[a("Nested 2nd Order AD "),t("a",{class:"header-anchor",href:"#Nested-2nd-Order-AD","aria-label":'Permalink to "Nested 2nd Order AD {#Nested-2nd-Order-AD}"'},"​")],-1)),e[23]||(e[23]=t("p",null,[a("Consult the "),t("a",{href:"/dev/manual/nested_autodiff#nested_autodiff"},"manual page on Nested AD"),a(" for information on nested automatic differentiation.")],-1))])}const j=d(T,[["render",f]]);export{A as __pageData,j as default}; diff --git a/dev/assets/api_Lux_autodiff.md.CvcnLNnS.lean.js b/dev/assets/api_Lux_autodiff.md.CvcnLNnS.lean.js new file mode 100644 index 0000000000..8bca8bbfe8 --- /dev/null +++ b/dev/assets/api_Lux_autodiff.md.CvcnLNnS.lean.js @@ -0,0 +1 @@ +import{_ as d,c as i,j as t,a,G as l,a2 as s,B as r,o as n}from"./chunks/framework.BetCMmtc.js";const A=JSON.parse('{"title":"Automatic Differentiation Helpers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/autodiff.md","filePath":"api/Lux/autodiff.md","lastUpdated":null}'),T={name:"api/Lux/autodiff.md"},Q={class:"jldocstring custom-block"},p={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.812ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 3010.7 1799","aria-hidden":"true"},m={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.126ex",height:"4.536ex",role:"img",focusable:"false",viewBox:"0 -1355.3 3591.5 2004.8","aria-hidden":"true"},g={class:"jldocstring custom-block"};function f(b,e,x,k,y,_){const o=r("Badge");return n(),i("div",null,[e[19]||(e[19]=t("h1",{id:"autodiff-lux-helpers",tabindex:"-1"},[a("Automatic Differentiation Helpers 
"),t("a",{class:"header-anchor",href:"#autodiff-lux-helpers","aria-label":'Permalink to "Automatic Differentiation Helpers {#autodiff-lux-helpers}"'},"​")],-1)),e[20]||(e[20]=t("h2",{id:"JVP-and-VJP-Wrappers",tabindex:"-1"},[a("JVP & VJP Wrappers "),t("a",{class:"header-anchor",href:"#JVP-and-VJP-Wrappers","aria-label":'Permalink to "JVP & VJP Wrappers {#JVP-and-VJP-Wrappers}"'},"​")],-1)),t("details",Q,[t("summary",null,[e[0]||(e[0]=t("a",{id:"Lux.jacobian_vector_product",href:"#Lux.jacobian_vector_product"},[t("span",{class:"jlbinding"},"Lux.jacobian_vector_product")],-1)),e[1]||(e[1]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[6]||(e[6]=s("",1)),t("p",null,[e[4]||(e[4]=a("Compute the Jacobian-Vector Product ")),t("mjx-container",p,[(n(),i("svg",c,e[2]||(e[2]=[s("",1)]))),e[3]||(e[3]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"("),t("mfrac",null,[t("mrow",null,[t("mi",null,"∂"),t("mi",null,"f")]),t("mrow",null,[t("mi",null,"∂"),t("mi",null,"x")])]),t("mo",{"data-mjx-texclass":"CLOSE"},")")]),t("mi",null,"u")])],-1))]),e[5]||(e[5]=a(". 
This is a wrapper around AD backends but allows us to compute gradients of jacobian-vector products efficiently using mixed-mode AD."))]),e[7]||(e[7]=s("",8))]),t("details",m,[t("summary",null,[e[8]||(e[8]=t("a",{id:"Lux.vector_jacobian_product",href:"#Lux.vector_jacobian_product"},[t("span",{class:"jlbinding"},"Lux.vector_jacobian_product")],-1)),e[9]||(e[9]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[14]||(e[14]=s("",1)),t("p",null,[e[12]||(e[12]=a("Compute the Vector-Jacobian Product ")),t("mjx-container",u,[(n(),i("svg",h,e[10]||(e[10]=[s("",1)]))),e[11]||(e[11]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msup",null,[t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"("),t("mfrac",null,[t("mrow",null,[t("mi",null,"∂"),t("mi",null,"f")]),t("mrow",null,[t("mi",null,"∂"),t("mi",null,"x")])]),t("mo",{"data-mjx-texclass":"CLOSE"},")")]),t("mi",null,"T")]),t("mi",null,"u")])],-1))]),e[13]||(e[13]=a(". 
This is a wrapper around AD backends but allows us to compute gradients of vector-jacobian products efficiently using mixed-mode AD."))]),e[15]||(e[15]=s("",8))]),e[21]||(e[21]=t("h2",{id:"Batched-AD",tabindex:"-1"},[a("Batched AD "),t("a",{class:"header-anchor",href:"#Batched-AD","aria-label":'Permalink to "Batched AD {#Batched-AD}"'},"​")],-1)),t("details",g,[t("summary",null,[e[16]||(e[16]=t("a",{id:"Lux.batched_jacobian",href:"#Lux.batched_jacobian"},[t("span",{class:"jlbinding"},"Lux.batched_jacobian")],-1)),e[17]||(e[17]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[18]||(e[18]=s("",11))]),e[22]||(e[22]=t("h2",{id:"Nested-2nd-Order-AD",tabindex:"-1"},[a("Nested 2nd Order AD "),t("a",{class:"header-anchor",href:"#Nested-2nd-Order-AD","aria-label":'Permalink to "Nested 2nd Order AD {#Nested-2nd-Order-AD}"'},"​")],-1)),e[23]||(e[23]=t("p",null,[a("Consult the "),t("a",{href:"/dev/manual/nested_autodiff#nested_autodiff"},"manual page on Nested AD"),a(" for information on nested automatic differentiation.")],-1))])}const j=d(T,[["render",f]]);export{A as __pageData,j as default}; diff --git a/dev/assets/api_Lux_autodiff.md.Wvm0sUp0.lean.js b/dev/assets/api_Lux_autodiff.md.Wvm0sUp0.lean.js deleted file mode 100644 index 6bfb191e84..0000000000 --- a/dev/assets/api_Lux_autodiff.md.Wvm0sUp0.lean.js +++ /dev/null @@ -1 +0,0 @@ -import{_ as n,c as i,j as t,a,G as l,a2 as s,B as r,o as d}from"./chunks/framework.I-x9Gl6h.js";const v=JSON.parse('{"title":"Automatic Differentiation Helpers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/autodiff.md","filePath":"api/Lux/autodiff.md","lastUpdated":null}'),Q={name:"api/Lux/autodiff.md"},p={class:"jldocstring 
custom-block"},c={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},T={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.812ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 3010.7 1799","aria-hidden":"true"},m={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.126ex",height:"4.536ex",role:"img",focusable:"false",viewBox:"0 -1355.3 3591.5 2004.8","aria-hidden":"true"},g={class:"jldocstring custom-block"};function f(b,e,x,k,y,w){const o=r("Badge");return d(),i("div",null,[e[19]||(e[19]=t("h1",{id:"autodiff-lux-helpers",tabindex:"-1"},[a("Automatic Differentiation Helpers "),t("a",{class:"header-anchor",href:"#autodiff-lux-helpers","aria-label":'Permalink to "Automatic Differentiation Helpers {#autodiff-lux-helpers}"'},"​")],-1)),e[20]||(e[20]=t("h2",{id:"JVP-and-VJP-Wrappers",tabindex:"-1"},[a("JVP & VJP Wrappers "),t("a",{class:"header-anchor",href:"#JVP-and-VJP-Wrappers","aria-label":'Permalink to "JVP & VJP Wrappers {#JVP-and-VJP-Wrappers}"'},"​")],-1)),t("details",p,[t("summary",null,[e[0]||(e[0]=t("a",{id:"Lux.jacobian_vector_product",href:"#Lux.jacobian_vector_product"},[t("span",{class:"jlbinding"},"Lux.jacobian_vector_product")],-1)),e[1]||(e[1]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[6]||(e[6]=s('
julia
jacobian_vector_product(f, backend::AbstractADType, x, u)
',1)),t("p",null,[e[4]||(e[4]=a("Compute the Jacobian-Vector Product ")),t("mjx-container",c,[(d(),i("svg",T,e[2]||(e[2]=[s('',1)]))),e[3]||(e[3]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"("),t("mfrac",null,[t("mrow",null,[t("mi",null,"∂"),t("mi",null,"f")]),t("mrow",null,[t("mi",null,"∂"),t("mi",null,"x")])]),t("mo",{"data-mjx-texclass":"CLOSE"},")")]),t("mi",null,"u")])],-1))]),e[5]||(e[5]=a(". This is a wrapper around AD backends but allows us to compute gradients of jacobian-vector products efficiently using mixed-mode AD."))]),e[7]||(e[7]=s('

Backends & AD Packages

Supported BackendsPackages Needed
AutoForwardDiff

Warning

Gradient wrt u in the reverse pass is always dropped.

Arguments

Returns

source

',8))]),t("details",m,[t("summary",null,[e[8]||(e[8]=t("a",{id:"Lux.vector_jacobian_product",href:"#Lux.vector_jacobian_product"},[t("span",{class:"jlbinding"},"Lux.vector_jacobian_product")],-1)),e[9]||(e[9]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[14]||(e[14]=s('
julia
vector_jacobian_product(f, backend::AbstractADType, x, u)
',1)),t("p",null,[e[12]||(e[12]=a("Compute the Vector-Jacobian Product ")),t("mjx-container",u,[(d(),i("svg",h,e[10]||(e[10]=[s('',1)]))),e[11]||(e[11]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msup",null,[t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"("),t("mfrac",null,[t("mrow",null,[t("mi",null,"∂"),t("mi",null,"f")]),t("mrow",null,[t("mi",null,"∂"),t("mi",null,"x")])]),t("mo",{"data-mjx-texclass":"CLOSE"},")")]),t("mi",null,"T")]),t("mi",null,"u")])],-1))]),e[13]||(e[13]=a(". This is a wrapper around AD backends but allows us to compute gradients of vector-jacobian products efficiently using mixed-mode AD."))]),e[15]||(e[15]=s('

Backends & AD Packages

Supported BackendsPackages Needed
AutoZygoteZygote.jl

Warning

Gradient wrt u in the reverse pass is always dropped.

Arguments

Returns

source

',8))]),e[21]||(e[21]=t("h2",{id:"Batched-AD",tabindex:"-1"},[a("Batched AD "),t("a",{class:"header-anchor",href:"#Batched-AD","aria-label":'Permalink to "Batched AD {#Batched-AD}"'},"​")],-1)),t("details",g,[t("summary",null,[e[16]||(e[16]=t("a",{id:"Lux.batched_jacobian",href:"#Lux.batched_jacobian"},[t("span",{class:"jlbinding"},"Lux.batched_jacobian")],-1)),e[17]||(e[17]=a()),l(o,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),e[18]||(e[18]=s('
julia
batched_jacobian(f, backend::AbstractADType, x::AbstractArray)

Computes the Jacobian of a function f with respect to a batch of inputs x. This expects the following properties for y = f(x):

  1. ndims(y) ≥ 2

  2. size(y, ndims(y)) == size(x, ndims(x))

Backends & AD Packages

Supported BackendsPackages Needed
AutoForwardDiff
AutoZygoteZygote.jl

Arguments

Returns

Danger

f(x) must not mix the batch dimensions; otherwise the result will be incorrect. For example, if f contains operations like batch normalization, then the result will be incorrect.

source

',11))]),e[22]||(e[22]=t("h2",{id:"Nested-2nd-Order-AD",tabindex:"-1"},[a("Nested 2nd Order AD "),t("a",{class:"header-anchor",href:"#Nested-2nd-Order-AD","aria-label":'Permalink to "Nested 2nd Order AD {#Nested-2nd-Order-AD}"'},"​")],-1)),e[23]||(e[23]=t("p",null,[a("Consult the "),t("a",{href:"/dev/manual/nested_autodiff#nested_autodiff"},"manual page on Nested AD"),a(" for information on nested automatic differentiation.")],-1))])}const L=n(Q,[["render",f]]);export{v as __pageData,L as default}; diff --git a/dev/assets/api_Lux_contrib.md.BwHwNoK7.lean.js b/dev/assets/api_Lux_contrib.md.BwHwNoK7.lean.js deleted file mode 100644 index 1a6c1071da..0000000000 --- a/dev/assets/api_Lux_contrib.md.BwHwNoK7.lean.js +++ /dev/null @@ -1,55 +0,0 @@ -import{_ as h,c as l,a2 as e,j as i,a,G as n,B as p,o as k}from"./chunks/framework.I-x9Gl6h.js";const A=JSON.parse('{"title":"Experimental Features","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/contrib.md","filePath":"api/Lux/contrib.md","lastUpdated":null}'),r={name:"api/Lux/contrib.md"},d={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"};function F(m,s,C,b,D,f){const t=p("Badge");return k(),l("div",null,[s[21]||(s[21]=e('

Experimental Features

All features listed on this page are experimental, which means:

  1. No SemVer Guarantees. We use code here to iterate fast. That said, historically we have never broken any code in this module and have always provided a deprecation period.

  2. Expect edge-cases and report them. It will help us move these features out of experimental sooner.

  3. None of the features are exported.

Parameter Freezing

',4)),i("details",d,[i("summary",null,[s[0]||(s[0]=i("a",{id:"Lux.Experimental.FrozenLayer",href:"#Lux.Experimental.FrozenLayer"},[i("span",{class:"jlbinding"},"Lux.Experimental.FrozenLayer")],-1)),s[1]||(s[1]=a()),n(t,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[2]||(s[2]=e(`
julia
FrozenLayer(l::AbstractLuxLayer, which_params::Optional{Tuple})

Freeze the parameters with name which_params of the layer l.

Use Lux.Experimental.freeze instead

It is always recommended to use the Lux.Experimental.freeze function instead of directly using the FrozenLayer constructor.

No checks for which_params

There are no checks for which_params. For example, if the original layer has parameters named (:weight, :bias), and which_params is set to (:myweight,) then none of the parameters are frozen and no error is thrown.

Arguments

Extended Help

Parameters

States

Note on Internal Layer Implementation

The inner layer should work with NamedTuple parameters. In order to support custom parameter types, users need to implement Lux.Utils.merge(::CustomParamType, ::NamedTuple) or extend Lux.Utils.named_tuple(::CustomParamType) to return a NamedTuple.

Example

julia
julia> Lux.Experimental.FrozenLayer(Dense(2 => 2), (:weight,))
-FrozenLayer(Dense(2 => 2), (:weight,))  # 2 parameters, plus 4 non-trainable

See also Lux.Experimental.freeze, Lux.Experimental.unfreeze.

source

`,17))]),i("details",E,[i("summary",null,[s[3]||(s[3]=i("a",{id:"Lux.Experimental.freeze",href:"#Lux.Experimental.freeze"},[i("span",{class:"jlbinding"},"Lux.Experimental.freeze")],-1)),s[4]||(s[4]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=e(`
julia
freeze(l::AbstractLuxLayer, which_params::Optional{Tuple} = nothing)

Constructs a version of l with which_params frozen. If which_params is nothing, then all parameters are frozen.

source

julia
freeze(l::AbstractLuxLayer, ps, st::NamedTuple,
-    which_params::Optional{Tuple} = nothing)

Construct a Lux.Experimental.FrozenLayer for l with the current parameters and states. If which_params is nothing, then all parameters are frozen.

source

`,6))]),i("details",o,[i("summary",null,[s[6]||(s[6]=i("a",{id:"Lux.Experimental.unfreeze",href:"#Lux.Experimental.unfreeze"},[i("span",{class:"jlbinding"},"Lux.Experimental.unfreeze")],-1)),s[7]||(s[7]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=e('
julia
unfreeze(l::FrozenLayer)

Unfreezes the layer l.

source

julia
unfreeze(l::FrozenLayer, ps, st::NamedTuple)

Unwraps a Lux.Experimental.FrozenLayer l with the current parameters and states.

source

',6))]),s[22]||(s[22]=i("p",null,[a("For detailed usage example look at the "),i("a",{href:"/dev/manual/freezing_model_parameters#freezing-model-parameters"},"manual page"),a(".")],-1)),s[23]||(s[23]=i("h2",{id:"Map-over-Layer",tabindex:"-1"},[a("Map over Layer "),i("a",{class:"header-anchor",href:"#Map-over-Layer","aria-label":'Permalink to "Map over Layer {#Map-over-Layer}"'},"​")],-1)),i("details",g,[i("summary",null,[s[9]||(s[9]=i("a",{id:"Lux.Experimental.layer_map",href:"#Lux.Experimental.layer_map"},[i("span",{class:"jlbinding"},"Lux.Experimental.layer_map")],-1)),s[10]||(s[10]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=e(`
julia
layer_map(f, l::AbstractLuxLayer, ps, st::NamedTuple)

Map the function f over the model l, with the parameters ps and states st. This is different from Functors.fmap since it zips the layers, parameters, and states and invokes the function on all of them together.

KeyPath provided to the function

The KeyPath depends on the structure of the parameters and states. This is of consequence exclusively for AbstractLuxWrapperLayer, where the structure of the layer doesn't match the structure of the parameters and states. In the example provided below, the KeyPath is (:chain, :dense_1) for the first layer (following the structure in ps), while accessing the same layer in the chain is done with (:chain, :layers, :dense_1).

Call Signature for f

Extended Help

Example

julia
julia> using Lux, Random
-
-julia> c = Parallel(
-           +; chain=Chain(; dense_1=Dense(2 => 3), bn=BatchNorm(3), dense_2=Dense(3 => 5)),
-           dense_3=Dense(5 => 1));
-
-julia> rng = Random.default_rng();
-
-julia> ps, st = Lux.setup(rng, c);
-
-julia> # Makes parameters of Dense Layers inside Chain zero
-       function zero_dense_params(l, ps, st, name)
-           if l isa Dense
-               println("zeroing params of $name")
-               ps = merge(ps, (; weight=zero.(ps.weight), bias=zero.(ps.bias)))
-           end
-           return l, ps, st
-       end;
-
-julia> _, ps_new, _ = Lux.Experimental.layer_map(zero_dense_params, c, ps, st);
-zeroing params of KeyPath(:chain, :dense_1)
-zeroing params of KeyPath(:chain, :dense_2)
-zeroing params of KeyPath(:dense_3,)
-
-julia> all(iszero, (ps_new.chain.dense_1.weight, ps_new.chain.dense_1.bias,
-                    ps_new.chain.dense_2.weight, ps_new.chain.dense_2.bias,
-                    ps_new.dense_3.weight, ps_new.dense_3.bias))
-true

source

`,9))]),s[24]||(s[24]=i("h2",{id:"Debugging-Functionality",tabindex:"-1"},[a("Debugging Functionality "),i("a",{class:"header-anchor",href:"#Debugging-Functionality","aria-label":'Permalink to "Debugging Functionality {#Debugging-Functionality}"'},"​")],-1)),s[25]||(s[25]=i("p",null,"Model not working properly! Here are some functionalities to help you debug you Lux model.",-1)),i("details",y,[i("summary",null,[s[12]||(s[12]=i("a",{id:"Lux.Experimental.@debug_mode",href:"#Lux.Experimental.@debug_mode"},[i("span",{class:"jlbinding"},"Lux.Experimental.@debug_mode")],-1)),s[13]||(s[13]=a()),n(t,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[14]||(s[14]=e('
julia
@debug_mode layer kwargs...

Recurses into the layer and replaces the inner most non Container Layers with a Lux.Experimental.DebugLayer.

See Lux.Experimental.DebugLayer for details about the Keyword Arguments.

source

',4))]),i("details",c,[i("summary",null,[s[15]||(s[15]=i("a",{id:"Lux.Experimental.DebugLayer",href:"#Lux.Experimental.DebugLayer"},[i("span",{class:"jlbinding"},"Lux.Experimental.DebugLayer")],-1)),s[16]||(s[16]=a()),n(t,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=e(`
julia
DebugLayer(layer::AbstractLuxLayer;
-    nan_check::Union{Symbol, StaticSymbol, Val}=static(:both),
-    error_check::Union{StaticBool, Bool, Val{true}, Val{false}}=True(),
-    location::KeyPath=KeyPath())

A wrapper over Lux layers that adds checks for NaNs and errors. This is useful for debugging.

Arguments

Extended Help

Keyword Arguments

Input / Output

Inputs and outputs are the same as the layer unless one of the nan_check or error_check criteria is met.

If nan_check is enabled and NaNs are detected then a DomainError is thrown. If error_check is enabled, then any errors in the layer are thrown with useful information to track where the error originates.

ChainRules Compatible Reverse Mode AD Tools

nan_check for the backward mode only works with ChainRules Compatible Reverse Mode AD Tools currently.

Disable After Debugging

This layer is only meant to be used for debugging. If used for actual training or inference, it will lead to extremely bad performance.

See Lux.Experimental.@debug_mode to construct this layer.

source

`,14))]),s[26]||(s[26]=i("h2",{id:"Tied-Parameters",tabindex:"-1"},[a("Tied Parameters "),i("a",{class:"header-anchor",href:"#Tied-Parameters","aria-label":'Permalink to "Tied Parameters {#Tied-Parameters}"'},"​")],-1)),i("details",u,[i("summary",null,[s[18]||(s[18]=i("a",{id:"Lux.Experimental.share_parameters",href:"#Lux.Experimental.share_parameters"},[i("span",{class:"jlbinding"},"Lux.Experimental.share_parameters")],-1)),s[19]||(s[19]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[20]||(s[20]=e(`
julia
share_parameters(ps, sharing)
-share_parameters(ps, sharing, new_parameters)

Updates the parameters in ps with a common set of parameters new_parameters that are shared between each list in the nested list sharing. (That was kind of a mouthful, the example should make it clear).

Arguments

Returns

Updated Parameters having the same structure as ps.

Example

julia
julia> model = Chain(; d1=Dense(2 => 4, tanh),
-           d3=Chain(; l1=Dense(4 => 2), l2=Dense(2 => 4)), d2=Dense(4 => 2))
-Chain(
-    d1 = Dense(2 => 4, tanh),           # 12 parameters
-    d3 = Chain(
-        l1 = Dense(4 => 2),             # 10 parameters
-        l2 = Dense(2 => 4),             # 12 parameters
-    ),
-    d2 = Dense(4 => 2),                 # 10 parameters
-)         # Total: 44 parameters,
-          #        plus 0 states.
-
-julia> ps, st = Lux.setup(Xoshiro(0), model);
-
-julia> # share parameters of (d1 and d3.l1) and (d3.l2 and d2)
-       ps = Lux.Experimental.share_parameters(ps, (("d3.l2", "d1"), ("d2", "d3.l1")));
-
-julia> ps.d3.l2.weight === ps.d1.weight &&
-           ps.d3.l2.bias === ps.d1.bias &&
-           ps.d2.weight === ps.d3.l1.weight &&
-           ps.d2.bias === ps.d3.l1.bias
-true

ComponentArrays

ComponentArrays doesn't allow sharing parameters. Converting the returned parameters to a ComponentArray will silently cause the parameter sharing to be undone.

source

`,10))])])}const L=h(r,[["render",F]]);export{A as __pageData,L as default}; diff --git a/dev/assets/api_Lux_contrib.md.BwHwNoK7.js b/dev/assets/api_Lux_contrib.md.DbSqHdk6.js similarity index 97% rename from dev/assets/api_Lux_contrib.md.BwHwNoK7.js rename to dev/assets/api_Lux_contrib.md.DbSqHdk6.js index 1a6c1071da..5faa7af722 100644 --- a/dev/assets/api_Lux_contrib.md.BwHwNoK7.js +++ b/dev/assets/api_Lux_contrib.md.DbSqHdk6.js @@ -1,6 +1,6 @@ -import{_ as h,c as l,a2 as e,j as i,a,G as n,B as p,o as k}from"./chunks/framework.I-x9Gl6h.js";const A=JSON.parse('{"title":"Experimental Features","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/contrib.md","filePath":"api/Lux/contrib.md","lastUpdated":null}'),r={name:"api/Lux/contrib.md"},d={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"};function F(m,s,C,b,D,f){const t=p("Badge");return k(),l("div",null,[s[21]||(s[21]=e('

Experimental Features

All features listed on this page are experimental, which means:

  1. No SemVer Guarantees. We use code here to iterate fast. That said, historically we have never broken any code in this module and have always provided a deprecation period.

  2. Expect edge-cases and report them. It will help us move these features out of experimental sooner.

  3. None of the features are exported.

Parameter Freezing

',4)),i("details",d,[i("summary",null,[s[0]||(s[0]=i("a",{id:"Lux.Experimental.FrozenLayer",href:"#Lux.Experimental.FrozenLayer"},[i("span",{class:"jlbinding"},"Lux.Experimental.FrozenLayer")],-1)),s[1]||(s[1]=a()),n(t,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[2]||(s[2]=e(`
julia
FrozenLayer(l::AbstractLuxLayer, which_params::Optional{Tuple})

Freeze the parameters with name which_params of the layer l.

Use Lux.Experimental.freeze instead

It is always recommended to use the Lux.Experimental.freeze function instead of directly using the FrozenLayer constructor.

No checks for which_params

There are no checks for which_params. For example, if the original layer has parameters named (:weight, :bias), and which_params is set to (:myweight,) then none of the parameters are frozen and no error is thrown.

Arguments

Extended Help

Parameters

States

Note on Internal Layer Implementation

The inner layer should work with NamedTuple parameters. In order to support custom parameter types, users need to implement Lux.Utils.merge(::CustomParamType, ::NamedTuple) or extend Lux.Utils.named_tuple(::CustomParamType) to return a NamedTuple.

Example

julia
julia> Lux.Experimental.FrozenLayer(Dense(2 => 2), (:weight,))
-FrozenLayer(Dense(2 => 2), (:weight,))  # 2 parameters, plus 4 non-trainable

See also Lux.Experimental.freeze, Lux.Experimental.unfreeze.

source

`,17))]),i("details",E,[i("summary",null,[s[3]||(s[3]=i("a",{id:"Lux.Experimental.freeze",href:"#Lux.Experimental.freeze"},[i("span",{class:"jlbinding"},"Lux.Experimental.freeze")],-1)),s[4]||(s[4]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=e(`
julia
freeze(l::AbstractLuxLayer, which_params::Optional{Tuple} = nothing)

Constructs a version of l with which_params frozen. If which_params is nothing, then all parameters are frozen.

source

julia
freeze(l::AbstractLuxLayer, ps, st::NamedTuple,
-    which_params::Optional{Tuple} = nothing)

Construct a Lux.Experimental.FrozenLayer for l with the current parameters and states. If which_params is nothing, then all parameters are frozen.

source

`,6))]),i("details",o,[i("summary",null,[s[6]||(s[6]=i("a",{id:"Lux.Experimental.unfreeze",href:"#Lux.Experimental.unfreeze"},[i("span",{class:"jlbinding"},"Lux.Experimental.unfreeze")],-1)),s[7]||(s[7]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=e('
julia
unfreeze(l::FrozenLayer)

Unfreezes the layer l.

source

julia
unfreeze(l::FrozenLayer, ps, st::NamedTuple)

Unwraps a Lux.Experimental.FrozenLayer l with the current parameters and states.

source

',6))]),s[22]||(s[22]=i("p",null,[a("For detailed usage example look at the "),i("a",{href:"/dev/manual/freezing_model_parameters#freezing-model-parameters"},"manual page"),a(".")],-1)),s[23]||(s[23]=i("h2",{id:"Map-over-Layer",tabindex:"-1"},[a("Map over Layer "),i("a",{class:"header-anchor",href:"#Map-over-Layer","aria-label":'Permalink to "Map over Layer {#Map-over-Layer}"'},"​")],-1)),i("details",g,[i("summary",null,[s[9]||(s[9]=i("a",{id:"Lux.Experimental.layer_map",href:"#Lux.Experimental.layer_map"},[i("span",{class:"jlbinding"},"Lux.Experimental.layer_map")],-1)),s[10]||(s[10]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=e(`
julia
layer_map(f, l::AbstractLuxLayer, ps, st::NamedTuple)

Map the function f over the model l, with the parameters ps and states st. This is different from Functors.fmap since it zips the layers, parameters, and states and invokes the function on all of them together.

KeyPath provided to the function

The KeyPath depends on the structure of the parameters and states. This is of consequence exclusively for AbstractLuxWrapperLayer, where the structure of the layer doesn't match the structure of the parameters and states. In the example provided below, the KeyPath is (:chain, :dense_1) for the first layer (following the structure in ps), while accessing the same layer in the chain is done with (:chain, :layers, :dense_1).

Call Signature for f

Extended Help

Example

julia
julia> using Lux, Random
+import{_ as h,c as l,a2 as e,j as i,a,G as n,B as p,o as k}from"./chunks/framework.BetCMmtc.js";const f=JSON.parse('{"title":"Experimental Features","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/contrib.md","filePath":"api/Lux/contrib.md","lastUpdated":null}'),r={name:"api/Lux/contrib.md"},d={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"};function F(m,s,C,b,D,A){const t=p("Badge");return k(),l("div",null,[s[21]||(s[21]=e('

Experimental Features

All features listed on this page are experimental, which means:

  1. No SemVer Guarantees. We use code here to iterate fast. That said, historically we have never broken any code in this module and have always provided a deprecation period.

  2. Expect edge-cases and report them. It will help us move these features out of experimental sooner.

  3. None of the features are exported.

Parameter Freezing

',4)),i("details",d,[i("summary",null,[s[0]||(s[0]=i("a",{id:"Lux.Experimental.FrozenLayer",href:"#Lux.Experimental.FrozenLayer"},[i("span",{class:"jlbinding"},"Lux.Experimental.FrozenLayer")],-1)),s[1]||(s[1]=a()),n(t,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[2]||(s[2]=e(`
julia
FrozenLayer(l::AbstractLuxLayer, which_params::Optional{Tuple})

Freeze the parameters with name which_params of the layer l.

Use Lux.Experimental.freeze instead

It is always recommended to use the Lux.Experimental.freeze function instead of directly using the FrozenLayer constructor.

No checks for which_params

There are no checks for which_params. For example, if the original layer has parameters named (:weight, :bias), and which_params is set to (:myweight,) then none of the parameters are frozen and no error is thrown.

Arguments

  • l: Lux AbstractLuxLayer.

  • which_params: Parameter Names to be Frozen. Can be set to nothing, in which case all parameters are frozen.

Extended Help

Parameters

  • Parameters of the layer l excluding which_params.

States

  • frozen_params: Parameters that are frozen, i.e., which_params.

  • states: The state of the inner layer l.

Note on Internal Layer Implementation

The inner layer should work with NamedTuple parameters. In order to support custom parameter types, users need to implement Lux.Utils.merge(::CustomParamType, ::NamedTuple) or extend Lux.Utils.named_tuple(::CustomParamType) to return a NamedTuple.

Example

julia
julia> Lux.Experimental.FrozenLayer(Dense(2 => 2), (:weight,))
+FrozenLayer(Dense(2 => 2), (:weight,))  # 2 parameters, plus 4 non-trainable

See also Lux.Experimental.freeze, Lux.Experimental.unfreeze.

source

`,17))]),i("details",E,[i("summary",null,[s[3]||(s[3]=i("a",{id:"Lux.Experimental.freeze",href:"#Lux.Experimental.freeze"},[i("span",{class:"jlbinding"},"Lux.Experimental.freeze")],-1)),s[4]||(s[4]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=e(`
julia
freeze(l::AbstractLuxLayer, which_params::Optional{Tuple} = nothing)

Constructs a version of l with which_params frozen. If which_params is nothing, then all parameters are frozen.

source

julia
freeze(l::AbstractLuxLayer, ps, st::NamedTuple,
+    which_params::Optional{Tuple} = nothing)

Construct a Lux.Experimental.FrozenLayer for l with the current parameters and states. If which_params is nothing, then all parameters are frozen.

source

`,6))]),i("details",o,[i("summary",null,[s[6]||(s[6]=i("a",{id:"Lux.Experimental.unfreeze",href:"#Lux.Experimental.unfreeze"},[i("span",{class:"jlbinding"},"Lux.Experimental.unfreeze")],-1)),s[7]||(s[7]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=e('
julia
unfreeze(l::FrozenLayer)

Unfreezes the layer l.

source

julia
unfreeze(l::FrozenLayer, ps, st::NamedTuple)

Unwraps a Lux.Experimental.FrozenLayer l with the current parameters and states.

source

',6))]),s[22]||(s[22]=i("p",null,[a("For detailed usage example look at the "),i("a",{href:"/dev/manual/freezing_model_parameters#freezing-model-parameters"},"manual page"),a(".")],-1)),s[23]||(s[23]=i("h2",{id:"Map-over-Layer",tabindex:"-1"},[a("Map over Layer "),i("a",{class:"header-anchor",href:"#Map-over-Layer","aria-label":'Permalink to "Map over Layer {#Map-over-Layer}"'},"​")],-1)),i("details",g,[i("summary",null,[s[9]||(s[9]=i("a",{id:"Lux.Experimental.layer_map",href:"#Lux.Experimental.layer_map"},[i("span",{class:"jlbinding"},"Lux.Experimental.layer_map")],-1)),s[10]||(s[10]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=e(`
julia
layer_map(f, l::AbstractLuxLayer, ps, st::NamedTuple)

Map the function f over the model l, with the parameters ps and states st. This is different from Functors.fmap since it zips the layers, parameters, and states and invokes the function on all of them together.

KeyPath provided to the function

The KeyPath depends on the structure of the parameters and states. This is of consequence exclusively for AbstractLuxWrapperLayer, where the structure of the layer doesn't match the structure of the parameters and states. In the example provided below, the KeyPath is (:chain, :dense_1) for the first layer (following the structure in ps), while accessing the same layer in the chain is done with (:chain, :layers, :dense_1).

Call Signature for f

  • Must take 4 inputs – AbstractLuxLayer, Corresponding Parameters, Corresponding States, and the Functors.KeyPath to the layer.

  • Must return a tuple of 3 elements – AbstractLuxLayer, new parameters and the new states.

Extended Help

Example

julia
julia> using Lux, Random
 
 julia> c = Parallel(
            +; chain=Chain(; dense_1=Dense(2 => 3), bn=BatchNorm(3), dense_2=Dense(3 => 5)),
@@ -27,10 +27,10 @@ import{_ as h,c as l,a2 as e,j as i,a,G as n,B as p,o as k}from"./chunks/framewo
 julia> all(iszero, (ps_new.chain.dense_1.weight, ps_new.chain.dense_1.bias,
                     ps_new.chain.dense_2.weight, ps_new.chain.dense_2.bias,
                     ps_new.dense_3.weight, ps_new.dense_3.bias))
-true

source

`,9))]),s[24]||(s[24]=i("h2",{id:"Debugging-Functionality",tabindex:"-1"},[a("Debugging Functionality "),i("a",{class:"header-anchor",href:"#Debugging-Functionality","aria-label":'Permalink to "Debugging Functionality {#Debugging-Functionality}"'},"​")],-1)),s[25]||(s[25]=i("p",null,"Model not working properly! Here are some functionalities to help you debug you Lux model.",-1)),i("details",y,[i("summary",null,[s[12]||(s[12]=i("a",{id:"Lux.Experimental.@debug_mode",href:"#Lux.Experimental.@debug_mode"},[i("span",{class:"jlbinding"},"Lux.Experimental.@debug_mode")],-1)),s[13]||(s[13]=a()),n(t,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[14]||(s[14]=e('
julia
@debug_mode layer kwargs...

Recurses into the layer and replaces the inner most non Container Layers with a Lux.Experimental.DebugLayer.

See Lux.Experimental.DebugLayer for details about the Keyword Arguments.

source

',4))]),i("details",c,[i("summary",null,[s[15]||(s[15]=i("a",{id:"Lux.Experimental.DebugLayer",href:"#Lux.Experimental.DebugLayer"},[i("span",{class:"jlbinding"},"Lux.Experimental.DebugLayer")],-1)),s[16]||(s[16]=a()),n(t,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=e(`
julia
DebugLayer(layer::AbstractLuxLayer;
+true

source

`,9))]),s[24]||(s[24]=i("h2",{id:"Debugging-Functionality",tabindex:"-1"},[a("Debugging Functionality "),i("a",{class:"header-anchor",href:"#Debugging-Functionality","aria-label":'Permalink to "Debugging Functionality {#Debugging-Functionality}"'},"​")],-1)),s[25]||(s[25]=i("p",null,"Model not working properly! Here are some functionalities to help you debug you Lux model.",-1)),i("details",y,[i("summary",null,[s[12]||(s[12]=i("a",{id:"Lux.Experimental.@debug_mode",href:"#Lux.Experimental.@debug_mode"},[i("span",{class:"jlbinding"},"Lux.Experimental.@debug_mode")],-1)),s[13]||(s[13]=a()),n(t,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[14]||(s[14]=e('
julia
@debug_mode layer kwargs...

Recurses into the layer and replaces the inner most non Container Layers with a Lux.Experimental.DebugLayer.

See Lux.Experimental.DebugLayer for details about the Keyword Arguments.

source

',4))]),i("details",c,[i("summary",null,[s[15]||(s[15]=i("a",{id:"Lux.Experimental.DebugLayer",href:"#Lux.Experimental.DebugLayer"},[i("span",{class:"jlbinding"},"Lux.Experimental.DebugLayer")],-1)),s[16]||(s[16]=a()),n(t,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=e(`
julia
DebugLayer(layer::AbstractLuxLayer;
     nan_check::Union{Symbol, StaticSymbol, Val}=static(:both),
     error_check::Union{StaticBool, Bool, Val{true}, Val{false}}=True(),
-    location::KeyPath=KeyPath())

A wrapper over Lux layers that adds checks for NaNs and errors. This is useful for debugging.

Arguments

  • layer: The layer to be wrapped.

Extended Help

Keyword Arguments

  • nan_check: Whether to check for NaNs in the input, parameters, and states. Can be :both, :forward, :backward, or :none.

  • error_check: Whether to check for errors in the layer. If true, will throw an error if the layer fails.

  • location: The location of the layer. Use Lux.Experimental.@debug_mode to construct this layer to populate this value correctly.

Input / Output

Inputs and outputs are the same as the layer unless one of the nan_check or error_check criteria is met.

If nan_check is enabled and NaNs are detected then a DomainError is thrown. If error_check is enabled, then any errors in the layer are thrown with useful information to track where the error originates.

ChainRules Compatible Reverse Mode AD Tools

nan_check for the backward mode only works with ChainRules Compatible Reverse Mode AD Tools currently.

Disable After Debugging

This layer is only meant to be used for debugging. If used for actual training or inference, it will lead to extremely bad performance.

See Lux.Experimental.@debug_mode to construct this layer.

source

`,14))]),s[26]||(s[26]=i("h2",{id:"Tied-Parameters",tabindex:"-1"},[a("Tied Parameters "),i("a",{class:"header-anchor",href:"#Tied-Parameters","aria-label":'Permalink to "Tied Parameters {#Tied-Parameters}"'},"​")],-1)),i("details",u,[i("summary",null,[s[18]||(s[18]=i("a",{id:"Lux.Experimental.share_parameters",href:"#Lux.Experimental.share_parameters"},[i("span",{class:"jlbinding"},"Lux.Experimental.share_parameters")],-1)),s[19]||(s[19]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[20]||(s[20]=e(`
julia
share_parameters(ps, sharing)
+    location::KeyPath=KeyPath())

A wrapper over Lux layers that adds checks for NaNs and errors. This is useful for debugging.

Arguments

  • layer: The layer to be wrapped.

Extended Help

Keyword Arguments

  • nan_check: Whether to check for NaNs in the input, parameters, and states. Can be :both, :forward, :backward, or :none.

  • error_check: Whether to check for errors in the layer. If true, will throw an error if the layer fails.

  • location: The location of the layer. Use Lux.Experimental.@debug_mode to construct this layer to populate this value correctly.

Input / Output

Inputs and outputs are the same as the layer unless one of the nan_check or error_check criteria is met.

If nan_check is enabled and NaNs are detected then a DomainError is thrown. If error_check is enabled, then any errors in the layer are thrown with useful information to track where the error originates.

ChainRules Compatible Reverse Mode AD Tools

Currently, nan_check for the backward mode only works with ChainRules-compatible reverse mode AD tools.

Disable After Debugging

This layer is only meant to be used for debugging. Using it for actual training or inference will lead to extremely poor performance.

See Lux.Experimental.@debug_mode to construct this layer.

source

`,14))]),s[26]||(s[26]=i("h2",{id:"Tied-Parameters",tabindex:"-1"},[a("Tied Parameters "),i("a",{class:"header-anchor",href:"#Tied-Parameters","aria-label":'Permalink to "Tied Parameters {#Tied-Parameters}"'},"​")],-1)),i("details",u,[i("summary",null,[s[18]||(s[18]=i("a",{id:"Lux.Experimental.share_parameters",href:"#Lux.Experimental.share_parameters"},[i("span",{class:"jlbinding"},"Lux.Experimental.share_parameters")],-1)),s[19]||(s[19]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[20]||(s[20]=e(`
julia
share_parameters(ps, sharing)
 share_parameters(ps, sharing, new_parameters)

Updates the parameters in ps with a common set of parameters new_parameters that are shared between each list in the nested list sharing. The example below makes this concrete.

Arguments

  • ps: Original parameters.

  • sharing: A nested list of lists of accessors of ps which need to share the parameters (see the example for details). The lists must be pairwise disjoint.

  • new_parameters: If passed, the length of new_parameters must equal the length of sharing; for each vector in sharing, the corresponding parameter in new_parameters is used. If not passed, the parameters corresponding to the first element of each vector in sharing are used.

Returns

Updated Parameters having the same structure as ps.

Example

julia
julia> model = Chain(; d1=Dense(2 => 4, tanh),
            d3=Chain(; l1=Dense(4 => 2), l2=Dense(2 => 4)), d2=Dense(4 => 2))
 Chain(
@@ -52,4 +52,4 @@ import{_ as h,c as l,a2 as e,j as i,a,G as n,B as p,o as k}from"./chunks/framewo
            ps.d3.l2.bias === ps.d1.bias &&
            ps.d2.weight === ps.d3.l1.weight &&
            ps.d2.bias === ps.d3.l1.bias
-true

ComponentArrays

ComponentArrays doesn't allow sharing parameters. Converting the returned parameters to a ComponentArray will silently cause the parameter sharing to be undone.

source

`,10))])])}const L=h(r,[["render",F]]);export{A as __pageData,L as default}; +true

ComponentArrays

ComponentArrays doesn't allow sharing parameters. Converting the returned parameters to a ComponentArray will silently cause the parameter sharing to be undone.

source

`,10))])])}const L=h(r,[["render",F]]);export{f as __pageData,L as default}; diff --git a/dev/assets/api_Lux_contrib.md.DbSqHdk6.lean.js b/dev/assets/api_Lux_contrib.md.DbSqHdk6.lean.js new file mode 100644 index 0000000000..0edcb57d83 --- /dev/null +++ b/dev/assets/api_Lux_contrib.md.DbSqHdk6.lean.js @@ -0,0 +1 @@ +import{_ as h,c as l,a2 as e,j as i,a,G as n,B as p,o as k}from"./chunks/framework.BetCMmtc.js";const f=JSON.parse('{"title":"Experimental Features","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/contrib.md","filePath":"api/Lux/contrib.md","lastUpdated":null}'),r={name:"api/Lux/contrib.md"},d={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"};function F(m,s,C,b,D,A){const t=p("Badge");return k(),l("div",null,[s[21]||(s[21]=e("",4)),i("details",d,[i("summary",null,[s[0]||(s[0]=i("a",{id:"Lux.Experimental.FrozenLayer",href:"#Lux.Experimental.FrozenLayer"},[i("span",{class:"jlbinding"},"Lux.Experimental.FrozenLayer")],-1)),s[1]||(s[1]=a()),n(t,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[2]||(s[2]=e("",17))]),i("details",E,[i("summary",null,[s[3]||(s[3]=i("a",{id:"Lux.Experimental.freeze",href:"#Lux.Experimental.freeze"},[i("span",{class:"jlbinding"},"Lux.Experimental.freeze")],-1)),s[4]||(s[4]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=e("",6))]),i("details",o,[i("summary",null,[s[6]||(s[6]=i("a",{id:"Lux.Experimental.unfreeze",href:"#Lux.Experimental.unfreeze"},[i("span",{class:"jlbinding"},"Lux.Experimental.unfreeze")],-1)),s[7]||(s[7]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=e("",6))]),s[22]||(s[22]=i("p",null,[a("For detailed usage example look at the 
"),i("a",{href:"/dev/manual/freezing_model_parameters#freezing-model-parameters"},"manual page"),a(".")],-1)),s[23]||(s[23]=i("h2",{id:"Map-over-Layer",tabindex:"-1"},[a("Map over Layer "),i("a",{class:"header-anchor",href:"#Map-over-Layer","aria-label":'Permalink to "Map over Layer {#Map-over-Layer}"'},"​")],-1)),i("details",g,[i("summary",null,[s[9]||(s[9]=i("a",{id:"Lux.Experimental.layer_map",href:"#Lux.Experimental.layer_map"},[i("span",{class:"jlbinding"},"Lux.Experimental.layer_map")],-1)),s[10]||(s[10]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=e("",9))]),s[24]||(s[24]=i("h2",{id:"Debugging-Functionality",tabindex:"-1"},[a("Debugging Functionality "),i("a",{class:"header-anchor",href:"#Debugging-Functionality","aria-label":'Permalink to "Debugging Functionality {#Debugging-Functionality}"'},"​")],-1)),s[25]||(s[25]=i("p",null,"Model not working properly! Here are some functionalities to help you debug you Lux model.",-1)),i("details",y,[i("summary",null,[s[12]||(s[12]=i("a",{id:"Lux.Experimental.@debug_mode",href:"#Lux.Experimental.@debug_mode"},[i("span",{class:"jlbinding"},"Lux.Experimental.@debug_mode")],-1)),s[13]||(s[13]=a()),n(t,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[14]||(s[14]=e("",4))]),i("details",c,[i("summary",null,[s[15]||(s[15]=i("a",{id:"Lux.Experimental.DebugLayer",href:"#Lux.Experimental.DebugLayer"},[i("span",{class:"jlbinding"},"Lux.Experimental.DebugLayer")],-1)),s[16]||(s[16]=a()),n(t,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=e("",14))]),s[26]||(s[26]=i("h2",{id:"Tied-Parameters",tabindex:"-1"},[a("Tied Parameters "),i("a",{class:"header-anchor",href:"#Tied-Parameters","aria-label":'Permalink to "Tied Parameters 
{#Tied-Parameters}"'},"​")],-1)),i("details",u,[i("summary",null,[s[18]||(s[18]=i("a",{id:"Lux.Experimental.share_parameters",href:"#Lux.Experimental.share_parameters"},[i("span",{class:"jlbinding"},"Lux.Experimental.share_parameters")],-1)),s[19]||(s[19]=a()),n(t,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[20]||(s[20]=e("",10))])])}const L=h(r,[["render",F]]);export{f as __pageData,L as default}; diff --git a/dev/assets/api_Lux_distributed_utils.md.BXHaY16P.js b/dev/assets/api_Lux_distributed_utils.md.106ZB8SY.js similarity index 92% rename from dev/assets/api_Lux_distributed_utils.md.BXHaY16P.js rename to dev/assets/api_Lux_distributed_utils.md.106ZB8SY.js index 243c24ca1a..cf80fac0cb 100644 --- a/dev/assets/api_Lux_distributed_utils.md.BXHaY16P.js +++ b/dev/assets/api_Lux_distributed_utils.md.106ZB8SY.js @@ -1,4 +1,4 @@ -import{_ as n,c as d,a2 as e,j as s,a as t,G as l,B as r,o as p}from"./chunks/framework.I-x9Gl6h.js";const I=JSON.parse('{"title":"Distributed Utils","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/distributed_utils.md","filePath":"api/Lux/distributed_utils.md","lastUpdated":null}'),o={name:"api/Lux/distributed_utils.md"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"};function F(v,i,x,D,B,U){const a=r("Badge");return p(),d("div",null,[i[39]||(i[39]=e('

Distributed Utils

Note

These functionalities are available via the Lux.DistributedUtils module.

Backends

',3)),s("details",k,[s("summary",null,[i[0]||(i[0]=s("a",{id:"Lux.MPIBackend",href:"#Lux.MPIBackend"},[s("span",{class:"jlbinding"},"Lux.MPIBackend")],-1)),i[1]||(i[1]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[2]||(i[2]=e('
julia
MPIBackend(comm = nothing)

Create an MPI backend for distributed training. Users should not use this function directly. Instead use DistributedUtils.get_distributed_backend(MPIBackend).

source

',3))]),s("details",c,[s("summary",null,[i[3]||(i[3]=s("a",{id:"Lux.NCCLBackend",href:"#Lux.NCCLBackend"},[s("span",{class:"jlbinding"},"Lux.NCCLBackend")],-1)),i[4]||(i[4]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[5]||(i[5]=e('
julia
NCCLBackend(comm = nothing, mpi_backend = nothing)

Create an NCCL backend for distributed training. Users should not use this function directly. Instead use DistributedUtils.get_distributed_backend(NCCLBackend).

source

',3))]),i[40]||(i[40]=s("h2",{id:"initialization",tabindex:"-1"},[t("Initialization "),s("a",{class:"header-anchor",href:"#initialization","aria-label":'Permalink to "Initialization"'},"​")],-1)),s("details",h,[s("summary",null,[i[6]||(i[6]=s("a",{id:"Lux.DistributedUtils.initialize",href:"#Lux.DistributedUtils.initialize"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.initialize")],-1)),i[7]||(i[7]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=e('
julia
initialize(backend::Type{<:AbstractLuxDistributedBackend}; kwargs...)

Initialize the given backend. Users can supply cuda_devices and amdgpu_devices to initialize the backend with the given devices. These can be set to missing to prevent initialization of the given device type. If set to nothing and the backend is functional, GPUs are assigned in a round-robin fashion. Finally, a list of integers can be supplied to initialize the backend with the given devices.

Possible values for backend are:

source
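Assuming the MPI backend, the typical setup sequence looks like the following sketch (MPI.jl must be loaded for MPIBackend to be functional):

```julia
using Lux, MPI

# initialize must run before get_distributed_backend
DistributedUtils.initialize(MPIBackend)
backend = DistributedUtils.get_distributed_backend(MPIBackend)

rank     = DistributedUtils.local_rank(backend)
nworkers = DistributedUtils.total_workers(backend)
```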

',5))]),s("details",u,[s("summary",null,[i[9]||(i[9]=s("a",{id:"Lux.DistributedUtils.initialized",href:"#Lux.DistributedUtils.initialized"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.initialized")],-1)),i[10]||(i[10]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=e('
julia
initialized(backend::Type{<:AbstractLuxDistributedBackend})

Check if the given backend is initialized.

source

',3))]),s("details",b,[s("summary",null,[i[12]||(i[12]=s("a",{id:"Lux.DistributedUtils.get_distributed_backend",href:"#Lux.DistributedUtils.get_distributed_backend"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.get_distributed_backend")],-1)),i[13]||(i[13]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=e('
julia
get_distributed_backend(backend::Type{<:AbstractLuxDistributedBackend})

Get the distributed backend for the given backend type. Possible values are:

Danger

initialize(backend; kwargs...) must be called before calling this function.

source

',5))]),i[41]||(i[41]=s("h2",{id:"Helper-Functions",tabindex:"-1"},[t("Helper Functions "),s("a",{class:"header-anchor",href:"#Helper-Functions","aria-label":'Permalink to "Helper Functions {#Helper-Functions}"'},"​")],-1)),s("details",g,[s("summary",null,[i[15]||(i[15]=s("a",{id:"Lux.DistributedUtils.local_rank",href:"#Lux.DistributedUtils.local_rank"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.local_rank")],-1)),i[16]||(i[16]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[17]||(i[17]=e('
julia
local_rank(backend::AbstractLuxDistributedBackend)

Get the local rank for the given backend.

source

',3))]),s("details",y,[s("summary",null,[i[18]||(i[18]=s("a",{id:"Lux.DistributedUtils.total_workers",href:"#Lux.DistributedUtils.total_workers"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.total_workers")],-1)),i[19]||(i[19]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=e('
julia
total_workers(backend::AbstractLuxDistributedBackend)

Get the total number of workers for the given backend.

source

',3))]),i[42]||(i[42]=s("h2",{id:"Communication-Primitives",tabindex:"-1"},[t("Communication Primitives "),s("a",{class:"header-anchor",href:"#Communication-Primitives","aria-label":'Permalink to "Communication Primitives {#Communication-Primitives}"'},"​")],-1)),s("details",m,[s("summary",null,[i[21]||(i[21]=s("a",{id:"Lux.DistributedUtils.allreduce!",href:"#Lux.DistributedUtils.allreduce!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.allreduce!")],-1)),i[22]||(i[22]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[23]||(i[23]=e(`
julia
allreduce!(backend::AbstractLuxDistributedBackend, sendrecvbuf, op)
-allreduce!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf, op)

Backend Agnostic API to perform an allreduce operation on the given buffer sendrecvbuf or sendbuf and store the result in recvbuf.

op allows a special DistributedUtils.avg operation that averages the result across all workers.

source

`,4))]),s("details",E,[s("summary",null,[i[24]||(i[24]=s("a",{id:"Lux.DistributedUtils.bcast!",href:"#Lux.DistributedUtils.bcast!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.bcast!")],-1)),i[25]||(i[25]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[26]||(i[26]=e(`
julia
bcast!(backend::AbstractLuxDistributedBackend, sendrecvbuf; root::Int=0)
-bcast!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf; root::Int=0)

Backend Agnostic API to broadcast the given buffer sendrecvbuf or sendbuf to all workers into recvbuf. The value at root will be broadcasted to all other workers.

source

`,3))]),s("details",C,[s("summary",null,[i[27]||(i[27]=s("a",{id:"Lux.DistributedUtils.reduce!",href:"#Lux.DistributedUtils.reduce!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.reduce!")],-1)),i[28]||(i[28]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[29]||(i[29]=e(`
julia
reduce!(backend::AbstractLuxDistributedBackend, sendrecvbuf, op; root::Int=0)
-reduce!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf, op; root::Int=0)

Backend Agnostic API to perform a reduce operation on the given buffer sendrecvbuf or sendbuf and store the result in recvbuf.

op allows a special DistributedUtils.avg operation that averages the result across all workers.

source

`,4))]),s("details",f,[s("summary",null,[i[30]||(i[30]=s("a",{id:"Lux.DistributedUtils.synchronize!!",href:"#Lux.DistributedUtils.synchronize!!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.synchronize!!")],-1)),i[31]||(i[31]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[32]||(i[32]=e('
julia
synchronize!!(backend::AbstractLuxDistributedBackend, ps; root::Int=0)

Synchronize the given structure ps using the given backend. The value at root will be broadcasted to all other workers.

source

',3))]),i[43]||(i[43]=s("h2",{id:"Optimizers.jl-Integration",tabindex:"-1"},[t("Optimizers.jl Integration "),s("a",{class:"header-anchor",href:"#Optimizers.jl-Integration","aria-label":'Permalink to "Optimizers.jl Integration {#Optimizers.jl-Integration}"'},"​")],-1)),s("details",L,[s("summary",null,[i[33]||(i[33]=s("a",{id:"Lux.DistributedUtils.DistributedOptimizer",href:"#Lux.DistributedUtils.DistributedOptimizer"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.DistributedOptimizer")],-1)),i[34]||(i[34]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[35]||(i[35]=e('
julia
DistributedOptimizer(backend::AbstractLuxDistributedBackend, optimizer)

Wrap the optimizer in a DistributedOptimizer. Before updating the parameters, this averages the gradients across the processes using Allreduce.

Arguments

source
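A hedged sketch of plugging the wrapper into the standard Optimisers.jl workflow (backend and ps are assumed to come from the initialization and model setup steps):

```julia
using Optimisers

opt      = Optimisers.Adam(0.001f0)
dist_opt = DistributedUtils.DistributedOptimizer(backend, opt)

# Gradients are averaged across processes via Allreduce before each update
opt_state = Optimisers.setup(dist_opt, ps)
```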

',5))]),i[44]||(i[44]=s("h2",{id:"MLUtils.jl-Integration",tabindex:"-1"},[t("MLUtils.jl Integration "),s("a",{class:"header-anchor",href:"#MLUtils.jl-Integration","aria-label":'Permalink to "MLUtils.jl Integration {#MLUtils.jl-Integration}"'},"​")],-1)),s("details",j,[s("summary",null,[i[36]||(i[36]=s("a",{id:"Lux.DistributedUtils.DistributedDataContainer",href:"#Lux.DistributedUtils.DistributedDataContainer"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.DistributedDataContainer")],-1)),i[37]||(i[37]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[38]||(i[38]=e('
julia
DistributedDataContainer(backend::AbstractLuxDistributedBackend, data)

data must be compatible with the MLUtils interface. The returned container is also compatible with the MLUtils interface and is used to partition the dataset across the available processes.

Load MLUtils.jl

MLUtils.jl must be installed and loaded before using this.

source
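As a sketch, the returned container slots directly into an MLUtils.DataLoader (the data variable here is an assumed MLUtils-compatible dataset):

```julia
using MLUtils

# Each process sees only its shard of the full dataset
ddata  = DistributedUtils.DistributedDataContainer(backend, data)
loader = MLUtils.DataLoader(ddata; batchsize=32, shuffle=true)
```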

',4))])])}const z=n(o,[["render",F]]);export{I as __pageData,z as default}; +import{_ as n,c as d,a2 as e,j as s,a as t,G as l,B as r,o as p}from"./chunks/framework.BetCMmtc.js";const B=JSON.parse('{"title":"Distributed Utils","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/distributed_utils.md","filePath":"api/Lux/distributed_utils.md","lastUpdated":null}'),o={name:"api/Lux/distributed_utils.md"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"};function F(D,i,v,x,f,T){const a=r("Badge");return p(),d("div",null,[i[39]||(i[39]=e('

Distributed Utils

Note

These functionalities are available via the Lux.DistributedUtils module.

Backends

',3)),s("details",k,[s("summary",null,[i[0]||(i[0]=s("a",{id:"Lux.MPIBackend",href:"#Lux.MPIBackend"},[s("span",{class:"jlbinding"},"Lux.MPIBackend")],-1)),i[1]||(i[1]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[2]||(i[2]=e('
julia
MPIBackend(comm = nothing)

Create an MPI backend for distributed training. Users should not use this function directly. Instead use DistributedUtils.get_distributed_backend(MPIBackend).

source

',3))]),s("details",c,[s("summary",null,[i[3]||(i[3]=s("a",{id:"Lux.NCCLBackend",href:"#Lux.NCCLBackend"},[s("span",{class:"jlbinding"},"Lux.NCCLBackend")],-1)),i[4]||(i[4]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[5]||(i[5]=e('
julia
NCCLBackend(comm = nothing, mpi_backend = nothing)

Create an NCCL backend for distributed training. Users should not use this function directly. Instead use DistributedUtils.get_distributed_backend(NCCLBackend).

source

',3))]),i[40]||(i[40]=s("h2",{id:"initialization",tabindex:"-1"},[t("Initialization "),s("a",{class:"header-anchor",href:"#initialization","aria-label":'Permalink to "Initialization"'},"​")],-1)),s("details",h,[s("summary",null,[i[6]||(i[6]=s("a",{id:"Lux.DistributedUtils.initialize",href:"#Lux.DistributedUtils.initialize"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.initialize")],-1)),i[7]||(i[7]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=e('
julia
initialize(backend::Type{<:AbstractLuxDistributedBackend}; kwargs...)

Initialize the given backend. Users can supply cuda_devices and amdgpu_devices to initialize the backend with the given devices. These can be set to missing to prevent initialization of the given device type. If set to nothing and the backend is functional, GPUs are assigned in a round-robin fashion. Finally, a list of integers can be supplied to initialize the backend with the given devices.

Possible values for backend are:

source

',5))]),s("details",u,[s("summary",null,[i[9]||(i[9]=s("a",{id:"Lux.DistributedUtils.initialized",href:"#Lux.DistributedUtils.initialized"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.initialized")],-1)),i[10]||(i[10]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=e('
julia
initialized(backend::Type{<:AbstractLuxDistributedBackend})

Check if the given backend is initialized.

source

',3))]),s("details",b,[s("summary",null,[i[12]||(i[12]=s("a",{id:"Lux.DistributedUtils.get_distributed_backend",href:"#Lux.DistributedUtils.get_distributed_backend"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.get_distributed_backend")],-1)),i[13]||(i[13]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=e('
julia
get_distributed_backend(backend::Type{<:AbstractLuxDistributedBackend})

Get the distributed backend for the given backend type. Possible values are:

Danger

initialize(backend; kwargs...) must be called before calling this function.

source

',5))]),i[41]||(i[41]=s("h2",{id:"Helper-Functions",tabindex:"-1"},[t("Helper Functions "),s("a",{class:"header-anchor",href:"#Helper-Functions","aria-label":'Permalink to "Helper Functions {#Helper-Functions}"'},"​")],-1)),s("details",g,[s("summary",null,[i[15]||(i[15]=s("a",{id:"Lux.DistributedUtils.local_rank",href:"#Lux.DistributedUtils.local_rank"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.local_rank")],-1)),i[16]||(i[16]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[17]||(i[17]=e('
julia
local_rank(backend::AbstractLuxDistributedBackend)

Get the local rank for the given backend.

source

',3))]),s("details",y,[s("summary",null,[i[18]||(i[18]=s("a",{id:"Lux.DistributedUtils.total_workers",href:"#Lux.DistributedUtils.total_workers"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.total_workers")],-1)),i[19]||(i[19]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=e('
julia
total_workers(backend::AbstractLuxDistributedBackend)

Get the total number of workers for the given backend.

source

',3))]),i[42]||(i[42]=s("h2",{id:"Communication-Primitives",tabindex:"-1"},[t("Communication Primitives "),s("a",{class:"header-anchor",href:"#Communication-Primitives","aria-label":'Permalink to "Communication Primitives {#Communication-Primitives}"'},"​")],-1)),s("details",C,[s("summary",null,[i[21]||(i[21]=s("a",{id:"Lux.DistributedUtils.allreduce!",href:"#Lux.DistributedUtils.allreduce!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.allreduce!")],-1)),i[22]||(i[22]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[23]||(i[23]=e(`
julia
allreduce!(backend::AbstractLuxDistributedBackend, sendrecvbuf, op)
+allreduce!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf, op)

Backend Agnostic API to perform an allreduce operation on the given buffer sendrecvbuf or sendbuf and store the result in recvbuf.

op allows a special DistributedUtils.avg operation that averages the result across all workers.

source

`,4))]),s("details",_,[s("summary",null,[i[24]||(i[24]=s("a",{id:"Lux.DistributedUtils.bcast!",href:"#Lux.DistributedUtils.bcast!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.bcast!")],-1)),i[25]||(i[25]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[26]||(i[26]=e(`
julia
bcast!(backend::AbstractLuxDistributedBackend, sendrecvbuf; root::Int=0)
+bcast!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf; root::Int=0)

Backend Agnostic API to broadcast the given buffer sendrecvbuf or sendbuf to all workers into recvbuf. The value at root will be broadcasted to all other workers.

source

`,3))]),s("details",E,[s("summary",null,[i[27]||(i[27]=s("a",{id:"Lux.DistributedUtils.reduce!",href:"#Lux.DistributedUtils.reduce!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.reduce!")],-1)),i[28]||(i[28]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[29]||(i[29]=e(`
julia
reduce!(backend::AbstractLuxDistributedBackend, sendrecvbuf, op; root::Int=0)
+reduce!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf, op; root::Int=0)

Backend Agnostic API to perform a reduce operation on the given buffer sendrecvbuf or sendbuf and store the result in recvbuf.

op allows a special DistributedUtils.avg operation that averages the result across all workers.

source

`,4))]),s("details",m,[s("summary",null,[i[30]||(i[30]=s("a",{id:"Lux.DistributedUtils.synchronize!!",href:"#Lux.DistributedUtils.synchronize!!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.synchronize!!")],-1)),i[31]||(i[31]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[32]||(i[32]=e('
julia
synchronize!!(backend::AbstractLuxDistributedBackend, ps; root::Int=0)

Synchronize the given structure ps using the given backend. The value at root will be broadcasted to all other workers.

source

',3))]),i[43]||(i[43]=s("h2",{id:"Optimizers.jl-Integration",tabindex:"-1"},[t("Optimizers.jl Integration "),s("a",{class:"header-anchor",href:"#Optimizers.jl-Integration","aria-label":'Permalink to "Optimizers.jl Integration {#Optimizers.jl-Integration}"'},"​")],-1)),s("details",L,[s("summary",null,[i[33]||(i[33]=s("a",{id:"Lux.DistributedUtils.DistributedOptimizer",href:"#Lux.DistributedUtils.DistributedOptimizer"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.DistributedOptimizer")],-1)),i[34]||(i[34]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[35]||(i[35]=e('
julia
DistributedOptimizer(backend::AbstractLuxDistributedBackend, optimizer)

Wrap the optimizer in a DistributedOptimizer. Before updating the parameters, this averages the gradients across the processes using Allreduce.

Arguments

source

',5))]),i[44]||(i[44]=s("h2",{id:"MLUtils.jl-Integration",tabindex:"-1"},[t("MLUtils.jl Integration "),s("a",{class:"header-anchor",href:"#MLUtils.jl-Integration","aria-label":'Permalink to "MLUtils.jl Integration {#MLUtils.jl-Integration}"'},"​")],-1)),s("details",j,[s("summary",null,[i[36]||(i[36]=s("a",{id:"Lux.DistributedUtils.DistributedDataContainer",href:"#Lux.DistributedUtils.DistributedDataContainer"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.DistributedDataContainer")],-1)),i[37]||(i[37]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[38]||(i[38]=e('
julia
DistributedDataContainer(backend::AbstractLuxDistributedBackend, data)

data must be compatible with the MLUtils interface. The returned container is also compatible with the MLUtils interface and is used to partition the dataset across the available processes.

Load MLUtils.jl

MLUtils.jl must be installed and loaded before using this.

source

',4))])])}const I=n(o,[["render",F]]);export{B as __pageData,I as default}; diff --git a/dev/assets/api_Lux_distributed_utils.md.106ZB8SY.lean.js b/dev/assets/api_Lux_distributed_utils.md.106ZB8SY.lean.js new file mode 100644 index 0000000000..6e5965dcb2 --- /dev/null +++ b/dev/assets/api_Lux_distributed_utils.md.106ZB8SY.lean.js @@ -0,0 +1 @@ +import{_ as n,c as d,a2 as e,j as s,a as t,G as l,B as r,o as p}from"./chunks/framework.BetCMmtc.js";const B=JSON.parse('{"title":"Distributed Utils","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/distributed_utils.md","filePath":"api/Lux/distributed_utils.md","lastUpdated":null}'),o={name:"api/Lux/distributed_utils.md"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"};function F(D,i,v,x,f,T){const a=r("Badge");return p(),d("div",null,[i[39]||(i[39]=e("",3)),s("details",k,[s("summary",null,[i[0]||(i[0]=s("a",{id:"Lux.MPIBackend",href:"#Lux.MPIBackend"},[s("span",{class:"jlbinding"},"Lux.MPIBackend")],-1)),i[1]||(i[1]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[2]||(i[2]=e("",3))]),s("details",c,[s("summary",null,[i[3]||(i[3]=s("a",{id:"Lux.NCCLBackend",href:"#Lux.NCCLBackend"},[s("span",{class:"jlbinding"},"Lux.NCCLBackend")],-1)),i[4]||(i[4]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[5]||(i[5]=e("",3))]),i[40]||(i[40]=s("h2",{id:"initialization",tabindex:"-1"},[t("Initialization "),s("a",{class:"header-anchor",href:"#initialization","aria-label":'Permalink to 
"Initialization"'},"​")],-1)),s("details",h,[s("summary",null,[i[6]||(i[6]=s("a",{id:"Lux.DistributedUtils.initialize",href:"#Lux.DistributedUtils.initialize"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.initialize")],-1)),i[7]||(i[7]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=e("",5))]),s("details",u,[s("summary",null,[i[9]||(i[9]=s("a",{id:"Lux.DistributedUtils.initialized",href:"#Lux.DistributedUtils.initialized"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.initialized")],-1)),i[10]||(i[10]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=e("",3))]),s("details",b,[s("summary",null,[i[12]||(i[12]=s("a",{id:"Lux.DistributedUtils.get_distributed_backend",href:"#Lux.DistributedUtils.get_distributed_backend"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.get_distributed_backend")],-1)),i[13]||(i[13]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=e("",5))]),i[41]||(i[41]=s("h2",{id:"Helper-Functions",tabindex:"-1"},[t("Helper Functions "),s("a",{class:"header-anchor",href:"#Helper-Functions","aria-label":'Permalink to "Helper Functions {#Helper-Functions}"'},"​")],-1)),s("details",g,[s("summary",null,[i[15]||(i[15]=s("a",{id:"Lux.DistributedUtils.local_rank",href:"#Lux.DistributedUtils.local_rank"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.local_rank")],-1)),i[16]||(i[16]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[17]||(i[17]=e("",3))]),s("details",y,[s("summary",null,[i[18]||(i[18]=s("a",{id:"Lux.DistributedUtils.total_workers",href:"#Lux.DistributedUtils.total_workers"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.total_workers")],-1)),i[19]||(i[19]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=e("",3))]),i[42]||(i[42]=s("h2",{id:"Communication-Primitives",tabindex:"-1"},[t("Communication Primitives 
"),s("a",{class:"header-anchor",href:"#Communication-Primitives","aria-label":'Permalink to "Communication Primitives {#Communication-Primitives}"'},"​")],-1)),s("details",C,[s("summary",null,[i[21]||(i[21]=s("a",{id:"Lux.DistributedUtils.allreduce!",href:"#Lux.DistributedUtils.allreduce!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.allreduce!")],-1)),i[22]||(i[22]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[23]||(i[23]=e("",4))]),s("details",_,[s("summary",null,[i[24]||(i[24]=s("a",{id:"Lux.DistributedUtils.bcast!",href:"#Lux.DistributedUtils.bcast!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.bcast!")],-1)),i[25]||(i[25]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[26]||(i[26]=e("",3))]),s("details",E,[s("summary",null,[i[27]||(i[27]=s("a",{id:"Lux.DistributedUtils.reduce!",href:"#Lux.DistributedUtils.reduce!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.reduce!")],-1)),i[28]||(i[28]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[29]||(i[29]=e("",4))]),s("details",m,[s("summary",null,[i[30]||(i[30]=s("a",{id:"Lux.DistributedUtils.synchronize!!",href:"#Lux.DistributedUtils.synchronize!!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.synchronize!!")],-1)),i[31]||(i[31]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[32]||(i[32]=e("",3))]),i[43]||(i[43]=s("h2",{id:"Optimizers.jl-Integration",tabindex:"-1"},[t("Optimizers.jl Integration "),s("a",{class:"header-anchor",href:"#Optimizers.jl-Integration","aria-label":'Permalink to "Optimizers.jl Integration {#Optimizers.jl-Integration}"'},"​")],-1)),s("details",L,[s("summary",null,[i[33]||(i[33]=s("a",{id:"Lux.DistributedUtils.DistributedOptimizer",href:"#Lux.DistributedUtils.DistributedOptimizer"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.DistributedOptimizer")],-1)),i[34]||(i[34]=t()),l(a,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),i[35]||(i[35]=e("",5))]),i[44]||(i[44]=s("h2",{id:"MLUtils.jl-Integration",tabindex:"-1"},[t("MLUtils.jl Integration "),s("a",{class:"header-anchor",href:"#MLUtils.jl-Integration","aria-label":'Permalink to "MLUtils.jl Integration {#MLUtils.jl-Integration}"'},"​")],-1)),s("details",j,[s("summary",null,[i[36]||(i[36]=s("a",{id:"Lux.DistributedUtils.DistributedDataContainer",href:"#Lux.DistributedUtils.DistributedDataContainer"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.DistributedDataContainer")],-1)),i[37]||(i[37]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[38]||(i[38]=e("",4))])])}const I=n(o,[["render",F]]);export{B as __pageData,I as default}; diff --git a/dev/assets/api_Lux_distributed_utils.md.BXHaY16P.lean.js b/dev/assets/api_Lux_distributed_utils.md.BXHaY16P.lean.js deleted file mode 100644 index 243c24ca1a..0000000000 --- a/dev/assets/api_Lux_distributed_utils.md.BXHaY16P.lean.js +++ /dev/null @@ -1,4 +0,0 @@ -import{_ as n,c as d,a2 as e,j as s,a as t,G as l,B as r,o as p}from"./chunks/framework.I-x9Gl6h.js";const I=JSON.parse('{"title":"Distributed Utils","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/distributed_utils.md","filePath":"api/Lux/distributed_utils.md","lastUpdated":null}'),o={name:"api/Lux/distributed_utils.md"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"};function F(v,i,x,D,B,U){const a=r("Badge");return p(),d("div",null,[i[39]||(i[39]=e('

Distributed Utils

Note

These functionalities are available via the Lux.DistributedUtils module.

Backends

',3)),s("details",k,[s("summary",null,[i[0]||(i[0]=s("a",{id:"Lux.MPIBackend",href:"#Lux.MPIBackend"},[s("span",{class:"jlbinding"},"Lux.MPIBackend")],-1)),i[1]||(i[1]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[2]||(i[2]=e('
julia
MPIBackend(comm = nothing)

Create an MPI backend for distributed training. Users should not use this function directly. Instead use DistributedUtils.get_distributed_backend(MPIBackend).

source

',3))]),s("details",c,[s("summary",null,[i[3]||(i[3]=s("a",{id:"Lux.NCCLBackend",href:"#Lux.NCCLBackend"},[s("span",{class:"jlbinding"},"Lux.NCCLBackend")],-1)),i[4]||(i[4]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[5]||(i[5]=e('
julia
NCCLBackend(comm = nothing, mpi_backend = nothing)

Create an NCCL backend for distributed training. Users should not use this function directly. Instead use DistributedUtils.get_distributed_backend(NCCLBackend).

source

',3))]),i[40]||(i[40]=s("h2",{id:"initialization",tabindex:"-1"},[t("Initialization "),s("a",{class:"header-anchor",href:"#initialization","aria-label":'Permalink to "Initialization"'},"​")],-1)),s("details",h,[s("summary",null,[i[6]||(i[6]=s("a",{id:"Lux.DistributedUtils.initialize",href:"#Lux.DistributedUtils.initialize"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.initialize")],-1)),i[7]||(i[7]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=e('
julia
initialize(backend::Type{<:AbstractLuxDistributedBackend}; kwargs...)

Initialize the given backend. Users can supply cuda_devices and amdgpu_devices to initialize the backend with the given devices. These can be set to missing to prevent initialization of the given device type. If set to nothing and the backend is functional, we assign GPUs in a round-robin fashion. Finally, a list of integers can be supplied to initialize the backend with the given devices.

Possible values for backend are:

source

',5))]),s("details",u,[s("summary",null,[i[9]||(i[9]=s("a",{id:"Lux.DistributedUtils.initialized",href:"#Lux.DistributedUtils.initialized"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.initialized")],-1)),i[10]||(i[10]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=e('
julia
initialized(backend::Type{<:AbstractLuxDistributedBackend})

Check if the given backend is initialized.

source

',3))]),s("details",b,[s("summary",null,[i[12]||(i[12]=s("a",{id:"Lux.DistributedUtils.get_distributed_backend",href:"#Lux.DistributedUtils.get_distributed_backend"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.get_distributed_backend")],-1)),i[13]||(i[13]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=e('
julia
get_distributed_backend(backend::Type{<:AbstractLuxDistributedBackend})

Get the distributed backend for the given backend type. Possible values are:

Danger

initialize(backend; kwargs...) must be called before calling this function.

source

',5))]),i[41]||(i[41]=s("h2",{id:"Helper-Functions",tabindex:"-1"},[t("Helper Functions "),s("a",{class:"header-anchor",href:"#Helper-Functions","aria-label":'Permalink to "Helper Functions {#Helper-Functions}"'},"​")],-1)),s("details",g,[s("summary",null,[i[15]||(i[15]=s("a",{id:"Lux.DistributedUtils.local_rank",href:"#Lux.DistributedUtils.local_rank"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.local_rank")],-1)),i[16]||(i[16]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[17]||(i[17]=e('
julia
local_rank(backend::AbstractLuxDistributedBackend)

Get the local rank for the given backend.

source

',3))]),s("details",y,[s("summary",null,[i[18]||(i[18]=s("a",{id:"Lux.DistributedUtils.total_workers",href:"#Lux.DistributedUtils.total_workers"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.total_workers")],-1)),i[19]||(i[19]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=e('
julia
total_workers(backend::AbstractLuxDistributedBackend)

Get the total number of workers for the given backend.

source

',3))]),i[42]||(i[42]=s("h2",{id:"Communication-Primitives",tabindex:"-1"},[t("Communication Primitives "),s("a",{class:"header-anchor",href:"#Communication-Primitives","aria-label":'Permalink to "Communication Primitives {#Communication-Primitives}"'},"​")],-1)),s("details",m,[s("summary",null,[i[21]||(i[21]=s("a",{id:"Lux.DistributedUtils.allreduce!",href:"#Lux.DistributedUtils.allreduce!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.allreduce!")],-1)),i[22]||(i[22]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[23]||(i[23]=e(`
julia
allreduce!(backend::AbstractLuxDistributedBackend, sendrecvbuf, op)
-allreduce!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf, op)

Backend Agnostic API to perform an allreduce operation on the given buffer sendrecvbuf or sendbuf and store the result in recvbuf.

op allows a special DistributedUtils.avg operation that averages the result across all workers.

source

`,4))]),s("details",E,[s("summary",null,[i[24]||(i[24]=s("a",{id:"Lux.DistributedUtils.bcast!",href:"#Lux.DistributedUtils.bcast!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.bcast!")],-1)),i[25]||(i[25]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[26]||(i[26]=e(`
julia
bcast!(backend::AbstractLuxDistributedBackend, sendrecvbuf; root::Int=0)
-bcast!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf; root::Int=0)

Backend Agnostic API to broadcast the given buffer sendrecvbuf or sendbuf to all workers into recvbuf. The value at root will be broadcasted to all other workers.

source

`,3))]),s("details",C,[s("summary",null,[i[27]||(i[27]=s("a",{id:"Lux.DistributedUtils.reduce!",href:"#Lux.DistributedUtils.reduce!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.reduce!")],-1)),i[28]||(i[28]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[29]||(i[29]=e(`
julia
reduce!(backend::AbstractLuxDistributedBackend, sendrecvbuf, op; root::Int=0)
-reduce!(backend::AbstractLuxDistributedBackend, sendbuf, recvbuf, op; root::Int=0)

Backend Agnostic API to perform a reduce operation on the given buffer sendrecvbuf or sendbuf and store the result in recvbuf.

op allows a special DistributedUtils.avg operation that averages the result across all workers.

source

`,4))]),s("details",f,[s("summary",null,[i[30]||(i[30]=s("a",{id:"Lux.DistributedUtils.synchronize!!",href:"#Lux.DistributedUtils.synchronize!!"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.synchronize!!")],-1)),i[31]||(i[31]=t()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[32]||(i[32]=e('
julia
synchronize!!(backend::AbstractLuxDistributedBackend, ps; root::Int=0)

Synchronize the given structure ps using the given backend. The value at root will be broadcasted to all other workers.

source

',3))]),i[43]||(i[43]=s("h2",{id:"Optimizers.jl-Integration",tabindex:"-1"},[t("Optimizers.jl Integration "),s("a",{class:"header-anchor",href:"#Optimizers.jl-Integration","aria-label":'Permalink to "Optimizers.jl Integration {#Optimizers.jl-Integration}"'},"​")],-1)),s("details",L,[s("summary",null,[i[33]||(i[33]=s("a",{id:"Lux.DistributedUtils.DistributedOptimizer",href:"#Lux.DistributedUtils.DistributedOptimizer"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.DistributedOptimizer")],-1)),i[34]||(i[34]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[35]||(i[35]=e('
julia
DistributedOptimizer(backend::AbstractLuxDistributedBackend, optimizer)

Wrap the optimizer in a DistributedOptimizer. Before updating the parameters, this averages the gradients across the processes using Allreduce.

Arguments

source

',5))]),i[44]||(i[44]=s("h2",{id:"MLUtils.jl-Integration",tabindex:"-1"},[t("MLUtils.jl Integration "),s("a",{class:"header-anchor",href:"#MLUtils.jl-Integration","aria-label":'Permalink to "MLUtils.jl Integration {#MLUtils.jl-Integration}"'},"​")],-1)),s("details",j,[s("summary",null,[i[36]||(i[36]=s("a",{id:"Lux.DistributedUtils.DistributedDataContainer",href:"#Lux.DistributedUtils.DistributedDataContainer"},[s("span",{class:"jlbinding"},"Lux.DistributedUtils.DistributedDataContainer")],-1)),i[37]||(i[37]=t()),l(a,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[38]||(i[38]=e('
julia
DistributedDataContainer(backend::AbstractLuxDistributedBackend, data)

data must be compatible with the MLUtils interface. The returned container is also compatible with the MLUtils interface and is used to partition the dataset across the available processes.

Load MLUtils.jl

MLUtils.jl must be installed and loaded before using this.

source

',4))])])}const z=n(o,[["render",F]]);export{I as __pageData,z as default}; diff --git a/dev/assets/api_Lux_interop.md.BHPjmyrL.lean.js b/dev/assets/api_Lux_interop.md.BHPjmyrL.lean.js deleted file mode 100644 index a2e65cc04f..0000000000 --- a/dev/assets/api_Lux_interop.md.BHPjmyrL.lean.js +++ /dev/null @@ -1,32 +0,0 @@ -import{_ as n,c as p,a2 as a,j as i,a as t,G as l,B as h,o as k}from"./chunks/framework.I-x9Gl6h.js";const A=JSON.parse('{"title":"Interoperability between Lux and other packages","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/interop.md","filePath":"api/Lux/interop.md","lastUpdated":null}'),r={name:"api/Lux/interop.md"},d={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"};function u(F,s,C,m,b,x){const e=h("Badge");return k(),p("div",null,[s[18]||(s[18]=a('

Interoperability between Lux and other packages

Switching from older frameworks

Flux Models to Lux Models

Flux.jl has been around in the Julia ecosystem for a long time and has a large userbase, hence we provide a way to convert Flux models to Lux models.

Tip

Accessing these functions requires manually loading Flux, i.e., using Flux must be present somewhere in the code for these to be used.

',5)),i("details",d,[i("summary",null,[s[0]||(s[0]=i("a",{id:"Adapt.adapt-Tuple{FromFluxAdaptor, Any}",href:"#Adapt.adapt-Tuple{FromFluxAdaptor, Any}"},[i("span",{class:"jlbinding"},"Adapt.adapt")],-1)),s[1]||(s[1]=t()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),s[2]||(s[2]=a('
julia
Adapt.adapt(from::FromFluxAdaptor, L)

Adapt a Flux model L to Lux model. See FromFluxAdaptor for more details.

source

',3))]),i("details",o,[i("summary",null,[s[3]||(s[3]=i("a",{id:"Lux.FromFluxAdaptor",href:"#Lux.FromFluxAdaptor"},[i("span",{class:"jlbinding"},"Lux.FromFluxAdaptor")],-1)),s[4]||(s[4]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[5]||(s[5]=a(`
julia
FromFluxAdaptor(preserve_ps_st::Bool=false, force_preserve::Bool=false)

Convert a Flux model to a Lux model.

active field

This always ignores the active field of some of the Flux layers. This is almost never going to be supported.

Keyword Arguments

Example

julia
julia> import Flux
-
-julia> using Adapt, Lux, Random
-
-julia> m = Flux.Chain(Flux.Dense(2 => 3, relu), Flux.Dense(3 => 2));
-
-julia> m2 = adapt(FromFluxAdaptor(), m); # or FromFluxAdaptor()(m.layers)
-
-julia> x = randn(Float32, 2, 32);
-
-julia> ps, st = Lux.setup(Random.default_rng(), m2);
-
-julia> size(first(m2(x, ps, st)))
-(2, 32)

source

`,8))]),i("details",E,[i("summary",null,[s[6]||(s[6]=i("a",{id:"Lux.FluxLayer",href:"#Lux.FluxLayer"},[i("span",{class:"jlbinding"},"Lux.FluxLayer")],-1)),s[7]||(s[7]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[8]||(s[8]=a('
julia
FluxLayer(layer)

Serves as a compatibility layer between Flux and Lux. This uses Optimisers.destructure API internally.

Warning

Lux was written to overcome the limitations of destructure + Flux. It is recommended to rewrite your layer in Lux instead of using this layer.

Warning

Introducing this Layer in your model will lead to type instabilities, given the way Optimisers.destructure works.

Arguments

Parameters

source

',9))]),s[19]||(s[19]=a('

Using a different backend for Lux

Lux Models to Simple Chains

SimpleChains.jl provides a way to train Small Neural Networks really fast on CPUs. See this blog post for more details. This section describes how to convert Lux models to SimpleChains models while preserving the layer interface.

Tip

Accessing these functions requires manually loading SimpleChains, i.e., using SimpleChains must be present somewhere in the code for these to be used.

',4)),i("details",g,[i("summary",null,[s[9]||(s[9]=i("a",{id:"Adapt.adapt-Tuple{ToSimpleChainsAdaptor, AbstractLuxLayer}",href:"#Adapt.adapt-Tuple{ToSimpleChainsAdaptor, AbstractLuxLayer}"},[i("span",{class:"jlbinding"},"Adapt.adapt")],-1)),s[10]||(s[10]=t()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),s[11]||(s[11]=a('
julia
Adapt.adapt(from::ToSimpleChainsAdaptor, L::AbstractLuxLayer)

Adapt a Lux model to a SimpleChains model. See ToSimpleChainsAdaptor for more details.

source

',3))]),i("details",y,[i("summary",null,[s[12]||(s[12]=i("a",{id:"Lux.ToSimpleChainsAdaptor",href:"#Lux.ToSimpleChainsAdaptor"},[i("span",{class:"jlbinding"},"Lux.ToSimpleChainsAdaptor")],-1)),s[13]||(s[13]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[14]||(s[14]=a(`
julia
ToSimpleChainsAdaptor(input_dims, convert_to_array::Bool=false)

Adaptor for converting a Lux model to SimpleChains. The returned model is still a Lux model and satisfies the AbstractLuxLayer interface, but all internal calculations are performed using SimpleChains.

Warning

There is no way to preserve trained parameters and states when converting to SimpleChains.jl.

Warning

Any kind of initialization function is not preserved when converting to SimpleChains.jl.

Arguments

Example

julia
julia> import SimpleChains
-
-julia> using Adapt, Lux, Random
-
-julia> lux_model = Chain(Conv((5, 5), 1 => 6, relu), MaxPool((2, 2)),
-           Conv((5, 5), 6 => 16, relu), MaxPool((2, 2)), FlattenLayer(3),
-           Chain(Dense(256 => 128, relu), Dense(128 => 84, relu), Dense(84 => 10)));
-
-julia> adaptor = ToSimpleChainsAdaptor((28, 28, 1));
-
-julia> simple_chains_model = adapt(adaptor, lux_model); # or adaptor(lux_model)
-
-julia> ps, st = Lux.setup(Random.default_rng(), simple_chains_model);
-
-julia> x = randn(Float32, 28, 28, 1, 1);
-
-julia> size(first(simple_chains_model(x, ps, st)))
-(10, 1)

source

`,9))]),i("details",c,[i("summary",null,[s[15]||(s[15]=i("a",{id:"Lux.SimpleChainsLayer",href:"#Lux.SimpleChainsLayer"},[i("span",{class:"jlbinding"},"Lux.SimpleChainsLayer")],-1)),s[16]||(s[16]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=a(`
julia
SimpleChainsLayer(layer, to_array::Union{Bool, Val}=Val(false))
-SimpleChainsLayer(layer, lux_layer, to_array)

Wraps a SimpleChains layer into a Lux layer. All operations are performed using SimpleChains but the layer satisfies the AbstractLuxLayer interface.

to_array is a boolean flag that determines whether the output should be converted to a regular Array or not. Default is false.

Arguments

source

`,6))])])}const L=n(r,[["render",u]]);export{A as __pageData,L as default}; diff --git a/dev/assets/api_Lux_interop.md.BHPjmyrL.js b/dev/assets/api_Lux_interop.md.D74VGhe2.js similarity index 97% rename from dev/assets/api_Lux_interop.md.BHPjmyrL.js rename to dev/assets/api_Lux_interop.md.D74VGhe2.js index a2e65cc04f..2773b83a16 100644 --- a/dev/assets/api_Lux_interop.md.BHPjmyrL.js +++ b/dev/assets/api_Lux_interop.md.D74VGhe2.js @@ -1,4 +1,4 @@ -import{_ as n,c as p,a2 as a,j as i,a as t,G as l,B as h,o as k}from"./chunks/framework.I-x9Gl6h.js";const A=JSON.parse('{"title":"Interoperability between Lux and other packages","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/interop.md","filePath":"api/Lux/interop.md","lastUpdated":null}'),r={name:"api/Lux/interop.md"},d={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"};function u(F,s,C,m,b,x){const e=h("Badge");return k(),p("div",null,[s[18]||(s[18]=a('

Interoperability between Lux and other packages

Switching from older frameworks

Flux Models to Lux Models

Flux.jl has been around in the Julia ecosystem for a long time and has a large userbase, hence we provide a way to convert Flux models to Lux models.

Tip

Accessing these functions requires manually loading Flux, i.e., using Flux must be present somewhere in the code for these to be used.

',5)),i("details",d,[i("summary",null,[s[0]||(s[0]=i("a",{id:"Adapt.adapt-Tuple{FromFluxAdaptor, Any}",href:"#Adapt.adapt-Tuple{FromFluxAdaptor, Any}"},[i("span",{class:"jlbinding"},"Adapt.adapt")],-1)),s[1]||(s[1]=t()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),s[2]||(s[2]=a('
julia
Adapt.adapt(from::FromFluxAdaptor, L)

Adapt a Flux model L to Lux model. See FromFluxAdaptor for more details.

source

',3))]),i("details",o,[i("summary",null,[s[3]||(s[3]=i("a",{id:"Lux.FromFluxAdaptor",href:"#Lux.FromFluxAdaptor"},[i("span",{class:"jlbinding"},"Lux.FromFluxAdaptor")],-1)),s[4]||(s[4]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[5]||(s[5]=a(`
julia
FromFluxAdaptor(preserve_ps_st::Bool=false, force_preserve::Bool=false)

Convert a Flux model to a Lux model.

active field

This always ignores the active field of some of the Flux layers. This is almost never going to be supported.

Keyword Arguments

Example

julia
julia> import Flux
+import{_ as n,c as p,a2 as a,j as i,a as t,G as l,B as h,o as k}from"./chunks/framework.BetCMmtc.js";const f=JSON.parse('{"title":"Interoperability between Lux and other packages","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/interop.md","filePath":"api/Lux/interop.md","lastUpdated":null}'),r={name:"api/Lux/interop.md"},d={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"};function u(F,s,C,m,b,x){const e=h("Badge");return k(),p("div",null,[s[18]||(s[18]=a('

Interoperability between Lux and other packages

Switching from older frameworks

Flux Models to Lux Models

Flux.jl has been around in the Julia ecosystem for a long time and has a large userbase, hence we provide a way to convert Flux models to Lux models.

Tip

Accessing these functions requires manually loading Flux, i.e., using Flux must be present somewhere in the code for these to be used.

',5)),i("details",d,[i("summary",null,[s[0]||(s[0]=i("a",{id:"Adapt.adapt-Tuple{FromFluxAdaptor, Any}",href:"#Adapt.adapt-Tuple{FromFluxAdaptor, Any}"},[i("span",{class:"jlbinding"},"Adapt.adapt")],-1)),s[1]||(s[1]=t()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),s[2]||(s[2]=a('
julia
Adapt.adapt(from::FromFluxAdaptor, L)

Adapt a Flux model L to Lux model. See FromFluxAdaptor for more details.

source

',3))]),i("details",o,[i("summary",null,[s[3]||(s[3]=i("a",{id:"Lux.FromFluxAdaptor",href:"#Lux.FromFluxAdaptor"},[i("span",{class:"jlbinding"},"Lux.FromFluxAdaptor")],-1)),s[4]||(s[4]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[5]||(s[5]=a(`
julia
FromFluxAdaptor(preserve_ps_st::Bool=false, force_preserve::Bool=false)

Convert a Flux model to a Lux model.

active field

This always ignores the active field of some of the Flux layers. This is almost never going to be supported.

Keyword Arguments

  • preserve_ps_st: Set to true to preserve the states and parameters of the layer. This attempts to preserve the original model as faithfully as possible, but it might fail. If you need to override possible failures, set force_preserve to true.

  • force_preserve: Some of the transformations with state and parameter preservation haven't been implemented yet. In these cases, if force_preserve is false, a warning will be printed and a core Lux layer will be returned; otherwise, a FluxLayer will be created.

Example

julia
julia> import Flux
 
 julia> using Adapt, Lux, Random
 
@@ -11,7 +11,7 @@ import{_ as n,c as p,a2 as a,j as i,a as t,G as l,B as h,o as k}from"./chunks/fr
 julia> ps, st = Lux.setup(Random.default_rng(), m2);
 
 julia> size(first(m2(x, ps, st)))
-(2, 32)

source

`,8))]),i("details",E,[i("summary",null,[s[6]||(s[6]=i("a",{id:"Lux.FluxLayer",href:"#Lux.FluxLayer"},[i("span",{class:"jlbinding"},"Lux.FluxLayer")],-1)),s[7]||(s[7]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[8]||(s[8]=a('
julia
FluxLayer(layer)

Serves as a compatibility layer between Flux and Lux. This uses Optimisers.destructure API internally.

Warning

Lux was written to overcome the limitations of destructure + Flux. It is recommended to rewrite your layer in Lux instead of using this layer.

Warning

Introducing this Layer in your model will lead to type instabilities, given the way Optimisers.destructure works.

Arguments

  • layer: Flux layer

Parameters

  • p: Flattened parameters of the layer

source

',9))]),s[19]||(s[19]=a('

Using a different backend for Lux

Lux Models to Simple Chains

SimpleChains.jl provides a way to train Small Neural Networks really fast on CPUs. See this blog post for more details. This section describes how to convert Lux models to SimpleChains models while preserving the layer interface.

Tip

Accessing these functions requires manually loading SimpleChains, i.e., using SimpleChains must be present somewhere in the code for these to be used.

',4)),i("details",g,[i("summary",null,[s[9]||(s[9]=i("a",{id:"Adapt.adapt-Tuple{ToSimpleChainsAdaptor, AbstractLuxLayer}",href:"#Adapt.adapt-Tuple{ToSimpleChainsAdaptor, AbstractLuxLayer}"},[i("span",{class:"jlbinding"},"Adapt.adapt")],-1)),s[10]||(s[10]=t()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),s[11]||(s[11]=a('
julia
Adapt.adapt(from::ToSimpleChainsAdaptor, L::AbstractLuxLayer)

Adapt a Lux model to a SimpleChains model. See ToSimpleChainsAdaptor for more details.

source

',3))]),i("details",y,[i("summary",null,[s[12]||(s[12]=i("a",{id:"Lux.ToSimpleChainsAdaptor",href:"#Lux.ToSimpleChainsAdaptor"},[i("span",{class:"jlbinding"},"Lux.ToSimpleChainsAdaptor")],-1)),s[13]||(s[13]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[14]||(s[14]=a(`
julia
ToSimpleChainsAdaptor(input_dims, convert_to_array::Bool=false)

Adaptor for converting a Lux model to SimpleChains. The returned model is still a Lux model and satisfies the AbstractLuxLayer interface, but all internal calculations are performed using SimpleChains.

Warning

There is no way to preserve trained parameters and states when converting to SimpleChains.jl.

Warning

Any kind of initialization function is not preserved when converting to SimpleChains.jl.

Arguments

  • input_dims: Tuple of input dimensions excluding the batch dimension. These must be of static type as SimpleChains expects.

  • convert_to_array: SimpleChains.jl by default outputs StrideArraysCore.StrideArray, but this might not compose well with other packages. If convert_to_array is set to true, the output will be converted to a regular Array.

Example

julia
julia> import SimpleChains
+(2, 32)

source

`,8))]),i("details",E,[i("summary",null,[s[6]||(s[6]=i("a",{id:"Lux.FluxLayer",href:"#Lux.FluxLayer"},[i("span",{class:"jlbinding"},"Lux.FluxLayer")],-1)),s[7]||(s[7]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[8]||(s[8]=a('
julia
FluxLayer(layer)

Serves as a compatibility layer between Flux and Lux. This uses Optimisers.destructure API internally.

Warning

Lux was written to overcome the limitations of destructure + Flux. It is recommended to rewrite your layer in Lux instead of using this layer.

Warning

Introducing this Layer in your model will lead to type instabilities, given the way Optimisers.destructure works.

Arguments

  • layer: Flux layer

Parameters

  • p: Flattened parameters of the layer

source

',9))]),s[19]||(s[19]=a('

Using a different backend for Lux

Lux Models to Simple Chains

SimpleChains.jl provides a way to train Small Neural Networks really fast on CPUs. See this blog post for more details. This section describes how to convert Lux models to SimpleChains models while preserving the layer interface.

Tip

Accessing these functions requires manually loading SimpleChains, i.e., using SimpleChains must be present somewhere in the code for these to be used.

',4)),i("details",g,[i("summary",null,[s[9]||(s[9]=i("a",{id:"Adapt.adapt-Tuple{ToSimpleChainsAdaptor, AbstractLuxLayer}",href:"#Adapt.adapt-Tuple{ToSimpleChainsAdaptor, AbstractLuxLayer}"},[i("span",{class:"jlbinding"},"Adapt.adapt")],-1)),s[10]||(s[10]=t()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),s[11]||(s[11]=a('
julia
Adapt.adapt(from::ToSimpleChainsAdaptor, L::AbstractLuxLayer)

Adapt a Lux model to a SimpleChains model. See ToSimpleChainsAdaptor for more details.

source

',3))]),i("details",y,[i("summary",null,[s[12]||(s[12]=i("a",{id:"Lux.ToSimpleChainsAdaptor",href:"#Lux.ToSimpleChainsAdaptor"},[i("span",{class:"jlbinding"},"Lux.ToSimpleChainsAdaptor")],-1)),s[13]||(s[13]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[14]||(s[14]=a(`
julia
ToSimpleChainsAdaptor(input_dims, convert_to_array::Bool=false)

Adaptor for converting a Lux model to SimpleChains. The returned model is still a Lux model and satisfies the AbstractLuxLayer interface, but all internal calculations are performed using SimpleChains.

Warning

There is no way to preserve trained parameters and states when converting to SimpleChains.jl.

Warning

Any kind of initialization function is not preserved when converting to SimpleChains.jl.

Arguments

  • input_dims: Tuple of input dimensions excluding the batch dimension. These must be of static type as SimpleChains expects.

  • convert_to_array: SimpleChains.jl by default outputs StrideArraysCore.StrideArray, but this might not compose well with other packages. If convert_to_array is set to true, the output will be converted to a regular Array.

Example

julia
julia> import SimpleChains
 
 julia> using Adapt, Lux, Random
 
@@ -28,5 +28,5 @@ import{_ as n,c as p,a2 as a,j as i,a as t,G as l,B as h,o as k}from"./chunks/fr
 julia> x = randn(Float32, 28, 28, 1, 1);
 
 julia> size(first(simple_chains_model(x, ps, st)))
-(10, 1)

source

`,9))]),i("details",c,[i("summary",null,[s[15]||(s[15]=i("a",{id:"Lux.SimpleChainsLayer",href:"#Lux.SimpleChainsLayer"},[i("span",{class:"jlbinding"},"Lux.SimpleChainsLayer")],-1)),s[16]||(s[16]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=a(`
julia
SimpleChainsLayer(layer, to_array::Union{Bool, Val}=Val(false))
-SimpleChainsLayer(layer, lux_layer, to_array)

Wraps a SimpleChains layer into a Lux layer. All operations are performed using SimpleChains but the layer satisfies the AbstractLuxLayer interface.

to_array is a boolean flag that determines whether the output should be converted to a regular Array or not. Default is false.

Arguments

  • layer: SimpleChains layer

  • lux_layer: Potentially equivalent Lux layer that is used for printing

source

`,6))])])}const L=n(r,[["render",u]]);export{A as __pageData,L as default}; +(10, 1)

source

`,9))]),i("details",c,[i("summary",null,[s[15]||(s[15]=i("a",{id:"Lux.SimpleChainsLayer",href:"#Lux.SimpleChainsLayer"},[i("span",{class:"jlbinding"},"Lux.SimpleChainsLayer")],-1)),s[16]||(s[16]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=a(`
julia
SimpleChainsLayer(layer, to_array::Union{Bool, Val}=Val(false))
+SimpleChainsLayer(layer, lux_layer, to_array)

Wraps a SimpleChains layer into a Lux layer. All operations are performed using SimpleChains but the layer satisfies the AbstractLuxLayer interface.

to_array is a boolean flag that determines whether the output should be converted to a regular Array or not. Default is false.

Arguments

source

`,6))])])}const _=n(r,[["render",u]]);export{f as __pageData,_ as default}; diff --git a/dev/assets/api_Lux_interop.md.D74VGhe2.lean.js b/dev/assets/api_Lux_interop.md.D74VGhe2.lean.js new file mode 100644 index 0000000000..d1273d1a61 --- /dev/null +++ b/dev/assets/api_Lux_interop.md.D74VGhe2.lean.js @@ -0,0 +1 @@ +import{_ as n,c as p,a2 as a,j as i,a as t,G as l,B as h,o as k}from"./chunks/framework.BetCMmtc.js";const f=JSON.parse('{"title":"Interoperability between Lux and other packages","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/interop.md","filePath":"api/Lux/interop.md","lastUpdated":null}'),r={name:"api/Lux/interop.md"},d={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"};function u(F,s,C,m,b,x){const e=h("Badge");return k(),p("div",null,[s[18]||(s[18]=a("",5)),i("details",d,[i("summary",null,[s[0]||(s[0]=i("a",{id:"Adapt.adapt-Tuple{FromFluxAdaptor, Any}",href:"#Adapt.adapt-Tuple{FromFluxAdaptor, Any}"},[i("span",{class:"jlbinding"},"Adapt.adapt")],-1)),s[1]||(s[1]=t()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),s[2]||(s[2]=a("",3))]),i("details",o,[i("summary",null,[s[3]||(s[3]=i("a",{id:"Lux.FromFluxAdaptor",href:"#Lux.FromFluxAdaptor"},[i("span",{class:"jlbinding"},"Lux.FromFluxAdaptor")],-1)),s[4]||(s[4]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[5]||(s[5]=a("",8))]),i("details",E,[i("summary",null,[s[6]||(s[6]=i("a",{id:"Lux.FluxLayer",href:"#Lux.FluxLayer"},[i("span",{class:"jlbinding"},"Lux.FluxLayer")],-1)),s[7]||(s[7]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[8]||(s[8]=a("",9))]),s[19]||(s[19]=a("",4)),i("details",g,[i("summary",null,[s[9]||(s[9]=i("a",{id:"Adapt.adapt-Tuple{ToSimpleChainsAdaptor, AbstractLuxLayer}",href:"#Adapt.adapt-Tuple{ToSimpleChainsAdaptor, 
AbstractLuxLayer}"},[i("span",{class:"jlbinding"},"Adapt.adapt")],-1)),s[10]||(s[10]=t()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),s[11]||(s[11]=a("",3))]),i("details",y,[i("summary",null,[s[12]||(s[12]=i("a",{id:"Lux.ToSimpleChainsAdaptor",href:"#Lux.ToSimpleChainsAdaptor"},[i("span",{class:"jlbinding"},"Lux.ToSimpleChainsAdaptor")],-1)),s[13]||(s[13]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[14]||(s[14]=a("",9))]),i("details",c,[i("summary",null,[s[15]||(s[15]=i("a",{id:"Lux.SimpleChainsLayer",href:"#Lux.SimpleChainsLayer"},[i("span",{class:"jlbinding"},"Lux.SimpleChainsLayer")],-1)),s[16]||(s[16]=t()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=a("",6))])])}const _=n(r,[["render",u]]);export{f as __pageData,_ as default}; diff --git a/dev/assets/api_Lux_layers.md.DiMdFKta.lean.js b/dev/assets/api_Lux_layers.md.DiMdFKta.lean.js deleted file mode 100644 index e8b6038633..0000000000 --- a/dev/assets/api_Lux_layers.md.DiMdFKta.lean.js +++ /dev/null @@ -1,149 +0,0 @@ -import{_ as T,c as n,j as t,a as s,G as l,a2 as i,B as d,o as Q}from"./chunks/framework.I-x9Gl6h.js";const o2=JSON.parse('{"title":"Built-In Layers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/layers.md","filePath":"api/Lux/layers.md","lastUpdated":null}'),o={name:"api/Lux/layers.md"},r={class:"jldocstring custom-block"},p={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 
2513","aria-hidden":"true"},E={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},H={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},M={class:"jldocstring custom-block"},v={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},Z={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},j={class:"jldocstring custom-block"},A={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},B={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},V={class:"jldocstring custom-block"},N={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},R={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},O={class:"jldocstring custom-block"},z={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 
0",position:"relative"}},I={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-5.146ex"},xmlns:"http://www.w3.org/2000/svg",width:"51.473ex",height:"11.422ex",role:"img",focusable:"false",viewBox:"0 -2774.4 22750.9 5048.7","aria-hidden":"true"},S={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},P={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},_={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},G={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},W={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},X={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"15.326ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 6774.2 1000","aria-hidden":"true"},U={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"16.435ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 7264.2 1000","aria-hidden":"true"},J={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},K={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"11.831ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 5229.2 
1000","aria-hidden":"true"},$={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.939ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 5719.2 1000","aria-hidden":"true"},t1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},a1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.939ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 5719.2 1000","aria-hidden":"true"},s1={class:"jldocstring custom-block"},i1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},e1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-8.146ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.257ex",height:"17.424ex",role:"img",focusable:"false",viewBox:"0 -4100.7 17793.6 7701.4","aria-hidden":"true"},l1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},n1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},Q1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},T1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 
851.8","aria-hidden":"true"},d1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},o1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.018ex",height:"1.357ex",role:"img",focusable:"false",viewBox:"0 -442 1776.1 599.8","aria-hidden":"true"},r1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},p1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.667ex"},xmlns:"http://www.w3.org/2000/svg",width:"19.753ex",height:"2.364ex",role:"img",focusable:"false",viewBox:"0 -750 8730.9 1045","aria-hidden":"true"},h1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},m1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.667ex"},xmlns:"http://www.w3.org/2000/svg",width:"21.231ex",height:"2.364ex",role:"img",focusable:"false",viewBox:"0 -750 9384.3 1045","aria-hidden":"true"},g1={class:"jldocstring custom-block"},k1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},c1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.65ex"},xmlns:"http://www.w3.org/2000/svg",width:"67.61ex",height:"2.347ex",role:"img",focusable:"false",viewBox:"0 -750 29883.5 1037.2","aria-hidden":"true"},u1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},y1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},E1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 
851.8","aria-hidden":"true"},L1={class:"jldocstring custom-block"},x1={class:"jldocstring custom-block"},b1={class:"jldocstring custom-block"},w1={class:"jldocstring custom-block"},H1={class:"jldocstring custom-block"},C1={class:"jldocstring custom-block"},F1={class:"jldocstring custom-block"},D1={class:"jldocstring custom-block"},M1={class:"jldocstring custom-block"},v1={class:"jldocstring custom-block"},Z1={class:"jldocstring custom-block"},j1={class:"jldocstring custom-block"},A1={class:"jldocstring custom-block"},B1={class:"jldocstring custom-block"},V1={class:"jldocstring custom-block"},N1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},R1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.059ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 10192.1 891","aria-hidden":"true"},O1={class:"jldocstring custom-block"},z1={class:"jldocstring custom-block"},I1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},S1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"22.72ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 10042 891","aria-hidden":"true"},P1={class:"jldocstring custom-block"},_1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},G1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.294ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 572 453","aria-hidden":"true"},W1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 
0",position:"relative"}},X1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.76ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.034ex",height:"6.063ex",role:"img",focusable:"false",viewBox:"0 -1460 11064.9 2680","aria-hidden":"true"},U1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},q1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},J1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},K1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},$1={class:"jldocstring custom-block"},Y1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},t2={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.172ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.071ex",height:"4.704ex",role:"img",focusable:"false",viewBox:"0 -1119 4451.6 2079","aria-hidden":"true"},a2={class:"jldocstring custom-block"},s2={class:"jldocstring custom-block"};function i2(e2,a,l2,n2,Q2,T2){const e=d("Badge");return Q(),n("div",null,[a[269]||(a[269]=t("h1",{id:"Built-In-Layers",tabindex:"-1"},[s("Built-In Layers "),t("a",{class:"header-anchor",href:"#Built-In-Layers","aria-label":'Permalink to "Built-In Layers {#Built-In-Layers}"'},"​")],-1)),a[270]||(a[270]=t("h2",{id:"containers",tabindex:"-1"},[s("Containers "),t("a",{class:"header-anchor",href:"#containers","aria-label":'Permalink to 
"Containers"'},"​")],-1)),t("details",r,[t("summary",null,[a[0]||(a[0]=t("a",{id:"Lux.BranchLayer",href:"#Lux.BranchLayer"},[t("span",{class:"jlbinding"},"Lux.BranchLayer")],-1)),a[1]||(a[1]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[2]||(a[2]=i(`
julia
BranchLayer(layers...)
-BranchLayer(; name=nothing, layers...)

Takes an input x and passes it through all the layers and returns a tuple of the outputs.

Arguments

Extended Help

Inputs

Returns

Parameters

States

Comparison with Parallel

This is slightly different from Parallel(nothing, layers...)

Example

An easy way to replicate an input to an NTuple is to do

julia
julia> BranchLayer(NoOpLayer(), NoOpLayer(), NoOpLayer())
-BranchLayer(
-    layer_1 = NoOpLayer(),
-    layer_2 = NoOpLayer(),
-    layer_3 = NoOpLayer(),
-)         # Total: 0 parameters,
-          #        plus 0 states.

source

`,18))]),t("details",p,[t("summary",null,[a[3]||(a[3]=t("a",{id:"Lux.Chain",href:"#Lux.Chain"},[t("span",{class:"jlbinding"},"Lux.Chain")],-1)),a[4]||(a[4]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[5]||(a[5]=i(`
julia
Chain(layers...; name=nothing)
-Chain(; layers..., name=nothing)

Collects multiple layers / functions to be called in sequence on a given input.

Arguments

Extended Help

Inputs

Input x is passed sequentially to each layer, and must conform to the input requirements of the internal layers.

Returns

Parameters

States

Miscellaneous Properties

Example

julia
julia> Chain(Dense(2, 3, relu), BatchNorm(3), Dense(3, 2))
-Chain(
-    layer_1 = Dense(2 => 3, relu),      # 9 parameters
-    layer_2 = BatchNorm(3, affine=true, track_stats=true),  # 6 parameters, plus 7
-    layer_3 = Dense(3 => 2),            # 8 parameters
-)         # Total: 23 parameters,
-          #        plus 7 states.
-
-julia> Chain(Dense(2, 3, relu), BatchNorm(3), Dense(3, 2); name="MyFancyChain")
-MyFancyChain(
-    layer_1 = Dense(2 => 3, relu),      # 9 parameters
-    layer_2 = BatchNorm(3, affine=true, track_stats=true),  # 6 parameters, plus 7
-    layer_3 = Dense(3 => 2),            # 8 parameters
-)         # Total: 23 parameters,
-          #        plus 7 states.

source

`,18))]),t("details",h,[t("summary",null,[a[6]||(a[6]=t("a",{id:"Lux.PairwiseFusion",href:"#Lux.PairwiseFusion"},[t("span",{class:"jlbinding"},"Lux.PairwiseFusion")],-1)),a[7]||(a[7]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[8]||(a[8]=i(`
julia
PairwiseFusion(connection, layers...; name=nothing)
-PairwiseFusion(connection; name=nothing, layers...)
-PairwiseFusion(; connection, layers..., name=nothing)
x1 → layer1 → y1 ↘
-                  connection → layer2 → y2 ↘
-              x2 ↗                          connection → y3
-                                        x3 ↗

Arguments

Extended Help

Inputs

Layer behaves differently based on input type:

  1. If the input x is a tuple of length N + 1, then the layers must be a tuple of length N. The computation is as follows
julia
y = x[1]
-for i in 1:N
-    y = connection(x[i + 1], layers[i](y))
-end
  1. Any other kind of input
julia
y = x
-for i in 1:N
-    y = connection(x, layers[i](y))
-end

Returns

Parameters

States

source

`,18))]),t("details",m,[t("summary",null,[a[9]||(a[9]=t("a",{id:"Lux.Parallel",href:"#Lux.Parallel"},[t("span",{class:"jlbinding"},"Lux.Parallel")],-1)),a[10]||(a[10]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[11]||(a[11]=i(`
julia
Parallel(connection, layers...; name=nothing)
-Parallel(connection; name=nothing, layers...)
-Parallel(; connection, layers..., name=nothing)

Create a layer which passes an input to each path in layers, before reducing the output with connection.

Arguments

Extended Help

Inputs

Returns

Parameters

States

See also SkipConnection which is Parallel with one identity.

Example

julia
julia> model = Parallel(nothing, Dense(2, 1), Dense(2, 1))
-Parallel(
-    layer_1 = Dense(2 => 1),            # 3 parameters
-    layer_2 = Dense(2 => 1),            # 3 parameters
-)         # Total: 6 parameters,
-          #        plus 0 states.
-
-julia> using Random;
-       rng = Random.seed!(123);
-       ps, st = Lux.setup(rng, model);
-       x1 = randn(rng, Float32, 2);
-       x2 = randn(rng, Float32, 2);
-
-julia> size.(first(model((x1, x2), ps, st)))
-((1,), (1,))

source

`,17))]),t("details",g,[t("summary",null,[a[12]||(a[12]=t("a",{id:"Lux.SkipConnection",href:"#Lux.SkipConnection"},[t("span",{class:"jlbinding"},"Lux.SkipConnection")],-1)),a[13]||(a[13]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[14]||(a[14]=i(`
julia
SkipConnection(layers, connection; name=nothing)
-SkipConnection(; layers, connection, name=nothing)

Create a skip connection which consists of a layer or Chain of consecutive layers and a shortcut connection linking the block's input to the output through a user-supplied 2-argument callable. The first argument to the callable will be propagated through the given layer while the second is the unchanged, "skipped" input.

The simplest "ResNet"-type connection is just SkipConnection(layer, +).

Arguments

Extended Help

Inputs

Returns

Parameters

States

See Parallel for a more general implementation.

source

`,16))]),t("details",k,[t("summary",null,[a[15]||(a[15]=t("a",{id:"Lux.RepeatedLayer",href:"#Lux.RepeatedLayer"},[t("span",{class:"jlbinding"},"Lux.RepeatedLayer")],-1)),a[16]||(a[16]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[17]||(a[17]=i(`
julia
RepeatedLayer(model; repeats::Val = Val(10), input_injection::Val = Val(false))

Iteratively applies model for repeats number of times. The initial input is passed into the model repeatedly if input_injection = Val(true). This layer unrolls the computation; however, semantically it is the same as:

julia
res = x
-for i in 1:repeats
-    res, st = model(res, ps, st)
-end
julia
res = x
-for i in 1:repeats
-    res, st = model((res, x), ps, st)
-end

It is expected that repeats will be a reasonable number below 20; beyond that, compile times for gradients might be unreasonably high.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

Parameters

States

source

`,21))]),a[271]||(a[271]=t("h2",{id:"Convolutional-Layers",tabindex:"-1"},[s("Convolutional Layers "),t("a",{class:"header-anchor",href:"#Convolutional-Layers","aria-label":'Permalink to "Convolutional Layers {#Convolutional-Layers}"'},"​")],-1)),t("details",c,[t("summary",null,[a[18]||(a[18]=t("a",{id:"Lux.Conv",href:"#Lux.Conv"},[t("span",{class:"jlbinding"},"Lux.Conv")],-1)),a[19]||(a[19]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[22]||(a[22]=i(`
julia
Conv(k::NTuple{N,Integer}, (in_chs => out_chs)::Pair{<:Integer,<:Integer},
-     activation=identity; init_weight=nothing, init_bias=nothing, stride=1,
-     pad=0, dilation=1, groups=1, use_bias=True(), cross_correlation=False())

Standard convolutional layer.

Conv 2D

Image data should be stored in WHCN order (width, height, channels, batch). In other words, a 100 x 100 RGB image would be a 100 x 100 x 3 x 1 array, and a batch of 50 would be a 100 x 100 x 3 x 50 array. This has N = 2 spatial dimensions, and needs a kernel size like (5, 5), a 2-tuple of integers. To take convolutions along N feature dimensions, this layer expects as input an array with ndims(x) == N + 2, where size(x, N + 1) == in_chs is the number of input channels, and size(x, ndims(x)) is the number of observations in a batch.

Warning

Frameworks like PyTorch perform cross-correlation in their convolution layers. Pass cross_correlation=true to use cross-correlation instead.

Arguments

Extended Help

Keyword Arguments

Inputs

Returns

`,13)),t("mjx-container",u,[(Q(),n("svg",y,a[20]||(a[20]=[i('',1)]))),a[21]||(a[21]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[23]||(a[23]=i('

Parameters

source

',4))]),t("details",E,[t("summary",null,[a[24]||(a[24]=t("a",{id:"Lux.ConvTranspose",href:"#Lux.ConvTranspose"},[t("span",{class:"jlbinding"},"Lux.ConvTranspose")],-1)),a[25]||(a[25]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[26]||(a[26]=i(`
julia
ConvTranspose(k::NTuple{N,Integer}, (in_chs => out_chs)::Pair{<:Integer,<:Integer},
-              activation=identity; init_weight=glorot_uniform, init_bias=zeros32,
-              stride=1, pad=0, outpad=0, dilation=1, groups=1, use_bias=True(),
-              cross_correlation=False())

Standard convolutional transpose layer.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

Parameters

source

`,14))]),a[272]||(a[272]=t("h2",{id:"Dropout-Layers",tabindex:"-1"},[s("Dropout Layers "),t("a",{class:"header-anchor",href:"#Dropout-Layers","aria-label":'Permalink to "Dropout Layers {#Dropout-Layers}"'},"​")],-1)),t("details",f,[t("summary",null,[a[27]||(a[27]=t("a",{id:"Lux.AlphaDropout",href:"#Lux.AlphaDropout"},[t("span",{class:"jlbinding"},"Lux.AlphaDropout")],-1)),a[28]||(a[28]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[29]||(a[29]=i('
julia
AlphaDropout(p::Real)

AlphaDropout layer.

Arguments

Inputs

Returns

States

Call Lux.testmode to switch to test mode.

See also Dropout, VariationalHiddenDropout

source

',13))]),t("details",L,[t("summary",null,[a[30]||(a[30]=t("a",{id:"Lux.Dropout",href:"#Lux.Dropout"},[t("span",{class:"jlbinding"},"Lux.Dropout")],-1)),a[31]||(a[31]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[32]||(a[32]=i('
julia
Dropout(p; dims=:)

Dropout layer.

Arguments

Keyword Arguments

Inputs

Returns

States

Call Lux.testmode to switch to test mode.

See also AlphaDropout, VariationalHiddenDropout

source

',15))]),t("details",x,[t("summary",null,[a[33]||(a[33]=t("a",{id:"Lux.VariationalHiddenDropout",href:"#Lux.VariationalHiddenDropout"},[t("span",{class:"jlbinding"},"Lux.VariationalHiddenDropout")],-1)),a[34]||(a[34]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[35]||(a[35]=i('
julia
VariationalHiddenDropout(p; dims=:)

VariationalHiddenDropout layer. The only difference from Dropout is that the mask is retained until Lux.update_state(l, :update_mask, Val(true)) is called.

Arguments

Keyword Arguments

Inputs

Returns

States

Call Lux.testmode to switch to test mode.

See also AlphaDropout, Dropout

source

',15))]),a[273]||(a[273]=t("h2",{id:"Pooling-Layers",tabindex:"-1"},[s("Pooling Layers "),t("a",{class:"header-anchor",href:"#Pooling-Layers","aria-label":'Permalink to "Pooling Layers {#Pooling-Layers}"'},"​")],-1)),t("details",b,[t("summary",null,[a[36]||(a[36]=t("a",{id:"Lux.AdaptiveLPPool",href:"#Lux.AdaptiveLPPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveLPPool")],-1)),a[37]||(a[37]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[38]||(a[38]=i('
julia
AdaptiveLPPool(output_size; p=2)

Adaptive LP Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

Arguments

GPU Support

This layer is currently only supported on CPU.

Inputs

Returns

source

',10))]),t("details",w,[t("summary",null,[a[39]||(a[39]=t("a",{id:"Lux.AdaptiveMaxPool",href:"#Lux.AdaptiveMaxPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveMaxPool")],-1)),a[40]||(a[40]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[41]||(a[41]=i('
julia
AdaptiveMaxPool(output_size)

Adaptive Max Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

Arguments

Inputs

Returns

source

',9))]),t("details",H,[t("summary",null,[a[42]||(a[42]=t("a",{id:"Lux.AdaptiveMeanPool",href:"#Lux.AdaptiveMeanPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveMeanPool")],-1)),a[43]||(a[43]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[44]||(a[44]=i('
julia
AdaptiveMeanPool(output_size)

Adaptive Mean Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

Arguments

Inputs

Returns

source

',9))]),t("details",C,[t("summary",null,[a[45]||(a[45]=t("a",{id:"Lux.GlobalLPPool",href:"#Lux.GlobalLPPool"},[t("span",{class:"jlbinding"},"Lux.GlobalLPPool")],-1)),a[46]||(a[46]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[47]||(a[47]=i('
julia
GlobalLPPool(; p=2)

Global LP Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing LP pooling on the complete (w, h)-shaped feature maps.

GPU Support

This layer is currently only supported on CPU.

Inputs

Returns

source

',8))]),t("details",F,[t("summary",null,[a[48]||(a[48]=t("a",{id:"Lux.GlobalMaxPool",href:"#Lux.GlobalMaxPool"},[t("span",{class:"jlbinding"},"Lux.GlobalMaxPool")],-1)),a[49]||(a[49]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[50]||(a[50]=i('
julia
GlobalMaxPool()

Global Max Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing max pooling on the complete (w, h)-shaped feature maps.

Inputs

Returns

source

',7))]),t("details",D,[t("summary",null,[a[51]||(a[51]=t("a",{id:"Lux.GlobalMeanPool",href:"#Lux.GlobalMeanPool"},[t("span",{class:"jlbinding"},"Lux.GlobalMeanPool")],-1)),a[52]||(a[52]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[53]||(a[53]=i('
julia
GlobalMeanPool()

Global Mean Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing mean pooling on the complete (w, h)-shaped feature maps.

Inputs

Returns

source

',7))]),t("details",M,[t("summary",null,[a[54]||(a[54]=t("a",{id:"Lux.LPPool",href:"#Lux.LPPool"},[t("span",{class:"jlbinding"},"Lux.LPPool")],-1)),a[55]||(a[55]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[58]||(a[58]=i('
julia
LPPool(window; stride=window, pad=0, dilation=1, p=2)

LP Pooling layer, which replaces all pixels in a block of size window with the reduction operation: lp.

Arguments

Keyword Arguments

GPU Support

This layer is currently only supported on CPU.

Extended Help

Inputs

Returns

',12)),t("mjx-container",v,[(Q(),n("svg",Z,a[56]||(a[56]=[i('',1)]))),a[57]||(a[57]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[59]||(a[59]=t("ul",null,[t("li",null,[s("Empty 
"),t("code",null,"NamedTuple()")])],-1)),a[60]||(a[60]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/src/layers/pooling.jl#L203-L251",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",j,[t("summary",null,[a[61]||(a[61]=t("a",{id:"Lux.MaxPool",href:"#Lux.MaxPool"},[t("span",{class:"jlbinding"},"Lux.MaxPool")],-1)),a[62]||(a[62]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[65]||(a[65]=i('
julia
MaxPool(window; stride=window, pad=0, dilation=1)

Max Pooling layer, which replaces all pixels in a block of size window with the reduction operation: max.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

',11)),t("mjx-container",A,[(Q(),n("svg",B,a[63]||(a[63]=[i('',1)]))),a[64]||(a[64]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[66]||(a[66]=t("ul",null,[t("li",null,[s("Empty 
"),t("code",null,"NamedTuple()")])],-1)),a[67]||(a[67]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/src/layers/pooling.jl#L203-L247",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",V,[t("summary",null,[a[68]||(a[68]=t("a",{id:"Lux.MeanPool",href:"#Lux.MeanPool"},[t("span",{class:"jlbinding"},"Lux.MeanPool")],-1)),a[69]||(a[69]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[72]||(a[72]=i('
julia
MeanPool(window; stride=window, pad=0, dilation=1)

Mean Pooling layer, which replaces each block of size window with the result of the reduction operation mean.

Arguments

Keyword Arguments

Extended Help

Inputs

Returns

',11)),t("mjx-container",N,[(Q(),n("svg",R,a[70]||(a[70]=[i('',1)]))),a[71]||(a[71]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[73]||(a[73]=t("ul",null,[t("li",null,[s("Empty "),t("code",null,"NamedTuple()")])],-1)),a[74]||(a[74]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/src/layers/pooling.jl#L203-L247",target:"_blank",rel:"noreferrer"},"source")],-1))]),a[274]||(a[274]=t("h2",{id:"Recurrent-Layers",tabindex:"-1"},[s("Recurrent Layers 
"),t("a",{class:"header-anchor",href:"#Recurrent-Layers","aria-label":'Permalink to "Recurrent Layers {#Recurrent-Layers}"'},"​")],-1)),t("details",O,[t("summary",null,[a[75]||(a[75]=t("a",{id:"Lux.GRUCell",href:"#Lux.GRUCell"},[t("span",{class:"jlbinding"},"Lux.GRUCell")],-1)),a[76]||(a[76]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[120]||(a[120]=i(`
julia
GRUCell((in_dims, out_dims)::Pair{<:Int,<:Int}; use_bias=true, train_state::Bool=false,
-        init_weight=nothing, init_bias=nothing, init_state=zeros32)

Gated Recurrent Unit (GRU) Cell

`,2)),t("mjx-container",z,[(Q(),n("svg",I,a[77]||(a[77]=[i('',1)]))),a[78]||(a[78]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mtable",{displaystyle:"true",columnalign:"right left",columnspacing:"0em",rowspacing:"3pt"},[t("mtr",null,[t("mtd",null,[t("mi",null,"r")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"z")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z
")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"n")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"tanh"),t("mo",{"data-mjx-texclass":"NONE"},"⁡"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",null,"+"),t("mi",null,"r"),t("mo",null,"⋅"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{stretchy:"false"},")"),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mo",{stretchy:"false"},"("),t("mn",null,"1"),t("mo",null,"−"),t("mi",null,"z"),t("mo",{stretchy:"false"},")"),t("mo",null,"⋅"),t("mi",null,"n"),t("mo",null,"+"),t("mi",null,"z"),t("mo",null,"⋅"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])])])])])])],-1))]),a[121]||(a[121]=i("

Arguments

Inputs

Returns

",5)),t("ul",null,[t("li",null,[a[87]||(a[87]=t("p",null,"Tuple containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[81]||(a[81]=s("Output ")),t("mjx-container",S,[(Q(),n("svg",P,a[79]||(a[79]=[i('',1)]))),a[80]||(a[80]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[82]||(a[82]=s(" of shape ")),a[83]||(a[83]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[86]||(a[86]=s("Tuple containing new hidden state ")),t("mjx-container",_,[(Q(),n("svg",G,a[84]||(a[84]=[i('',1)]))),a[85]||(a[85]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[88]||(a[88]=t("li",null,[t("p",null,"Updated model state")],-1))]),a[122]||(a[122]=t("p",null,[t("strong",null,"Parameters")],-1)),t("ul",null,[t("li",null,[t("p",null,[a[91]||(a[91]=t("code",null,"weight_ih",-1)),a[92]||(a[92]=s(": Concatenated Weights to map from input space 
")),t("mjx-container",W,[(Q(),n("svg",X,a[89]||(a[89]=[i('',1)]))),a[90]||(a[90]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[93]||(a[93]=s("."))])]),t("li",null,[t("p",null,[a[96]||(a[96]=t("code",null,"weight_hh",-1)),a[97]||(a[97]=s(": Concatenated Weights to map from hidden space ")),t("mjx-container",U,[(Q(),n("svg",q,a[94]||(a[94]=[i('',1)]))),a[95]||(a[95]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[98]||(a[98]=s("."))])]),t("li",null,[t("p",null,[a[101]||(a[101]=t("code",null,"bias_ih",-1)),a[102]||(a[102]=s(": Concatenated Bias vector for the input space ")),t("mjx-container",J,[(Q(),n("svg",K,a[99]||(a[99]=[i('',1)]))),a[100]||(a[100]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[103]||(a[103]=s(" (not present if ")),a[104]||(a[104]=t("code",null,"use_bias=false",-1)),a[105]||(a[105]=s(")."))])]),t("li",null,[t("p",null,[a[108]||(a[108]=t("code",null,"bias_hh",-1)),a[109]||(a[109]=s(": Concatenated Bias vector for the hidden space 
")),t("mjx-container",$,[(Q(),n("svg",Y,a[106]||(a[106]=[i('',1)]))),a[107]||(a[107]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[110]||(a[110]=s(" (not present if ")),a[111]||(a[111]=t("code",null,"use_bias=false",-1)),a[112]||(a[112]=s(")."))])]),t("li",null,[t("p",null,[a[115]||(a[115]=t("code",null,"hidden_state",-1)),a[116]||(a[116]=s(": Initial hidden state vector (not present if ")),a[117]||(a[117]=t("code",null,"train_state=false",-1)),a[118]||(a[118]=s(") ")),t("mjx-container",t1,[(Q(),n("svg",a1,a[113]||(a[113]=[i('',1)]))),a[114]||(a[114]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[119]||(a[119]=s("."))])])]),a[123]||(a[123]=t("p",null,[t("strong",null,"States")],-1)),a[124]||(a[124]=t("ul",null,[t("li",null,[t("code",null,"rng"),s(": Controls the randomness (if any) in the initial state generation")])],-1)),a[125]||(a[125]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/src/layers/recurrent.jl#L488",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",s1,[t("summary",null,[a[126]||(a[126]=t("a",{id:"Lux.LSTMCell",href:"#Lux.LSTMCell"},[t("span",{class:"jlbinding"},"Lux.LSTMCell")],-1)),a[127]||(a[127]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[153]||(a[153]=i(`
julia
LSTMCell(in_dims => out_dims; use_bias::Bool=true, train_state::Bool=false,
-         train_memory::Bool=false, init_weight=nothing, init_bias=nothing,
-         init_state=zeros32, init_memory=zeros32)

Long Short-Term Memory (LSTM) Cell

`,2)),t("mjx-container",i1,[(Q(),n("svg",e1,a[128]||(a[128]=[i('',1)]))),a[129]||(a[129]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mtable",{displaystyle:"true",columnalign:"right left",columnspacing:"0em",rowspacing:"3pt"},[t("mtr",null,[t("mtd",null,[t("mi",null,"i")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"i")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"i")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"f")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"f")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"f")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",n
ull,"f")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"g")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"t"),t("mi",null,"a"),t("mi",null,"n"),t("mi",null,"h"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"g")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"g")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"g")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"o")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"o")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"o")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"o")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"f"),t("mo",null,"⋅"),t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("mi",null,"i"),t("mo",null,"⋅"),t("mi",null,"g")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi",nul
l,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"o"),t("mo",null,"⋅"),t("mi",null,"t"),t("mi",null,"a"),t("mi",null,"n"),t("mi",null,"h"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])]),t("mo",{stretchy:"false"},")")])])])])],-1))]),a[154]||(a[154]=i("

Arguments

Inputs

Returns

",5)),t("ul",null,[t("li",null,[a[141]||(a[141]=t("p",null,"Tuple Containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[132]||(a[132]=s("Output ")),t("mjx-container",l1,[(Q(),n("svg",n1,a[130]||(a[130]=[i('',1)]))),a[131]||(a[131]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[133]||(a[133]=s(" of shape ")),a[134]||(a[134]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[139]||(a[139]=s("Tuple containing new hidden state ")),t("mjx-container",Q1,[(Q(),n("svg",T1,a[135]||(a[135]=[i('',1)]))),a[136]||(a[136]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[140]||(a[140]=s(" and new memory ")),t("mjx-container",d1,[(Q(),n("svg",o1,a[137]||(a[137]=[i('',1)]))),a[138]||(a[138]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[142]||(a[142]=t("li",null,[t("p",null,"Updated model state")],-1))]),a[155]||(a[155]=t("p",null,[t("strong",null,"Parameters")],-1)),t("ul",null,[t("li",null,[t("p",null,[a[145]||(a[145]=t("code",null,"weight_ih",-1)),a[146]||(a[146]=s(": Concatenated Weights to map from input space ")),t("mjx-container",r1,[(Q(),n("svg",p1,a[143]||(a[143]=[i('',1)]))),a[144]||(a[144]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"i")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"f")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"g")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"o")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[147]||(a[147]=s("."))])]),t("li",null,[t("p",null,[a[150]||(a[150]=t("code",null,"weight_hh",-1)),a[151]||(a[151]=s(": 
Concatenated Weights to map from hidden space ")),t("mjx-container",h1,[(Q(),n("svg",m1,a[148]||(a[148]=[i('',1)]))),a[149]||(a[149]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"i")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"f")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"g")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"o")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))])])]),a[152]||(a[152]=i("
  • bias_ih: Bias vector for the input-hidden connection (not present if use_bias=false)

  • bias_hh: Concatenated Bias vector for the hidden-hidden connection (not present if use_bias=false)

  • hidden_state: Initial hidden state vector (not present if train_state=false)

  • memory: Initial memory vector (not present if train_memory=false)

  • ",4))]),a[156]||(a[156]=t("p",null,[t("strong",null,"States")],-1)),a[157]||(a[157]=t("ul",null,[t("li",null,[t("code",null,"rng"),s(": Controls the randomness (if any) in the initial state generation")])],-1)),a[158]||(a[158]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/src/layers/recurrent.jl#L309",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",g1,[t("summary",null,[a[159]||(a[159]=t("a",{id:"Lux.RNNCell",href:"#Lux.RNNCell"},[t("span",{class:"jlbinding"},"Lux.RNNCell")],-1)),a[160]||(a[160]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[173]||(a[173]=i(`
    julia
    RNNCell(in_dims => out_dims, activation=tanh; use_bias=True(), train_state=False(),
    -    init_bias=nothing, init_weight=nothing, init_state=zeros32)

    An Elman RNN cell with activation (typically set to tanh or relu).

    `,2)),t("p",null,[t("mjx-container",k1,[(Q(),n("svg",c1,a[161]||(a[161]=[i('',1)]))),a[162]||(a[162]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])]),t("mo",null,"="),t("mi",null,"a"),t("mi",null,"c"),t("mi",null,"t"),t("mi",null,"i"),t("mi",null,"v"),t("mi",null,"a"),t("mi",null,"t"),t("mi",null,"i"),t("mi",null,"o"),t("mi",null,"n"),t("mo",{stretchy:"false"},"("),t("mi",null,"w"),t("mi",null,"e"),t("mi",null,"i"),t("mi",null,"g"),t("mi",null,"h"),t("msub",null,[t("mi",null,"t"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"h")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("mi",null,"b"),t("mi",null,"i"),t("mi",null,"a"),t("msub",null,[t("mi",null,"s"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"h")])]),t("mo",null,"+"),t("mi",null,"w"),t("mi",null,"e"),t("mi",null,"i"),t("mi",null,"g"),t("mi",null,"h"),t("msub",null,[t("mi",null,"t"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"h")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("mi",null,"b"),t("mi",null,"i"),t("mi",null,"a"),t("msub",null,[t("mi",null,"s"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"h")])]),t("mo",{stretchy:"false"},")")])],-1))])]),a[174]||(a[174]=i("

    Arguments

    Inputs

    Returns

    ",5)),t("ul",null,[t("li",null,[a[171]||(a[171]=t("p",null,"Tuple containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[165]||(a[165]=s("Output ")),t("mjx-container",u1,[(Q(),n("svg",y1,a[163]||(a[163]=[i('',1)]))),a[164]||(a[164]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[166]||(a[166]=s(" of shape ")),a[167]||(a[167]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[170]||(a[170]=s("Tuple containing new hidden state ")),t("mjx-container",E1,[(Q(),n("svg",f1,a[168]||(a[168]=[i('',1)]))),a[169]||(a[169]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[172]||(a[172]=t("li",null,[t("p",null,"Updated model state")],-1))]),a[175]||(a[175]=i('

    Parameters

    States

    source

    ',5))]),t("details",L1,[t("summary",null,[a[176]||(a[176]=t("a",{id:"Lux.Recurrence",href:"#Lux.Recurrence"},[t("span",{class:"jlbinding"},"Lux.Recurrence")],-1)),a[177]||(a[177]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[178]||(a[178]=i(`
    julia
    Recurrence(cell;
    -    ordering::AbstractTimeSeriesDataBatchOrdering=BatchLastIndex(),
    -    return_sequence::Bool=false)

    Wraps a recurrent cell (like RNNCell, LSTMCell, GRUCell) to automatically operate over a sequence of inputs.

    Relation to Flux.Recur

    This is completely distinct from Flux.Recur. It doesn't make the cell stateful; rather, it allows operating on an entire sequence of inputs at once. See StatefulRecurrentCell for functionality similar to Flux.Recur.

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Tip

    Frameworks like TensorFlow have a special implementation of StackedRNNCells to handle sequentially composed RNN cells. In Lux, one can simply stack multiple Recurrence blocks in a Chain to achieve the same.

    Chain(
    -    Recurrence(RNNCell(inputsize => latentsize); return_sequence=true),
    -    Recurrence(RNNCell(latentsize => latentsize); return_sequence=true),
    -    :
    -    x -> stack(x; dims=2)
    -)

    For some discussion on this topic, see https://github.com/LuxDL/Lux.jl/issues/472.

    source

    `,14))]),t("details",x1,[t("summary",null,[a[179]||(a[179]=t("a",{id:"Lux.StatefulRecurrentCell",href:"#Lux.StatefulRecurrentCell"},[t("span",{class:"jlbinding"},"Lux.StatefulRecurrentCell")],-1)),a[180]||(a[180]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[181]||(a[181]=i('
    julia
    StatefulRecurrentCell(cell)

    Wraps a recurrent cell (like RNNCell, LSTMCell, GRUCell) and makes it stateful.

    To avoid undefined behavior, once the processing of a single sequence of data is complete, update the state with Lux.update_state(st, :carry, nothing).

    Arguments

    Inputs

    Returns

    States

    source

    ',12))]),t("details",b1,[t("summary",null,[a[182]||(a[182]=t("a",{id:"Lux.BidirectionalRNN",href:"#Lux.BidirectionalRNN"},[t("span",{class:"jlbinding"},"Lux.BidirectionalRNN")],-1)),a[183]||(a[183]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[184]||(a[184]=i(`
    julia
    BidirectionalRNN(cell::AbstractRecurrentCell,
    -    backward_cell::Union{AbstractRecurrentCell, Nothing}=nothing;
    -    merge_mode::Union{Function, Nothing}=vcat,
    -    ordering::AbstractTimeSeriesDataBatchOrdering=BatchLastIndex())

    Bidirectional RNN wrapper.

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    source

    `,16))]),a[275]||(a[275]=t("h2",{id:"Linear-Layers",tabindex:"-1"},[s("Linear Layers "),t("a",{class:"header-anchor",href:"#Linear-Layers","aria-label":'Permalink to "Linear Layers {#Linear-Layers}"'},"​")],-1)),t("details",w1,[t("summary",null,[a[185]||(a[185]=t("a",{id:"Lux.Bilinear",href:"#Lux.Bilinear"},[t("span",{class:"jlbinding"},"Lux.Bilinear")],-1)),a[186]||(a[186]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[187]||(a[187]=i(`
    julia
    Bilinear((in1_dims, in2_dims) => out, activation=identity; init_weight=nothing,
    -         init_bias=nothing, use_bias=True())
    -Bilinear(in12_dims => out, activation=identity; init_weight=nothing,
    -         init_bias=nothing, use_bias=True())

    Create a fully connected layer between two inputs and an output; otherwise similar to Dense. Its output, given vectors x & y, is another vector z with, for all i in 1:out:

    z[i] = activation(x' * W[i, :, :] * y + bias[i])

    If x and y are matrices, then each column of the output z = B(x, y) is of this form, with B the Bilinear layer.

    Arguments

    Keyword Arguments

    Input

    Returns

    Parameters

    source

    `,15))]),t("details",H1,[t("summary",null,[a[188]||(a[188]=t("a",{id:"Lux.Dense",href:"#Lux.Dense"},[t("span",{class:"jlbinding"},"Lux.Dense")],-1)),a[189]||(a[189]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[190]||(a[190]=i(`
    julia
    Dense(in_dims => out_dims, activation=identity; init_weight=nothing,
    -      init_bias=nothing, use_bias=True())

    Create a traditional fully connected layer, whose forward pass is given by: y = activation.(weight * x .+ bias)

    Arguments

    Keyword Arguments

    Input

    Returns

    Parameters

    source

    `,13))]),t("details",C1,[t("summary",null,[a[191]||(a[191]=t("a",{id:"Lux.Embedding",href:"#Lux.Embedding"},[t("span",{class:"jlbinding"},"Lux.Embedding")],-1)),a[192]||(a[192]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[193]||(a[193]=i('
    julia
    Embedding(in_dims => out_dims; init_weight=rand32)

    A lookup table that stores embeddings of dimension out_dims for a vocabulary of size in_dims. When the vocabulary is multi-dimensional, the input is expected to be a tuple of Cartesian indices.

    This layer is often used to store word embeddings and retrieve them using indices.

    Arguments

    Keyword Arguments

    Input

    Returns

    source

    ',12))]),t("details",F1,[t("summary",null,[a[194]||(a[194]=t("a",{id:"Lux.Scale",href:"#Lux.Scale"},[t("span",{class:"jlbinding"},"Lux.Scale")],-1)),a[195]||(a[195]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[196]||(a[196]=i('
    julia
    Scale(dims, activation=identity; init_weight=ones32, init_bias=zeros32, use_bias=True())

    Create a sparsely connected layer with a very specific structure (only diagonal elements are non-zero). The forward pass is given by: y = activation.(weight .* x .+ bias)

    Arguments

    Keyword Arguments

    Input

    Returns

    Parameters

    source

    ',13))]),a[276]||(a[276]=t("h2",{id:"Misc.-Helper-Layers",tabindex:"-1"},[s("Misc. Helper Layers "),t("a",{class:"header-anchor",href:"#Misc.-Helper-Layers","aria-label":'Permalink to "Misc. Helper Layers {#Misc.-Helper-Layers}"'},"​")],-1)),t("details",D1,[t("summary",null,[a[197]||(a[197]=t("a",{id:"Lux.FlattenLayer",href:"#Lux.FlattenLayer"},[t("span",{class:"jlbinding"},"Lux.FlattenLayer")],-1)),a[198]||(a[198]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[199]||(a[199]=i(`
    julia
    FlattenLayer(; N = nothing)

    Flattens the passed array into a matrix.

    Keyword Arguments

    Inputs

    Returns

    Example

    julia
    julia> model = FlattenLayer()
    -FlattenLayer{Nothing}(nothing)
    -
    -julia> rng = Random.default_rng();
    -       Random.seed!(rng, 0);
    -       ps, st = Lux.setup(rng, model);
    -       x = randn(rng, Float32, (2, 2, 2, 2));
    -
    -julia> y, st_new = model(x, ps, st);
    -       size(y)
    -(8, 2)

    source

    `,11))]),t("details",M1,[t("summary",null,[a[200]||(a[200]=t("a",{id:"Lux.Maxout",href:"#Lux.Maxout"},[t("span",{class:"jlbinding"},"Lux.Maxout")],-1)),a[201]||(a[201]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[202]||(a[202]=i(`
    julia
    Maxout(layers...)
    -Maxout(; layers...)
    -Maxout(f::Function, n_alts::Int)

    This contains a number of internal layers, each of which receives the same input. Its output is the elementwise maximum of the internal layers' outputs.

    Maxout over linear dense layers satisfies the universal approximation theorem. See [1].

    See also Parallel to reduce with other operators.

    Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    References

    [1] Goodfellow, Warde-Farley, Mirza, Courville & Bengio "Maxout Networks" https://arxiv.org/abs/1302.4389

    source

    `,18))]),t("details",v1,[t("summary",null,[a[203]||(a[203]=t("a",{id:"Lux.NoOpLayer",href:"#Lux.NoOpLayer"},[t("span",{class:"jlbinding"},"Lux.NoOpLayer")],-1)),a[204]||(a[204]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[205]||(a[205]=i(`
    julia
    NoOpLayer()

    As the name suggests, this layer does nothing but allows pretty printing of layers. Whatever input is passed is returned.

    Example

    julia
    julia> model = NoOpLayer()
    -NoOpLayer()
    -
    -julia> rng = Random.default_rng();
    -       Random.seed!(rng, 0);
    -       ps, st = Lux.setup(rng, model);
    -       x = 1
    -1
    -
    -julia> y, st_new = model(x, ps, st)
    -(1, NamedTuple())

    source

    `,5))]),t("details",Z1,[t("summary",null,[a[206]||(a[206]=t("a",{id:"Lux.ReshapeLayer",href:"#Lux.ReshapeLayer"},[t("span",{class:"jlbinding"},"Lux.ReshapeLayer")],-1)),a[207]||(a[207]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[208]||(a[208]=i(`
    julia
    ReshapeLayer(dims)

    Reshapes the passed array to have a size of (dims..., :)

    Arguments

    Inputs

    Returns

    Example

    julia
    julia> model = ReshapeLayer((2, 2))
    -ReshapeLayer(output_dims = (2, 2, :))
    -
    -julia> rng = Random.default_rng();
    -       Random.seed!(rng, 0);
    -       ps, st = Lux.setup(rng, model);
    -       x = randn(rng, Float32, (4, 1, 3));
    -
    -julia> y, st_new = model(x, ps, st);
    -       size(y)
    -(2, 2, 3)

    source

    `,11))]),t("details",j1,[t("summary",null,[a[209]||(a[209]=t("a",{id:"Lux.SelectDim",href:"#Lux.SelectDim"},[t("span",{class:"jlbinding"},"Lux.SelectDim")],-1)),a[210]||(a[210]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[211]||(a[211]=i('
    julia
    SelectDim(dim, i)

    Return a view of all the data of the input x where the index for dimension dim equals i. Equivalent to view(x,:,:,...,i,:,:,...) where i is in position dim.

    Arguments

    Inputs

    Returns

    source

    ',9))]),t("details",A1,[t("summary",null,[a[212]||(a[212]=t("a",{id:"Lux.WrappedFunction",href:"#Lux.WrappedFunction"},[t("span",{class:"jlbinding"},"Lux.WrappedFunction")],-1)),a[213]||(a[213]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[214]||(a[214]=i('
    julia
    WrappedFunction(f)

    Wraps a stateless and parameterless function. Might be used when a function is added to Chain. For example, Chain(x -> relu.(x)) would not work; the right thing to do would be Chain((x, ps, st) -> (relu.(x), st)). An easier alternative is Chain(WrappedFunction(Base.Fix1(broadcast, relu)))

    Arguments

    Inputs

    Returns

    source

    ',9))]),t("details",B1,[t("summary",null,[a[215]||(a[215]=t("a",{id:"Lux.ReverseSequence",href:"#Lux.ReverseSequence"},[t("span",{class:"jlbinding"},"Lux.ReverseSequence")],-1)),a[216]||(a[216]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[217]||(a[217]=i(`
    julia
    ReverseSequence(dim = nothing)

    Reverse the specified dimension dim of the passed array

    Arguments

    Inputs

    Returns

    Example

    julia
    julia> model = ReverseSequence()
    -ReverseSequence{Nothing}(nothing)
    -
    -julia> rng = Random.default_rng();
    -       Random.seed!(rng, 0);
    -       ps, st = Lux.setup(rng, model);
    -       x = [1.0, 2.0, 3.0];
    -
    -julia> y, st_new = model(x, ps, st)
    -([3.0, 2.0, 1.0], NamedTuple())

    source

    `,11))]),a[277]||(a[277]=t("h2",{id:"Normalization-Layers",tabindex:"-1"},[s("Normalization Layers "),t("a",{class:"header-anchor",href:"#Normalization-Layers","aria-label":'Permalink to "Normalization Layers {#Normalization-Layers}"'},"​")],-1)),t("details",V1,[t("summary",null,[a[218]||(a[218]=t("a",{id:"Lux.BatchNorm",href:"#Lux.BatchNorm"},[t("span",{class:"jlbinding"},"Lux.BatchNorm")],-1)),a[219]||(a[219]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[225]||(a[225]=i(`
    julia
    BatchNorm(chs::Integer, activation=identity; init_bias=zeros32, init_scale=ones32,
    -          affine=True(), track_stats=True(), epsilon=1f-5, momentum=0.1f0)

    Batch Normalization layer.

    `,2)),t("p",null,[a[222]||(a[222]=t("code",null,"BatchNorm",-1)),a[223]||(a[223]=s(" computes the mean and variance for each ")),t("mjx-container",N1,[(Q(),n("svg",R1,a[220]||(a[220]=[i('',1)]))),a[221]||(a[221]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mi",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mi",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mi",null,"×"),t("mn",null,"1"),t("mi",null,"×"),t("msub",null,[t("mi",null,"D"),t("mi",null,"N")])])],-1))]),a[224]||(a[224]=s(" input slice and normalises the input accordingly."))]),a[226]||(a[226]=i(`

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    Use Lux.testmode during inference.

    Example

    julia
    julia> Chain(Dense(784 => 64), BatchNorm(64, relu), Dense(64 => 10), BatchNorm(10))
    -Chain(
    -    layer_1 = Dense(784 => 64),         # 50_240 parameters
    -    layer_2 = BatchNorm(64, relu, affine=true, track_stats=true),  # 128 parameters, plus 129
    -    layer_3 = Dense(64 => 10),          # 650 parameters
    -    layer_4 = BatchNorm(10, affine=true, track_stats=true),  # 20 parameters, plus 21
    -)         # Total: 51_038 parameters,
    -          #        plus 150 states.

    Warning

    Passing a batch size of 1 during training will result in an error.

    See also BatchNorm, InstanceNorm, LayerNorm, WeightNorm

    source

    `,19))]),t("details",O1,[t("summary",null,[a[227]||(a[227]=t("a",{id:"Lux.GroupNorm",href:"#Lux.GroupNorm"},[t("span",{class:"jlbinding"},"Lux.GroupNorm")],-1)),a[228]||(a[228]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[229]||(a[229]=i(`
    julia
    GroupNorm(chs::Integer, groups::Integer, activation=identity; init_bias=zeros32,
    -          init_scale=ones32, affine=true, epsilon=1f-5)

    Group Normalization layer.

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    Use Lux.testmode during inference.

    Example

    julia
    julia> Chain(Dense(784 => 64), GroupNorm(64, 4, relu), Dense(64 => 10), GroupNorm(10, 5))
    -Chain(
    -    layer_1 = Dense(784 => 64),         # 50_240 parameters
    -    layer_2 = GroupNorm(64, 4, relu, affine=true),  # 128 parameters
    -    layer_3 = Dense(64 => 10),          # 650 parameters
    -    layer_4 = GroupNorm(10, 5, affine=true),  # 20 parameters
    -)         # Total: 51_038 parameters,
    -          #        plus 0 states.

    See also GroupNorm, InstanceNorm, LayerNorm, WeightNorm

    source

    `,20))]),t("details",z1,[t("summary",null,[a[230]||(a[230]=t("a",{id:"Lux.InstanceNorm",href:"#Lux.InstanceNorm"},[t("span",{class:"jlbinding"},"Lux.InstanceNorm")],-1)),a[231]||(a[231]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[236]||(a[236]=i(`
    julia
    InstanceNorm(chs::Integer, activation=identity; init_bias=zeros32, init_scale=ones32,
    -             affine=False(), track_stats=False(), epsilon=1f-5, momentum=0.1f0)

    Instance Normalization. For details see [1].

    `,2)),t("p",null,[a[234]||(a[234]=s("Instance Normalization computes the mean and variance for each ")),t("mjx-container",I1,[(Q(),n("svg",S1,a[232]||(a[232]=[i('',1)]))),a[233]||(a[233]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("mn",null,"1")])],-1))]),a[235]||(a[235]=s("` input slice and normalises the input accordingly."))]),a[237]||(a[237]=i(`

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    Use Lux.testmode during inference.

    Example

    julia
    julia> Chain(Dense(784 => 64), InstanceNorm(64, relu; affine=true), Dense(64 => 10),
    -           InstanceNorm(10, relu; affine=true))
    -Chain(
    -    layer_1 = Dense(784 => 64),         # 50_240 parameters
    -    layer_2 = InstanceNorm(64, relu, affine=true, track_stats=false),  # 128 parameters, plus 1
    -    layer_3 = Dense(64 => 10),          # 650 parameters
    -    layer_4 = InstanceNorm(10, relu, affine=true, track_stats=false),  # 20 parameters, plus 1
    -)         # Total: 51_038 parameters,
    -          #        plus 2 states.

    References

    [1] Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. "Instance normalization: The missing ingredient for fast stylization." arXiv preprint arXiv:1607.08022 (2016).

    See also BatchNorm, GroupNorm, LayerNorm, WeightNorm

    source

    `,20))]),t("details",P1,[t("summary",null,[a[238]||(a[238]=t("a",{id:"Lux.LayerNorm",href:"#Lux.LayerNorm"},[t("span",{class:"jlbinding"},"Lux.LayerNorm")],-1)),a[239]||(a[239]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[255]||(a[255]=i(`
    julia
    LayerNorm(shape::NTuple{N, Int}, activation=identity; epsilon=1f-5, dims=Colon(),
    -          affine=true, init_bias=zeros32, init_scale=ones32)

    Computes mean and standard deviation over the whole input array, and uses these to normalize the whole array. Optionally applies an elementwise affine transformation afterwards.

    `,2)),t("p",null,[a[242]||(a[242]=s("Given an input array ")),t("mjx-container",_1,[(Q(),n("svg",G1,a[240]||(a[240]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D465",d:"M52 289Q59 331 106 386T222 442Q257 442 286 424T329 379Q371 442 430 442Q467 442 494 420T522 361Q522 332 508 314T481 292T458 288Q439 288 427 299T415 328Q415 374 465 391Q454 404 425 404Q412 404 406 402Q368 386 350 336Q290 115 290 78Q290 50 306 38T341 26Q378 26 414 59T463 140Q466 150 469 151T485 153H489Q504 153 504 145Q504 144 502 134Q486 77 440 33T333 -11Q263 -11 227 52Q186 -10 133 -10H127Q78 -10 57 16T35 71Q35 103 54 123T99 143Q142 143 142 101Q142 81 130 66T107 46T94 41L91 40Q91 39 97 36T113 29T132 26Q168 26 194 71Q203 87 217 139T245 247T261 313Q266 340 266 352Q266 380 251 392T217 404Q177 404 142 372T93 290Q91 281 88 280T72 278H58Q52 284 52 289Z",style:{"stroke-width":"3"}})])])],-1)]))),a[241]||(a[241]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"x")])],-1))]),a[243]||(a[243]=s(", this layer computes"))]),t("mjx-container",W1,[(Q(),n("svg",X1,a[244]||(a[244]=[i('',1)]))),a[245]||(a[245]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"y"),t("mo",null,"="),t("mfrac",null,[t("mrow",null,[t("mi",null,"x"),t("mo",null,"−"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",{mathvariant:"double-struck"},"E")]),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]")]),t("msqrt",null,[t("mi",null,"V"),t("mi",null,"a"),t("mi",null,"r"),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]"),t("mo",null,"+"),t("mi",null,"ϵ")])]),t("mo",null,"∗"),t("mi",null,"γ"),t("mo",null,"+"),t("mi",null,"β")])],-1))]),t("p",null,[a[250]||(a[250]=s("where ")),t("mjx-container",U1,[(Q(),n("svg",q1,a[246]||(a[246]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),a[247]||(a[247]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),a[251]||(a[251]=s(" & 
")),t("mjx-container",J1,[(Q(),n("svg",K1,a[248]||(a[248]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),a[249]||(a[249]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),a[252]||(a[252]=s(" are trainable parameters if ")),a[253]||(a[253]=t("code",null,"affine=true",-1)),a[254]||(a[254]=s("."))]),a[256]||(a[256]=i('

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    source

    ',12))]),t("details",$1,[t("summary",null,[a[257]||(a[257]=t("a",{id:"Lux.WeightNorm",href:"#Lux.WeightNorm"},[t("span",{class:"jlbinding"},"Lux.WeightNorm")],-1)),a[258]||(a[258]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[261]||(a[261]=i(`
    julia
    WeightNorm(layer::AbstractLuxLayer, which_params::NTuple{N, Symbol},
    -           dims::Union{Tuple, Nothing}=nothing)

    Applies weight normalization to a parameter in the given layer.

    `,2)),t("mjx-container",Y1,[(Q(),n("svg",t2,a[259]||(a[259]=[i('',1)]))),a[260]||(a[260]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"w"),t("mo",null,"="),t("mi",null,"g"),t("mfrac",null,[t("mi",null,"v"),t("mrow",null,[t("mo",{"data-mjx-texclass":"ORD"},"∥"),t("mi",null,"v"),t("mo",{"data-mjx-texclass":"ORD"},"∥")])])])],-1))]),a[262]||(a[262]=i('

    Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This updates the parameters in which_params (e.g. weight) using two parameters: one specifying the magnitude (e.g. weight_g) and one specifying the direction (e.g. weight_v).

    Arguments

    Inputs

    Returns

    Parameters

    States

    source

    ',12))]),a[278]||(a[278]=t("h2",{id:"upsampling",tabindex:"-1"},[s("Upsampling "),t("a",{class:"header-anchor",href:"#upsampling","aria-label":'Permalink to "Upsampling"'},"​")],-1)),t("details",a2,[t("summary",null,[a[263]||(a[263]=t("a",{id:"Lux.PixelShuffle",href:"#Lux.PixelShuffle"},[t("span",{class:"jlbinding"},"Lux.PixelShuffle")],-1)),a[264]||(a[264]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[265]||(a[265]=i('
    julia
    PixelShuffle(r::Int)

    Pixel shuffling layer with upscale factor r. Usually used for generating higher-resolution images by upscaling them.

    See NNlib.pixel_shuffle for more details.

    Arguments

    Inputs

    Returns

    source

    ',10))]),t("details",s2,[t("summary",null,[a[266]||(a[266]=t("a",{id:"Lux.Upsample",href:"#Lux.Upsample"},[t("span",{class:"jlbinding"},"Lux.Upsample")],-1)),a[267]||(a[267]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[268]||(a[268]=i(`
    julia
    Upsample(mode = :nearest; [scale, size, align_corners=false])
    -Upsample(scale, mode = :nearest)

    Upsampling Layer.

    Layer Construction

    Option 1

    Exactly one of two keywords must be specified:

    Option 2

    Currently supported upsampling modes and the corresponding NNlib methods are:

    Extended Help

    Other Keyword Arguments

    Inputs

    Returns

    source

    `,19))])])}const r2=T(o,[["render",i2]]);export{o2 as __pageData,r2 as default}; diff --git a/dev/assets/api_Lux_layers.md.DiMdFKta.js b/dev/assets/api_Lux_layers.md.WaBurqvX.js similarity index 98% rename from dev/assets/api_Lux_layers.md.DiMdFKta.js rename to dev/assets/api_Lux_layers.md.WaBurqvX.js index e8b6038633..e2790aa0aa 100644 --- a/dev/assets/api_Lux_layers.md.DiMdFKta.js +++ b/dev/assets/api_Lux_layers.md.WaBurqvX.js @@ -1,11 +1,11 @@ -import{_ as T,c as n,j as t,a as s,G as l,a2 as i,B as d,o as Q}from"./chunks/framework.I-x9Gl6h.js";const o2=JSON.parse('{"title":"Built-In Layers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/layers.md","filePath":"api/Lux/layers.md","lastUpdated":null}'),o={name:"api/Lux/layers.md"},r={class:"jldocstring custom-block"},p={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},E={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},H={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},M={class:"jldocstring custom-block"},v={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 
0",position:"relative"}},Z={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},j={class:"jldocstring custom-block"},A={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},B={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},V={class:"jldocstring custom-block"},N={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},R={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},O={class:"jldocstring custom-block"},z={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},I={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-5.146ex"},xmlns:"http://www.w3.org/2000/svg",width:"51.473ex",height:"11.422ex",role:"img",focusable:"false",viewBox:"0 -2774.4 22750.9 5048.7","aria-hidden":"true"},S={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},P={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 
    +import{_ as T,c as n,j as t,a as s,G as l,a2 as i,B as d,o as Q}from"./chunks/framework.BetCMmtc.js";const o2=JSON.parse('{"title":"Built-In Layers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/layers.md","filePath":"api/Lux/layers.md","lastUpdated":null}'),o={name:"api/Lux/layers.md"},r={class:"jldocstring custom-block"},p={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},E={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},H={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},M={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},A={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},v={class:"jldocstring custom-block"},Z={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 
0",position:"relative"}},j={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},V={class:"jldocstring custom-block"},B={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},N={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},S={class:"jldocstring custom-block"},R={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},I={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-5.146ex"},xmlns:"http://www.w3.org/2000/svg",width:"51.473ex",height:"11.422ex",role:"img",focusable:"false",viewBox:"0 -2774.4 22750.9 5048.7","aria-hidden":"true"},P={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},O={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},G={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 
851.8","aria-hidden":"true"},W={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},X={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"15.326ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 6774.2 1000","aria-hidden":"true"},U={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"16.435ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 7264.2 1000","aria-hidden":"true"},J={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},K={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"11.831ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 5229.2 1000","aria-hidden":"true"},$={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.939ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 5719.2 1000","aria-hidden":"true"},t1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},a1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.939ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 5719.2 1000","aria-hidden":"true"},s1={class:"jldocstring custom-block"},i1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 
0",position:"relative"}},e1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-8.146ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.257ex",height:"17.424ex",role:"img",focusable:"false",viewBox:"0 -4100.7 17793.6 7701.4","aria-hidden":"true"},l1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},n1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},Q1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},T1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},d1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},o1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.018ex",height:"1.357ex",role:"img",focusable:"false",viewBox:"0 -442 1776.1 599.8","aria-hidden":"true"},r1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},p1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.667ex"},xmlns:"http://www.w3.org/2000/svg",width:"19.753ex",height:"2.364ex",role:"img",focusable:"false",viewBox:"0 -750 8730.9 1045","aria-hidden":"true"},h1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},m1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.667ex"},xmlns:"http://www.w3.org/2000/svg",width:"21.231ex",height:"2.364ex",role:"img",focusable:"false",viewBox:"0 -750 9384.3 1045","aria-hidden":"true"},g1={class:"jldocstring 
custom-block"},k1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},c1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.65ex"},xmlns:"http://www.w3.org/2000/svg",width:"67.61ex",height:"2.347ex",role:"img",focusable:"false",viewBox:"0 -750 29883.5 1037.2","aria-hidden":"true"},u1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},y1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},E1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},L1={class:"jldocstring custom-block"},x1={class:"jldocstring custom-block"},b1={class:"jldocstring custom-block"},w1={class:"jldocstring custom-block"},H1={class:"jldocstring custom-block"},C1={class:"jldocstring custom-block"},D1={class:"jldocstring custom-block"},F1={class:"jldocstring custom-block"},_1={class:"jldocstring custom-block"},M1={class:"jldocstring custom-block"},A1={class:"jldocstring custom-block"},v1={class:"jldocstring custom-block"},Z1={class:"jldocstring custom-block"},j1={class:"jldocstring custom-block"},V1={class:"jldocstring custom-block"},B1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},N1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.059ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 10192.1 891","aria-hidden":"true"},S1={class:"jldocstring custom-block"},R1={class:"jldocstring 
custom-block"},I1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},P1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"22.72ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 10042 891","aria-hidden":"true"},O1={class:"jldocstring custom-block"},z1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},G1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.294ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 572 453","aria-hidden":"true"},W1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},X1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.76ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.034ex",height:"6.063ex",role:"img",focusable:"false",viewBox:"0 -1460 11064.9 2680","aria-hidden":"true"},U1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},q1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},J1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},K1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},$1={class:"jldocstring custom-block"},Y1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 
0",position:"relative"}},t2={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.172ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.071ex",height:"4.704ex",role:"img",focusable:"false",viewBox:"0 -1119 4451.6 2079","aria-hidden":"true"},a2={class:"jldocstring custom-block"},s2={class:"jldocstring custom-block"};function i2(e2,a,l2,n2,Q2,T2){const e=d("Badge");return Q(),n("div",null,[a[269]||(a[269]=t("h1",{id:"Built-In-Layers",tabindex:"-1"},[s("Built-In Layers "),t("a",{class:"header-anchor",href:"#Built-In-Layers","aria-label":'Permalink to "Built-In Layers {#Built-In-Layers}"'},"​")],-1)),a[270]||(a[270]=t("h2",{id:"containers",tabindex:"-1"},[s("Containers "),t("a",{class:"header-anchor",href:"#containers","aria-label":'Permalink to "Containers"'},"​")],-1)),t("details",r,[t("summary",null,[a[0]||(a[0]=t("a",{id:"Lux.BranchLayer",href:"#Lux.BranchLayer"},[t("span",{class:"jlbinding"},"Lux.BranchLayer")],-1)),a[1]||(a[1]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[2]||(a[2]=i(`
    julia
    BranchLayer(layers...)
     BranchLayer(; name=nothing, layers...)

    Takes an input x, passes it through each of the layers, and returns a tuple of their outputs.

    Arguments

    • Layers can be specified in two formats:
      • A list of N Lux layers

      • Specified as N keyword arguments.

    Extended Help

    Inputs

    • x: Will be directly passed to each of the layers

    Returns

    • Tuple: (layer_1(x), layer_2(x), ..., layer_N(x)) (naming changes if using the kwargs API)

    • Updated state of the layers

    Parameters

    • Parameters of each layer wrapped in a NamedTuple with fields = layer_1, layer_2, ..., layer_N (naming changes if using the kwargs API)

    States

    • States of each layer wrapped in a NamedTuple with fields = layer_1, layer_2, ..., layer_N (naming changes if using the kwargs API)

    Comparison with Parallel

    This is slightly different from Parallel(nothing, layers...)

    • If the input is a tuple, Parallel will pass each element individually to each layer.

    • BranchLayer essentially assumes 1 input comes in and is branched out into N outputs.

    Example

    An easy way to replicate an input to an NTuple is to do

    julia
    julia> BranchLayer(NoOpLayer(), NoOpLayer(), NoOpLayer())
     BranchLayer(
         layer_1 = NoOpLayer(),
         layer_2 = NoOpLayer(),
         layer_3 = NoOpLayer(),
     )         # Total: 0 parameters,
    +          #        plus 0 states.

    source
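A minimal usage sketch of the branching behaviour described above (the layer sizes here are illustrative, not from the docstring):

```julia
using Lux, Random

rng = Random.default_rng()
# One input, branched into two independent paths:
model = BranchLayer(Dense(2 => 3), Dense(2 => 4))
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 2, 8)
(y1, y2), st_new = model(x, ps, st)
size(y1)  # (3, 8) — output of the first branch
size(y2)  # (4, 8) — output of the second branch
```

Note how the same x is fed to every branch, unlike Parallel(nothing, ...), which would split a tuple input across layers.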

    `,18))]),t("details",p,[t("summary",null,[a[3]||(a[3]=t("a",{id:"Lux.Chain",href:"#Lux.Chain"},[t("span",{class:"jlbinding"},"Lux.Chain")],-1)),a[4]||(a[4]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[5]||(a[5]=i(`
    julia
    Chain(layers...; name=nothing)
     Chain(; layers..., name=nothing)

    Collects multiple layers / functions to be called in sequence on a given input.

    Arguments

    • Layers can be specified in two formats:
      • A list of N Lux layers

      • Specified as N keyword arguments.

    Extended Help

    Inputs

    Input x is passed sequentially to each layer, and must conform to the input requirements of the internal layers.

    Returns

    • Output after sequentially applying all the layers to x

    • Updated model states

    Parameters

    • Parameters of each layer wrapped in a NamedTuple with fields = layer_1, layer_2, ..., layer_N (naming changes if using the kwargs API)

    States

    • States of each layer wrapped in a NamedTuple with fields = layer_1, layer_2, ..., layer_N (naming changes if using the kwargs API)

    Miscellaneous Properties

    • Allows indexing and field access syntax. We can access the ith layer by m[i] or m.layer_i. We can also index using ranges or arrays.

    Example

    julia
    julia> Chain(Dense(2, 3, relu), BatchNorm(3), Dense(3, 2))
     Chain(
         layer_1 = Dense(2 => 3, relu),      # 9 parameters
         layer_2 = BatchNorm(3, affine=true, track_stats=true),  # 6 parameters, plus 7
         layer_3 = Dense(3 => 2),            # 8 parameters
     )         # Total: 23 parameters,
    +          #        plus 7 states.

    source
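The indexing and field-access properties noted above can be sketched as follows (the layer sizes are illustrative):

```julia
using Lux

model = Chain(Dense(2 => 3, relu), BatchNorm(3), Dense(3 => 2))

model[1]          # first layer: Dense(2 => 3, relu)
model.layer_2     # same layer as model[2], via field access
model[1:2]        # a Chain containing only the first two layers
```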

    `,18))]),t("details",h,[t("summary",null,[a[6]||(a[6]=t("a",{id:"Lux.PairwiseFusion",href:"#Lux.PairwiseFusion"},[t("span",{class:"jlbinding"},"Lux.PairwiseFusion")],-1)),a[7]||(a[7]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[8]||(a[8]=i(`
    julia
    PairwiseFusion(connection, layers...; name=nothing)
     PairwiseFusion(connection; name=nothing, layers...)
     PairwiseFusion(; connection, layers..., name=nothing)
    x1 → layer1 → y1 ↘
                       connection → layer2 → y2 ↘
     end
    2. Any other kind of input
    julia
    y = x
     for i in 1:N
         y = connection(x, layers[i](y))
    +end

    Returns

    • See Inputs section for how the return value is computed

    • Updated model state for all the contained layers

    Parameters

    • Parameters of each layer wrapped in a NamedTuple with fields = layer_1, layer_2, ..., layer_N (naming changes if using the kwargs API)

    States

    • States of each layer wrapped in a NamedTuple with fields = layer_1, layer_2, ..., layer_N (naming changes if using the kwargs API)

    source
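A minimal construction sketch for the non-tuple input case shown in the loop above (sizes are illustrative; with a + connection, each layer must preserve the feature dimension):

```julia
using Lux, Random

rng = Random.default_rng()
# Non-tuple input: y = connection(x, layer2(connection(x, layer1(x))))
model = PairwiseFusion(+, Dense(2 => 2), Dense(2 => 2))
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 2, 4)
y, st_new = model(x, ps, st)  # fused output, following the loop semantics above
```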

    `,18))]),t("details",m,[t("summary",null,[a[9]||(a[9]=t("a",{id:"Lux.Parallel",href:"#Lux.Parallel"},[t("span",{class:"jlbinding"},"Lux.Parallel")],-1)),a[10]||(a[10]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[11]||(a[11]=i(`
    julia
    Parallel(connection, layers...; name=nothing)
     Parallel(connection; name=nothing, layers...)
     Parallel(; connection, layers..., name=nothing)

    Create a layer which passes an input to each path in layers, before reducing the output with connection.

    Arguments

    • connection: An N-argument function that is called after passing the input through each layer. If connection = nothing, we return a tuple Parallel(nothing, f, g)(x, y) = (f(x), g(y))

    • Layers can be specified in two formats:

      • A list of N Lux layers

      • Specified as N keyword arguments.

    Extended Help

    Inputs

    • x: If x is not a tuple, then the return is computed as connection([l(x) for l in layers]...). Otherwise, each element is passed to the corresponding layer, thus Parallel(+, f, g)(x, y) = f(x) + g(y).

    Returns

    • See the Inputs section for how the output is computed

    • Updated state of the layers

    Parameters

    • Parameters of each layer wrapped in a NamedTuple with fields = layer_1, layer_2, ..., layer_N (naming changes if using the kwargs API)

    States

    • States of each layer wrapped in a NamedTuple with fields = layer_1, layer_2, ..., layer_N (naming changes if using the kwargs API)

    See also SkipConnection which is Parallel with one identity.

    Example

    julia
    julia> model = Parallel(nothing, Dense(2, 1), Dense(2, 1))
     Parallel(
            x2 = randn(rng, Float32, 2);
     
     julia> size.(first(model((x1, x2), ps, st)))
    +((1,), (1,))

    source
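Beyond the connection = nothing case shown above, a reducing connection can be sketched as follows (sizes are illustrative):

```julia
using Lux, Random

rng = Random.default_rng()
# Same input to both layers, outputs reduced with +:
model = Parallel(+, Dense(2 => 3), Dense(2 => 3))
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 2, 4)
y, st_new = model(x, ps, st)  # y = layer_1(x) + layer_2(x)
size(y)  # (3, 4)
```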

    `,17))]),t("details",g,[t("summary",null,[a[12]||(a[12]=t("a",{id:"Lux.SkipConnection",href:"#Lux.SkipConnection"},[t("span",{class:"jlbinding"},"Lux.SkipConnection")],-1)),a[13]||(a[13]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[14]||(a[14]=i(`
    julia
    SkipConnection(layers, connection; name=nothing)
    +SkipConnection(; layers, connection, name=nothing)

    Create a skip connection which consists of a layer or Chain of consecutive layers and a shortcut connection linking the block's input to the output through a user-supplied 2-argument callable. The first argument to the callable will be propagated through the given layer while the second is the unchanged, "skipped" input.

    The simplest "ResNet"-type connection is just SkipConnection(layer, +).

    Arguments

    • layer: Layer or Chain of layers to be applied to the input

    • connection:

      • A 2-argument function that takes layer(input) and the input OR

      • An AbstractLuxLayer that takes (layer(input), input) as input

    Extended Help

    Inputs

    • x: Will be passed directly to layer

    Returns

    • Output of connection(layer(input), input)

    • Updated state of layer

    Parameters

    • Parameters of layer OR

    • If connection is an AbstractLuxLayer, then NamedTuple with fields :layers and :connection

    States

    • States of layer OR

    • If connection is an AbstractLuxLayer, then NamedTuple with fields :layers and :connection

    See Parallel for a more general implementation.

    source
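The "ResNet"-type connection mentioned above can be sketched as follows (the layer size is illustrative; with + as the connection, the wrapped layer must preserve the input shape):

```julia
using Lux, Random

rng = Random.default_rng()
model = SkipConnection(Dense(4 => 4, relu), +)  # out = layer(x) + x
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 4, 2)
y, st_new = model(x, ps, st)
size(y)  # (4, 2) — same shape as x, as required by the + connection
```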

    `,16))]),t("details",k,[t("summary",null,[a[15]||(a[15]=t("a",{id:"Lux.RepeatedLayer",href:"#Lux.RepeatedLayer"},[t("span",{class:"jlbinding"},"Lux.RepeatedLayer")],-1)),a[16]||(a[16]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[17]||(a[17]=i(`
    julia
    RepeatedLayer(model; repeats::Val = Val(10), input_injection::Val = Val(false))

    Iteratively applies model for repeats number of times. The initial input is passed into the model repeatedly if input_injection = Val(true). This layer unrolls the computation; however, semantically it is the same as:

    • input_injection = Val(false)
    julia
    res = x
     for i in 1:repeats
         res, st = model(res, ps, st)
     end
    • input_injection = Val(true)
    julia
    res = x
     for i in 1:repeats
         res, st = model((res, x), ps, st)
    +end

    It is expected that repeats will be a reasonable number below 20; beyond that, compile times for gradients might be unreasonably high.

    Arguments

    • model must be an AbstractLuxLayer

    Keyword Arguments

    • repeats: Number of times to apply the model

    • input_injection: If true, then the input is passed to the model along with the output

    Extended Help

    Inputs

    • x: Input as described above

    Returns

    • Output is computed as described above

    • Updated state of the model

    Parameters

    • Parameters of model

    States

    • State of model

    source
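The unrolled semantics above can be sketched as follows (the inner layer and sizes are illustrative; note that the same ps is reused on every iteration):

```julia
using Lux, Random

rng = Random.default_rng()
# Apply the same Dense layer 3 times, without input injection:
model = RepeatedLayer(Dense(2 => 2, tanh); repeats=Val(3))
ps, st = Lux.setup(rng, model)

x = randn(rng, Float32, 2, 5)
y, st_new = model(x, ps, st)  # equivalent to the Val(false) loop shown above
size(y)  # (2, 5)
```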

    `,21))]),a[271]||(a[271]=t("h2",{id:"Convolutional-Layers",tabindex:"-1"},[s("Convolutional Layers "),t("a",{class:"header-anchor",href:"#Convolutional-Layers","aria-label":'Permalink to "Convolutional Layers {#Convolutional-Layers}"'},"​")],-1)),t("details",c,[t("summary",null,[a[18]||(a[18]=t("a",{id:"Lux.Conv",href:"#Lux.Conv"},[t("span",{class:"jlbinding"},"Lux.Conv")],-1)),a[19]||(a[19]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[22]||(a[22]=i(`
    julia
    Conv(k::NTuple{N,Integer}, (in_chs => out_chs)::Pair{<:Integer,<:Integer},
          activation=identity; init_weight=nothing, init_bias=nothing, stride=1,
      pad=0, dilation=1, groups=1, use_bias=True(), cross_correlation=False())

    Standard convolutional layer.

    Conv 2D

    Image data should be stored in WHCN order (width, height, channels, batch). In other words, a 100 x 100 RGB image would be a 100 x 100 x 3 x 1 array, and a batch of 50 would be a 100 x 100 x 3 x 50 array. This has N = 2 spatial dimensions, and needs a kernel size like (5, 5), a 2-tuple of integers. To take convolutions along N feature dimensions, this layer expects as input an array with ndims(x) == N + 2, where size(x, N + 1) == in_chs is the number of input channels, and size(x, ndims(x)) is the number of observations in a batch.

    Warning

    Frameworks like Pytorch perform cross-correlation in their convolution layers. Pass cross_correlation=true to use cross-correlation instead.

    Arguments

    • k: Tuple of integers specifying the size of the convolutional kernel. Eg, for 2D convolutions length(k) == 2

    • in_chs: Number of input channels

• out_chs: Number of output channels

    • activation: Activation Function

    Extended Help

    Keyword Arguments

    • init_weight: Controls the initialization of the weight parameter. If nothing, then we use kaiming_uniform with gain computed on the basis of the activation function (taken from Pytorch nn.init.calculate_gain).

    • init_bias: Controls the initialization of the bias parameter. If nothing, then we use uniform distribution with bounds -bound and bound where bound = inv(sqrt(fan_in)).

• stride: Should be either a single integer or a tuple with N integers

• dilation: Should be either a single integer or a tuple with N integers

    • pad: Specifies the number of elements added to the borders of the data array. It can be

      • a single integer for equal padding all around,

      • a tuple of N integers, to apply the same padding at begin/end of each spatial dimension,

      • a tuple of 2*N integers, for asymmetric padding, or

      • the singleton SamePad(), to calculate padding such that size(output,d) == size(x,d) / stride (possibly rounded) for each spatial dimension.

  • Periodic padding can be achieved by prepending the layer with a WrappedFunction(x -> NNlib.circular_pad(x, N_pad; dims=pad_dims))

    • groups: Expected to be an Int. It specifies the number of groups to divide a convolution into (set groups = in_chs for Depthwise Convolutions). in_chs and out_chs must be divisible by groups.

    • use_bias: Trainable bias can be disabled entirely by setting this to false.

    • cross_correlation: If true, perform cross-correlation instead of convolution. Prior to v1, Lux used to have a CrossCor layer which performed cross-correlation. This was removed in v1 in favor of Conv with cross_correlation=true.

    Inputs

• x: Data satisfying ndims(x) == N + 2 && size(x, N + 1) == in_chs, i.e. size(x) = (I_N, ..., I_1, C_in, N)

    Returns

    • Output of the convolution y of size (O_N, ..., O_1, C_out, N) where
    `,13)),t("mjx-container",u,[(Q(),n("svg",y,a[20]||(a[20]=[i('',1)]))),a[21]||(a[21]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[23]||(a[23]=i('
    • Empty NamedTuple()

    Parameters

    • weight: Convolution kernel

    • bias: Bias (present if use_bias=true)

    source
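As a quick usage sketch (assuming Lux.jl and Random are available; the layer and helper names are those from the docstring above):

```julia
using Lux, Random

rng = Random.default_rng()

# 3x3 kernel, 3 input channels => 16 output channels, relu activation
conv = Conv((3, 3), 3 => 16, relu)
ps, st = Lux.setup(rng, conv)

x = randn(rng, Float32, 32, 32, 3, 4)  # WHCN: a batch of four 32x32 RGB images
y, st = conv(x, ps, st)
size(y)  # (30, 30, 16, 4), since O_i = (32 + 0 - 1*(3 - 1) - 1)/1 + 1 = 30
```

Passing pad=SamePad() instead keeps the spatial size at 32 x 32.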

    ',4))]),t("details",E,[t("summary",null,[a[24]||(a[24]=t("a",{id:"Lux.ConvTranspose",href:"#Lux.ConvTranspose"},[t("span",{class:"jlbinding"},"Lux.ConvTranspose")],-1)),a[25]||(a[25]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[26]||(a[26]=i(`
    julia
ConvTranspose(k::NTuple{N,Integer}, (in_chs => out_chs)::Pair{<:Integer,<:Integer},
               activation=identity; init_weight=nothing, init_bias=nothing,
               stride=1, pad=0, outpad=0, dilation=1, groups=1, use_bias=True(),
               cross_correlation=False())

    Standard convolutional transpose layer.

    Arguments

    • k: Tuple of integers specifying the size of the convolutional kernel. Eg, for 2D convolutions length(k) == 2

    • in_chs: Number of input channels

• out_chs: Number of output channels

    • activation: Activation Function

    Keyword Arguments

    • init_weight: Controls the initialization of the weight parameter. If nothing, then we use kaiming_uniform with gain computed on the basis of the activation function (taken from Pytorch nn.init.calculate_gain).

    • init_bias: Controls the initialization of the bias parameter. If nothing, then we use uniform distribution with bounds -bound and bound where bound = inv(sqrt(fan_in)).

• stride: Should be either a single integer or a tuple with N integers

• dilation: Should be either a single integer or a tuple with N integers

    • pad: Specifies the number of elements added to the borders of the data array. It can be

      • a single integer for equal padding all around,

      • a tuple of N integers, to apply the same padding at begin/end of each spatial dimension,

      • a tuple of 2*N integers, for asymmetric padding, or

      • the singleton SamePad(), to calculate padding such that size(output,d) == size(x,d) * stride (possibly rounded) for each spatial dimension.

    • groups: Expected to be an Int. It specifies the number of groups to divide a convolution into (set groups = in_chs for Depthwise Convolutions). in_chs and out_chs must be divisible by groups.

    • use_bias: Trainable bias can be disabled entirely by setting this to false.

    • cross_correlation: If true, perform transposed cross-correlation instead of transposed convolution.

• outpad: To preserve Conv invertibility when stride > 1, outpad can be used to increase the size of the output in the desired dimensions. Whereas pad is used to zero-pad the input, outpad only affects the output shape.

    Extended Help

    Inputs

• x: Data satisfying ndims(x) == N + 2 && size(x, N + 1) == in_chs, i.e. size(x) = (I_N, ..., I_1, C_in, N)

    Returns

    • Output of the convolution transpose y of size (O_N, ..., O_1, C_out, N) where

    • Empty NamedTuple()

    Parameters

    • weight: Convolution Transpose kernel

    • bias: Bias (present if use_bias=true)

    source
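A minimal upsampling sketch (assuming Lux.jl and Random are available): a 2x2 kernel with stride=2 doubles each spatial dimension.

```julia
using Lux, Random

rng = Random.default_rng()

# transposed convolution, commonly used to upsample feature maps
up = ConvTranspose((2, 2), 16 => 8; stride=2)
ps, st = Lux.setup(rng, up)

x = randn(rng, Float32, 16, 16, 16, 4)  # WHCN input
y, st = up(x, ps, st)
size(y)  # (32, 32, 8, 4): each spatial dimension is doubled
```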

    `,14))]),a[272]||(a[272]=t("h2",{id:"Dropout-Layers",tabindex:"-1"},[s("Dropout Layers "),t("a",{class:"header-anchor",href:"#Dropout-Layers","aria-label":'Permalink to "Dropout Layers {#Dropout-Layers}"'},"​")],-1)),t("details",f,[t("summary",null,[a[27]||(a[27]=t("a",{id:"Lux.AlphaDropout",href:"#Lux.AlphaDropout"},[t("span",{class:"jlbinding"},"Lux.AlphaDropout")],-1)),a[28]||(a[28]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[29]||(a[29]=i('
    julia
    AlphaDropout(p::Real)

    AlphaDropout layer.

    Arguments

    • p: Probability of Dropout
      • if p = 0 then NoOpLayer is returned.

  • if p = 1 then WrappedFunction(Base.Fix1(broadcast, zero)) is returned.

    Inputs

    • x: Must be an AbstractArray

    Returns

    • x with dropout mask applied if training=Val(true) else just x

    • State with updated rng

    States

    • rng: Pseudo Random Number Generator

    • training: Used to check if training/inference mode

    Call Lux.testmode to switch to test mode.

    See also Dropout, VariationalHiddenDropout

    source

    ',13))]),t("details",L,[t("summary",null,[a[30]||(a[30]=t("a",{id:"Lux.Dropout",href:"#Lux.Dropout"},[t("span",{class:"jlbinding"},"Lux.Dropout")],-1)),a[31]||(a[31]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[32]||(a[32]=i('
    julia
    Dropout(p; dims=:)

    Dropout layer.

    Arguments

    • p: Probability of Dropout (if p = 0 then NoOpLayer is returned)

    Keyword Arguments

    • To apply dropout along certain dimension(s), specify the dims keyword. e.g. Dropout(p; dims = (3,4)) will randomly zero out entire channels on WHCN input (also called 2D dropout).

    Inputs

    • x: Must be an AbstractArray

    Returns

    • x with dropout mask applied if training=Val(true) else just x

    • State with updated rng

    States

    • rng: Pseudo Random Number Generator

    • training: Used to check if training/inference mode

    Call Lux.testmode to switch to test mode.

    See also AlphaDropout, VariationalHiddenDropout

    source
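A short sketch of the training/test behaviour described above (assuming Lux.jl and Random are available):

```julia
using Lux, Random

rng = Random.default_rng()

drop = Dropout(0.5f0)
ps, st = Lux.setup(rng, drop)  # st starts in training mode

x = ones(Float32, 4, 4)
y_train, st_train = drop(x, ps, st)        # mask applied, survivors rescaled
y_test, _ = drop(x, ps, Lux.testmode(st))  # identity in test mode
y_test == x  # true
```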

    ',15))]),t("details",x,[t("summary",null,[a[33]||(a[33]=t("a",{id:"Lux.VariationalHiddenDropout",href:"#Lux.VariationalHiddenDropout"},[t("span",{class:"jlbinding"},"Lux.VariationalHiddenDropout")],-1)),a[34]||(a[34]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[35]||(a[35]=i('
    julia
    VariationalHiddenDropout(p; dims=:)

    VariationalHiddenDropout layer. The only difference from Dropout is that the mask is retained until Lux.update_state(l, :update_mask, Val(true)) is called.

    Arguments

    • p: Probability of Dropout (if p = 0 then NoOpLayer is returned)

    Keyword Arguments

    • To apply dropout along certain dimension(s), specify the dims keyword. e.g. VariationalHiddenDropout(p; dims = 3) will randomly zero out entire channels on WHCN input (also called 2D dropout).

    Inputs

    • x: Must be an AbstractArray

    Returns

    • x with dropout mask applied if training=Val(true) else just x

    • State with updated rng

    States

    • rng: Pseudo Random Number Generator

    • training: Used to check if training/inference mode

• mask: Dropout mask. Initially set to nothing. After every run, contains the mask applied in that call

• update_mask: Stores whether a new mask needs to be generated in the current call

    Call Lux.testmode to switch to test mode.

    See also AlphaDropout, Dropout

    source
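The mask-retention behaviour can be sketched as follows (assuming Lux.jl and Random are available):

```julia
using Lux, Random

rng = Random.default_rng()

vdrop = VariationalHiddenDropout(0.5f0)
ps, st = Lux.setup(rng, vdrop)

x = ones(Float32, 4, 4)
y1, st = vdrop(x, ps, st)  # first call generates and stores a mask
y2, st = vdrop(x, ps, st)  # subsequent calls reuse the stored mask
y1 == y2                   # true: the same mask was applied both times

# request a fresh mask for the next call, per the docstring above
st = Lux.update_state(st, :update_mask, Val(true))
```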

    ',15))]),a[273]||(a[273]=t("h2",{id:"Pooling-Layers",tabindex:"-1"},[s("Pooling Layers "),t("a",{class:"header-anchor",href:"#Pooling-Layers","aria-label":'Permalink to "Pooling Layers {#Pooling-Layers}"'},"​")],-1)),t("details",b,[t("summary",null,[a[36]||(a[36]=t("a",{id:"Lux.AdaptiveLPPool",href:"#Lux.AdaptiveLPPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveLPPool")],-1)),a[37]||(a[37]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[38]||(a[38]=i('
    julia
    AdaptiveLPPool(output_size; p=2)

    Adaptive LP Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

    Arguments

    • output_size: Size of the first N dimensions for the output

    GPU Support

    This layer is currently only supported on CPU.

    Inputs

    • x: Expects as input an array with ndims(x) == N + 2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(output_size).

    Returns

    • Output of size (out..., C, N)

    • Empty NamedTuple()

    source

    ',10))]),t("details",w,[t("summary",null,[a[39]||(a[39]=t("a",{id:"Lux.AdaptiveMaxPool",href:"#Lux.AdaptiveMaxPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveMaxPool")],-1)),a[40]||(a[40]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[41]||(a[41]=i('
    julia
    AdaptiveMaxPool(output_size)

    Adaptive Max Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

    Arguments

    • output_size: Size of the first N dimensions for the output

    Inputs

    • x: Expects as input an array with ndims(x) == N + 2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(output_size).

    Returns

    • Output of size (out..., C, N)

    • Empty NamedTuple()

    source

    ',9))]),t("details",H,[t("summary",null,[a[42]||(a[42]=t("a",{id:"Lux.AdaptiveMeanPool",href:"#Lux.AdaptiveMeanPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveMeanPool")],-1)),a[43]||(a[43]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[44]||(a[44]=i('
    julia
    AdaptiveMeanPool(output_size)

    Adaptive Mean Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

    Arguments

    • output_size: Size of the first N dimensions for the output

    Inputs

    • x: Expects as input an array with ndims(x) == N + 2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(output_size).

    Returns

    • Output of size (out..., C, N)

    • Empty NamedTuple()

    source
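For instance (assuming Lux.jl and Random are available), pooling arbitrary spatial sizes down to a fixed 4x4 grid:

```julia
using Lux, Random

rng = Random.default_rng()

apool = AdaptiveMeanPool((4, 4))  # N = 2, so output_size fixes the first two dims
ps, st = Lux.setup(rng, apool)

x = randn(rng, Float32, 32, 32, 3, 2)
y, _ = apool(x, ps, st)
size(y)  # (4, 4, 3, 2)
```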

    ',9))]),t("details",C,[t("summary",null,[a[45]||(a[45]=t("a",{id:"Lux.GlobalLPPool",href:"#Lux.GlobalLPPool"},[t("span",{class:"jlbinding"},"Lux.GlobalLPPool")],-1)),a[46]||(a[46]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[47]||(a[47]=i('
    julia
    GlobalLPPool(; p=2)

Global LP Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing LP pooling on the complete (w, h)-shaped feature maps.

    GPU Support

    This layer is currently only supported on CPU.

    Inputs

    • x: Data satisfying ndims(x) > 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (1, ..., 1, C, N)

    • Empty NamedTuple()

    source

    ',8))]),t("details",F,[t("summary",null,[a[48]||(a[48]=t("a",{id:"Lux.GlobalMaxPool",href:"#Lux.GlobalMaxPool"},[t("span",{class:"jlbinding"},"Lux.GlobalMaxPool")],-1)),a[49]||(a[49]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[50]||(a[50]=i('
    julia
    GlobalMaxPool()

Global Max Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing max pooling on the complete (w, h)-shaped feature maps.

    Inputs

    • x: Data satisfying ndims(x) > 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (1, ..., 1, C, N)

    • Empty NamedTuple()

    source
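A small sketch (assuming Lux.jl and Random are available); global pooling is typically followed by FlattenLayer before a Dense classification head:

```julia
using Lux, Random

rng = Random.default_rng()

gpool = GlobalMaxPool()
ps, st = Lux.setup(rng, gpool)

x = randn(rng, Float32, 8, 8, 16, 2)  # 8x8 feature maps, 16 channels, batch of 2
y, _ = gpool(x, ps, st)
size(y)  # (1, 1, 16, 2)
```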

    ',7))]),t("details",D,[t("summary",null,[a[51]||(a[51]=t("a",{id:"Lux.GlobalMeanPool",href:"#Lux.GlobalMeanPool"},[t("span",{class:"jlbinding"},"Lux.GlobalMeanPool")],-1)),a[52]||(a[52]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[53]||(a[53]=i('
    julia
    GlobalMeanPool()

    Global Mean Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing mean pooling on the complete (w, h)-shaped feature maps.

    Inputs

    • x: Data satisfying ndims(x) > 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (1, ..., 1, C, N)

    • Empty NamedTuple()

    source

    ',7))]),t("details",M,[t("summary",null,[a[54]||(a[54]=t("a",{id:"Lux.LPPool",href:"#Lux.LPPool"},[t("span",{class:"jlbinding"},"Lux.LPPool")],-1)),a[55]||(a[55]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[58]||(a[58]=i('
    julia
    LPPool(window; stride=window, pad=0, dilation=1, p=2)

    LP Pooling layer, which replaces all pixels in a block of size window with the reduction operation: lp.

    Arguments

    • window: Tuple of integers specifying the size of the window. Eg, for 2D pooling length(window) == 2

    Keyword Arguments

• stride: Should be either a single integer or a tuple with N integers

• dilation: Should be either a single integer or a tuple with N integers

    • pad: Specifies the number of elements added to the borders of the data array. It can be

      • a single integer for equal padding all around,

      • a tuple of N integers, to apply the same padding at begin/end of each spatial dimension,

      • a tuple of 2*N integers, for asymmetric padding, or

      • the singleton SamePad(), to calculate padding such that size(output,d) == size(x,d) / stride (possibly rounded) for each spatial dimension.

    GPU Support

    This layer is currently only supported on CPU.

    Extended Help

    Inputs

    • x: Data satisfying ndims(x) == N + 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (O_N, ..., O_1, C, N) where
    ',12)),t("mjx-container",v,[(Q(),n("svg",Z,a[56]||(a[56]=[i('',1)]))),a[57]||(a[57]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[59]||(a[59]=t("ul",null,[t("li",null,[s("Empty 
"),t("code",null,"NamedTuple()")])],-1)),a[60]||(a[60]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/src/layers/pooling.jl#L203-L251",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",j,[t("summary",null,[a[61]||(a[61]=t("a",{id:"Lux.MaxPool",href:"#Lux.MaxPool"},[t("span",{class:"jlbinding"},"Lux.MaxPool")],-1)),a[62]||(a[62]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[65]||(a[65]=i('
    julia
    MaxPool(window; stride=window, pad=0, dilation=1)

    Max Pooling layer, which replaces all pixels in a block of size window with the reduction operation: max.

    Arguments

    • window: Tuple of integers specifying the size of the window. Eg, for 2D pooling length(window) == 2

    Keyword Arguments

• stride: Should be either a single integer or a tuple with N integers

• dilation: Should be either a single integer or a tuple with N integers

    • pad: Specifies the number of elements added to the borders of the data array. It can be

      • a single integer for equal padding all around,

      • a tuple of N integers, to apply the same padding at begin/end of each spatial dimension,

      • a tuple of 2*N integers, for asymmetric padding, or

      • the singleton SamePad(), to calculate padding such that size(output,d) == size(x,d) / stride (possibly rounded) for each spatial dimension.

    Extended Help

    Inputs

    • x: Data satisfying ndims(x) == N + 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (O_N, ..., O_1, C, N) where
    ',11)),t("mjx-container",A,[(Q(),n("svg",B,a[63]||(a[63]=[i('',1)]))),a[64]||(a[64]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[66]||(a[66]=t("ul",null,[t("li",null,[s("Empty 
"),t("code",null,"NamedTuple()")])],-1)),a[67]||(a[67]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/src/layers/pooling.jl#L203-L247",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",V,[t("summary",null,[a[68]||(a[68]=t("a",{id:"Lux.MeanPool",href:"#Lux.MeanPool"},[t("span",{class:"jlbinding"},"Lux.MeanPool")],-1)),a[69]||(a[69]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[72]||(a[72]=i('
    julia
    MeanPool(window; stride=window, pad=0, dilation=1)

    Mean Pooling layer, which replaces all pixels in a block of size window with the reduction operation: mean.

    Arguments

    • window: Tuple of integers specifying the size of the window. Eg, for 2D pooling length(window) == 2

    Keyword Arguments

• stride: Should be either a single integer or a tuple with N integers

• dilation: Should be either a single integer or a tuple with N integers

    • pad: Specifies the number of elements added to the borders of the data array. It can be

      • a single integer for equal padding all around,

      • a tuple of N integers, to apply the same padding at begin/end of each spatial dimension,

      • a tuple of 2*N integers, for asymmetric padding, or

      • the singleton SamePad(), to calculate padding such that size(output,d) == size(x,d) / stride (possibly rounded) for each spatial dimension.

    Extended Help

    Inputs

    • x: Data satisfying ndims(x) == N + 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (O_N, ..., O_1, C, N) where
    ',11)),t("mjx-container",N,[(Q(),n("svg",R,a[70]||(a[70]=[i('',1)]))),a[71]||(a[71]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[73]||(a[73]=t("ul",null,[t("li",null,[s("Empty "),t("code",null,"NamedTuple()")])],-1)),a[74]||(a[74]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/src/layers/pooling.jl#L203-L247",target:"_blank",rel:"noreferrer"},"source")],-1))]),a[274]||(a[274]=t("h2",{id:"Recurrent-Layers",tabindex:"-1"},[s("Recurrent Layers 
"),t("a",{class:"header-anchor",href:"#Recurrent-Layers","aria-label":'Permalink to "Recurrent Layers {#Recurrent-Layers}"'},"​")],-1)),t("details",O,[t("summary",null,[a[75]||(a[75]=t("a",{id:"Lux.GRUCell",href:"#Lux.GRUCell"},[t("span",{class:"jlbinding"},"Lux.GRUCell")],-1)),a[76]||(a[76]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[120]||(a[120]=i(`
    julia
    GRUCell((in_dims, out_dims)::Pair{<:Int,<:Int}; use_bias=true, train_state::Bool=false,
        init_weight=nothing, init_bias=nothing, init_state=zeros32)

    Gated Recurrent Unit (GRU) Cell

    `,2)),t("mjx-container",z,[(Q(),n("svg",I,a[77]||(a[77]=[i('',1)]))),a[78]||(a[78]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mtable",{displaystyle:"true",columnalign:"right left",columnspacing:"0em",rowspacing:"3pt"},[t("mtr",null,[t("mtd",null,[t("mi",null,"r")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"z")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",nul
l,"z")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"n")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"tanh"),t("mo",{"data-mjx-texclass":"NONE"},"⁡"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",null,"+"),t("mi",null,"r"),t("mo",null,"⋅"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{stretchy:"false"},")"),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mo",{stretchy:"false"},"("),t("mn",null,"1"),t("mo",null,"−"),t("mi",null,"z"),t("mo",{stretchy:"false"},")"),t("mo",null,"⋅"),t("mi",null,"n"),t("mo",null,"+"),t("mi",null,"z"),t("mo",null,"⋅"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])])])])])])],-1))]),a[121]||(a[121]=i("

    Arguments

    • in_dims: Input Dimension

    • out_dims: Output (Hidden State) Dimension

    • use_bias: Set to false to deactivate bias

    • train_state: Trainable initial hidden state can be activated by setting this to true

    • init_bias: Initializer for bias. Must be a tuple containing 3 functions. If a single value is passed, it is copied into a 3-element tuple. If set to nothing, then we use a uniform distribution with bounds -bound and bound, where bound = inv(sqrt(out_dims)).

    • init_weight: Initializer for weight. Must be a tuple containing 3 functions. If a single value is passed, it is copied into a 3-element tuple. If set to nothing, then we use a uniform distribution with bounds -bound and bound, where bound = inv(sqrt(out_dims)).

    • init_state: Initializer for hidden state

    Inputs

    • Case 1a: Only a single input x of shape (in_dims, batch_size), train_state is set to false - Creates a hidden state using init_state and proceeds to Case 2.

    • Case 1b: Only a single input x of shape (in_dims, batch_size), train_state is set to true - Repeats hidden_state from parameters to match the shape of x and proceeds to Case 2.

    • Case 2: Tuple (x, (h, )) is provided, then the output and a tuple containing the updated hidden state is returned.
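
    The input cases above can be sketched as follows (hypothetical dimensions; assumes Lux and Random are loaded):

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    cell = GRUCell(3 => 5)            # in_dims = 3, out_dims = 5
    ps, st = Lux.setup(rng, cell)

    x = rand(rng, Float32, 3, 8)      # (in_dims, batch_size)

    # Case 1a: a hidden state is created internally via init_state
    (y, (h,)), st = cell(x, ps, st)

    # Case 2: the previous hidden state is passed back in explicitly
    (y2, (h2,)), st = cell((x, (h,)), ps, st)
    ```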

    Returns

    ",5)),t("ul",null,[t("li",null,[a[87]||(a[87]=t("p",null,"Tuple containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[81]||(a[81]=s("Output ")),t("mjx-container",S,[(Q(),n("svg",P,a[79]||(a[79]=[i('',1)]))),a[80]||(a[80]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[82]||(a[82]=s(" of shape ")),a[83]||(a[83]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[86]||(a[86]=s("Tuple containing new hidden state ")),t("mjx-container",_,[(Q(),n("svg",G,a[84]||(a[84]=[i('',1)]))),a[85]||(a[85]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[88]||(a[88]=t("li",null,[t("p",null,"Updated model state")],-1))]),a[122]||(a[122]=t("p",null,[t("strong",null,"Parameters")],-1)),t("ul",null,[t("li",null,[t("p",null,[a[91]||(a[91]=t("code",null,"weight_ih",-1)),a[92]||(a[92]=s(": Concatenated Weights to map from input space 
")),t("mjx-container",W,[(Q(),n("svg",X,a[89]||(a[89]=[i('',1)]))),a[90]||(a[90]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[93]||(a[93]=s("."))])]),t("li",null,[t("p",null,[a[96]||(a[96]=t("code",null,"weight_hh",-1)),a[97]||(a[97]=s(": Concatenated Weights to map from hidden space ")),t("mjx-container",U,[(Q(),n("svg",q,a[94]||(a[94]=[i('',1)]))),a[95]||(a[95]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[98]||(a[98]=s("."))])]),t("li",null,[t("p",null,[a[101]||(a[101]=t("code",null,"bias_ih",-1)),a[102]||(a[102]=s(": Concatenated Bias vector for the input space ")),t("mjx-container",J,[(Q(),n("svg",K,a[99]||(a[99]=[i('',1)]))),a[100]||(a[100]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[103]||(a[103]=s(" (not present if ")),a[104]||(a[104]=t("code",null,"use_bias=false",-1)),a[105]||(a[105]=s(")."))])]),t("li",null,[t("p",null,[a[108]||(a[108]=t("code",null,"bias_hh",-1)),a[109]||(a[109]=s(": Concatenated Bias vector for the hidden space 
")),t("mjx-container",$,[(Q(),n("svg",Y,a[106]||(a[106]=[i('',1)]))),a[107]||(a[107]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[110]||(a[110]=s(" (not present if ")),a[111]||(a[111]=t("code",null,"use_bias=false",-1)),a[112]||(a[112]=s(")."))])]),t("li",null,[t("p",null,[a[115]||(a[115]=t("code",null,"hidden_state",-1)),a[116]||(a[116]=s(": Initial hidden state vector (not present if ")),a[117]||(a[117]=t("code",null,"train_state=false",-1)),a[118]||(a[118]=s(") ")),t("mjx-container",t1,[(Q(),n("svg",a1,a[113]||(a[113]=[i('',1)]))),a[114]||(a[114]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[119]||(a[119]=s("."))])])]),a[123]||(a[123]=t("p",null,[t("strong",null,"States")],-1)),a[124]||(a[124]=t("ul",null,[t("li",null,[t("code",null,"rng"),s(": Controls the randomness (if any) in the initial state generation")])],-1)),a[125]||(a[125]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/src/layers/recurrent.jl#L488",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",s1,[t("summary",null,[a[126]||(a[126]=t("a",{id:"Lux.LSTMCell",href:"#Lux.LSTMCell"},[t("span",{class:"jlbinding"},"Lux.LSTMCell")],-1)),a[127]||(a[127]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[153]||(a[153]=i(`
    julia
    LSTMCell(in_dims => out_dims; use_bias::Bool=true, train_state::Bool=false,
             cross_correlation=False())

    Standard convolutional transpose layer.

    Arguments

    • k: Tuple of integers specifying the size of the convolutional kernel. E.g., for 2D convolutions length(k) == 2

    • in_chs: Number of input channels

    • out_chs: Number of output channels

    • activation: Activation Function

    Keyword Arguments

    • init_weight: Controls the initialization of the weight parameter. If nothing, then we use kaiming_uniform with gain computed based on the activation function (taken from PyTorch's nn.init.calculate_gain).

    • init_bias: Controls the initialization of the bias parameter. If nothing, then we use a uniform distribution with bounds -bound and bound, where bound = inv(sqrt(fan_in)).

    • stride: Should be either a single integer, or a tuple with N integers

    • dilation: Should be either a single integer, or a tuple with N integers

    • pad: Specifies the number of elements added to the borders of the data array. It can be

      • a single integer for equal padding all around,

      • a tuple of N integers, to apply the same padding at begin/end of each spatial dimension,

      • a tuple of 2*N integers, for asymmetric padding, or

      • the singleton SamePad(), to calculate padding such that size(output,d) == size(x,d) * stride (possibly rounded) for each spatial dimension.

    • groups: Expected to be an Int. It specifies the number of groups to divide a convolution into (set groups = in_chs for Depthwise Convolutions). in_chs and out_chs must be divisible by groups.

    • use_bias: Trainable bias can be disabled entirely by setting this to false.

    • cross_correlation: If true, perform transposed cross-correlation instead of transposed convolution.

    • outpad: To preserve Conv invertibility when stride > 1, outpad can be used to increase the size of the output in the desired dimensions. Whereas pad is used to zero-pad the input, outpad only affects the output shape.
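
    A sketch of the outpad effect (hypothetical sizes; assumes Lux's ConvTranspose constructor, which this docstring describes):

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    x = rand(rng, Float32, 16, 16, 3, 1)   # (W, H, C_in, batch)

    # With stride > 1 several input sizes map to the same output size;
    # outpad selects a larger output without zero-padding the input.
    l1 = ConvTranspose((3, 3), 3 => 8; stride=2)
    l2 = ConvTranspose((3, 3), 3 => 8; stride=2, outpad=1)

    ps1, st1 = Lux.setup(rng, l1)
    ps2, st2 = Lux.setup(rng, l2)

    size(first(l1(x, ps1, st1)))   # baseline spatial size
    size(first(l2(x, ps2, st2)))   # each spatial dim is 1 larger
    ```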

    Extended Help

    Inputs

    • x: Data satisfying ndims(x) == N + 2 && size(x, N + 1) == in_chs, i.e. size(x) = (I_N, ..., I_1, C_in, N)

    Returns

    • Output of the convolution transpose y of size (O_N, ..., O_1, C_out, N)

    • Empty NamedTuple()

    Parameters

    • weight: Convolution Transpose kernel

    • bias: Bias (present if use_bias=true)

    source

    `,14))]),a[272]||(a[272]=t("h2",{id:"Dropout-Layers",tabindex:"-1"},[s("Dropout Layers "),t("a",{class:"header-anchor",href:"#Dropout-Layers","aria-label":'Permalink to "Dropout Layers {#Dropout-Layers}"'},"​")],-1)),t("details",f,[t("summary",null,[a[27]||(a[27]=t("a",{id:"Lux.AlphaDropout",href:"#Lux.AlphaDropout"},[t("span",{class:"jlbinding"},"Lux.AlphaDropout")],-1)),a[28]||(a[28]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[29]||(a[29]=i('
    julia
    AlphaDropout(p::Real)

    AlphaDropout layer.

    Arguments

    • p: Probability of Dropout
      • if p = 0 then NoOpLayer is returned.

      • if p = 1 then WrappedFunction(Base.Fix1(broadcast, zero)) is returned.

    Inputs

    • x: Must be an AbstractArray

    Returns

    • x with dropout mask applied if training=Val(true) else just x

    • State with updated rng

    States

    • rng: Pseudo Random Number Generator

    • training: Used to check if the layer is in training or inference mode

    Call Lux.testmode to switch to test mode.

    See also Dropout, VariationalHiddenDropout

    source

    ',13))]),t("details",L,[t("summary",null,[a[30]||(a[30]=t("a",{id:"Lux.Dropout",href:"#Lux.Dropout"},[t("span",{class:"jlbinding"},"Lux.Dropout")],-1)),a[31]||(a[31]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[32]||(a[32]=i('
    julia
    Dropout(p; dims=:)

    Dropout layer.

    Arguments

    • p: Probability of Dropout (if p = 0 then NoOpLayer is returned)

    Keyword Arguments

    • To apply dropout along certain dimension(s), specify the dims keyword. e.g. Dropout(p; dims = (3,4)) will randomly zero out entire channels on WHCN input (also called 2D dropout).

    Inputs

    • x: Must be an AbstractArray

    Returns

    • x with dropout mask applied if training=Val(true) else just x

    • State with updated rng

    States

    • rng: Pseudo Random Number Generator

    • training: Used to check if the layer is in training or inference mode

    Call Lux.testmode to switch to test mode.

    See also AlphaDropout, VariationalHiddenDropout
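
    A minimal usage sketch (hypothetical shapes; assumes Lux and Random are available):

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    model = Dropout(0.5f0)
    ps, st = Lux.setup(rng, model)

    x = ones(Float32, 4, 4)
    y_train, st = model(x, ps, st)    # training mode: mask applied, survivors rescaled

    st_test = Lux.testmode(st)
    y_test, _ = model(x, ps, st_test) # test mode: identity, y_test == x
    ```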

    source

    ',15))]),t("details",x,[t("summary",null,[a[33]||(a[33]=t("a",{id:"Lux.VariationalHiddenDropout",href:"#Lux.VariationalHiddenDropout"},[t("span",{class:"jlbinding"},"Lux.VariationalHiddenDropout")],-1)),a[34]||(a[34]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[35]||(a[35]=i('
    julia
    VariationalHiddenDropout(p; dims=:)

    VariationalHiddenDropout layer. The only difference from Dropout is that the mask is retained until Lux.update_state(l, :update_mask, Val(true)) is called.

    Arguments

    • p: Probability of Dropout (if p = 0 then NoOpLayer is returned)

    Keyword Arguments

    • To apply dropout along certain dimension(s), specify the dims keyword. e.g. VariationalHiddenDropout(p; dims = 3) will randomly zero out entire channels on WHCN input (also called 2D dropout).

    Inputs

    • x: Must be an AbstractArray

    Returns

    • x with dropout mask applied if training=Val(true) else just x

    • State with updated rng

    States

    • rng: Pseudo Random Number Generator

    • training: Used to check if the layer is in training or inference mode

    • mask: Dropout mask. Initially set to nothing. After every run, it contains the mask applied in that call

    • update_mask: Stores whether new mask needs to be generated in the current call

    Call Lux.testmode to switch to test mode.

    See also AlphaDropout, Dropout
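
    A sketch of the mask-retention behavior (hypothetical shapes; the exact update_state signature used here is an assumption based on the description above):

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    layer = VariationalHiddenDropout(0.5f0)
    ps, st = Lux.setup(rng, layer)

    x = ones(Float32, 4, 4)
    y1, st = layer(x, ps, st)   # draws a mask and stores it in the state
    y2, st = layer(x, ps, st)   # reuses the stored mask, so y2 == y1

    # Request a fresh mask on the next call (signature is an assumption)
    st = Lux.update_state(st, :update_mask, Val(true))
    y3, st = layer(x, ps, st)
    ```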

    source

    ',15))]),a[273]||(a[273]=t("h2",{id:"Pooling-Layers",tabindex:"-1"},[s("Pooling Layers "),t("a",{class:"header-anchor",href:"#Pooling-Layers","aria-label":'Permalink to "Pooling Layers {#Pooling-Layers}"'},"​")],-1)),t("details",b,[t("summary",null,[a[36]||(a[36]=t("a",{id:"Lux.AdaptiveLPPool",href:"#Lux.AdaptiveLPPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveLPPool")],-1)),a[37]||(a[37]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[38]||(a[38]=i('
    julia
    AdaptiveLPPool(output_size; p=2)

    Adaptive LP Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

    Arguments

    • output_size: Size of the first N dimensions for the output

    GPU Support

    This layer is currently only supported on CPU.

    Inputs

    • x: Expects as input an array with ndims(x) == N + 2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(output_size).

    Returns

    • Output of size (out..., C, N)

    • Empty NamedTuple()

    source

    ',10))]),t("details",w,[t("summary",null,[a[39]||(a[39]=t("a",{id:"Lux.AdaptiveMaxPool",href:"#Lux.AdaptiveMaxPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveMaxPool")],-1)),a[40]||(a[40]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[41]||(a[41]=i('
    julia
    AdaptiveMaxPool(output_size)

    Adaptive Max Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

    Arguments

    • output_size: Size of the first N dimensions for the output

    Inputs

    • x: Expects as input an array with ndims(x) == N + 2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(output_size).

    Returns

    • Output of size (out..., C, N)

    • Empty NamedTuple()
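
    A usage sketch (hypothetical sizes; assumes Lux and Random are loaded):

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    layer = AdaptiveMaxPool((4, 4))    # target spatial output size
    ps, st = Lux.setup(rng, layer)

    x = rand(rng, Float32, 16, 16, 3, 2)   # (W, H, C, batch)
    y, _ = layer(x, ps, st)
    size(y)                                # (4, 4, 3, 2)
    ```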

    source

    ',9))]),t("details",H,[t("summary",null,[a[42]||(a[42]=t("a",{id:"Lux.AdaptiveMeanPool",href:"#Lux.AdaptiveMeanPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveMeanPool")],-1)),a[43]||(a[43]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[44]||(a[44]=i('
    julia
    AdaptiveMeanPool(output_size)

    Adaptive Mean Pooling layer. Calculates the necessary window size such that its output has size(y)[1:N] == output_size.

    Arguments

    • output_size: Size of the first N dimensions for the output

    Inputs

    • x: Expects as input an array with ndims(x) == N + 2, i.e. channel and batch dimensions, after the N feature dimensions, where N = length(output_size).

    Returns

    • Output of size (out..., C, N)

    • Empty NamedTuple()

    source

    ',9))]),t("details",C,[t("summary",null,[a[45]||(a[45]=t("a",{id:"Lux.GlobalLPPool",href:"#Lux.GlobalLPPool"},[t("span",{class:"jlbinding"},"Lux.GlobalLPPool")],-1)),a[46]||(a[46]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[47]||(a[47]=i('
    julia
    GlobalLPPool(; p=2)

    Global LP Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing LP norm pooling on the complete (w, h)-shaped feature maps.

    GPU Support

    This layer is currently only supported on CPU.

    Inputs

    • x: Data satisfying ndims(x) > 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (1, ..., 1, C, N)

    • Empty NamedTuple()

    source

    ',8))]),t("details",D,[t("summary",null,[a[48]||(a[48]=t("a",{id:"Lux.GlobalMaxPool",href:"#Lux.GlobalMaxPool"},[t("span",{class:"jlbinding"},"Lux.GlobalMaxPool")],-1)),a[49]||(a[49]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[50]||(a[50]=i('
    julia
    GlobalMaxPool()

    Global Max Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing max pooling on the complete (w, h)-shaped feature maps.

    Inputs

    • x: Data satisfying ndims(x) > 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (1, ..., 1, C, N)

    • Empty NamedTuple()

    source

    ',7))]),t("details",F,[t("summary",null,[a[51]||(a[51]=t("a",{id:"Lux.GlobalMeanPool",href:"#Lux.GlobalMeanPool"},[t("span",{class:"jlbinding"},"Lux.GlobalMeanPool")],-1)),a[52]||(a[52]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[53]||(a[53]=i('
    julia
    GlobalMeanPool()

    Global Mean Pooling layer. Transforms (w, h, c, b)-shaped input into (1, 1, c, b)-shaped output, by performing mean pooling on the complete (w, h)-shaped feature maps.

    Inputs

    • x: Data satisfying ndims(x) > 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (1, ..., 1, C, N)

    • Empty NamedTuple()

    source

    ',7))]),t("details",_,[t("summary",null,[a[54]||(a[54]=t("a",{id:"Lux.LPPool",href:"#Lux.LPPool"},[t("span",{class:"jlbinding"},"Lux.LPPool")],-1)),a[55]||(a[55]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[58]||(a[58]=i('
    julia
    LPPool(window; stride=window, pad=0, dilation=1, p=2)

    LP Pooling layer, which replaces all pixels in a block of size window with the reduction operation: lp.

    Arguments

    • window: Tuple of integers specifying the size of the window. E.g., for 2D pooling length(window) == 2

    Keyword Arguments

    • stride: Should be either a single integer, or a tuple with N integers

    • dilation: Should be either a single integer, or a tuple with N integers

    • pad: Specifies the number of elements added to the borders of the data array. It can be

      • a single integer for equal padding all around,

      • a tuple of N integers, to apply the same padding at begin/end of each spatial dimension,

      • a tuple of 2*N integers, for asymmetric padding, or

      • the singleton SamePad(), to calculate padding such that size(output,d) == size(x,d) / stride (possibly rounded) for each spatial dimension.

    GPU Support

    This layer is currently only supported on CPU.

    Extended Help

    Inputs

    • x: Data satisfying ndims(x) == N + 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (O_N, ..., O_1, C, N) where
    ',12)),t("mjx-container",M,[(Q(),n("svg",A,a[56]||(a[56]=[i('',1)]))),a[57]||(a[57]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[59]||(a[59]=t("ul",null,[t("li",null,[s("Empty 
"),t("code",null,"NamedTuple()")])],-1)),a[60]||(a[60]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/src/layers/pooling.jl#L203-L251",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",v,[t("summary",null,[a[61]||(a[61]=t("a",{id:"Lux.MaxPool",href:"#Lux.MaxPool"},[t("span",{class:"jlbinding"},"Lux.MaxPool")],-1)),a[62]||(a[62]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[65]||(a[65]=i('
    julia
    MaxPool(window; stride=window, pad=0, dilation=1)

    Max Pooling layer, which replaces all pixels in a block of size window with the reduction operation: max.

    Arguments

    • window: Tuple of integers specifying the size of the window. E.g., for 2D pooling length(window) == 2

    Keyword Arguments

    • stride: Should be either a single integer, or a tuple with N integers

    • dilation: Should be either a single integer, or a tuple with N integers

    • pad: Specifies the number of elements added to the borders of the data array. It can be

      • a single integer for equal padding all around,

      • a tuple of N integers, to apply the same padding at begin/end of each spatial dimension,

      • a tuple of 2*N integers, for asymmetric padding, or

      • the singleton SamePad(), to calculate padding such that size(output,d) == size(x,d) / stride (possibly rounded) for each spatial dimension.
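
    As a worked check of how window, stride, pad, and dilation interact, here is a small standalone helper (hypothetical, not part of Lux) computing the conventional pooling output size per spatial dimension:

    ```julia
    # O = floor((I + 2*pad - dilation*(k - 1) - 1) / stride) + 1
    pool_out(I, k; stride=k, pad=0, dilation=1) =
        fld(I + 2pad - dilation * (k - 1) - 1, stride) + 1

    pool_out(16, 2)                    # 8: non-overlapping 2-windows
    pool_out(16, 3; stride=1)          # 14: sliding 3-window positions
    pool_out(16, 3; stride=1, pad=1)   # 16: "same"-style padding
    ```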

    Extended Help

    Inputs

    • x: Data satisfying ndims(x) == N + 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (O_N, ..., O_1, C, N) where
    ',11)),t("mjx-container",Z,[(Q(),n("svg",j,a[63]||(a[63]=[i('',1)]))),a[64]||(a[64]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[66]||(a[66]=t("ul",null,[t("li",null,[s("Empty 
"),t("code",null,"NamedTuple()")])],-1)),a[67]||(a[67]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/src/layers/pooling.jl#L203-L247",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",V,[t("summary",null,[a[68]||(a[68]=t("a",{id:"Lux.MeanPool",href:"#Lux.MeanPool"},[t("span",{class:"jlbinding"},"Lux.MeanPool")],-1)),a[69]||(a[69]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[72]||(a[72]=i('
    julia
    MeanPool(window; stride=window, pad=0, dilation=1)

    Mean Pooling layer, which replaces all pixels in a block of size window with the reduction operation: mean.

    Arguments

    • window: Tuple of integers specifying the size of the window. E.g., for 2D pooling length(window) == 2

    Keyword Arguments

    • stride: Should be either a single integer, or a tuple with N integers

    • dilation: Should be either a single integer, or a tuple with N integers

    • pad: Specifies the number of elements added to the borders of the data array. It can be

      • a single integer for equal padding all around,

      • a tuple of N integers, to apply the same padding at begin/end of each spatial dimension,

      • a tuple of 2*N integers, for asymmetric padding, or

      • the singleton SamePad(), to calculate padding such that size(output,d) == size(x,d) / stride (possibly rounded) for each spatial dimension.

    Extended Help

    Inputs

    • x: Data satisfying ndims(x) == N + 2, i.e. size(x) = (I_N, ..., I_1, C, N)

    Returns

    • Output of the pooling y of size (O_N, ..., O_1, C, N) where
    ',11)),t("mjx-container",B,[(Q(),n("svg",N,a[70]||(a[70]=[i('',1)]))),a[71]||(a[71]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[73]||(a[73]=t("ul",null,[t("li",null,[s("Empty "),t("code",null,"NamedTuple()")])],-1)),a[74]||(a[74]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/src/layers/pooling.jl#L203-L247",target:"_blank",rel:"noreferrer"},"source")],-1))]),a[274]||(a[274]=t("h2",{id:"Recurrent-Layers",tabindex:"-1"},[s("Recurrent Layers 
"),t("a",{class:"header-anchor",href:"#Recurrent-Layers","aria-label":'Permalink to "Recurrent Layers {#Recurrent-Layers}"'},"​")],-1)),t("details",S,[t("summary",null,[a[75]||(a[75]=t("a",{id:"Lux.GRUCell",href:"#Lux.GRUCell"},[t("span",{class:"jlbinding"},"Lux.GRUCell")],-1)),a[76]||(a[76]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[120]||(a[120]=i(`
    julia
    GRUCell((in_dims, out_dims)::Pair{<:Int,<:Int}; use_bias=true, train_state::Bool=false,
            init_weight=nothing, init_bias=nothing, init_state=zeros32)

    Gated Recurrent Unit (GRU) Cell

    `,2)),t("mjx-container",R,[(Q(),n("svg",I,a[77]||(a[77]=[i('',1)]))),a[78]||(a[78]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mtable",{displaystyle:"true",columnalign:"right left",columnspacing:"0em",rowspacing:"3pt"},[t("mtr",null,[t("mtd",null,[t("mi",null,"r")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"z")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",nul
l,"z")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"n")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"tanh"),t("mo",{"data-mjx-texclass":"NONE"},"⁡"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",null,"+"),t("mi",null,"r"),t("mo",null,"⋅"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{stretchy:"false"},")"),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mo",{stretchy:"false"},"("),t("mn",null,"1"),t("mo",null,"−"),t("mi",null,"z"),t("mo",{stretchy:"false"},")"),t("mo",null,"⋅"),t("mi",null,"n"),t("mo",null,"+"),t("mi",null,"z"),t("mo",null,"⋅"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])])])])])])],-1))]),a[121]||(a[121]=i("

    Arguments

    • in_dims: Input Dimension

    • out_dims: Output (Hidden State) Dimension

    • use_bias: Set to false to deactivate bias

    • train_state: Trainable initial hidden state can be activated by setting this to true

    • init_bias: Initializer for bias. Must be a tuple containing 3 functions. If a single value is passed, it is copied into a 3-element tuple. If nothing, then we use a uniform distribution with bounds -bound and bound where bound = inv(sqrt(out_dims)).

    • init_weight: Initializer for weight. Must be a tuple containing 3 functions. If a single value is passed, it is copied into a 3-element tuple. If nothing, then we use a uniform distribution with bounds -bound and bound where bound = inv(sqrt(out_dims)).

    • init_state: Initializer for hidden state

    Inputs

    • Case 1a: Only a single input x of shape (in_dims, batch_size), train_state is set to false - Creates a hidden state using init_state and proceeds to Case 2.

    • Case 1b: Only a single input x of shape (in_dims, batch_size), train_state is set to true - Repeats hidden_state from parameters to match the shape of x and proceeds to Case 2.

    • Case 2: A tuple (x, (h, )) is provided; the output and a tuple containing the updated hidden state are returned.
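    A minimal usage sketch of the two input cases (the `3 => 5` dimensions and the batch size are illustrative, not part of the API):

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    cell = GRUCell(3 => 5)                      # in_dims = 3, out_dims = 5
    ps, st = Lux.setup(rng, cell)               # initialize parameters and states
    x = rand(Float32, 3, 4)                     # (in_dims, batch_size)
    (y, (h,)), st = cell(x, ps, st)             # Case 1a: hidden state created via init_state
    (y2, (h2,)), st = cell((x, (h,)), ps, st)   # Case 2: carry the hidden state forward
    @assert size(y) == (5, 4)                   # (out_dims, batch_size)
    ```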

    Returns

    ",5)),t("ul",null,[t("li",null,[a[87]||(a[87]=t("p",null,"Tuple containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[81]||(a[81]=s("Output ")),t("mjx-container",P,[(Q(),n("svg",O,a[79]||(a[79]=[i('',1)]))),a[80]||(a[80]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[82]||(a[82]=s(" of shape ")),a[83]||(a[83]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[86]||(a[86]=s("Tuple containing new hidden state ")),t("mjx-container",z,[(Q(),n("svg",G,a[84]||(a[84]=[i('',1)]))),a[85]||(a[85]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[88]||(a[88]=t("li",null,[t("p",null,"Updated model state")],-1))]),a[122]||(a[122]=t("p",null,[t("strong",null,"Parameters")],-1)),t("ul",null,[t("li",null,[t("p",null,[a[91]||(a[91]=t("code",null,"weight_ih",-1)),a[92]||(a[92]=s(": Concatenated Weights to map from input space 
")),t("mjx-container",W,[(Q(),n("svg",X,a[89]||(a[89]=[i('',1)]))),a[90]||(a[90]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[93]||(a[93]=s("."))])]),t("li",null,[t("p",null,[a[96]||(a[96]=t("code",null,"weight_hh",-1)),a[97]||(a[97]=s(": Concatenated Weights to map from hidden space ")),t("mjx-container",U,[(Q(),n("svg",q,a[94]||(a[94]=[i('',1)]))),a[95]||(a[95]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[98]||(a[98]=s("."))])]),t("li",null,[t("p",null,[a[101]||(a[101]=t("code",null,"bias_ih",-1)),a[102]||(a[102]=s(": Concatenated Bias vector for the input space ")),t("mjx-container",J,[(Q(),n("svg",K,a[99]||(a[99]=[i('',1)]))),a[100]||(a[100]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[103]||(a[103]=s(" (not present if ")),a[104]||(a[104]=t("code",null,"use_bias=false",-1)),a[105]||(a[105]=s(")."))])]),t("li",null,[t("p",null,[a[108]||(a[108]=t("code",null,"bias_hh",-1)),a[109]||(a[109]=s(": Concatenated Bias vector for the hidden space 
")),t("mjx-container",$,[(Q(),n("svg",Y,a[106]||(a[106]=[i('',1)]))),a[107]||(a[107]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[110]||(a[110]=s(" (not present if ")),a[111]||(a[111]=t("code",null,"use_bias=false",-1)),a[112]||(a[112]=s(")."))])]),t("li",null,[t("p",null,[a[115]||(a[115]=t("code",null,"hidden_state",-1)),a[116]||(a[116]=s(": Initial hidden state vector (not present if ")),a[117]||(a[117]=t("code",null,"train_state=false",-1)),a[118]||(a[118]=s(") ")),t("mjx-container",t1,[(Q(),n("svg",a1,a[113]||(a[113]=[i('',1)]))),a[114]||(a[114]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[119]||(a[119]=s("."))])])]),a[123]||(a[123]=t("p",null,[t("strong",null,"States")],-1)),a[124]||(a[124]=t("ul",null,[t("li",null,[t("code",null,"rng"),s(": Controls the randomness (if any) in the initial state generation")])],-1)),a[125]||(a[125]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/src/layers/recurrent.jl#L488",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",s1,[t("summary",null,[a[126]||(a[126]=t("a",{id:"Lux.LSTMCell",href:"#Lux.LSTMCell"},[t("span",{class:"jlbinding"},"Lux.LSTMCell")],-1)),a[127]||(a[127]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[153]||(a[153]=i(`
    julia
    LSTMCell(in_dims => out_dims; use_bias::Bool=true, train_state::Bool=false,
              train_memory::Bool=false, init_weight=nothing, init_bias=nothing,
              init_state=zeros32, init_memory=zeros32)

    Long Short-Term Memory (LSTM) Cell

    `,2)),t("mjx-container",i1,[(Q(),n("svg",e1,a[128]||(a[128]=[i('',1)]))),a[129]||(a[129]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mtable",{displaystyle:"true",columnalign:"right left",columnspacing:"0em",rowspacing:"3pt"},[t("mtr",null,[t("mtd",null,[t("mi",null,"i")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"i")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"i")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"f")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"f")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"f")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("m
i",null,"f")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"g")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"t"),t("mi",null,"a"),t("mi",null,"n"),t("mi",null,"h"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"g")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"g")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"g")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"o")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"o")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"o")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"o")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"f"),t("mo",null,"⋅"),t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("mi",null,"i"),t("mo",null,"⋅"),t("mi",null,"g")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi"
,null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"o"),t("mo",null,"⋅"),t("mi",null,"t"),t("mi",null,"a"),t("mi",null,"n"),t("mi",null,"h"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])]),t("mo",{stretchy:"false"},")")])])])])],-1))]),a[154]||(a[154]=i("

    Arguments

    • in_dims: Input Dimension

    • out_dims: Output (Hidden State & Memory) Dimension

    • use_bias: Set to false to deactivate bias

    • train_state: Trainable initial hidden state can be activated by setting this to true

    • train_memory: Trainable initial memory can be activated by setting this to true

    • init_bias: Initializer for bias. Must be a tuple containing 4 functions. If a single value is passed, it is copied into a 4-element tuple. If nothing, then we use a uniform distribution with bounds -bound and bound where bound = inv(sqrt(out_dims)).

    • init_weight: Initializer for weight. Must be a tuple containing 4 functions. If a single value is passed, it is copied into a 4-element tuple. If nothing, then we use a uniform distribution with bounds -bound and bound where bound = inv(sqrt(out_dims)).

    • init_state: Initializer for hidden state

    • init_memory: Initializer for memory

    Inputs

    • Case 1a: Only a single input x of shape (in_dims, batch_size), train_state is set to false, train_memory is set to false - Creates a hidden state using init_state, hidden memory using init_memory and proceeds to Case 2.

    • Case 1b: Only a single input x of shape (in_dims, batch_size), train_state is set to true, train_memory is set to false - Repeats hidden_state vector from the parameters to match the shape of x, creates hidden memory using init_memory and proceeds to Case 2.

    • Case 1c: Only a single input x of shape (in_dims, batch_size), train_state is set to false, train_memory is set to true - Creates a hidden state using init_state, repeats the memory vector from parameters to match the shape of x and proceeds to Case 2.

    • Case 1d: Only a single input x of shape (in_dims, batch_size), train_state is set to true, train_memory is set to true - Repeats the hidden state and memory vectors from the parameters to match the shape of x and proceeds to Case 2.

    • Case 2: A tuple (x, (h, c)) is provided; the output and a tuple containing the updated hidden state and memory are returned.
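    A minimal usage sketch of Case 1a followed by Case 2 (the `3 => 5` dimensions and the batch size are illustrative, not part of the API):

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    cell = LSTMCell(3 => 5)                        # in_dims = 3, out_dims = 5
    ps, st = Lux.setup(rng, cell)                  # initialize parameters and states
    x = rand(Float32, 3, 4)                        # (in_dims, batch_size)
    (y, (h, c)), st = cell(x, ps, st)              # Case 1a: state and memory created internally
    (y2, (h2, c2)), st = cell((x, (h, c)), ps, st) # Case 2: carry (h, c) forward explicitly
    @assert size(y) == (5, 4)                      # (out_dims, batch_size)
    ```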

    Returns

    ",5)),t("ul",null,[t("li",null,[a[141]||(a[141]=t("p",null,"Tuple Containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[132]||(a[132]=s("Output ")),t("mjx-container",l1,[(Q(),n("svg",n1,a[130]||(a[130]=[i('',1)]))),a[131]||(a[131]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[133]||(a[133]=s(" of shape ")),a[134]||(a[134]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[139]||(a[139]=s("Tuple containing new hidden state ")),t("mjx-container",Q1,[(Q(),n("svg",T1,a[135]||(a[135]=[i('',1)]))),a[136]||(a[136]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[140]||(a[140]=s(" and new memory ")),t("mjx-container",d1,[(Q(),n("svg",o1,a[137]||(a[137]=[i('',1)]))),a[138]||(a[138]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[142]||(a[142]=t("li",null,[t("p",null,"Updated model state")],-1))]),a[155]||(a[155]=t("p",null,[t("strong",null,"Parameters")],-1)),t("ul",null,[t("li",null,[t("p",null,[a[145]||(a[145]=t("code",null,"weight_ih",-1)),a[146]||(a[146]=s(": Concatenated Weights to map from input space ")),t("mjx-container",r1,[(Q(),n("svg",p1,a[143]||(a[143]=[i('',1)]))),a[144]||(a[144]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"i")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"f")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"g")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"o")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[147]||(a[147]=s("."))])]),t("li",null,[t("p",null,[a[150]||(a[150]=t("code",null,"weight_hh",-1)),a[151]||(a[151]=s(": 
Concatenated Weights to map from hidden space ")),t("mjx-container",h1,[(Q(),n("svg",m1,a[148]||(a[148]=[i('',1)]))),a[149]||(a[149]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"i")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"f")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"g")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"o")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))])])]),a[152]||(a[152]=i("
  • bias_ih: Concatenated bias vector for the input-hidden connection (not present if use_bias=false)

  • bias_hh: Concatenated bias vector for the hidden-hidden connection (not present if use_bias=false)

  • hidden_state: Initial hidden state vector (not present if train_state=false)

  • memory: Initial memory vector (not present if train_memory=false)

  • ",4))]),a[156]||(a[156]=t("p",null,[t("strong",null,"States")],-1)),a[157]||(a[157]=t("ul",null,[t("li",null,[t("code",null,"rng"),s(": Controls the randomness (if any) in the initial state generation")])],-1)),a[158]||(a[158]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/src/layers/recurrent.jl#L309",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",g1,[t("summary",null,[a[159]||(a[159]=t("a",{id:"Lux.RNNCell",href:"#Lux.RNNCell"},[t("span",{class:"jlbinding"},"Lux.RNNCell")],-1)),a[160]||(a[160]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[173]||(a[173]=i(`
    julia
    RNNCell(in_dims => out_dims, activation=tanh; use_bias=True(), train_state=False(),
            init_bias=nothing, init_weight=nothing, init_state=zeros32)

    An Elman RNN cell with activation (typically set to tanh or relu).

    `,2)),t("p",null,[t("mjx-container",k1,[(Q(),n("svg",c1,a[161]||(a[161]=[i('',1)]))),a[162]||(a[162]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])]),t("mo",null,"="),t("mi",null,"a"),t("mi",null,"c"),t("mi",null,"t"),t("mi",null,"i"),t("mi",null,"v"),t("mi",null,"a"),t("mi",null,"t"),t("mi",null,"i"),t("mi",null,"o"),t("mi",null,"n"),t("mo",{stretchy:"false"},"("),t("mi",null,"w"),t("mi",null,"e"),t("mi",null,"i"),t("mi",null,"g"),t("mi",null,"h"),t("msub",null,[t("mi",null,"t"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"h")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("mi",null,"b"),t("mi",null,"i"),t("mi",null,"a"),t("msub",null,[t("mi",null,"s"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"h")])]),t("mo",null,"+"),t("mi",null,"w"),t("mi",null,"e"),t("mi",null,"i"),t("mi",null,"g"),t("mi",null,"h"),t("msub",null,[t("mi",null,"t"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"h")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("mi",null,"b"),t("mi",null,"i"),t("mi",null,"a"),t("msub",null,[t("mi",null,"s"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"h")])]),t("mo",{stretchy:"false"},")")])],-1))])]),a[174]||(a[174]=i("

    Arguments

    • in_dims: Input Dimension

    • out_dims: Output (Hidden State) Dimension

    • activation: Activation function

    • use_bias: Set to false to deactivate bias

    • train_state: Trainable initial hidden state can be activated by setting this to true

    • init_bias: Initializer for bias. If nothing, then we use a uniform distribution with bounds -bound and bound where bound = inv(sqrt(out_dims)).

    • init_weight: Initializer for weight. If nothing, then we use a uniform distribution with bounds -bound and bound where bound = inv(sqrt(out_dims)).

    • init_state: Initializer for hidden state

    Inputs

    • Case 1a: Only a single input x of shape (in_dims, batch_size), train_state is set to false - Creates a hidden state using init_state and proceeds to Case 2.

    • Case 1b: Only a single input x of shape (in_dims, batch_size), train_state is set to true - Repeats hidden_state from parameters to match the shape of x and proceeds to Case 2.

    • Case 2: A tuple (x, (h, )) is provided; the output and a tuple containing the updated hidden state are returned.
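    A minimal usage sketch of the two input cases (the `3 => 5` dimensions, the relu activation, and the batch size are illustrative, not part of the API):

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    cell = RNNCell(3 => 5, relu)                 # in_dims = 3, out_dims = 5, relu activation
    ps, st = Lux.setup(rng, cell)                # initialize parameters and states
    x = rand(Float32, 3, 4)                      # (in_dims, batch_size)
    (y, (h,)), st = cell(x, ps, st)              # Case 1a: hidden state created via init_state
    (y2, (h2,)), st = cell((x, (h,)), ps, st)    # Case 2: pass the carried hidden state
    @assert size(y) == (5, 4)                    # (out_dims, batch_size)
    ```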

    Returns

    ",5)),t("ul",null,[t("li",null,[a[171]||(a[171]=t("p",null,"Tuple containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[165]||(a[165]=s("Output ")),t("mjx-container",u1,[(Q(),n("svg",y1,a[163]||(a[163]=[i('',1)]))),a[164]||(a[164]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[166]||(a[166]=s(" of shape ")),a[167]||(a[167]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[170]||(a[170]=s("Tuple containing new hidden state ")),t("mjx-container",E1,[(Q(),n("svg",f1,a[168]||(a[168]=[i('',1)]))),a[169]||(a[169]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[172]||(a[172]=t("li",null,[t("p",null,"Updated model state")],-1))]),a[175]||(a[175]=i('

    Parameters

    • weight_ih: Maps the input to the hidden state.

    • weight_hh: Maps the hidden state to the hidden state.

    • bias_ih: Bias vector for the input-hidden connection (not present if use_bias=false)

    • bias_hh: Bias vector for the hidden-hidden connection (not present if use_bias=false)

    • hidden_state: Initial hidden state vector (not present if train_state=false)

    States

    • rng: Controls the randomness (if any) in the initial state generation

    source

    ',5))]),t("details",L1,[t("summary",null,[a[176]||(a[176]=t("a",{id:"Lux.Recurrence",href:"#Lux.Recurrence"},[t("span",{class:"jlbinding"},"Lux.Recurrence")],-1)),a[177]||(a[177]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[178]||(a[178]=i(`
    julia
    Recurrence(cell;
    +         init_state=zeros32, init_memory=zeros32)

    Long Short-Term Memory (LSTM) Cell

    `,2)),t("mjx-container",i1,[(Q(),n("svg",e1,a[128]||(a[128]=[i('',1)]))),a[129]||(a[129]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mtable",{displaystyle:"true",columnalign:"right left",columnspacing:"0em",rowspacing:"3pt"},[t("mtr",null,[t("mtd",null,[t("mi",null,"i")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"i")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"i")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"f")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"f")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"f")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("m
i",null,"f")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"g")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"t"),t("mi",null,"a"),t("mi",null,"n"),t("mi",null,"h"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"g")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"g")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"g")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"o")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"o")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"o")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"o")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"f"),t("mo",null,"⋅"),t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("mi",null,"i"),t("mo",null,"⋅"),t("mi",null,"g")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi"
,null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"o"),t("mo",null,"⋅"),t("mi",null,"t"),t("mi",null,"a"),t("mi",null,"n"),t("mi",null,"h"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])]),t("mo",{stretchy:"false"},")")])])])])],-1))]),a[154]||(a[154]=i("

    Arguments

    • in_dims: Input Dimension

    • out_dims: Output (Hidden State & Memory) Dimension

    • use_bias: Set to false to deactivate bias

    • train_state: Trainable initial hidden state can be activated by setting this to true

    • train_memory: Trainable initial memory can be activated by setting this to true

    • init_bias: Initializer for bias. Must be a tuple containing 4 functions. If a single value is passed, it is copied into a 4 element tuple. If nothing, then we use uniform distribution with bounds -bound and bound where bound = inv(sqrt(out_dims)).

    • init_weight: Initializer for weight. Must be a tuple containing 4 functions. If a single value is passed, it is copied into a 4 element tuple. If nothing, then we use uniform distribution with bounds -bound and bound where bound = inv(sqrt(out_dims)).

    • init_state: Initializer for hidden state

    • init_memory: Initializer for memory

    Inputs

    • Case 1a: Only a single input x of shape (in_dims, batch_size), train_state is set to false, train_memory is set to false - Creates a hidden state using init_state, hidden memory using init_memory and proceeds to Case 2.

    • Case 1b: Only a single input x of shape (in_dims, batch_size), train_state is set to true, train_memory is set to false - Repeats hidden_state vector from the parameters to match the shape of x, creates hidden memory using init_memory and proceeds to Case 2.

    • Case 1c: Only a single input x of shape (in_dims, batch_size), train_state is set to false, train_memory is set to true - Creates a hidden state using init_state, repeats the memory vector from parameters to match the shape of x and proceeds to Case 2.

    • Case 1d: Only a single input x of shape (in_dims, batch_size), train_state is set to true, train_memory is set to true - Repeats the hidden state and memory vectors from the parameters to match the shape of x and proceeds to Case 2.

    • Case 2: A tuple (x, (h, c)) is provided; the output and a tuple containing the updated hidden state and memory are returned.
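    The input cases above can be sketched as follows. This is a minimal, illustrative example: the 3 => 5 dimensions, the batch size, and the RNG setup are assumptions, not part of the docstring.

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    cell = LSTMCell(3 => 5)
    ps, st = Lux.setup(rng, cell)
    x = rand(Float32, 3, 2)                       # (in_dims, batch_size)

    (y, carry), st = cell(x, ps, st)              # Case 1a: state and memory built internally
    (y2, carry2), st = cell((x, carry), ps, st)   # Case 2: explicit (x, (h, c)) input
    ```
    
    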

    Returns

    ",5)),t("ul",null,[t("li",null,[a[141]||(a[141]=t("p",null,"Tuple Containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[132]||(a[132]=s("Output ")),t("mjx-container",l1,[(Q(),n("svg",n1,a[130]||(a[130]=[i('',1)]))),a[131]||(a[131]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[133]||(a[133]=s(" of shape ")),a[134]||(a[134]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[139]||(a[139]=s("Tuple containing new hidden state ")),t("mjx-container",Q1,[(Q(),n("svg",T1,a[135]||(a[135]=[i('',1)]))),a[136]||(a[136]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[140]||(a[140]=s(" and new memory ")),t("mjx-container",d1,[(Q(),n("svg",o1,a[137]||(a[137]=[i('',1)]))),a[138]||(a[138]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[142]||(a[142]=t("li",null,[t("p",null,"Updated model state")],-1))]),a[155]||(a[155]=t("p",null,[t("strong",null,"Parameters")],-1)),t("ul",null,[t("li",null,[t("p",null,[a[145]||(a[145]=t("code",null,"weight_ih",-1)),a[146]||(a[146]=s(": Concatenated Weights to map from input space ")),t("mjx-container",r1,[(Q(),n("svg",p1,a[143]||(a[143]=[i('',1)]))),a[144]||(a[144]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"i")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"f")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"g")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"o")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[147]||(a[147]=s("."))])]),t("li",null,[t("p",null,[a[150]||(a[150]=t("code",null,"weight_hh",-1)),a[151]||(a[151]=s(": 
Concatenated Weights to map from hidden space ")),t("mjx-container",h1,[(Q(),n("svg",m1,a[148]||(a[148]=[i('',1)]))),a[149]||(a[149]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"i")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"f")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"g")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"o")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))])])]),a[152]||(a[152]=i("
  • bias_ih: Bias vector for the input-hidden connection (not present if use_bias=false)

  • bias_hh: Concatenated Bias vector for the hidden-hidden connection (not present if use_bias=false)

  • hidden_state: Initial hidden state vector (not present if train_state=false)

  • memory: Initial memory vector (not present if train_memory=false)

  • ",4))]),a[156]||(a[156]=t("p",null,[t("strong",null,"States")],-1)),a[157]||(a[157]=t("ul",null,[t("li",null,[t("code",null,"rng"),s(": Controls the randomness (if any) in the initial state generation")])],-1)),a[158]||(a[158]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/src/layers/recurrent.jl#L309",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",g1,[t("summary",null,[a[159]||(a[159]=t("a",{id:"Lux.RNNCell",href:"#Lux.RNNCell"},[t("span",{class:"jlbinding"},"Lux.RNNCell")],-1)),a[160]||(a[160]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[173]||(a[173]=i(`
    julia
    RNNCell(in_dims => out_dims, activation=tanh; use_bias=True(), train_state=False(),
    +    init_bias=nothing, init_weight=nothing, init_state=zeros32)

    An Elman RNN cell with activation (typically set to tanh or relu).

    `,2)),t("p",null,[t("mjx-container",k1,[(Q(),n("svg",c1,a[161]||(a[161]=[i('',1)]))),a[162]||(a[162]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])]),t("mo",null,"="),t("mi",null,"a"),t("mi",null,"c"),t("mi",null,"t"),t("mi",null,"i"),t("mi",null,"v"),t("mi",null,"a"),t("mi",null,"t"),t("mi",null,"i"),t("mi",null,"o"),t("mi",null,"n"),t("mo",{stretchy:"false"},"("),t("mi",null,"w"),t("mi",null,"e"),t("mi",null,"i"),t("mi",null,"g"),t("mi",null,"h"),t("msub",null,[t("mi",null,"t"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"h")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("mi",null,"b"),t("mi",null,"i"),t("mi",null,"a"),t("msub",null,[t("mi",null,"s"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"h")])]),t("mo",null,"+"),t("mi",null,"w"),t("mi",null,"e"),t("mi",null,"i"),t("mi",null,"g"),t("mi",null,"h"),t("msub",null,[t("mi",null,"t"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"h")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("mi",null,"b"),t("mi",null,"i"),t("mi",null,"a"),t("msub",null,[t("mi",null,"s"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"h")])]),t("mo",{stretchy:"false"},")")])],-1))])]),a[174]||(a[174]=i("

    Arguments

    • in_dims: Input Dimension

    • out_dims: Output (Hidden State) Dimension

    • activation: Activation function

    • use_bias: Set to false to deactivate bias

    • train_state: Trainable initial hidden state can be activated by setting this to true

    • init_bias: Initializer for bias. If nothing, then we use uniform distribution with bounds -bound and bound where bound = inv(sqrt(out_dims)).

    • init_weight: Initializer for weight. If nothing, then we use uniform distribution with bounds -bound and bound where bound = inv(sqrt(out_dims)).

    • init_state: Initializer for hidden state

    Inputs

    • Case 1a: Only a single input x of shape (in_dims, batch_size), train_state is set to false - Creates a hidden state using init_state and proceeds to Case 2.

    • Case 1b: Only a single input x of shape (in_dims, batch_size), train_state is set to true - Repeats hidden_state from parameters to match the shape of x and proceeds to Case 2.

    • Case 2: A tuple (x, (h,)) is provided; the output and a tuple containing the updated hidden state are returned.
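    The two input cases above can be sketched as follows. This is a minimal, illustrative example: the 3 => 5 dimensions, the batch size, and the RNG setup are assumptions, not part of the docstring.

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    cell = RNNCell(3 => 5)
    ps, st = Lux.setup(rng, cell)
    x = rand(Float32, 3, 2)                       # (in_dims, batch_size)

    (y, carry), st = cell(x, ps, st)              # Case 1a: hidden state built via init_state
    (y2, carry2), st = cell((x, carry), ps, st)   # Case 2: explicit (x, (h,)) input
    ```
    
    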

    Returns

    ",5)),t("ul",null,[t("li",null,[a[171]||(a[171]=t("p",null,"Tuple containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[165]||(a[165]=s("Output ")),t("mjx-container",u1,[(Q(),n("svg",y1,a[163]||(a[163]=[i('',1)]))),a[164]||(a[164]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[166]||(a[166]=s(" of shape ")),a[167]||(a[167]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[170]||(a[170]=s("Tuple containing new hidden state ")),t("mjx-container",E1,[(Q(),n("svg",f1,a[168]||(a[168]=[i('',1)]))),a[169]||(a[169]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[172]||(a[172]=t("li",null,[t("p",null,"Updated model state")],-1))]),a[175]||(a[175]=i('

    Parameters

    • weight_ih: Maps the input to the hidden state.

    • weight_hh: Maps the hidden state to the hidden state.

    • bias_ih: Bias vector for the input-hidden connection (not present if use_bias=false)

    • bias_hh: Bias vector for the hidden-hidden connection (not present if use_bias=false)

    • hidden_state: Initial hidden state vector (not present if train_state=false)

    States

    • rng: Controls the randomness (if any) in the initial state generation

    source

    ',5))]),t("details",L1,[t("summary",null,[a[176]||(a[176]=t("a",{id:"Lux.Recurrence",href:"#Lux.Recurrence"},[t("span",{class:"jlbinding"},"Lux.Recurrence")],-1)),a[177]||(a[177]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[178]||(a[178]=i(`
    julia
    Recurrence(cell;
         ordering::AbstractTimeSeriesDataBatchOrdering=BatchLastIndex(),
         return_sequence::Bool=false)

    Wraps a recurrent cell (like RNNCell, LSTMCell, GRUCell) to automatically operate over a sequence of inputs.

    Relation to Flux.Recur

    This is completely distinct from Flux.Recur: it does not make the cell stateful; rather, it operates on an entire sequence of inputs at once. See StatefulRecurrentCell for functionality similar to Flux.Recur.

    Arguments

    • cell: A recurrent cell. See RNNCell, LSTMCell, GRUCell, for how the inputs/outputs of a recurrent cell must be structured.

    Keyword Arguments

    • return_sequence: If true, returns the entire sequence of outputs; otherwise returns only the last output. Defaults to false.

    • ordering: The ordering of the batch and time dimensions in the input. Defaults to BatchLastIndex(). Alternatively can be set to TimeLastIndex().

    Extended Help

    Inputs

    • If x is a
      • Tuple or Vector: Each element is fed to the cell sequentially.

      • Array (except a Vector): It is spliced along the penultimate dimension and each slice is fed to the cell sequentially.

    Returns

    • Output of the cell for the entire sequence.

    • Updated state of the cell.
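    A minimal sketch of wrapping a cell with Recurrence and feeding it a 3-D array. The cell choice, dimensions, and data are illustrative assumptions:

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    model = Recurrence(RNNCell(3 => 5))   # returns only the last output by default
    ps, st = Lux.setup(rng, model)
    x = rand(Float32, 3, 4, 2)            # (in_dims, timesteps, batch) with BatchLastIndex()
    y, st = model(x, ps, st)              # output for the final timestep
    ```
    
    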

    Tip

    Frameworks like TensorFlow have a special implementation of StackedRNNCells to handle sequentially composed RNN cells. In Lux, one can simply stack multiple Recurrence blocks in a Chain to achieve the same.

    Chain(
         Recurrence(RNNCell(inputsize => latentsize); return_sequence=true),
         Recurrence(RNNCell(latentsize => latentsize); return_sequence=true),
         :
         x -> stack(x; dims=2)
    -)

    For some discussion on this topic, see https://github.com/LuxDL/Lux.jl/issues/472.

    source

    `,14))]),t("details",x1,[t("summary",null,[a[179]||(a[179]=t("a",{id:"Lux.StatefulRecurrentCell",href:"#Lux.StatefulRecurrentCell"},[t("span",{class:"jlbinding"},"Lux.StatefulRecurrentCell")],-1)),a[180]||(a[180]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[181]||(a[181]=i('
    julia
    StatefulRecurrentCell(cell)

    Wraps a recurrent cell (like RNNCell, LSTMCell, GRUCell) and makes it stateful.

    To avoid undefined behavior, once the processing of a single sequence of data is complete, update the state with Lux.update_state(st, :carry, nothing).

    Arguments

    • cell: A recurrent cell. See RNNCell, LSTMCell, GRUCell, for how the inputs/outputs of a recurrent cell must be structured.

    Inputs

    • Input to the cell.

    Returns

    • Output of the cell for the entire sequence.

    • Updated state of the cell, including the updated carry.

    States

    • NamedTuple containing:
      • cell: Same as cell.

      • carry: The carry state of the cell.
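    A minimal sketch of stateful, one-timestep-at-a-time processing, including the state reset described above. Dimensions and data are illustrative assumptions:

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    model = StatefulRecurrentCell(RNNCell(3 => 5))
    ps, st = Lux.setup(rng, model)
    x = rand(Float32, 3, 2)

    y1, st = model(x, ps, st)                   # first call initializes the carry
    y2, st = model(x, ps, st)                   # subsequent calls reuse the stored carry
    st = Lux.update_state(st, :carry, nothing)  # reset before processing a new sequence
    ```
    
    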

    source

    ',12))]),t("details",b1,[t("summary",null,[a[182]||(a[182]=t("a",{id:"Lux.BidirectionalRNN",href:"#Lux.BidirectionalRNN"},[t("span",{class:"jlbinding"},"Lux.BidirectionalRNN")],-1)),a[183]||(a[183]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[184]||(a[184]=i(`
    julia
    BidirectionalRNN(cell::AbstractRecurrentCell,
    +)

    For some discussion on this topic, see https://github.com/LuxDL/Lux.jl/issues/472.

    source

    `,14))]),t("details",x1,[t("summary",null,[a[179]||(a[179]=t("a",{id:"Lux.StatefulRecurrentCell",href:"#Lux.StatefulRecurrentCell"},[t("span",{class:"jlbinding"},"Lux.StatefulRecurrentCell")],-1)),a[180]||(a[180]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[181]||(a[181]=i('
    julia
    StatefulRecurrentCell(cell)

    Wraps a recurrent cell (like RNNCell, LSTMCell, GRUCell) and makes it stateful.

    To avoid undefined behavior, once the processing of a single sequence of data is complete, update the state with Lux.update_state(st, :carry, nothing).

    Arguments

    Inputs

    Returns

    States

    source

    ',12))]),t("details",b1,[t("summary",null,[a[182]||(a[182]=t("a",{id:"Lux.BidirectionalRNN",href:"#Lux.BidirectionalRNN"},[t("span",{class:"jlbinding"},"Lux.BidirectionalRNN")],-1)),a[183]||(a[183]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[184]||(a[184]=i(`
    julia
    BidirectionalRNN(cell::AbstractRecurrentCell,
         backward_cell::Union{AbstractRecurrentCell, Nothing}=nothing;
         merge_mode::Union{Function, Nothing}=vcat,
    -    ordering::AbstractTimeSeriesDataBatchOrdering=BatchLastIndex())

    Bidirectional RNN wrapper.
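    A minimal sketch of the wrapper with the default merge_mode=vcat, which concatenates the forward and backward outputs along the feature dimension. The cell, dimensions, and data are illustrative assumptions:

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    model = BidirectionalRNN(RNNCell(3 => 5))
    ps, st = Lux.setup(rng, model)
    x = rand(Float32, 3, 4, 2)   # (in_dims, timesteps, batch) with BatchLastIndex()
    y, st = model(x, ps, st)
    ```
    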

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    source

    `,16))]),a[275]||(a[275]=t("h2",{id:"Linear-Layers",tabindex:"-1"},[s("Linear Layers "),t("a",{class:"header-anchor",href:"#Linear-Layers","aria-label":'Permalink to "Linear Layers {#Linear-Layers}"'},"​")],-1)),t("details",w1,[t("summary",null,[a[185]||(a[185]=t("a",{id:"Lux.Bilinear",href:"#Lux.Bilinear"},[t("span",{class:"jlbinding"},"Lux.Bilinear")],-1)),a[186]||(a[186]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[187]||(a[187]=i(`
    julia
    Bilinear((in1_dims, in2_dims) => out, activation=identity; init_weight=nothing,
    +    ordering::AbstractTimeSeriesDataBatchOrdering=BatchLastIndex())

    Bidirectional RNN wrapper.

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    source

    `,16))]),a[275]||(a[275]=t("h2",{id:"Linear-Layers",tabindex:"-1"},[s("Linear Layers "),t("a",{class:"header-anchor",href:"#Linear-Layers","aria-label":'Permalink to "Linear Layers {#Linear-Layers}"'},"​")],-1)),t("details",w1,[t("summary",null,[a[185]||(a[185]=t("a",{id:"Lux.Bilinear",href:"#Lux.Bilinear"},[t("span",{class:"jlbinding"},"Lux.Bilinear")],-1)),a[186]||(a[186]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[187]||(a[187]=i(`
    julia
    Bilinear((in1_dims, in2_dims) => out, activation=identity; init_weight=nothing,
              init_bias=nothing, use_bias=True())
     Bilinear(in12_dims => out, activation=identity; init_weight=nothing,
    -         init_bias=nothing, use_bias=True())

    Create a fully connected layer between two inputs and an output; otherwise similar to Dense. Its output, given vectors x & y, is another vector z with, for all i in 1:out:

    z[i] = activation(x' * W[i, :, :] * y + bias[i])

    If x and y are matrices, then each column of the output z = B(x, y) is of this form, with B the Bilinear layer.
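    A minimal sketch of the two-input form on matrix inputs. The dimensions and data are illustrative assumptions:

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    layer = Bilinear((4, 3) => 2)
    ps, st = Lux.setup(rng, layer)
    x, y = rand(Float32, 4, 7), rand(Float32, 3, 7)
    z, st = layer((x, y), ps, st)   # each column follows z[i] = activation(x' * W[i, :, :] * y + bias[i])
    ```
    
    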

    Arguments

    Keyword Arguments

    Input

    Returns

    Parameters

    source

    `,15))]),t("details",H1,[t("summary",null,[a[188]||(a[188]=t("a",{id:"Lux.Dense",href:"#Lux.Dense"},[t("span",{class:"jlbinding"},"Lux.Dense")],-1)),a[189]||(a[189]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[190]||(a[190]=i(`
    julia
    Dense(in_dims => out_dims, activation=identity; init_weight=nothing,
    -      init_bias=nothing, use_bias=True())

    Create a traditional fully connected layer, whose forward pass is given by: y = activation.(weight * x .+ bias)
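    The forward pass above can be sketched as follows; the 4 => 2 dimensions, relu activation, and batch size are illustrative assumptions:

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    layer = Dense(4 => 2, relu)
    ps, st = Lux.setup(rng, layer)
    x = rand(Float32, 4, 8)
    y, st = layer(x, ps, st)   # y = relu.(ps.weight * x .+ ps.bias)
    ```
    
    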

    Arguments

    Keyword Arguments

    Input

    Returns

    Parameters

    source

    `,13))]),t("details",C1,[t("summary",null,[a[191]||(a[191]=t("a",{id:"Lux.Embedding",href:"#Lux.Embedding"},[t("span",{class:"jlbinding"},"Lux.Embedding")],-1)),a[192]||(a[192]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[193]||(a[193]=i('
    julia
    Embedding(in_dims => out_dims; init_weight=rand32)

    A lookup table that stores embeddings of dimension out_dims for a vocabulary of size in_dims. When the vocabulary is multi-dimensional, the input is expected to be a tuple of Cartesian indices.

    This layer is often used to store word embeddings and retrieve them using indices.
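    A minimal lookup sketch; the vocabulary size, embedding dimension, and indices are illustrative assumptions:

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    emb = Embedding(26 => 8)   # vocabulary of 26, embeddings of length 8
    ps, st = Lux.setup(rng, emb)
    y, st = emb([3, 1, 20], ps, st)   # looks up three embeddings
    ```
    
    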

    Arguments

    Keyword Arguments

    Input

    Returns

    source

    ',12))]),t("details",F1,[t("summary",null,[a[194]||(a[194]=t("a",{id:"Lux.Scale",href:"#Lux.Scale"},[t("span",{class:"jlbinding"},"Lux.Scale")],-1)),a[195]||(a[195]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[196]||(a[196]=i('
    julia
    Scale(dims, activation=identity; init_weight=ones32, init_bias=zeros32, use_bias=True())

    Create a sparsely connected layer with a very specific structure (only the diagonal elements are non-zero). The forward pass is given by: y = activation.(weight .* x .+ bias)
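    The elementwise forward pass above can be sketched as follows; the dimension and data are illustrative assumptions:

    ```julia
    using Lux, Random

    rng = Random.default_rng()
    layer = Scale(4)
    ps, st = Lux.setup(rng, layer)
    x = rand(Float32, 4, 2)
    y, st = layer(x, ps, st)   # y = ps.weight .* x .+ ps.bias, elementwise
    ```
    
    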

    Arguments

    Keyword Arguments

    Input

    Returns

    Parameters

    source

    ',13))]),a[276]||(a[276]=t("h2",{id:"Misc.-Helper-Layers",tabindex:"-1"},[s("Misc. Helper Layers "),t("a",{class:"header-anchor",href:"#Misc.-Helper-Layers","aria-label":'Permalink to "Misc. Helper Layers {#Misc.-Helper-Layers}"'},"​")],-1)),t("details",D1,[t("summary",null,[a[197]||(a[197]=t("a",{id:"Lux.FlattenLayer",href:"#Lux.FlattenLayer"},[t("span",{class:"jlbinding"},"Lux.FlattenLayer")],-1)),a[198]||(a[198]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[199]||(a[199]=i(`
    julia
    FlattenLayer(; N = nothing)

    Flattens the passed array into a matrix.

    Keyword Arguments

    Inputs

    Returns

    Example

    julia
    julia> model = FlattenLayer()
    +         init_bias=nothing, use_bias=True())

    Create a fully connected layer between two inputs and an output; otherwise similar to Dense. Its output, given vectors x & y, is another vector z with, for all i in 1:out:

    z[i] = activation(x' * W[i, :, :] * y + bias[i])

    If x and y are matrices, then each column of the output z = B(x, y) is of this form, with B the Bilinear layer.

    Arguments

    Keyword Arguments

    Input

    Returns

    Parameters

    source

    `,15))]),t("details",H1,[t("summary",null,[a[188]||(a[188]=t("a",{id:"Lux.Dense",href:"#Lux.Dense"},[t("span",{class:"jlbinding"},"Lux.Dense")],-1)),a[189]||(a[189]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[190]||(a[190]=i(`
    julia
    Dense(in_dims => out_dims, activation=identity; init_weight=nothing,
    +      init_bias=nothing, use_bias=True())

    Create a traditional fully connected layer, whose forward pass is given by: y = activation.(weight * x .+ bias)

    Arguments

    Keyword Arguments

    Input

    Returns

    Parameters

    source

    `,13))]),t("details",C1,[t("summary",null,[a[191]||(a[191]=t("a",{id:"Lux.Embedding",href:"#Lux.Embedding"},[t("span",{class:"jlbinding"},"Lux.Embedding")],-1)),a[192]||(a[192]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[193]||(a[193]=i('
    julia
    Embedding(in_dims => out_dims; init_weight=rand32)

    A lookup table that stores embeddings of dimension out_dims for a vocabulary of size in_dims. When the vocabulary is multi-dimensional, the input is expected to be a tuple of Cartesian indices.

    This layer is often used to store word embeddings and retrieve them using indices.

    Arguments

    Keyword Arguments

    Input

    Returns

    source

    ',12))]),t("details",D1,[t("summary",null,[a[194]||(a[194]=t("a",{id:"Lux.Scale",href:"#Lux.Scale"},[t("span",{class:"jlbinding"},"Lux.Scale")],-1)),a[195]||(a[195]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[196]||(a[196]=i('
    julia
    Scale(dims, activation=identity; init_weight=ones32, init_bias=zeros32, use_bias=True())

    Create a sparsely connected layer with a very specific structure (only the diagonal elements are non-zero). The forward pass is given by: y = activation.(weight .* x .+ bias)

    Arguments

    Keyword Arguments

    Input

    Returns

    Parameters

    source

    ',13))]),a[276]||(a[276]=t("h2",{id:"Misc.-Helper-Layers",tabindex:"-1"},[s("Misc. Helper Layers "),t("a",{class:"header-anchor",href:"#Misc.-Helper-Layers","aria-label":'Permalink to "Misc. Helper Layers {#Misc.-Helper-Layers}"'},"​")],-1)),t("details",F1,[t("summary",null,[a[197]||(a[197]=t("a",{id:"Lux.FlattenLayer",href:"#Lux.FlattenLayer"},[t("span",{class:"jlbinding"},"Lux.FlattenLayer")],-1)),a[198]||(a[198]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[199]||(a[199]=i(`
    julia
    FlattenLayer(; N = nothing)

    Flattens the passed array into a matrix.

    Keyword Arguments

    Inputs

    Returns

    Example

    julia
    julia> model = FlattenLayer()
     FlattenLayer{Nothing}(nothing)
     
     julia> rng = Random.default_rng();
    @@ -87,9 +87,9 @@ import{_ as T,c as n,j as t,a as s,G as l,a2 as i,B as d,o as Q}from"./chunks/fr
     
     julia> y, st_new = model(x, ps, st);
            size(y)
    -(8, 2)

    source

    `,11))]),t("details",M1,[t("summary",null,[a[200]||(a[200]=t("a",{id:"Lux.Maxout",href:"#Lux.Maxout"},[t("span",{class:"jlbinding"},"Lux.Maxout")],-1)),a[201]||(a[201]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[202]||(a[202]=i(`
    julia
    Maxout(layers...)
    +(8, 2)

    source

    `,11))]),t("details",_1,[t("summary",null,[a[200]||(a[200]=t("a",{id:"Lux.Maxout",href:"#Lux.Maxout"},[t("span",{class:"jlbinding"},"Lux.Maxout")],-1)),a[201]||(a[201]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[202]||(a[202]=i(`
    julia
    Maxout(layers...)
     Maxout(; layers...)
    -Maxout(f::Function, n_alts::Int)

    This contains a number of internal layers, each of which receives the same input. Its output is the elementwise maximum of the internal layers' outputs.

    Maxout over linear dense layers satisfies the universal approximation theorem. See [1].

    See also Parallel to reduce with other operators.
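    The elementwise-maximum reduction described above can be sketched directly (an illustrative sketch; the three arrays are assumptions, standing in for the internal layers' outputs):

    ```julia
    # Maxout idea: take the elementwise maximum over several internal layers' outputs.
    outs = ([1.0 5.0], [3.0 2.0], [2.0 4.0])   # outputs of three hypothetical internal layers
    y = max.(outs...)                           # elementwise max across the three arrays
    ```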

    Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    References

    [1] Goodfellow, Warde-Farley, Mirza, Courville & Bengio "Maxout Networks" https://arxiv.org/abs/1302.4389

    source

    `,18))]),t("details",v1,[t("summary",null,[a[203]||(a[203]=t("a",{id:"Lux.NoOpLayer",href:"#Lux.NoOpLayer"},[t("span",{class:"jlbinding"},"Lux.NoOpLayer")],-1)),a[204]||(a[204]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[205]||(a[205]=i(`
    julia
    NoOpLayer()

    As the name suggests, this layer does nothing but allows pretty printing of layers. Whatever input is passed is returned unchanged.

    Example

    julia
    julia> model = NoOpLayer()
    +Maxout(f::Function, n_alts::Int)

    This contains a number of internal layers, each of which receives the same input. Its output is the elementwise maximum of the internal layers' outputs.

    Maxout over linear dense layers satisfies the universal approximation theorem. See [1].

    See also Parallel to reduce with other operators.

    Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    References

    [1] Goodfellow, Warde-Farley, Mirza, Courville & Bengio "Maxout Networks" https://arxiv.org/abs/1302.4389

    source

    `,18))]),t("details",M1,[t("summary",null,[a[203]||(a[203]=t("a",{id:"Lux.NoOpLayer",href:"#Lux.NoOpLayer"},[t("span",{class:"jlbinding"},"Lux.NoOpLayer")],-1)),a[204]||(a[204]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[205]||(a[205]=i(`
    julia
    NoOpLayer()

    As the name suggests, this layer does nothing but allows pretty printing of layers. Whatever input is passed is returned unchanged.

    Example

    julia
    julia> model = NoOpLayer()
     NoOpLayer()
     
     julia> rng = Random.default_rng();
    @@ -99,7 +99,7 @@ import{_ as T,c as n,j as t,a as s,G as l,a2 as i,B as d,o as Q}from"./chunks/fr
     1
     
     julia> y, st_new = model(x, ps, st)
    -(1, NamedTuple())

    source

    `,5))]),t("details",Z1,[t("summary",null,[a[206]||(a[206]=t("a",{id:"Lux.ReshapeLayer",href:"#Lux.ReshapeLayer"},[t("span",{class:"jlbinding"},"Lux.ReshapeLayer")],-1)),a[207]||(a[207]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[208]||(a[208]=i(`
    julia
    ReshapeLayer(dims)

    Reshapes the passed array to have a size of (dims..., :)

    Arguments

    Inputs

    Returns

    Example

    julia
    julia> model = ReshapeLayer((2, 2))
    +(1, NamedTuple())

    source

    `,5))]),t("details",A1,[t("summary",null,[a[206]||(a[206]=t("a",{id:"Lux.ReshapeLayer",href:"#Lux.ReshapeLayer"},[t("span",{class:"jlbinding"},"Lux.ReshapeLayer")],-1)),a[207]||(a[207]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[208]||(a[208]=i(`
    julia
    ReshapeLayer(dims)

    Reshapes the passed array to have a size of (dims..., :)

    Arguments

    Inputs

    Returns

    Example

    julia
    julia> model = ReshapeLayer((2, 2))
     ReshapeLayer(output_dims = (2, 2, :))
     
     julia> rng = Random.default_rng();
    @@ -109,7 +109,7 @@ import{_ as T,c as n,j as t,a as s,G as l,a2 as i,B as d,o as Q}from"./chunks/fr
     
     julia> y, st_new = model(x, ps, st);
            size(y)
    -(2, 2, 3)

    source

    `,11))]),t("details",j1,[t("summary",null,[a[209]||(a[209]=t("a",{id:"Lux.SelectDim",href:"#Lux.SelectDim"},[t("span",{class:"jlbinding"},"Lux.SelectDim")],-1)),a[210]||(a[210]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[211]||(a[211]=i('
    julia
    SelectDim(dim, i)

    Return a view of all the data of the input x where the index for dimension dim equals i. Equivalent to view(x, :, :, ..., i, :, :, ...) where i is in position dim.
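    The equivalence to `view` can be sketched on a small array (an illustrative sketch; the shape and indices are assumptions):

    ```julia
    x = reshape(collect(1:12), 3, 4)   # a 3×4 matrix

    # SelectDim(2, 3) on this input is equivalent to:
    y = view(x, :, 3)                  # all rows, column 3 — no copy is made
    ```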

    Arguments

    Inputs

    Returns

    source

    ',9))]),t("details",A1,[t("summary",null,[a[212]||(a[212]=t("a",{id:"Lux.WrappedFunction",href:"#Lux.WrappedFunction"},[t("span",{class:"jlbinding"},"Lux.WrappedFunction")],-1)),a[213]||(a[213]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[214]||(a[214]=i('
    julia
    WrappedFunction(f)

    Wraps a stateless, parameterless function. Useful when adding a plain function to a Chain. For example, Chain(x -> relu.(x)) would not work; the correct form is Chain((x, ps, st) -> (relu.(x), st)). An easier alternative is Chain(WrappedFunction(Base.Fix1(broadcast, relu)))
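    The two equivalent forms from the text can be sketched side by side (a minimal sketch, assuming Lux is loaded and `relu` is in scope via Lux's re-exports):

    ```julia
    using Lux

    # Hand-written Lux-compatible closure: threads the state `st` through untouched.
    layer_manual = Chain((x, ps, st) -> (relu.(x), st))

    # Idiomatic version: WrappedFunction handles the (ps, st) plumbing.
    layer_wrapped = Chain(WrappedFunction(Base.Fix1(broadcast, relu)))
    ```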

    Arguments

    Inputs

    Returns

    source

    ',9))]),t("details",B1,[t("summary",null,[a[215]||(a[215]=t("a",{id:"Lux.ReverseSequence",href:"#Lux.ReverseSequence"},[t("span",{class:"jlbinding"},"Lux.ReverseSequence")],-1)),a[216]||(a[216]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[217]||(a[217]=i(`
    julia
    ReverseSequence(dim = nothing)

    Reverses the specified dimension dim of the passed array.

    Arguments

    Inputs

    Returns

    Example

    julia
    julia> model = ReverseSequence()
    +(2, 2, 3)

    source

    `,11))]),t("details",v1,[t("summary",null,[a[209]||(a[209]=t("a",{id:"Lux.SelectDim",href:"#Lux.SelectDim"},[t("span",{class:"jlbinding"},"Lux.SelectDim")],-1)),a[210]||(a[210]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[211]||(a[211]=i('
    julia
    SelectDim(dim, i)

    Return a view of all the data of the input x where the index for dimension dim equals i. Equivalent to view(x,:,:,...,i,:,:,...) where i is in position d.

    Arguments

    Inputs

    Returns

    source

    ',9))]),t("details",Z1,[t("summary",null,[a[212]||(a[212]=t("a",{id:"Lux.WrappedFunction",href:"#Lux.WrappedFunction"},[t("span",{class:"jlbinding"},"Lux.WrappedFunction")],-1)),a[213]||(a[213]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[214]||(a[214]=i('
    julia
    WrappedFunction(f)

    Wraps a stateless, parameterless function. Useful when adding a plain function to a Chain. For example, Chain(x -> relu.(x)) would not work; the correct form is Chain((x, ps, st) -> (relu.(x), st)). An easier alternative is Chain(WrappedFunction(Base.Fix1(broadcast, relu)))

    Arguments

    Inputs

    Returns

    source

    ',9))]),t("details",j1,[t("summary",null,[a[215]||(a[215]=t("a",{id:"Lux.ReverseSequence",href:"#Lux.ReverseSequence"},[t("span",{class:"jlbinding"},"Lux.ReverseSequence")],-1)),a[216]||(a[216]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[217]||(a[217]=i(`
    julia
    ReverseSequence(dim = nothing)

    Reverses the specified dimension dim of the passed array.

    Arguments

    Inputs

    Returns

    Example

    julia
    julia> model = ReverseSequence()
     ReverseSequence{Nothing}(nothing)
     
     julia> rng = Random.default_rng();
    @@ -118,15 +118,15 @@ import{_ as T,c as n,j as t,a as s,G as l,a2 as i,B as d,o as Q}from"./chunks/fr
            x = [1.0, 2.0, 3.0];
     
     julia> y, st_new = model(x, ps, st)
    -([3.0, 2.0, 1.0], NamedTuple())

    source

    `,11))]),a[277]||(a[277]=t("h2",{id:"Normalization-Layers",tabindex:"-1"},[s("Normalization Layers "),t("a",{class:"header-anchor",href:"#Normalization-Layers","aria-label":'Permalink to "Normalization Layers {#Normalization-Layers}"'},"​")],-1)),t("details",V1,[t("summary",null,[a[218]||(a[218]=t("a",{id:"Lux.BatchNorm",href:"#Lux.BatchNorm"},[t("span",{class:"jlbinding"},"Lux.BatchNorm")],-1)),a[219]||(a[219]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[225]||(a[225]=i(`
    julia
    BatchNorm(chs::Integer, activation=identity; init_bias=zeros32, init_scale=ones32,
    -          affine=True(), track_stats=True(), epsilon=1f-5, momentum=0.1f0)

    Batch Normalization layer.

    `,2)),t("p",null,[a[222]||(a[222]=t("code",null,"BatchNorm",-1)),a[223]||(a[223]=s(" computes the mean and variance for each ")),t("mjx-container",N1,[(Q(),n("svg",R1,a[220]||(a[220]=[i('',1)]))),a[221]||(a[221]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mi",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mi",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mi",null,"×"),t("mn",null,"1"),t("mi",null,"×"),t("msub",null,[t("mi",null,"D"),t("mi",null,"N")])])],-1))]),a[224]||(a[224]=s(" input slice and normalises the input accordingly."))]),a[226]||(a[226]=i(`

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    Use Lux.testmode during inference.

    Example

    julia
    julia> Chain(Dense(784 => 64), BatchNorm(64, relu), Dense(64 => 10), BatchNorm(10))
    +([3.0, 2.0, 1.0], NamedTuple())

    source

    `,11))]),a[277]||(a[277]=t("h2",{id:"Normalization-Layers",tabindex:"-1"},[s("Normalization Layers "),t("a",{class:"header-anchor",href:"#Normalization-Layers","aria-label":'Permalink to "Normalization Layers {#Normalization-Layers}"'},"​")],-1)),t("details",V1,[t("summary",null,[a[218]||(a[218]=t("a",{id:"Lux.BatchNorm",href:"#Lux.BatchNorm"},[t("span",{class:"jlbinding"},"Lux.BatchNorm")],-1)),a[219]||(a[219]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[225]||(a[225]=i(`
    julia
    BatchNorm(chs::Integer, activation=identity; init_bias=zeros32, init_scale=ones32,
    +          affine=True(), track_stats=True(), epsilon=1f-5, momentum=0.1f0)

    Batch Normalization layer.

    `,2)),t("p",null,[a[222]||(a[222]=t("code",null,"BatchNorm",-1)),a[223]||(a[223]=s(" computes the mean and variance for each ")),t("mjx-container",B1,[(Q(),n("svg",N1,a[220]||(a[220]=[i('',1)]))),a[221]||(a[221]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mi",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mi",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mi",null,"×"),t("mn",null,"1"),t("mi",null,"×"),t("msub",null,[t("mi",null,"D"),t("mi",null,"N")])])],-1))]),a[224]||(a[224]=s(" input slice and normalises the input accordingly."))]),a[226]||(a[226]=i(`

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    Use Lux.testmode during inference.

    Example

    julia
    julia> Chain(Dense(784 => 64), BatchNorm(64, relu), Dense(64 => 10), BatchNorm(10))
     Chain(
         layer_1 = Dense(784 => 64),         # 50_240 parameters
         layer_2 = BatchNorm(64, relu, affine=true, track_stats=true),  # 128 parameters, plus 129
         layer_3 = Dense(64 => 10),          # 650 parameters
         layer_4 = BatchNorm(10, affine=true, track_stats=true),  # 20 parameters, plus 21
     )         # Total: 51_038 parameters,
    -          #        plus 150 states.

    Warning

    Passing a batch size of 1 during training will result in an error.

    See also BatchNorm, InstanceNorm, LayerNorm, WeightNorm

    source

    `,19))]),t("details",O1,[t("summary",null,[a[227]||(a[227]=t("a",{id:"Lux.GroupNorm",href:"#Lux.GroupNorm"},[t("span",{class:"jlbinding"},"Lux.GroupNorm")],-1)),a[228]||(a[228]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[229]||(a[229]=i(`
    julia
    GroupNorm(chs::Integer, groups::Integer, activation=identity; init_bias=zeros32,
    +          #        plus 150 states.

    Warning

    Passing a batch size of 1 during training will result in an error.

    See also BatchNorm, InstanceNorm, LayerNorm, WeightNorm

    source

    `,19))]),t("details",S1,[t("summary",null,[a[227]||(a[227]=t("a",{id:"Lux.GroupNorm",href:"#Lux.GroupNorm"},[t("span",{class:"jlbinding"},"Lux.GroupNorm")],-1)),a[228]||(a[228]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[229]||(a[229]=i(`
    julia
    GroupNorm(chs::Integer, groups::Integer, activation=identity; init_bias=zeros32,
               init_scale=ones32, affine=true, epsilon=1f-5)

    Group Normalization layer.

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    Use Lux.testmode during inference.

    Example

    julia
    julia> Chain(Dense(784 => 64), GroupNorm(64, 4, relu), Dense(64 => 10), GroupNorm(10, 5))
     Chain(
         layer_1 = Dense(784 => 64),         # 50_240 parameters
    @@ -134,8 +134,8 @@ import{_ as T,c as n,j as t,a as s,G as l,a2 as i,B as d,o as Q}from"./chunks/fr
         layer_3 = Dense(64 => 10),          # 650 parameters
         layer_4 = GroupNorm(10, 5, affine=true),  # 20 parameters
     )         # Total: 51_038 parameters,
    -          #        plus 0 states.

    See also GroupNorm, InstanceNorm, LayerNorm, WeightNorm

    source

    `,20))]),t("details",z1,[t("summary",null,[a[230]||(a[230]=t("a",{id:"Lux.InstanceNorm",href:"#Lux.InstanceNorm"},[t("span",{class:"jlbinding"},"Lux.InstanceNorm")],-1)),a[231]||(a[231]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[236]||(a[236]=i(`
    julia
    InstanceNorm(chs::Integer, activation=identity; init_bias=zeros32, init_scale=ones32,
    -             affine=False(), track_stats=False(), epsilon=1f-5, momentum=0.1f0)

    Instance Normalization. For details see [1].

    `,2)),t("p",null,[a[234]||(a[234]=s("Instance Normalization computes the mean and variance for each ")),t("mjx-container",I1,[(Q(),n("svg",S1,a[232]||(a[232]=[i('',1)]))),a[233]||(a[233]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("mn",null,"1")])],-1))]),a[235]||(a[235]=s("` input slice and normalises the input accordingly."))]),a[237]||(a[237]=i(`

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    Use Lux.testmode during inference.

    Example

    julia
    julia> Chain(Dense(784 => 64), InstanceNorm(64, relu; affine=true), Dense(64 => 10),
    +          #        plus 0 states.

    See also GroupNorm, InstanceNorm, LayerNorm, WeightNorm

    source

    `,20))]),t("details",R1,[t("summary",null,[a[230]||(a[230]=t("a",{id:"Lux.InstanceNorm",href:"#Lux.InstanceNorm"},[t("span",{class:"jlbinding"},"Lux.InstanceNorm")],-1)),a[231]||(a[231]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[236]||(a[236]=i(`
    julia
    InstanceNorm(chs::Integer, activation=identity; init_bias=zeros32, init_scale=ones32,
    +             affine=False(), track_stats=False(), epsilon=1f-5, momentum=0.1f0)

    Instance Normalization. For details see [1].

    `,2)),t("p",null,[a[234]||(a[234]=s("Instance Normalization computes the mean and variance for each ")),t("mjx-container",I1,[(Q(),n("svg",P1,a[232]||(a[232]=[i('',1)]))),a[233]||(a[233]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("mn",null,"1")])],-1))]),a[235]||(a[235]=s("` input slice and normalises the input accordingly."))]),a[237]||(a[237]=i(`

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    States

    Use Lux.testmode during inference.

    Example

    julia
    julia> Chain(Dense(784 => 64), InstanceNorm(64, relu; affine=true), Dense(64 => 10),
                InstanceNorm(10, relu; affine=true))
     Chain(
         layer_1 = Dense(784 => 64),         # 50_240 parameters
    @@ -143,7 +143,7 @@ import{_ as T,c as n,j as t,a as s,G as l,a2 as i,B as d,o as Q}from"./chunks/fr
         layer_3 = Dense(64 => 10),          # 650 parameters
         layer_4 = InstanceNorm(10, relu, affine=true, track_stats=false),  # 20 parameters, plus 1
     )         # Total: 51_038 parameters,
    -          #        plus 2 states.

    References

    [1] Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. "Instance normalization: The missing ingredient for fast stylization." arXiv preprint arXiv:1607.08022 (2016).

    See also BatchNorm, GroupNorm, LayerNorm, WeightNorm

    source

    `,20))]),t("details",P1,[t("summary",null,[a[238]||(a[238]=t("a",{id:"Lux.LayerNorm",href:"#Lux.LayerNorm"},[t("span",{class:"jlbinding"},"Lux.LayerNorm")],-1)),a[239]||(a[239]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[255]||(a[255]=i(`
    julia
    LayerNorm(shape::NTuple{N, Int}, activation=identity; epsilon=1f-5, dims=Colon(),
    -          affine=true, init_bias=zeros32, init_scale=ones32)

    Computes mean and standard deviation over the whole input array, and uses these to normalize the whole array. Optionally applies an elementwise affine transformation afterwards.
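    The normalization described above can be sketched in plain Julia (an illustrative sketch; the `γ`, `β`, and `ϵ` values are assumptions, and uncorrected variance is assumed to match the population statistics in the formula):

    ```julia
    using Statistics

    x = Float32[1.0, 2.0, 3.0, 4.0]
    γ, β, ϵ = 1.0f0, 0.0f0, 1f-5

    # y = (x - E[x]) / sqrt(Var[x] + ϵ) * γ + β
    y = (x .- mean(x)) ./ sqrt(var(x; corrected=false) + ϵ) .* γ .+ β
    ```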

    `,2)),t("p",null,[a[242]||(a[242]=s("Given an input array ")),t("mjx-container",_1,[(Q(),n("svg",G1,a[240]||(a[240]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D465",d:"M52 289Q59 331 106 386T222 442Q257 442 286 424T329 379Q371 442 430 442Q467 442 494 420T522 361Q522 332 508 314T481 292T458 288Q439 288 427 299T415 328Q415 374 465 391Q454 404 425 404Q412 404 406 402Q368 386 350 336Q290 115 290 78Q290 50 306 38T341 26Q378 26 414 59T463 140Q466 150 469 151T485 153H489Q504 153 504 145Q504 144 502 134Q486 77 440 33T333 -11Q263 -11 227 52Q186 -10 133 -10H127Q78 -10 57 16T35 71Q35 103 54 123T99 143Q142 143 142 101Q142 81 130 66T107 46T94 41L91 40Q91 39 97 36T113 29T132 26Q168 26 194 71Q203 87 217 139T245 247T261 313Q266 340 266 352Q266 380 251 392T217 404Q177 404 142 372T93 290Q91 281 88 280T72 278H58Q52 284 52 289Z",style:{"stroke-width":"3"}})])])],-1)]))),a[241]||(a[241]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"x")])],-1))]),a[243]||(a[243]=s(", this layer computes"))]),t("mjx-container",W1,[(Q(),n("svg",X1,a[244]||(a[244]=[i('',1)]))),a[245]||(a[245]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"y"),t("mo",null,"="),t("mfrac",null,[t("mrow",null,[t("mi",null,"x"),t("mo",null,"−"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",{mathvariant:"double-struck"},"E")]),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]")]),t("msqrt",null,[t("mi",null,"V"),t("mi",null,"a"),t("mi",null,"r"),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]"),t("mo",null,"+"),t("mi",null,"ϵ")])]),t("mo",null,"∗"),t("mi",null,"γ"),t("mo",null,"+"),t("mi",null,"β")])],-1))]),t("p",null,[a[250]||(a[250]=s("where ")),t("mjx-container",U1,[(Q(),n("svg",q1,a[246]||(a[246]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),a[247]||(a[247]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),a[251]||(a[251]=s(" & 
")),t("mjx-container",J1,[(Q(),n("svg",K1,a[248]||(a[248]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),a[249]||(a[249]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),a[252]||(a[252]=s(" are trainable parameters if ")),a[253]||(a[253]=t("code",null,"affine=true",-1)),a[254]||(a[254]=s("."))]),a[256]||(a[256]=i('

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    source

    ',12))]),t("details",$1,[t("summary",null,[a[257]||(a[257]=t("a",{id:"Lux.WeightNorm",href:"#Lux.WeightNorm"},[t("span",{class:"jlbinding"},"Lux.WeightNorm")],-1)),a[258]||(a[258]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[261]||(a[261]=i(`
    julia
    WeightNorm(layer::AbstractLuxLayer, which_params::NTuple{N, Symbol},
    -           dims::Union{Tuple, Nothing}=nothing)

    Applies weight normalization to a parameter in the given layer.

    `,2)),t("mjx-container",Y1,[(Q(),n("svg",t2,a[259]||(a[259]=[i('',1)]))),a[260]||(a[260]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"w"),t("mo",null,"="),t("mi",null,"g"),t("mfrac",null,[t("mi",null,"v"),t("mrow",null,[t("mo",{"data-mjx-texclass":"ORD"},"∥"),t("mi",null,"v"),t("mo",{"data-mjx-texclass":"ORD"},"∥")])])])],-1))]),a[262]||(a[262]=i('

    Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This updates the parameters in which_params (e.g. weight) using two parameters: one specifying the magnitude (e.g. weight_g) and one specifying the direction (e.g. weight_v).
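    The magnitude/direction split can be sketched for a single weight vector (an illustrative sketch; `g` and `v` are assumptions, standing in for the `weight_g` and `weight_v` parameters):

    ```julia
    using LinearAlgebra

    v = Float32[3.0, 4.0]    # direction parameter (plays the role of weight_v)
    g = 2.0f0                # magnitude parameter (plays the role of weight_g)

    w = g .* v ./ norm(v)    # w = g * v / ||v||, so norm(w) == g
    ```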

    Arguments

    Inputs

    Returns

    Parameters

    States

    source

    ',12))]),a[278]||(a[278]=t("h2",{id:"upsampling",tabindex:"-1"},[s("Upsampling "),t("a",{class:"header-anchor",href:"#upsampling","aria-label":'Permalink to "Upsampling"'},"​")],-1)),t("details",a2,[t("summary",null,[a[263]||(a[263]=t("a",{id:"Lux.PixelShuffle",href:"#Lux.PixelShuffle"},[t("span",{class:"jlbinding"},"Lux.PixelShuffle")],-1)),a[264]||(a[264]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[265]||(a[265]=i('
    julia
    PixelShuffle(r::Int)

    Pixel shuffling layer with upscale factor r, typically used to produce higher-resolution images when upscaling.

    See NNlib.pixel_shuffle for more details.

    Arguments

    Inputs

    Returns

    source

    ',10))]),t("details",s2,[t("summary",null,[a[266]||(a[266]=t("a",{id:"Lux.Upsample",href:"#Lux.Upsample"},[t("span",{class:"jlbinding"},"Lux.Upsample")],-1)),a[267]||(a[267]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[268]||(a[268]=i(`
    julia
    Upsample(mode = :nearest; [scale, size, align_corners=false])
    -Upsample(scale, mode = :nearest)

    Upsampling Layer.

    Layer Construction

    Option 1

    Exactly one of two keywords must be specified:

    Option 2

    Currently supported upsampling modes and corresponding NNlib's methods are:

    Extended Help

    Other Keyword Arguments

    Inputs

    Returns

    source

    `,19))])])}const r2=T(o,[["render",i2]]);export{o2 as __pageData,r2 as default}; + # plus 2 states.

    References

    [1] Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. "Instance normalization: The missing ingredient for fast stylization." arXiv preprint arXiv:1607.08022 (2016).

    See also BatchNorm, GroupNorm, LayerNorm, WeightNorm

    source

    `,20))]),t("details",O1,[t("summary",null,[a[238]||(a[238]=t("a",{id:"Lux.LayerNorm",href:"#Lux.LayerNorm"},[t("span",{class:"jlbinding"},"Lux.LayerNorm")],-1)),a[239]||(a[239]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[255]||(a[255]=i(`
    julia
    LayerNorm(shape::NTuple{N, Int}, activation=identity; epsilon=1f-5, dims=Colon(),
    +          affine=true, init_bias=zeros32, init_scale=ones32)

    Computes mean and standard deviation over the whole input array, and uses these to normalize the whole array. Optionally applies an elementwise affine transformation afterwards.

    `,2)),t("p",null,[a[242]||(a[242]=s("Given an input array ")),t("mjx-container",z1,[(Q(),n("svg",G1,a[240]||(a[240]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D465",d:"M52 289Q59 331 106 386T222 442Q257 442 286 424T329 379Q371 442 430 442Q467 442 494 420T522 361Q522 332 508 314T481 292T458 288Q439 288 427 299T415 328Q415 374 465 391Q454 404 425 404Q412 404 406 402Q368 386 350 336Q290 115 290 78Q290 50 306 38T341 26Q378 26 414 59T463 140Q466 150 469 151T485 153H489Q504 153 504 145Q504 144 502 134Q486 77 440 33T333 -11Q263 -11 227 52Q186 -10 133 -10H127Q78 -10 57 16T35 71Q35 103 54 123T99 143Q142 143 142 101Q142 81 130 66T107 46T94 41L91 40Q91 39 97 36T113 29T132 26Q168 26 194 71Q203 87 217 139T245 247T261 313Q266 340 266 352Q266 380 251 392T217 404Q177 404 142 372T93 290Q91 281 88 280T72 278H58Q52 284 52 289Z",style:{"stroke-width":"3"}})])])],-1)]))),a[241]||(a[241]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"x")])],-1))]),a[243]||(a[243]=s(", this layer computes"))]),t("mjx-container",W1,[(Q(),n("svg",X1,a[244]||(a[244]=[i('',1)]))),a[245]||(a[245]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"y"),t("mo",null,"="),t("mfrac",null,[t("mrow",null,[t("mi",null,"x"),t("mo",null,"−"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",{mathvariant:"double-struck"},"E")]),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]")]),t("msqrt",null,[t("mi",null,"V"),t("mi",null,"a"),t("mi",null,"r"),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]"),t("mo",null,"+"),t("mi",null,"ϵ")])]),t("mo",null,"∗"),t("mi",null,"γ"),t("mo",null,"+"),t("mi",null,"β")])],-1))]),t("p",null,[a[250]||(a[250]=s("where ")),t("mjx-container",U1,[(Q(),n("svg",q1,a[246]||(a[246]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),a[247]||(a[247]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),a[251]||(a[251]=s(" & 
")),t("mjx-container",J1,[(Q(),n("svg",K1,a[248]||(a[248]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),a[249]||(a[249]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),a[252]||(a[252]=s(" are trainable parameters if ")),a[253]||(a[253]=t("code",null,"affine=true",-1)),a[254]||(a[254]=s("."))]),a[256]||(a[256]=i('

    Arguments

    Keyword Arguments

    Extended Help

    Inputs

    Returns

    Parameters

    source

    ',12))]),t("details",$1,[t("summary",null,[a[257]||(a[257]=t("a",{id:"Lux.WeightNorm",href:"#Lux.WeightNorm"},[t("span",{class:"jlbinding"},"Lux.WeightNorm")],-1)),a[258]||(a[258]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[261]||(a[261]=i(`
    julia
    WeightNorm(layer::AbstractLuxLayer, which_params::NTuple{N, Symbol},
    +           dims::Union{Tuple, Nothing}=nothing)

    Applies weight normalization to a parameter in the given layer.

    `,2)),t("mjx-container",Y1,[(Q(),n("svg",t2,a[259]||(a[259]=[i('',1)]))),a[260]||(a[260]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"w"),t("mo",null,"="),t("mi",null,"g"),t("mfrac",null,[t("mi",null,"v"),t("mrow",null,[t("mo",{"data-mjx-texclass":"ORD"},"∥"),t("mi",null,"v"),t("mo",{"data-mjx-texclass":"ORD"},"∥")])])])],-1))]),a[262]||(a[262]=i('

    Weight normalization is a reparameterization that decouples the magnitude of a weight tensor from its direction. This updates the parameters in which_params (e.g. weight) using two parameters: one specifying the magnitude (e.g. weight_g) and one specifying the direction (e.g. weight_v).

    Arguments

    Inputs

    Returns

    Parameters

    States

    source

    ',12))]),a[278]||(a[278]=t("h2",{id:"upsampling",tabindex:"-1"},[s("Upsampling "),t("a",{class:"header-anchor",href:"#upsampling","aria-label":'Permalink to "Upsampling"'},"​")],-1)),t("details",a2,[t("summary",null,[a[263]||(a[263]=t("a",{id:"Lux.PixelShuffle",href:"#Lux.PixelShuffle"},[t("span",{class:"jlbinding"},"Lux.PixelShuffle")],-1)),a[264]||(a[264]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[265]||(a[265]=i('
    julia
    PixelShuffle(r::Int)

    Pixel shuffling layer with upscale factor r. Usually used for generating higher resolution images while upscaling them.

    See NNlib.pixel_shuffle for more details.

    Arguments

    Inputs

    Returns

    source

    ',10))]),t("details",s2,[t("summary",null,[a[266]||(a[266]=t("a",{id:"Lux.Upsample",href:"#Lux.Upsample"},[t("span",{class:"jlbinding"},"Lux.Upsample")],-1)),a[267]||(a[267]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[268]||(a[268]=i(`
    julia
    Upsample(mode = :nearest; [scale, size, align_corners=false])
    +Upsample(scale, mode = :nearest)

    Upsampling Layer.

    Layer Construction

    Option 1

    Exactly one of two keywords must be specified:

    Option 2

    Currently supported upsampling modes and corresponding NNlib's methods are:

    Extended Help

    Other Keyword Arguments

    Inputs

    Returns

    source

    `,19))])])}const r2=T(o,[["render",i2]]);export{o2 as __pageData,r2 as default}; diff --git a/dev/assets/api_Lux_layers.md.WaBurqvX.lean.js b/dev/assets/api_Lux_layers.md.WaBurqvX.lean.js new file mode 100644 index 0000000000..005219ec74 --- /dev/null +++ b/dev/assets/api_Lux_layers.md.WaBurqvX.lean.js @@ -0,0 +1 @@ +import{_ as T,c as n,j as t,a as s,G as l,a2 as i,B as d,o as Q}from"./chunks/framework.BetCMmtc.js";const o2=JSON.parse('{"title":"Built-In Layers","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/layers.md","filePath":"api/Lux/layers.md","lastUpdated":null}'),o={name:"api/Lux/layers.md"},r={class:"jldocstring custom-block"},p={class:"jldocstring custom-block"},h={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},E={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},H={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},M={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 
0",position:"relative"}},A={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},v={class:"jldocstring custom-block"},Z={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},j={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},V={class:"jldocstring custom-block"},B={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},N={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"45.995ex",height:"5.686ex",role:"img",focusable:"false",viewBox:"0 -1563.5 20329.9 2513","aria-hidden":"true"},S={class:"jldocstring custom-block"},R={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},I={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-5.146ex"},xmlns:"http://www.w3.org/2000/svg",width:"51.473ex",height:"11.422ex",role:"img",focusable:"false",viewBox:"0 -2774.4 22750.9 5048.7","aria-hidden":"true"},P={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},O={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 
851.8","aria-hidden":"true"},z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},G={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},W={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},X={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"15.326ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 6774.2 1000","aria-hidden":"true"},U={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"16.435ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 7264.2 1000","aria-hidden":"true"},J={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},K={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"11.831ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 5229.2 1000","aria-hidden":"true"},$={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.939ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 5719.2 1000","aria-hidden":"true"},t1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},a1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.939ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 5719.2 1000","aria-hidden":"true"},s1={class:"jldocstring 
custom-block"},i1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},e1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-8.146ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.257ex",height:"17.424ex",role:"img",focusable:"false",viewBox:"0 -4100.7 17793.6 7701.4","aria-hidden":"true"},l1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},n1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},Q1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},T1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},d1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},o1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.018ex",height:"1.357ex",role:"img",focusable:"false",viewBox:"0 -442 1776.1 599.8","aria-hidden":"true"},r1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},p1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.667ex"},xmlns:"http://www.w3.org/2000/svg",width:"19.753ex",height:"2.364ex",role:"img",focusable:"false",viewBox:"0 -750 8730.9 1045","aria-hidden":"true"},h1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},m1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.667ex"},xmlns:"http://www.w3.org/2000/svg",width:"21.231ex",height:"2.364ex",role:"img",focusable:"false",viewBox:"0 -750 9384.3 
1045","aria-hidden":"true"},g1={class:"jldocstring custom-block"},k1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},c1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.65ex"},xmlns:"http://www.w3.org/2000/svg",width:"67.61ex",height:"2.347ex",role:"img",focusable:"false",viewBox:"0 -750 29883.5 1037.2","aria-hidden":"true"},u1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},y1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},E1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"4.342ex",height:"1.927ex",role:"img",focusable:"false",viewBox:"0 -694 1919.1 851.8","aria-hidden":"true"},L1={class:"jldocstring custom-block"},x1={class:"jldocstring custom-block"},b1={class:"jldocstring custom-block"},w1={class:"jldocstring custom-block"},H1={class:"jldocstring custom-block"},C1={class:"jldocstring custom-block"},D1={class:"jldocstring custom-block"},F1={class:"jldocstring custom-block"},_1={class:"jldocstring custom-block"},M1={class:"jldocstring custom-block"},A1={class:"jldocstring custom-block"},v1={class:"jldocstring custom-block"},Z1={class:"jldocstring custom-block"},j1={class:"jldocstring custom-block"},V1={class:"jldocstring custom-block"},B1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},N1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.059ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 10192.1 891","aria-hidden":"true"},S1={class:"jldocstring custom-block"},R1={class:"jldocstring 
custom-block"},I1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},P1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"22.72ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 10042 891","aria-hidden":"true"},O1={class:"jldocstring custom-block"},z1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},G1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.294ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 572 453","aria-hidden":"true"},W1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},X1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.76ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.034ex",height:"6.063ex",role:"img",focusable:"false",viewBox:"0 -1460 11064.9 2680","aria-hidden":"true"},U1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},q1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},J1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},K1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},$1={class:"jldocstring custom-block"},Y1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 
0",position:"relative"}},t2={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.172ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.071ex",height:"4.704ex",role:"img",focusable:"false",viewBox:"0 -1119 4451.6 2079","aria-hidden":"true"},a2={class:"jldocstring custom-block"},s2={class:"jldocstring custom-block"};function i2(e2,a,l2,n2,Q2,T2){const e=d("Badge");return Q(),n("div",null,[a[269]||(a[269]=t("h1",{id:"Built-In-Layers",tabindex:"-1"},[s("Built-In Layers "),t("a",{class:"header-anchor",href:"#Built-In-Layers","aria-label":'Permalink to "Built-In Layers {#Built-In-Layers}"'},"​")],-1)),a[270]||(a[270]=t("h2",{id:"containers",tabindex:"-1"},[s("Containers "),t("a",{class:"header-anchor",href:"#containers","aria-label":'Permalink to "Containers"'},"​")],-1)),t("details",r,[t("summary",null,[a[0]||(a[0]=t("a",{id:"Lux.BranchLayer",href:"#Lux.BranchLayer"},[t("span",{class:"jlbinding"},"Lux.BranchLayer")],-1)),a[1]||(a[1]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[2]||(a[2]=i("",18))]),t("details",p,[t("summary",null,[a[3]||(a[3]=t("a",{id:"Lux.Chain",href:"#Lux.Chain"},[t("span",{class:"jlbinding"},"Lux.Chain")],-1)),a[4]||(a[4]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[5]||(a[5]=i("",18))]),t("details",h,[t("summary",null,[a[6]||(a[6]=t("a",{id:"Lux.PairwiseFusion",href:"#Lux.PairwiseFusion"},[t("span",{class:"jlbinding"},"Lux.PairwiseFusion")],-1)),a[7]||(a[7]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[8]||(a[8]=i("",18))]),t("details",m,[t("summary",null,[a[9]||(a[9]=t("a",{id:"Lux.Parallel",href:"#Lux.Parallel"},[t("span",{class:"jlbinding"},"Lux.Parallel")],-1)),a[10]||(a[10]=s()),l(e,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),a[11]||(a[11]=i("",17))]),t("details",g,[t("summary",null,[a[12]||(a[12]=t("a",{id:"Lux.SkipConnection",href:"#Lux.SkipConnection"},[t("span",{class:"jlbinding"},"Lux.SkipConnection")],-1)),a[13]||(a[13]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[14]||(a[14]=i("",16))]),t("details",k,[t("summary",null,[a[15]||(a[15]=t("a",{id:"Lux.RepeatedLayer",href:"#Lux.RepeatedLayer"},[t("span",{class:"jlbinding"},"Lux.RepeatedLayer")],-1)),a[16]||(a[16]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[17]||(a[17]=i("",21))]),a[271]||(a[271]=t("h2",{id:"Convolutional-Layers",tabindex:"-1"},[s("Convolutional Layers "),t("a",{class:"header-anchor",href:"#Convolutional-Layers","aria-label":'Permalink to "Convolutional Layers {#Convolutional-Layers}"'},"​")],-1)),t("details",c,[t("summary",null,[a[18]||(a[18]=t("a",{id:"Lux.Conv",href:"#Lux.Conv"},[t("span",{class:"jlbinding"},"Lux.Conv")],-1)),a[19]||(a[19]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[22]||(a[22]=i("",13)),t("mjx-container",u,[(Q(),n("svg",y,a[20]||(a[20]=[i("",1)]))),a[21]||(a[21]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[23]||(a[23]=i("",4))]),t("details",E,[t("summary",null,[a[24]||(a[24]=t("a",{id:"Lux.ConvTranspose",href:"#Lux.ConvTranspose"},[t("span",{class:"jlbinding"},"Lux.ConvTranspose")],-1)),a[25]||(a[25]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[26]||(a[26]=i("",14))]),a[272]||(a[272]=t("h2",{id:"Dropout-Layers",tabindex:"-1"},[s("Dropout Layers "),t("a",{class:"header-anchor",href:"#Dropout-Layers","aria-label":'Permalink to "Dropout Layers {#Dropout-Layers}"'},"​")],-1)),t("details",f,[t("summary",null,[a[27]||(a[27]=t("a",{id:"Lux.AlphaDropout",href:"#Lux.AlphaDropout"},[t("span",{class:"jlbinding"},"Lux.AlphaDropout")],-1)),a[28]||(a[28]=s()),l(e,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),a[29]||(a[29]=i("",13))]),t("details",L,[t("summary",null,[a[30]||(a[30]=t("a",{id:"Lux.Dropout",href:"#Lux.Dropout"},[t("span",{class:"jlbinding"},"Lux.Dropout")],-1)),a[31]||(a[31]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[32]||(a[32]=i("",15))]),t("details",x,[t("summary",null,[a[33]||(a[33]=t("a",{id:"Lux.VariationalHiddenDropout",href:"#Lux.VariationalHiddenDropout"},[t("span",{class:"jlbinding"},"Lux.VariationalHiddenDropout")],-1)),a[34]||(a[34]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[35]||(a[35]=i("",15))]),a[273]||(a[273]=t("h2",{id:"Pooling-Layers",tabindex:"-1"},[s("Pooling Layers "),t("a",{class:"header-anchor",href:"#Pooling-Layers","aria-label":'Permalink to "Pooling Layers {#Pooling-Layers}"'},"​")],-1)),t("details",b,[t("summary",null,[a[36]||(a[36]=t("a",{id:"Lux.AdaptiveLPPool",href:"#Lux.AdaptiveLPPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveLPPool")],-1)),a[37]||(a[37]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[38]||(a[38]=i("",10))]),t("details",w,[t("summary",null,[a[39]||(a[39]=t("a",{id:"Lux.AdaptiveMaxPool",href:"#Lux.AdaptiveMaxPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveMaxPool")],-1)),a[40]||(a[40]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[41]||(a[41]=i("",9))]),t("details",H,[t("summary",null,[a[42]||(a[42]=t("a",{id:"Lux.AdaptiveMeanPool",href:"#Lux.AdaptiveMeanPool"},[t("span",{class:"jlbinding"},"Lux.AdaptiveMeanPool")],-1)),a[43]||(a[43]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[44]||(a[44]=i("",9))]),t("details",C,[t("summary",null,[a[45]||(a[45]=t("a",{id:"Lux.GlobalLPPool",href:"#Lux.GlobalLPPool"},[t("span",{class:"jlbinding"},"Lux.GlobalLPPool")],-1)),a[46]||(a[46]=s()),l(e,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),a[47]||(a[47]=i("",8))]),t("details",D,[t("summary",null,[a[48]||(a[48]=t("a",{id:"Lux.GlobalMaxPool",href:"#Lux.GlobalMaxPool"},[t("span",{class:"jlbinding"},"Lux.GlobalMaxPool")],-1)),a[49]||(a[49]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[50]||(a[50]=i("",7))]),t("details",F,[t("summary",null,[a[51]||(a[51]=t("a",{id:"Lux.GlobalMeanPool",href:"#Lux.GlobalMeanPool"},[t("span",{class:"jlbinding"},"Lux.GlobalMeanPool")],-1)),a[52]||(a[52]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[53]||(a[53]=i("",7))]),t("details",_,[t("summary",null,[a[54]||(a[54]=t("a",{id:"Lux.LPPool",href:"#Lux.LPPool"},[t("span",{class:"jlbinding"},"Lux.LPPool")],-1)),a[55]||(a[55]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[58]||(a[58]=i("",12)),t("mjx-container",M,[(Q(),n("svg",A,a[56]||(a[56]=[i("",1)]))),a[57]||(a[57]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[59]||(a[59]=t("ul",null,[t("li",null,[s("Empty "),t("code",null,"NamedTuple()")])],-1)),a[60]||(a[60]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/src/layers/pooling.jl#L203-L251",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",v,[t("summary",null,[a[61]||(a[61]=t("a",{id:"Lux.MaxPool",href:"#Lux.MaxPool"},[t("span",{class:"jlbinding"},"Lux.MaxPool")],-1)),a[62]||(a[62]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[65]||(a[65]=i("",11)),t("mjx-container",Z,[(Q(),n("svg",j,a[63]||(a[63]=[i("",1)]))),a[64]||(a[64]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[66]||(a[66]=t("ul",null,[t("li",null,[s("Empty "),t("code",null,"NamedTuple()")])],-1)),a[67]||(a[67]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/src/layers/pooling.jl#L203-L247",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",V,[t("summary",null,[a[68]||(a[68]=t("a",{id:"Lux.MeanPool",href:"#Lux.MeanPool"},[t("span",{class:"jlbinding"},"Lux.MeanPool")],-1)),a[69]||(a[69]=s()),l(e,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),a[72]||(a[72]=i("",11)),t("mjx-container",B,[(Q(),n("svg",N,a[70]||(a[70]=[i("",1)]))),a[71]||(a[71]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("msub",null,[t("mi",null,"O"),t("mi",null,"i")]),t("mo",null,"="),t("mrow",{"data-mjx-texclass":"INNER"},[t("mo",{"data-mjx-texclass":"OPEN"},"⌊"),t("mfrac",null,[t("mrow",null,[t("msub",null,[t("mi",null,"I"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mi",null,"i")]),t("mo",null,"+"),t("msub",null,[t("mi",null,"p"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mo",{stretchy:"false"},"("),t("mi",null,"i"),t("mo",null,"+"),t("mi",null,"N"),t("mo",{stretchy:"false"},")"),t("mi",{mathvariant:"normal"},"%"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),t("mi",null,"p"),t("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|")])]),t("mo",null,"−"),t("msub",null,[t("mi",null,"d"),t("mi",null,"i")]),t("mo",null,"×"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"k"),t("mi",null,"i")]),t("mo",null,"−"),t("mn",null,"1"),t("mo",{stretchy:"false"},")")]),t("msub",null,[t("mi",null,"s"),t("mi",null,"i")])]),t("mo",null,"+"),t("mn",null,"1"),t("mo",{"data-mjx-texclass":"CLOSE"},"⌋")])])],-1))]),a[73]||(a[73]=t("ul",null,[t("li",null,[s("Empty "),t("code",null,"NamedTuple()")])],-1)),a[74]||(a[74]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/src/layers/pooling.jl#L203-L247",target:"_blank",rel:"noreferrer"},"source")],-1))]),a[274]||(a[274]=t("h2",{id:"Recurrent-Layers",tabindex:"-1"},[s("Recurrent Layers 
"),t("a",{class:"header-anchor",href:"#Recurrent-Layers","aria-label":'Permalink to "Recurrent Layers {#Recurrent-Layers}"'},"​")],-1)),t("details",S,[t("summary",null,[a[75]||(a[75]=t("a",{id:"Lux.GRUCell",href:"#Lux.GRUCell"},[t("span",{class:"jlbinding"},"Lux.GRUCell")],-1)),a[76]||(a[76]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[120]||(a[120]=i("",2)),t("mjx-container",R,[(Q(),n("svg",I,a[77]||(a[77]=[i("",1)]))),a[78]||(a[78]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mtable",{displaystyle:"true",columnalign:"right left",columnspacing:"0em",rowspacing:"3pt"},[t("mtr",null,[t("mtd",null,[t("mi",null,"r")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"z")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub
",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"n")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"tanh"),t("mo",{"data-mjx-texclass":"NONE"},"⁡"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",null,"+"),t("mi",null,"r"),t("mo",null,"⋅"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{stretchy:"false"},")"),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mo",{stretchy:"false"},"("),t("mn",null,"1"),t("mo",null,"−"),t("mi",null,"z"),t("mo",{stretchy:"false"},")"),t("m
o",null,"⋅"),t("mi",null,"n"),t("mo",null,"+"),t("mi",null,"z"),t("mo",null,"⋅"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])])])])])])],-1))]),a[121]||(a[121]=i("",5)),t("ul",null,[t("li",null,[a[87]||(a[87]=t("p",null,"Tuple containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[81]||(a[81]=s("Output ")),t("mjx-container",P,[(Q(),n("svg",O,a[79]||(a[79]=[i("",1)]))),a[80]||(a[80]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[82]||(a[82]=s(" of shape ")),a[83]||(a[83]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[86]||(a[86]=s("Tuple containing new hidden state ")),t("mjx-container",z,[(Q(),n("svg",G,a[84]||(a[84]=[i("",1)]))),a[85]||(a[85]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[88]||(a[88]=t("li",null,[t("p",null,"Updated model 
state")],-1))]),a[122]||(a[122]=t("p",null,[t("strong",null,"Parameters")],-1)),t("ul",null,[t("li",null,[t("p",null,[a[91]||(a[91]=t("code",null,"weight_ih",-1)),a[92]||(a[92]=s(": Concatenated Weights to map from input space ")),t("mjx-container",W,[(Q(),n("svg",X,a[89]||(a[89]=[i("",1)]))),a[90]||(a[90]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[93]||(a[93]=s("."))])]),t("li",null,[t("p",null,[a[96]||(a[96]=t("code",null,"weight_hh",-1)),a[97]||(a[97]=s(": Concatenated Weights to map from hidden space ")),t("mjx-container",U,[(Q(),n("svg",q,a[94]||(a[94]=[i("",1)]))),a[95]||(a[95]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[98]||(a[98]=s("."))])]),t("li",null,[t("p",null,[a[101]||(a[101]=t("code",null,"bias_ih",-1)),a[102]||(a[102]=s(": Concatenated Bias vector for the input space ")),t("mjx-container",J,[(Q(),n("svg",K,a[99]||(a[99]=[i("",1)]))),a[100]||(a[100]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[103]||(a[103]=s(" (not present if ")),a[104]||(a[104]=t("code",null,"use_bias=false",-1)),a[105]||(a[105]=s(")."))])]),t("li",null,[t("p",null,[a[108]||(a[108]=t("code",null,"bias_hh",-1)),a[109]||(a[109]=s(": Concatenated Bias vector for the hidden space 
")),t("mjx-container",$,[(Q(),n("svg",Y,a[106]||(a[106]=[i("",1)]))),a[107]||(a[107]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[110]||(a[110]=s(" (not present if ")),a[111]||(a[111]=t("code",null,"use_bias=false",-1)),a[112]||(a[112]=s(")."))])]),t("li",null,[t("p",null,[a[115]||(a[115]=t("code",null,"hidden_state",-1)),a[116]||(a[116]=s(": Initial hidden state vector (not present if ")),a[117]||(a[117]=t("code",null,"train_state=false",-1)),a[118]||(a[118]=s(") ")),t("mjx-container",t1,[(Q(),n("svg",a1,a[113]||(a[113]=[i("",1)]))),a[114]||(a[114]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"r")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"z")])]),t("mo",null,","),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"n")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[119]||(a[119]=s("."))])])]),a[123]||(a[123]=t("p",null,[t("strong",null,"States")],-1)),a[124]||(a[124]=t("ul",null,[t("li",null,[t("code",null,"rng"),s(": Controls the randomness (if any) in the initial state generation")])],-1)),a[125]||(a[125]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/src/layers/recurrent.jl#L488",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",s1,[t("summary",null,[a[126]||(a[126]=t("a",{id:"Lux.LSTMCell",href:"#Lux.LSTMCell"},[t("span",{class:"jlbinding"},"Lux.LSTMCell")],-1)),a[127]||(a[127]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[153]||(a[153]=i("",2)),t("mjx-container",i1,[(Q(),n("svg",e1,a[128]||(a[128]=[i("",1)]))),a[129]||(a[129]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mtable",{displaystyle:"true",columnalign:"right 
left",columnspacing:"0em",rowspacing:"3pt"},[t("mtr",null,[t("mtd",null,[t("mi",null,"i")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"i")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"i")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"f")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"f")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"f")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"f")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"g")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"t"),t("mi",null,"a"),t("mi",null,"n"),t("mi",null,"h"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"g")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"g")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",nul
l,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"g")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("mi",null,"o")]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"σ"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"o")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"o")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("msub",null,[t("mi",null,"b"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"o")])]),t("mo",{stretchy:"false"},")")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"f"),t("mo",null,"⋅"),t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("mi",null,"i"),t("mo",null,"⋅"),t("mi",null,"g")])]),t("mtr",null,[t("mtd",null,[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])]),t("mtd",null,[t("mi"),t("mo",null,"="),t("mi",null,"o"),t("mo",null,"⋅"),t("mi",null,"t"),t("mi",null,"a"),t("mi",null,"n"),t("mi",null,"h"),t("mo",{stretchy:"false"},"("),t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])]),t("mo",{stretchy:"false"},")")])])])])],-1))]),a[154]||(a[154]=i("",5)),t("ul",null,[t("li",null,[a[141]||(a[141]=t("p",null,"Tuple 
Containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[132]||(a[132]=s("Output ")),t("mjx-container",l1,[(Q(),n("svg",n1,a[130]||(a[130]=[i("",1)]))),a[131]||(a[131]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[133]||(a[133]=s(" of shape ")),a[134]||(a[134]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[139]||(a[139]=s("Tuple containing new hidden state ")),t("mjx-container",Q1,[(Q(),n("svg",T1,a[135]||(a[135]=[i("",1)]))),a[136]||(a[136]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[140]||(a[140]=s(" and new memory ")),t("mjx-container",d1,[(Q(),n("svg",o1,a[137]||(a[137]=[i("",1)]))),a[138]||(a[138]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"c"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[142]||(a[142]=t("li",null,[t("p",null,"Updated model state")],-1))]),a[155]||(a[155]=t("p",null,[t("strong",null,"Parameters")],-1)),t("ul",null,[t("li",null,[t("p",null,[a[145]||(a[145]=t("code",null,"weight_ih",-1)),a[146]||(a[146]=s(": Concatenated Weights to map from input space ")),t("mjx-container",r1,[(Q(),n("svg",p1,a[143]||(a[143]=[i("",1)]))),a[144]||(a[144]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"i")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"f")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"g")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"o")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[147]||(a[147]=s("."))])]),t("li",null,[t("p",null,[a[150]||(a[150]=t("code",null,"weight_hh",-1)),a[151]||(a[151]=s(": 
Concatenated Weights to map from hidden space ")),t("mjx-container",h1,[(Q(),n("svg",m1,a[148]||(a[148]=[i("",1)]))),a[149]||(a[149]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mo",{fence:"false",stretchy:"false"},"{"),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"i")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"f")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"g")])]),t("mo",null,","),t("msub",null,[t("mi",null,"W"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"o")])]),t("mo",{fence:"false",stretchy:"false"},"}")])],-1))])])]),a[152]||(a[152]=i("",4))]),a[156]||(a[156]=t("p",null,[t("strong",null,"States")],-1)),a[157]||(a[157]=t("ul",null,[t("li",null,[t("code",null,"rng"),s(": Controls the randomness (if any) in the initial state generation")])],-1)),a[158]||(a[158]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/src/layers/recurrent.jl#L309",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",g1,[t("summary",null,[a[159]||(a[159]=t("a",{id:"Lux.RNNCell",href:"#Lux.RNNCell"},[t("span",{class:"jlbinding"},"Lux.RNNCell")],-1)),a[160]||(a[160]=s()),l(e,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),a[173]||(a[173]=i("",2)),t("p",null,[t("mjx-container",k1,[(Q(),n("svg",c1,a[161]||(a[161]=[i("",1)]))),a[162]||(a[162]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])]),t("mo",null,"="),t("mi",null,"a"),t("mi",null,"c"),t("mi",null,"t"),t("mi",null,"i"),t("mi",null,"v"),t("mi",null,"a"),t("mi",null,"t"),t("mi",null,"i"),t("mi",null,"o"),t("mi",null,"n"),t("mo",{stretchy:"false"},"("),t("mi",null,"w"),t("mi",null,"e"),t("mi",null,"i"),t("mi",null,"g"),t("mi",null,"h"),t("msub",null,[t("mi",null,"t"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"h")])]),t("mo",null,"×"),t("mi",null,"x"),t("mo",null,"+"),t("mi",null,"b"),t("mi",null,"i"),t("mi",null,"a"),t("msub",null,[t("mi",null,"s"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"i"),t("mi",null,"h")])]),t("mo",null,"+"),t("mi",null,"w"),t("mi",null,"e"),t("mi",null,"i"),t("mi",null,"g"),t("mi",null,"h"),t("msub",null,[t("mi",null,"t"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"h")])]),t("mo",null,"×"),t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"p"),t("mi",null,"r"),t("mi",null,"e"),t("mi",null,"v")])]),t("mo",null,"+"),t("mi",null,"b"),t("mi",null,"i"),t("mi",null,"a"),t("msub",null,[t("mi",null,"s"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"h"),t("mi",null,"h")])]),t("mo",{stretchy:"false"},")")])],-1))])]),a[174]||(a[174]=i("",5)),t("ul",null,[t("li",null,[a[171]||(a[171]=t("p",null,"T
uple containing",-1)),t("ul",null,[t("li",null,[t("p",null,[a[165]||(a[165]=s("Output ")),t("mjx-container",u1,[(Q(),n("svg",y1,a[163]||(a[163]=[i("",1)]))),a[164]||(a[164]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))]),a[166]||(a[166]=s(" of shape ")),a[167]||(a[167]=t("code",null,"(out_dims, batch_size)",-1))])]),t("li",null,[t("p",null,[a[170]||(a[170]=s("Tuple containing new hidden state ")),t("mjx-container",E1,[(Q(),n("svg",f1,a[168]||(a[168]=[i("",1)]))),a[169]||(a[169]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"h"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"n"),t("mi",null,"e"),t("mi",null,"w")])])])],-1))])])])])]),a[172]||(a[172]=t("li",null,[t("p",null,"Updated model state")],-1))]),a[175]||(a[175]=i("",5))]),t("details",L1,[t("summary",null,[a[176]||(a[176]=t("a",{id:"Lux.Recurrence",href:"#Lux.Recurrence"},[t("span",{class:"jlbinding"},"Lux.Recurrence")],-1)),a[177]||(a[177]=s()),l(e,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),a[178]||(a[178]=i("",14))]),t("details",x1,[t("summary",null,[a[179]||(a[179]=t("a",{id:"Lux.StatefulRecurrentCell",href:"#Lux.StatefulRecurrentCell"},[t("span",{class:"jlbinding"},"Lux.StatefulRecurrentCell")],-1)),a[180]||(a[180]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[181]||(a[181]=i("",12))]),t("details",b1,[t("summary",null,[a[182]||(a[182]=t("a",{id:"Lux.BidirectionalRNN",href:"#Lux.BidirectionalRNN"},[t("span",{class:"jlbinding"},"Lux.BidirectionalRNN")],-1)),a[183]||(a[183]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[184]||(a[184]=i("",16))]),a[275]||(a[275]=t("h2",{id:"Linear-Layers",tabindex:"-1"},[s("Linear Layers "),t("a",{class:"header-anchor",href:"#Linear-Layers","aria-label":'Permalink to "Linear Layers {#Linear-Layers}"'},"​")],-1)),t("details",w1,[t("summary",null,[a[185]||(a[185]=t("a",{id:"Lux.Bilinear",href:"#Lux.Bilinear"},[t("span",{class:"jlbinding"},"Lux.Bilinear")],-1)),a[186]||(a[186]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[187]||(a[187]=i("",15))]),t("details",H1,[t("summary",null,[a[188]||(a[188]=t("a",{id:"Lux.Dense",href:"#Lux.Dense"},[t("span",{class:"jlbinding"},"Lux.Dense")],-1)),a[189]||(a[189]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[190]||(a[190]=i("",13))]),t("details",C1,[t("summary",null,[a[191]||(a[191]=t("a",{id:"Lux.Embedding",href:"#Lux.Embedding"},[t("span",{class:"jlbinding"},"Lux.Embedding")],-1)),a[192]||(a[192]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[193]||(a[193]=i("",12))]),t("details",D1,[t("summary",null,[a[194]||(a[194]=t("a",{id:"Lux.Scale",href:"#Lux.Scale"},[t("span",{class:"jlbinding"},"Lux.Scale")],-1)),a[195]||(a[195]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[196]||(a[196]=i("",13))]),a[276]||(a[276]=t("h2",{id:"Misc.-Helper-Layers",tabindex:"-1"},[s("Misc. 
Helper Layers "),t("a",{class:"header-anchor",href:"#Misc.-Helper-Layers","aria-label":'Permalink to "Misc. Helper Layers {#Misc.-Helper-Layers}"'},"​")],-1)),t("details",F1,[t("summary",null,[a[197]||(a[197]=t("a",{id:"Lux.FlattenLayer",href:"#Lux.FlattenLayer"},[t("span",{class:"jlbinding"},"Lux.FlattenLayer")],-1)),a[198]||(a[198]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[199]||(a[199]=i("",11))]),t("details",_1,[t("summary",null,[a[200]||(a[200]=t("a",{id:"Lux.Maxout",href:"#Lux.Maxout"},[t("span",{class:"jlbinding"},"Lux.Maxout")],-1)),a[201]||(a[201]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[202]||(a[202]=i("",18))]),t("details",M1,[t("summary",null,[a[203]||(a[203]=t("a",{id:"Lux.NoOpLayer",href:"#Lux.NoOpLayer"},[t("span",{class:"jlbinding"},"Lux.NoOpLayer")],-1)),a[204]||(a[204]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[205]||(a[205]=i("",5))]),t("details",A1,[t("summary",null,[a[206]||(a[206]=t("a",{id:"Lux.ReshapeLayer",href:"#Lux.ReshapeLayer"},[t("span",{class:"jlbinding"},"Lux.ReshapeLayer")],-1)),a[207]||(a[207]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[208]||(a[208]=i("",11))]),t("details",v1,[t("summary",null,[a[209]||(a[209]=t("a",{id:"Lux.SelectDim",href:"#Lux.SelectDim"},[t("span",{class:"jlbinding"},"Lux.SelectDim")],-1)),a[210]||(a[210]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[211]||(a[211]=i("",9))]),t("details",Z1,[t("summary",null,[a[212]||(a[212]=t("a",{id:"Lux.WrappedFunction",href:"#Lux.WrappedFunction"},[t("span",{class:"jlbinding"},"Lux.WrappedFunction")],-1)),a[213]||(a[213]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[214]||(a[214]=i("",9))]),t("details",j1,[t("summary",null,[a[215]||(a[215]=t("a",{id:"Lux.ReverseSequence",href:"#Lux.ReverseSequence"},[t("span",{class:"jlbinding"},"Lux.ReverseSequence")],-1)),a[216]||(a[216]=s()),l(e,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),a[217]||(a[217]=i("",11))]),a[277]||(a[277]=t("h2",{id:"Normalization-Layers",tabindex:"-1"},[s("Normalization Layers "),t("a",{class:"header-anchor",href:"#Normalization-Layers","aria-label":'Permalink to "Normalization Layers {#Normalization-Layers}"'},"​")],-1)),t("details",V1,[t("summary",null,[a[218]||(a[218]=t("a",{id:"Lux.BatchNorm",href:"#Lux.BatchNorm"},[t("span",{class:"jlbinding"},"Lux.BatchNorm")],-1)),a[219]||(a[219]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[225]||(a[225]=i("",2)),t("p",null,[a[222]||(a[222]=t("code",null,"BatchNorm",-1)),a[223]||(a[223]=s(" computes the mean and variance for each ")),t("mjx-container",B1,[(Q(),n("svg",N1,a[220]||(a[220]=[i("",1)]))),a[221]||(a[221]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mi",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mi",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mi",null,"×"),t("mn",null,"1"),t("mi",null,"×"),t("msub",null,[t("mi",null,"D"),t("mi",null,"N")])])],-1))]),a[224]||(a[224]=s(" input slice and normalises the input accordingly."))]),a[226]||(a[226]=i("",19))]),t("details",S1,[t("summary",null,[a[227]||(a[227]=t("a",{id:"Lux.GroupNorm",href:"#Lux.GroupNorm"},[t("span",{class:"jlbinding"},"Lux.GroupNorm")],-1)),a[228]||(a[228]=s()),l(e,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),a[229]||(a[229]=i("",20))]),t("details",R1,[t("summary",null,[a[230]||(a[230]=t("a",{id:"Lux.InstanceNorm",href:"#Lux.InstanceNorm"},[t("span",{class:"jlbinding"},"Lux.InstanceNorm")],-1)),a[231]||(a[231]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[236]||(a[236]=i("",2)),t("p",null,[a[234]||(a[234]=s("Instance Normalization computes the mean and variance for each ")),t("mjx-container",I1,[(Q(),n("svg",P1,a[232]||(a[232]=[i("",1)]))),a[233]||(a[233]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("mn",null,"1")])],-1))]),a[235]||(a[235]=s(" input slice and normalises the input accordingly."))]),a[237]||(a[237]=i("",20))]),t("details",O1,[t("summary",null,[a[238]||(a[238]=t("a",{id:"Lux.LayerNorm",href:"#Lux.LayerNorm"},[t("span",{class:"jlbinding"},"Lux.LayerNorm")],-1)),a[239]||(a[239]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[255]||(a[255]=i("",2)),t("p",null,[a[242]||(a[242]=s("Given an input array ")),t("mjx-container",z1,[(Q(),n("svg",G1,a[240]||(a[240]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D465",d:"M52 289Q59 331 106 386T222 442Q257 442 286 424T329
379Q371 442 430 442Q467 442 494 420T522 361Q522 332 508 314T481 292T458 288Q439 288 427 299T415 328Q415 374 465 391Q454 404 425 404Q412 404 406 402Q368 386 350 336Q290 115 290 78Q290 50 306 38T341 26Q378 26 414 59T463 140Q466 150 469 151T485 153H489Q504 153 504 145Q504 144 502 134Q486 77 440 33T333 -11Q263 -11 227 52Q186 -10 133 -10H127Q78 -10 57 16T35 71Q35 103 54 123T99 143Q142 143 142 101Q142 81 130 66T107 46T94 41L91 40Q91 39 97 36T113 29T132 26Q168 26 194 71Q203 87 217 139T245 247T261 313Q266 340 266 352Q266 380 251 392T217 404Q177 404 142 372T93 290Q91 281 88 280T72 278H58Q52 284 52 289Z",style:{"stroke-width":"3"}})])])],-1)]))),a[241]||(a[241]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"x")])],-1))]),a[243]||(a[243]=s(", this layer computes"))]),t("mjx-container",W1,[(Q(),n("svg",X1,a[244]||(a[244]=[i("",1)]))),a[245]||(a[245]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"y"),t("mo",null,"="),t("mfrac",null,[t("mrow",null,[t("mi",null,"x"),t("mo",null,"−"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",{mathvariant:"double-struck"},"E")]),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]")]),t("msqrt",null,[t("mi",null,"V"),t("mi",null,"a"),t("mi",null,"r"),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]"),t("mo",null,"+"),t("mi",null,"ϵ")])]),t("mo",null,"∗"),t("mi",null,"γ"),t("mo",null,"+"),t("mi",null,"β")])],-1))]),t("p",null,[a[250]||(a[250]=s("where ")),t("mjx-container",U1,[(Q(),n("svg",q1,a[246]||(a[246]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),a[247]||(a[247]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),a[251]||(a[251]=s(" & 
")),t("mjx-container",J1,[(Q(),n("svg",K1,a[248]||(a[248]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),a[249]||(a[249]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),a[252]||(a[252]=s(" are trainable parameters if ")),a[253]||(a[253]=t("code",null,"affine=true",-1)),a[254]||(a[254]=s("."))]),a[256]||(a[256]=i("",12))]),t("details",$1,[t("summary",null,[a[257]||(a[257]=t("a",{id:"Lux.WeightNorm",href:"#Lux.WeightNorm"},[t("span",{class:"jlbinding"},"Lux.WeightNorm")],-1)),a[258]||(a[258]=s()),l(e,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),a[261]||(a[261]=i("",2)),t("mjx-container",Y1,[(Q(),n("svg",t2,a[259]||(a[259]=[i("",1)]))),a[260]||(a[260]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"w"),t("mo",null,"="),t("mi",null,"g"),t("mfrac",null,[t("mi",null,"v"),t("mrow",null,[t("mo",{"data-mjx-texclass":"ORD"},"∥"),t("mi",null,"v"),t("mo",{"data-mjx-texclass":"ORD"},"∥")])])])],-1))]),a[262]||(a[262]=i("",12))]),a[278]||(a[278]=t("h2",{id:"upsampling",tabindex:"-1"},[s("Upsampling "),t("a",{class:"header-anchor",href:"#upsampling","aria-label":'Permalink to "Upsampling"'},"​")],-1)),t("details",a2,[t("summary",null,[a[263]||(a[263]=t("a",{id:"Lux.PixelShuffle",href:"#Lux.PixelShuffle"},[t("span",{class:"jlbinding"},"Lux.PixelShuffle")],-1)),a[264]||(a[264]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[265]||(a[265]=i("",10))]),t("details",s2,[t("summary",null,[a[266]||(a[266]=t("a",{id:"Lux.Upsample",href:"#Lux.Upsample"},[t("span",{class:"jlbinding"},"Lux.Upsample")],-1)),a[267]||(a[267]=s()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),a[268]||(a[268]=i("",19))])])}const r2=T(o,[["render",i2]]);export{o2 as __pageData,r2 as default}; diff --git a/dev/assets/api_Lux_utilities.md.DvX6-akN.js b/dev/assets/api_Lux_utilities.md.CRauyyus.js similarity index 97% rename from dev/assets/api_Lux_utilities.md.DvX6-akN.js rename to dev/assets/api_Lux_utilities.md.CRauyyus.js index 1c69609fb5..0c97e3b6f2 100644 --- a/dev/assets/api_Lux_utilities.md.DvX6-akN.js +++ b/dev/assets/api_Lux_utilities.md.CRauyyus.js @@ -1,12 +1,12 @@ 
-import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framework.I-x9Gl6h.js";const bs=JSON.parse('{"title":"Utilities","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/utilities.md","filePath":"api/Lux/utilities.md","lastUpdated":null}'),d={name:"api/Lux/utilities.md"},o={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"46.681ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 20633.1 1199","aria-hidden":"true"},x={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},C={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.631ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 11329 1199","aria-hidden":"true"},L={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 
955","aria-hidden":"true"},b={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},w={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.664ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2945.4 1000","aria-hidden":"true"},H={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},v={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 955","aria-hidden":"true"},j={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},D={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.718ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 10483.3 1000","aria-hidden":"true"},B={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},M={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},V={class:"jldocstring custom-block"},A={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Z={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},O={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},S={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"5.377ex",height:"1.995ex",role:"img",focusable:"false",viewBox:"0 -666 2376.6 
882","aria-hidden":"true"},R={class:"jldocstring custom-block"},N={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},z={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},P={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},I={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.184ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 10247.1 1799","aria-hidden":"true"},q={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},G={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.919ex",height:"1ex",role:"img",focusable:"false",viewBox:"0 -431 406 442","aria-hidden":"true"},X={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},J={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 955","aria-hidden":"true"},U={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},W={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.664ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2945.4 1000","aria-hidden":"true"},K={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 
955","aria-hidden":"true"},$={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},_={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"34.539ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 15266.3 1000","aria-hidden":"true"},s1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},i1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},a1={class:"jldocstring custom-block"},t1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},e1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.308ex"},xmlns:"http://www.w3.org/2000/svg",width:"28.659ex",height:"5.747ex",role:"img",focusable:"false",viewBox:"0 -1520 12667.4 2540","aria-hidden":"true"},l1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},n1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},h1={class:"jldocstring custom-block"},p1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 
1015","aria-hidden":"true"},d1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},o1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},k1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},T1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"5.377ex",height:"1.995ex",role:"img",focusable:"false",viewBox:"0 -666 2376.6 882","aria-hidden":"true"},Q1={class:"jldocstring custom-block"},g1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},m1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"20.065ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 8868.8 1199","aria-hidden":"true"},y1={class:"jldocstring custom-block"},c1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},E1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.607ex",height:"5.428ex",role:"img",focusable:"false",viewBox:"0 -1449.5 17948.3 2399","aria-hidden":"true"},u1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.023ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.005ex",height:"1.645ex",role:"img",focusable:"false",viewBox:"0 -717 444 727","aria-hidden":"true"},x1={class:"jldocstring 
custom-block"},C1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},L1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},f1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},b1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"1.464ex",role:"img",focusable:"false",viewBox:"0 -442 490 647","aria-hidden":"true"},w1={class:"jldocstring custom-block"},H1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},v1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.333ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 5451.1 1199","aria-hidden":"true"},j1={class:"jldocstring custom-block"},D1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},B1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"14.515ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 6415.7 1799","aria-hidden":"true"},M1={class:"jldocstring custom-block"},V1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},A1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"32.253ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 14255.9 1799","aria-hidden":"true"},Z1={class:"jldocstring 
custom-block"},O1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},S1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},R1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},N1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"1.464ex",role:"img",focusable:"false",viewBox:"0 -442 490 647","aria-hidden":"true"},z1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},P1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"18.723ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 8275.6 1199","aria-hidden":"true"},I1={class:"jldocstring custom-block"},q1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},G1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.607ex",height:"2.791ex",role:"img",focusable:"false",viewBox:"0 -883.9 17948.2 1233.4","aria-hidden":"true"},X1={class:"jldocstring custom-block"},J1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},U1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"21.053ex",height:"2.791ex",role:"img",focusable:"false",viewBox:"0 -883.9 9305.3 1233.4","aria-hidden":"true"},W1={class:"jldocstring custom-block"},K1={class:"jldocstring custom-block"},Y1={class:"jldocstring 
custom-block"},$1={class:"jldocstring custom-block"},_1={class:"jldocstring custom-block"},ss={class:"jldocstring custom-block"},is={class:"jldocstring custom-block"},as={class:"jldocstring custom-block"},ts={class:"jldocstring custom-block"},es={class:"jldocstring custom-block"},ls={class:"jldocstring custom-block"},ns={class:"jldocstring custom-block"},hs={class:"jldocstring custom-block"},ps={class:"jldocstring custom-block"},rs={class:"jldocstring custom-block"},ds={class:"jldocstring custom-block"},os={class:"jldocstring custom-block"},ks={class:"jldocstring custom-block"},Ts={class:"jldocstring custom-block"},Qs={class:"jldocstring custom-block"},gs={class:"jldocstring custom-block"},ms={class:"jldocstring custom-block"},ys={class:"jldocstring custom-block"},cs={class:"jldocstring custom-block"};function Es(us,i,Fs,xs,Cs,Ls){const e=r("Badge");return h(),n("div",null,[i[293]||(i[293]=s("h1",{id:"utilities",tabindex:"-1"},[a("Utilities "),s("a",{class:"header-anchor",href:"#utilities","aria-label":'Permalink to "Utilities"'},"​")],-1)),i[294]||(i[294]=s("h2",{id:"Training-API",tabindex:"-1"},[a("Training API "),s("a",{class:"header-anchor",href:"#Training-API","aria-label":'Permalink to "Training API {#Training-API}"'},"​")],-1)),i[295]||(i[295]=s("p",null,[a("Helper Functions making it easier to train "),s("code",null,"Lux.jl"),a(" models.")],-1)),i[296]||(i[296]=s("p",null,"Training is meant to be simple and provide extremely basic functionality. We provide basic building blocks which can be seamlessly composed to create complex training pipelines.",-1)),s("details",o,[s("summary",null,[i[0]||(i[0]=s("a",{id:"Lux.Training.TrainState",href:"#Lux.Training.TrainState"},[s("span",{class:"jlbinding"},"Lux.Training.TrainState")],-1)),i[1]||(i[1]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[2]||(i[2]=t('
    julia
    TrainState

    Training State containing:

    Internal fields:

    Warning

    Constructing this object directly shouldn't be considered a stable API. Use the version with the Optimisers API.

    source

    ',7))]),s("details",k,[s("summary",null,[i[3]||(i[3]=s("a",{id:"Lux.Training.TrainState-Tuple{AbstractLuxLayer, Any, Any, AbstractRule}",href:"#Lux.Training.TrainState-Tuple{AbstractLuxLayer, Any, Any, AbstractRule}"},[s("span",{class:"jlbinding"},"Lux.Training.TrainState")],-1)),i[4]||(i[4]=a()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),i[5]||(i[5]=t('
    julia
    TrainState(model::Lux.AbstractLuxLayer, ps, st, optimizer::Optimisers.AbstractRule)

    Constructor for TrainState.

    Arguments

    Returns

    TrainState object.

    source

    ',7))]),s("details",T,[s("summary",null,[i[6]||(i[6]=s("a",{id:"Lux.Training.compute_gradients",href:"#Lux.Training.compute_gradients"},[s("span",{class:"jlbinding"},"Lux.Training.compute_gradients")],-1)),i[7]||(i[7]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=t(`
    julia
    compute_gradients(ad::AbstractADType, objective_function::Function, data,
    ts::TrainState)

    Compute the gradients of the objective function wrt parameters stored in ts.

    Backends & AD Packages

    Supported BackendsPackages Needed
    AutoZygoteZygote.jl
    AutoReverseDiff(; compile)ReverseDiff.jl
    AutoTrackerTracker.jl
    AutoEnzymeEnzyme.jl

    Arguments

    Return

    A 4-Tuple containing:

    Known Limitations

    Aliased Gradients

    grads returned by this function might be aliased by the implementation of the gradient backend. For example, if you cache the grads from step i, the new gradients returned in step i + 1 might be aliased by the old gradients. If you want to prevent this, simply use copy(grads) or deepcopy(grads) to make a copy of the gradients.

    source

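    The aliasing caveat above suggests copying any gradients you intend to keep across steps. A minimal sketch (hypothetical model and data, assuming Lux, Optimisers, and Zygote are available):

    ```julia
    using Lux, Optimisers, Zygote, Random

    # Illustrative model and synthetic batch.
    model = Chain(Dense(2 => 4, tanh), Dense(4 => 1))
    ps, st = Lux.setup(Random.default_rng(), model)
    ts = Training.TrainState(model, ps, st, Descent(0.1f0))

    grads, loss, stats, ts = Training.compute_gradients(
        AutoZygote(), MSELoss(), (rand(Float32, 2, 4), rand(Float32, 1, 4)), ts)

    # The backend may alias `grads` between steps; copy before stashing them.
    saved_grads = deepcopy(grads)
    ts = Training.apply_gradients(ts, grads)
    ```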
    `,13))]),s("details",Q,[s("summary",null,[i[9]||(i[9]=s("a",{id:"Lux.Training.apply_gradients",href:"#Lux.Training.apply_gradients"},[s("span",{class:"jlbinding"},"Lux.Training.apply_gradients")],-1)),i[10]||(i[10]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=t('
    julia
    apply_gradients(ts::TrainState, grads)

    Update the parameters stored in ts using the gradients grads.

    Arguments

    Returns

    Updated TrainState object.

    source

    ',7))]),s("details",g,[s("summary",null,[i[12]||(i[12]=s("a",{id:"Lux.Training.apply_gradients!",href:"#Lux.Training.apply_gradients!"},[s("span",{class:"jlbinding"},"Lux.Training.apply_gradients!")],-1)),i[13]||(i[13]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=t('
    julia
    apply_gradients!(ts::TrainState, grads)

    Update the parameters stored in ts using the gradients grads. This is an in-place version of apply_gradients.

    Arguments

    Returns

    Updated TrainState object.

    source

    ',7))]),s("details",m,[s("summary",null,[i[15]||(i[15]=s("a",{id:"Lux.Training.single_train_step",href:"#Lux.Training.single_train_step"},[s("span",{class:"jlbinding"},"Lux.Training.single_train_step")],-1)),i[16]||(i[16]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[17]||(i[17]=t('
    julia
    single_train_step(backend, obj_fn::F, data, ts::TrainState; return_gradients=True())

    Perform a single training step. Computes the gradients using compute_gradients and updates the parameters using apply_gradients. All backends supported via compute_gradients are supported here.

    In most cases you should use single_train_step! instead of this function.

    Keyword Arguments

    Return

    Returned values are the same as single_train_step!.

    source

    ',8))]),s("details",y,[s("summary",null,[i[18]||(i[18]=s("a",{id:"Lux.Training.single_train_step!",href:"#Lux.Training.single_train_step!"},[s("span",{class:"jlbinding"},"Lux.Training.single_train_step!")],-1)),i[19]||(i[19]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=t('
    julia
    single_train_step!(backend, obj_fn::F, data, ts::TrainState; return_gradients=True())

    Perform a single training step. Computes the gradients using compute_gradients and updates the parameters using apply_gradients!. All backends supported via compute_gradients are supported here.

    Keyword Arguments

    Return

    Returned values are the same as compute_gradients. Note that despite the !, only the parameters in ts are updated in place. Users should keep using the returned ts object for further training steps; otherwise there is no caching and performance will be suboptimal (and absolutely terrible for backends like AutoReactant).

    source

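    The pieces above compose into a basic training loop. A minimal sketch — the model, data, and hyperparameters are illustrative, assuming Zygote is loaded for the AutoZygote backend:

    ```julia
    using Lux, Optimisers, Zygote, Random

    # Hypothetical two-layer model and synthetic data, for illustration only.
    model = Chain(Dense(4 => 16, relu), Dense(16 => 1))
    ps, st = Lux.setup(Random.default_rng(), model)
    x, y = rand(Float32, 4, 32), rand(Float32, 1, 32)

    ts = Training.TrainState(model, ps, st, Adam(0.01f0))

    function train!(ts, data, nsteps)
        for _ in 1:nsteps
            # Parameters in `ts` are updated in place, but caching lives in the
            # returned TrainState — always carry the returned object forward.
            _, loss, _, ts = Training.single_train_step!(
                AutoZygote(), MSELoss(), data, ts)
        end
        return ts
    end

    ts = train!(ts, (x, y), 100)
    ```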
    ',7))]),i[297]||(i[297]=t('

    Loss Functions

    Loss function objects take two forms of inputs:

    1. ŷ and y, where ŷ is the predicted output and y is the target output.

    2. model, ps, st, (x, y) where model is the model, ps are the parameters, st are the states and (x, y) are the input and target pair. Then it returns the loss, updated states, and an empty named tuple. This makes them compatible with the Training API.

    Warning

    When using ChainRules.jl compatible AD (like Zygote), we only compute the gradients wrt the inputs and drop any gradients wrt the targets.

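    The two call forms can be sketched as follows (hypothetical model and data; MSELoss is used as a representative loss object):

    ```julia
    using Lux, Random

    model = Chain(Dense(4 => 8, relu), Dense(8 => 1))
    ps, st = Lux.setup(Random.default_rng(), model)
    x, y = rand(Float32, 4, 8), rand(Float32, 1, 8)

    loss = MSELoss()

    # Form 1: (ŷ, y) — returns a scalar loss.
    ŷ, _ = model(x, ps, st)
    l = loss(ŷ, y)

    # Form 2: (model, ps, st, (x, y)) — returns the loss, the updated states,
    # and an empty NamedTuple, matching the Training API contract.
    l2, st2, stats = loss(model, ps, st, (x, y))
    ```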
    ',4)),s("details",c,[s("summary",null,[i[21]||(i[21]=s("a",{id:"Lux.GenericLossFunction",href:"#Lux.GenericLossFunction"},[s("span",{class:"jlbinding"},"Lux.GenericLossFunction")],-1)),i[22]||(i[22]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[23]||(i[23]=t(`
    julia
    GenericLossFunction(loss_fn; agg = mean)

    Takes any function loss_fn that maps two number inputs to a single number output. Additionally, array inputs are efficiently broadcasted and aggregated using agg.

    julia
    julia> mseloss = GenericLossFunction((ŷ, y) -> abs2(ŷ - y));
    +import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framework.BetCMmtc.js";const bs=JSON.parse('{"title":"Utilities","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/utilities.md","filePath":"api/Lux/utilities.md","lastUpdated":null}'),d={name:"api/Lux/utilities.md"},o={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"46.681ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 20633.1 1199","aria-hidden":"true"},x={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},C={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.631ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 11329 1199","aria-hidden":"true"},L={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 
955","aria-hidden":"true"},b={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},w={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.664ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2945.4 1000","aria-hidden":"true"},H={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},_={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 955","aria-hidden":"true"},v={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},A={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.718ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 10483.3 1000","aria-hidden":"true"},D={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},j={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},V={class:"jldocstring custom-block"},B={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},M={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},Z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},S={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"5.377ex",height:"1.995ex",role:"img",focusable:"false",viewBox:"0 -666 2376.6 
882","aria-hidden":"true"},P={class:"jldocstring custom-block"},I={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},R={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},O={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},N={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.184ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 10247.1 1799","aria-hidden":"true"},z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.919ex",height:"1ex",role:"img",focusable:"false",viewBox:"0 -431 406 442","aria-hidden":"true"},G={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},X={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 955","aria-hidden":"true"},J={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},U={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.664ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2945.4 1000","aria-hidden":"true"},W={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},K={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 
955","aria-hidden":"true"},Y={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},$={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"34.539ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 15266.3 1000","aria-hidden":"true"},s1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},i1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},a1={class:"jldocstring custom-block"},t1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},e1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.308ex"},xmlns:"http://www.w3.org/2000/svg",width:"28.659ex",height:"5.747ex",role:"img",focusable:"false",viewBox:"0 -1520 12667.4 2540","aria-hidden":"true"},l1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},n1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},h1={class:"jldocstring custom-block"},p1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 
1015","aria-hidden":"true"},d1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},o1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},k1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},T1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"5.377ex",height:"1.995ex",role:"img",focusable:"false",viewBox:"0 -666 2376.6 882","aria-hidden":"true"},Q1={class:"jldocstring custom-block"},g1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},m1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"20.065ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 8868.8 1199","aria-hidden":"true"},y1={class:"jldocstring custom-block"},c1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},E1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.607ex",height:"5.428ex",role:"img",focusable:"false",viewBox:"0 -1449.5 17948.3 2399","aria-hidden":"true"},u1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.023ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.005ex",height:"1.645ex",role:"img",focusable:"false",viewBox:"0 -717 444 727","aria-hidden":"true"},x1={class:"jldocstring 
custom-block"},C1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},L1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},f1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},b1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"1.464ex",role:"img",focusable:"false",viewBox:"0 -442 490 647","aria-hidden":"true"},w1={class:"jldocstring custom-block"},H1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},_1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.333ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 5451.1 1199","aria-hidden":"true"},v1={class:"jldocstring custom-block"},A1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},D1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"14.515ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 6415.7 1799","aria-hidden":"true"},j1={class:"jldocstring custom-block"},V1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},B1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"32.253ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 14255.9 1799","aria-hidden":"true"},M1={class:"jldocstring 
custom-block"},Z1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},S1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},P1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},I1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"1.464ex",role:"img",focusable:"false",viewBox:"0 -442 490 647","aria-hidden":"true"},R1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},O1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"18.723ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 8275.6 1199","aria-hidden":"true"},N1={class:"jldocstring custom-block"},z1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},q1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.607ex",height:"2.791ex",role:"img",focusable:"false",viewBox:"0 -883.9 17948.2 1233.4","aria-hidden":"true"},G1={class:"jldocstring custom-block"},X1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},J1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"21.053ex",height:"2.791ex",role:"img",focusable:"false",viewBox:"0 -883.9 9305.3 1233.4","aria-hidden":"true"},U1={class:"jldocstring custom-block"},W1={class:"jldocstring custom-block"},K1={class:"jldocstring 
custom-block"},Y1={class:"jldocstring custom-block"},$1={class:"jldocstring custom-block"},ss={class:"jldocstring custom-block"},is={class:"jldocstring custom-block"},as={class:"jldocstring custom-block"},ts={class:"jldocstring custom-block"},es={class:"jldocstring custom-block"},ls={class:"jldocstring custom-block"},ns={class:"jldocstring custom-block"},hs={class:"jldocstring custom-block"},ps={class:"jldocstring custom-block"},rs={class:"jldocstring custom-block"},ds={class:"jldocstring custom-block"},os={class:"jldocstring custom-block"},ks={class:"jldocstring custom-block"},Ts={class:"jldocstring custom-block"},Qs={class:"jldocstring custom-block"},gs={class:"jldocstring custom-block"},ms={class:"jldocstring custom-block"},ys={class:"jldocstring custom-block"},cs={class:"jldocstring custom-block"};function Es(us,i,Fs,xs,Cs,Ls){const e=r("Badge");return h(),n("div",null,[i[293]||(i[293]=s("h1",{id:"utilities",tabindex:"-1"},[a("Utilities "),s("a",{class:"header-anchor",href:"#utilities","aria-label":'Permalink to "Utilities"'},"​")],-1)),i[294]||(i[294]=s("h2",{id:"Training-API",tabindex:"-1"},[a("Training API "),s("a",{class:"header-anchor",href:"#Training-API","aria-label":'Permalink to "Training API {#Training-API}"'},"​")],-1)),i[295]||(i[295]=s("p",null,[a("Helper Functions making it easier to train "),s("code",null,"Lux.jl"),a(" models.")],-1)),i[296]||(i[296]=s("p",null,"Training is meant to be simple and provide extremely basic functionality. We provide basic building blocks which can be seamlessly composed to create complex training pipelines.",-1)),s("details",o,[s("summary",null,[i[0]||(i[0]=s("a",{id:"Lux.Training.TrainState",href:"#Lux.Training.TrainState"},[s("span",{class:"jlbinding"},"Lux.Training.TrainState")],-1)),i[1]||(i[1]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[2]||(i[2]=t('
    julia
    TrainState

    Training State containing:

    • model: Lux model.

    • parameters: Trainable Variables of the model.

    • states: Non-trainable Variables of the model.

    • optimizer: Optimizer from Optimisers.jl.

    • optimizer_state: Optimizer State.

• step: Number of parameter updates performed.

    Internal fields:

    • cache: Cached values. Implementations are free to use this for whatever they want.

• objective_function: The objective function, which implementations may cache here.

    Warning

Constructing this object directly is not considered a stable API. Use the constructor that takes an Optimisers.jl rule instead.

    source

    ',7))]),s("details",k,[s("summary",null,[i[3]||(i[3]=s("a",{id:"Lux.Training.TrainState-Tuple{AbstractLuxLayer, Any, Any, AbstractRule}",href:"#Lux.Training.TrainState-Tuple{AbstractLuxLayer, Any, Any, AbstractRule}"},[s("span",{class:"jlbinding"},"Lux.Training.TrainState")],-1)),i[4]||(i[4]=a()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),i[5]||(i[5]=t('
    julia
    TrainState(model::Lux.AbstractLuxLayer, ps, st, optimizer::Optimisers.AbstractRule)

    Constructor for TrainState.

    Arguments

    • ps: Parameters of the model.

    • st: States of the model.

    • model: Lux model.

    • optimizer: Optimizer from Optimisers.jl.

    Returns

    TrainState object.

    source

    ',7))]),s("details",T,[s("summary",null,[i[6]||(i[6]=s("a",{id:"Lux.Training.compute_gradients",href:"#Lux.Training.compute_gradients"},[s("span",{class:"jlbinding"},"Lux.Training.compute_gradients")],-1)),i[7]||(i[7]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=t(`
    julia
    compute_gradients(ad::AbstractADType, objective_function::Function, data,
    +    ts::TrainState)

    Compute the gradients of the objective function wrt parameters stored in ts.

    Backends & AD Packages

    Supported BackendsPackages Needed
    AutoZygoteZygote.jl
    AutoReverseDiff(; compile)ReverseDiff.jl
    AutoTrackerTracker.jl
    AutoEnzymeEnzyme.jl

    Arguments

    • ad: Backend (from ADTypes.jl) used to compute the gradients.

    • objective_function: Objective function. The function must take 4 inputs – model, parameters, states and data. The function must return 3 values – loss, updated_state, and any computed statistics.

    • data: Data used to compute the gradients.

    • ts: Current Training State. See TrainState.

    Return

    A 4-Tuple containing:

    • grads: Computed Gradients.

    • loss: Loss from the objective function.

    • stats: Any computed statistics from the objective function.

    • ts: Updated Training State.

    Known Limitations

• AutoReverseDiff(; compile=true) is not supported for Lux models with non-empty state st. Additionally, the returned stats must be empty (NamedTuple()). We catch these issues in most cases and throw an error.

    Aliased Gradients

    grads returned by this function might be aliased by the implementation of the gradient backend. For example, if you cache the grads from step i, the new gradients returned in step i + 1 might be aliased by the old gradients. If you want to prevent this, simply use copy(grads) or deepcopy(grads) to make a copy of the gradients.

    source

    `,13))]),s("details",Q,[s("summary",null,[i[9]||(i[9]=s("a",{id:"Lux.Training.apply_gradients",href:"#Lux.Training.apply_gradients"},[s("span",{class:"jlbinding"},"Lux.Training.apply_gradients")],-1)),i[10]||(i[10]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=t('
    julia
    apply_gradients(ts::TrainState, grads)

    Update the parameters stored in ts using the gradients grads.

    Arguments

    • ts: TrainState object.

    • grads: Gradients of the loss function wrt ts.params.

    Returns

    Updated TrainState object.

    source

    ',7))]),s("details",g,[s("summary",null,[i[12]||(i[12]=s("a",{id:"Lux.Training.apply_gradients!",href:"#Lux.Training.apply_gradients!"},[s("span",{class:"jlbinding"},"Lux.Training.apply_gradients!")],-1)),i[13]||(i[13]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=t('
    julia
    apply_gradients!(ts::TrainState, grads)

Update the parameters stored in ts using the gradients grads. This is the in-place version of apply_gradients.

    Arguments

    • ts: TrainState object.

    • grads: Gradients of the loss function wrt ts.params.

    Returns

    Updated TrainState object.

    source

    ',7))]),s("details",m,[s("summary",null,[i[15]||(i[15]=s("a",{id:"Lux.Training.single_train_step",href:"#Lux.Training.single_train_step"},[s("span",{class:"jlbinding"},"Lux.Training.single_train_step")],-1)),i[16]||(i[16]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[17]||(i[17]=t('
    julia
    single_train_step(backend, obj_fn::F, data, ts::TrainState; return_gradients=True())

    Perform a single training step. Computes the gradients using compute_gradients and updates the parameters using apply_gradients. All backends supported via compute_gradients are supported here.

    In most cases you should use single_train_step! instead of this function.

    Keyword Arguments

• return_gradients: If True(), the gradients are returned. If False(), the returned gradients are nothing. Defaults to True(). This is only used for the Reactant backend.

    Return

    Returned values are the same as single_train_step!.

    source

    ',8))]),s("details",y,[s("summary",null,[i[18]||(i[18]=s("a",{id:"Lux.Training.single_train_step!",href:"#Lux.Training.single_train_step!"},[s("span",{class:"jlbinding"},"Lux.Training.single_train_step!")],-1)),i[19]||(i[19]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=t('
    julia
    single_train_step!(backend, obj_fn::F, data, ts::TrainState; return_gradients=True())

    Perform a single training step. Computes the gradients using compute_gradients and updates the parameters using apply_gradients!. All backends supported via compute_gradients are supported here.

    Keyword Arguments

• return_gradients: If True(), the gradients are returned. If False(), the returned gradients are nothing. Defaults to True(). This is only used for the Reactant backend.

    Return

Returned values are the same as compute_gradients. Note that despite the !, only the parameters in ts are updated in place. Users should use the returned ts object for further training steps; otherwise caching is lost and performance will be suboptimal (and absolutely terrible for backends like AutoReactant).

    source
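
The pieces above compose into a basic training loop. The following is an illustrative sketch, not part of the docstring: it assumes Zygote.jl and Optimisers.jl are installed and uses their standard exported names (Adam, AutoZygote) together with the TrainState constructor and objective-function contract documented on this page.

julia
using Lux, Optimisers, Random, Zygote

rng = Random.default_rng()
model = Dense(4 => 2)
ps, st = Lux.setup(rng, model)
ts = Lux.Training.TrainState(model, ps, st, Adam(0.001f0))

# Objective function contract: 4 inputs -> (loss, updated_state, stats)
function objective(model, ps, st, (x, y))
    ŷ, st = model(x, ps, st)
    return sum(abs2, ŷ .- y), st, NamedTuple()
end

function train(ts, data, nsteps)
    for _ in 1:nsteps
        # single_train_step! returns (grads, loss, stats, ts);
        # always carry the returned ts into the next step.
        _, loss, _, ts = Lux.Training.single_train_step!(
            AutoZygote(), objective, data, ts)
    end
    return ts
end

x, y = rand(rng, Float32, 4, 32), rand(rng, Float32, 2, 32)
ts = train(ts, (x, y), 10)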

    ',7))]),i[297]||(i[297]=t('

    Loss Functions

Loss function objects accept two forms of inputs:

1. ŷ and y where ŷ is the predicted output and y is the target output.

2. model, ps, st, (x, y) where model is the model, ps are the parameters, st are the states, and (x, y) is the input and target pair. In this form the loss object returns the loss, the updated states, and an empty named tuple, making it compatible with the Training API.

    Warning

    When using ChainRules.jl compatible AD (like Zygote), we only compute the gradients wrt the inputs and drop any gradients wrt the targets.
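
The two call forms can be sketched with GenericLossFunction from this page; the tiny Dense model and random data below are purely illustrative.

julia
using Lux, Random

ℓ = GenericLossFunction((ŷ, y) -> abs2(ŷ - y))

# Form 1: (ŷ, y) -> scalar loss
ℓ([1.1, 1.9, 3.1], 1:3)

# Form 2: (model, ps, st, (x, y)) -> (loss, updated_states, NamedTuple())
model = Dense(3 => 3)
ps, st = Lux.setup(Random.default_rng(), model)
x, y = rand(Float32, 3, 4), rand(Float32, 3, 4)
loss, st_, stats = ℓ(model, ps, st, (x, y))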

    ',4)),s("details",c,[s("summary",null,[i[21]||(i[21]=s("a",{id:"Lux.GenericLossFunction",href:"#Lux.GenericLossFunction"},[s("span",{class:"jlbinding"},"Lux.GenericLossFunction")],-1)),i[22]||(i[22]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[23]||(i[23]=t(`
    julia
    GenericLossFunction(loss_fn; agg = mean)

    Takes any function loss_fn that maps 2 number inputs to a single number output. Additionally, array inputs are efficiently broadcasted and aggregated using agg.

    julia
    julia> mseloss = GenericLossFunction((ŷ, y) -> abs2(ŷ - y));
     
     julia> y_model = [1.1, 1.9, 3.1];
     
 julia> mseloss(y_model, 1:3) ≈ 0.01
    -true

    Special Note

This function brings any of the LossFunctions.jl public functions into the Lux Losses API with efficient aggregation.

    source

    `,6))]),s("details",E,[s("summary",null,[i[24]||(i[24]=s("a",{id:"Lux.BinaryCrossEntropyLoss",href:"#Lux.BinaryCrossEntropyLoss"},[s("span",{class:"jlbinding"},"Lux.BinaryCrossEntropyLoss")],-1)),i[25]||(i[25]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[56]||(i[56]=t(`
    julia
    BinaryCrossEntropyLoss(; agg = mean, epsilon = nothing,
    +true

    Special Note

This function brings any of the LossFunctions.jl public functions into the Lux Losses API with efficient aggregation.

    source

    `,6))]),s("details",E,[s("summary",null,[i[24]||(i[24]=s("a",{id:"Lux.BinaryCrossEntropyLoss",href:"#Lux.BinaryCrossEntropyLoss"},[s("span",{class:"jlbinding"},"Lux.BinaryCrossEntropyLoss")],-1)),i[25]||(i[25]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[56]||(i[56]=t(`
    julia
    BinaryCrossEntropyLoss(; agg = mean, epsilon = nothing,
         label_smoothing::Union{Nothing, Real}=nothing,
    -    logits::Union{Bool, Val}=Val(false))

    Binary Cross Entropy Loss with optional label smoothing and fused logit computation.

    Returns the binary cross entropy loss computed as:

    `,3)),s("ul",null,[s("li",null,[i[28]||(i[28]=s("p",null,[a("If "),s("code",null,"logits"),a(" is either "),s("code",null,"false"),a(" or "),s("code",null,"Val(false)"),a(":")],-1)),s("mjx-container",u,[(h(),n("svg",F,i[26]||(i[26]=[t('',1)]))),i[27]||(i[27]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",null,"−"),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))])]),s("li",null,[i[31]||(i[31]=s("p",null,[a("If 
"),s("code",null,"logits"),a(" is "),s("code",null,"true"),a(" or "),s("code",null,"Val(true)"),a(":")],-1)),s("mjx-container",x,[(h(),n("svg",C,i[29]||(i[29]=[t('',1)]))),i[30]||(i[30]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"l"),s("mi",null,"o"),s("mi",null,"g"),s("mi",null,"σ"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))])])]),s("p",null,[i[38]||(i[38]=a("The value of ")),s("mjx-container",L,[(h(),n("svg",f,i[32]||(i[32]=[t('',1)]))),i[33]||(i[33]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[39]||(i[39]=a(" is computed using label smoothing. If ")),i[40]||(i[40]=s("code",null,"label_smoothing",-1)),i[41]||(i[41]=a(" is ")),i[42]||(i[42]=s("code",null,"nothing",-1)),i[43]||(i[43]=a(", then no label smoothing is applied. If ")),i[44]||(i[44]=s("code",null,"label_smoothing",-1)),i[45]||(i[45]=a(" is a real number ")),s("mjx-container",b,[(h(),n("svg",w,i[34]||(i[34]=[t('',1)]))),i[35]||(i[35]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",null,"∈"),s("mo",{stretchy:"false"},"["),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",{stretchy:"false"},"]")])],-1))]),i[46]||(i[46]=a(", then the value of ")),s("mjx-container",H,[(h(),n("svg",v,i[36]||(i[36]=[t('',1)]))),i[37]||(i[37]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[47]||(i[47]=a(" 
is:"))]),s("mjx-container",j,[(h(),n("svg",D,i[48]||(i[48]=[t('',1)]))),i[49]||(i[49]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"="),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"α"),s("mo",null,"∗"),s("mn",null,"0.5")])],-1))]),s("p",null,[i[52]||(i[52]=a("where ")),s("mjx-container",B,[(h(),n("svg",M,i[50]||(i[50]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 61Q390 53 370 41T325 18T267 -2T203 -11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[51]||(i[51]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[53]||(i[53]=a(" is the value of ")),i[54]||(i[54]=s("code",null,"label_smoothing",-1)),i[55]||(i[55]=a("."))]),i[57]||(i[57]=t(`

    Extended Help

    Example

    julia
    julia> bce = BinaryCrossEntropyLoss();
    +    logits::Union{Bool, Val}=Val(false))

    Binary Cross Entropy Loss with optional label smoothing and fused logit computation.

    Returns the binary cross entropy loss computed as:

    `,3)),s("ul",null,[s("li",null,[i[28]||(i[28]=s("p",null,[a("If "),s("code",null,"logits"),a(" is either "),s("code",null,"false"),a(" or "),s("code",null,"Val(false)"),a(":")],-1)),s("mjx-container",u,[(h(),n("svg",F,i[26]||(i[26]=[t('',1)]))),i[27]||(i[27]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",null,"−"),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))])]),s("li",null,[i[31]||(i[31]=s("p",null,[a("If 
"),s("code",null,"logits"),a(" is "),s("code",null,"true"),a(" or "),s("code",null,"Val(true)"),a(":")],-1)),s("mjx-container",x,[(h(),n("svg",C,i[29]||(i[29]=[t('',1)]))),i[30]||(i[30]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"l"),s("mi",null,"o"),s("mi",null,"g"),s("mi",null,"σ"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))])])]),s("p",null,[i[38]||(i[38]=a("The value of ")),s("mjx-container",L,[(h(),n("svg",f,i[32]||(i[32]=[t('',1)]))),i[33]||(i[33]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[39]||(i[39]=a(" is computed using label smoothing. If ")),i[40]||(i[40]=s("code",null,"label_smoothing",-1)),i[41]||(i[41]=a(" is ")),i[42]||(i[42]=s("code",null,"nothing",-1)),i[43]||(i[43]=a(", then no label smoothing is applied. If ")),i[44]||(i[44]=s("code",null,"label_smoothing",-1)),i[45]||(i[45]=a(" is a real number ")),s("mjx-container",b,[(h(),n("svg",w,i[34]||(i[34]=[t('',1)]))),i[35]||(i[35]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",null,"∈"),s("mo",{stretchy:"false"},"["),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",{stretchy:"false"},"]")])],-1))]),i[46]||(i[46]=a(", then the value of ")),s("mjx-container",H,[(h(),n("svg",_,i[36]||(i[36]=[t('',1)]))),i[37]||(i[37]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[47]||(i[47]=a(" 
is:"))]),s("mjx-container",v,[(h(),n("svg",A,i[48]||(i[48]=[t('',1)]))),i[49]||(i[49]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"="),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"α"),s("mo",null,"∗"),s("mn",null,"0.5")])],-1))]),s("p",null,[i[52]||(i[52]=a("where ")),s("mjx-container",D,[(h(),n("svg",j,i[50]||(i[50]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 61Q390 53 370 41T325 18T267 -2T203 -11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[51]||(i[51]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[53]||(i[53]=a(" is the value of ")),i[54]||(i[54]=s("code",null,"label_smoothing",-1)),i[55]||(i[55]=a("."))]),i[57]||(i[57]=t(`

    Extended Help

    Example

    julia
    julia> bce = BinaryCrossEntropyLoss();
     
     julia> y_bin = Bool[1, 0, 1];
     
    @@ -32,7 +32,7 @@ import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framewo
     julia> logitbce_ls = BinaryCrossEntropyLoss(label_smoothing=0.1, logits=Val(true));
     
     julia> logitbce_ls(y_model, y_bin) > logitbce(y_model, y_bin)
    -true

    source

    `,4))]),s("details",V,[s("summary",null,[i[58]||(i[58]=s("a",{id:"Lux.BinaryFocalLoss",href:"#Lux.BinaryFocalLoss"},[s("span",{class:"jlbinding"},"Lux.BinaryFocalLoss")],-1)),i[59]||(i[59]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[70]||(i[70]=t('
    julia
    BinaryFocalLoss(; gamma = 2, agg = mean, epsilon = nothing)
    ',1)),s("p",null,[i[62]||(i[62]=a("Return the binary focal loss [1]. The model input, ")),s("mjx-container",A,[(h(),n("svg",Z,i[60]||(i[60]=[t('',1)]))),i[61]||(i[61]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[63]||(i[63]=a(", is expected to be normalized (i.e. softmax output)."))]),s("p",null,[i[66]||(i[66]=a("For ")),s("mjx-container",O,[(h(),n("svg",S,i[64]||(i[64]=[t('',1)]))),i[65]||(i[65]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ"),s("mo",null,"="),s("mn",null,"0")])],-1))]),i[67]||(i[67]=a(" this is equivalent to ")),i[68]||(i[68]=s("a",{href:"/dev/api/Lux/utilities#Lux.BinaryCrossEntropyLoss"},[s("code",null,"BinaryCrossEntropyLoss")],-1)),i[69]||(i[69]=a("."))]),i[71]||(i[71]=t(`

    Example

    julia
    julia> y = [0  1  0
    +true

    source

    `,4))]),s("details",V,[s("summary",null,[i[58]||(i[58]=s("a",{id:"Lux.BinaryFocalLoss",href:"#Lux.BinaryFocalLoss"},[s("span",{class:"jlbinding"},"Lux.BinaryFocalLoss")],-1)),i[59]||(i[59]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[70]||(i[70]=t('
    julia
    BinaryFocalLoss(; gamma = 2, agg = mean, epsilon = nothing)
    ',1)),s("p",null,[i[62]||(i[62]=a("Return the binary focal loss [1]. The model input, ")),s("mjx-container",B,[(h(),n("svg",M,i[60]||(i[60]=[t('',1)]))),i[61]||(i[61]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[63]||(i[63]=a(", is expected to be normalized (i.e. softmax output)."))]),s("p",null,[i[66]||(i[66]=a("For ")),s("mjx-container",Z,[(h(),n("svg",S,i[64]||(i[64]=[t('',1)]))),i[65]||(i[65]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ"),s("mo",null,"="),s("mn",null,"0")])],-1))]),i[67]||(i[67]=a(" this is equivalent to ")),i[68]||(i[68]=s("a",{href:"/dev/api/Lux/utilities#Lux.BinaryCrossEntropyLoss"},[s("code",null,"BinaryCrossEntropyLoss")],-1)),i[69]||(i[69]=a("."))]),i[71]||(i[71]=t(`

    Example

    julia
    julia> y = [0  1  0
                 1  0  1];
     
     julia> ŷ = [0.268941  0.5  0.268941
    @@ -42,10 +42,10 @@ import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framewo
     true
     
     julia> BinaryFocalLoss(gamma=0)(ŷ, y)  BinaryCrossEntropyLoss()(ŷ, y)
    -true

    References

    [1] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE international conference on computer vision. 2017.

    source

    `,5))]),s("details",R,[s("summary",null,[i[72]||(i[72]=s("a",{id:"Lux.CrossEntropyLoss",href:"#Lux.CrossEntropyLoss"},[s("span",{class:"jlbinding"},"Lux.CrossEntropyLoss")],-1)),i[73]||(i[73]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[115]||(i[115]=t(`
    julia
    CrossEntropyLoss(;
    +true

    References

    [1] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE international conference on computer vision. 2017.

    source

    `,5))]),s("details",P,[s("summary",null,[i[72]||(i[72]=s("a",{id:"Lux.CrossEntropyLoss",href:"#Lux.CrossEntropyLoss"},[s("span",{class:"jlbinding"},"Lux.CrossEntropyLoss")],-1)),i[73]||(i[73]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[115]||(i[115]=t(`
    julia
    CrossEntropyLoss(;
         agg=mean, epsilon=nothing, dims=1, logits::Union{Bool, Val}=Val(false),
         label_smoothing::Union{Nothing, Real}=nothing
    -)
    `,1)),s("p",null,[i[76]||(i[76]=a("Return the cross entropy loss which is used in multi-class classification tasks. The input, ")),s("mjx-container",N,[(h(),n("svg",z,i[74]||(i[74]=[t('',1)]))),i[75]||(i[75]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[77]||(i[77]=a(", is expected to be normalized (i.e. ")),i[78]||(i[78]=s("code",null,"softmax",-1)),i[79]||(i[79]=a(" output) if ")),i[80]||(i[80]=s("code",null,"logits",-1)),i[81]||(i[81]=a(" is ")),i[82]||(i[82]=s("code",null,"false",-1)),i[83]||(i[83]=a(" or ")),i[84]||(i[84]=s("code",null,"Val(false)",-1)),i[85]||(i[85]=a("."))]),i[116]||(i[116]=s("p",null,"The loss is calculated as:",-1)),s("mjx-container",P,[(h(),n("svg",I,i[86]||(i[86]=[t('',1)]))),i[87]||(i[87]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",null,"−"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),s("p",null,[i[96]||(i[96]=a("where ")),s("mjx-container",q,[(h(),n("svg",G,i[88]||(i[88]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D716",d:"M227 -11Q149 -11 95 41T40 174Q40 262 87 322Q121 367 173 396T287 430Q289 431 329 431H367Q382 426 382 411Q382 385 341 385H325H312Q191 385 154 277L150 265H327Q340 256 340 246Q340 228 320 219H138V217Q128 187 128 143Q128 77 160 52T231 26Q258 26 284 36T326 57T343 68Q350 68 354 58T358 39Q358 36 357 35Q354 31 337 21T289 0T227 -11Z",style:{"stroke-width":"3"}})])])],-1)]))),i[89]||(i[89]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"ϵ")])],-1))]),i[97]||(i[97]=a(" is added for numerical stability. 
The value of ")),s("mjx-container",X,[(h(),n("svg",J,i[90]||(i[90]=[t('',1)]))),i[91]||(i[91]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[98]||(i[98]=a(" is computed using label smoothing. If ")),i[99]||(i[99]=s("code",null,"label_smoothing",-1)),i[100]||(i[100]=a(" is ")),i[101]||(i[101]=s("code",null,"nothing",-1)),i[102]||(i[102]=a(", then no label smoothing is applied. If ")),i[103]||(i[103]=s("code",null,"label_smoothing",-1)),i[104]||(i[104]=a(" is a real number ")),s("mjx-container",U,[(h(),n("svg",W,i[92]||(i[92]=[t('',1)]))),i[93]||(i[93]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",null,"∈"),s("mo",{stretchy:"false"},"["),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",{stretchy:"false"},"]")])],-1))]),i[105]||(i[105]=a(", then the value of ")),s("mjx-container",K,[(h(),n("svg",Y,i[94]||(i[94]=[t('',1)]))),i[95]||(i[95]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[106]||(i[106]=a(" is calculated as:"))]),s("mjx-container",$,[(h(),n("svg",_,i[107]||(i[107]=[t('',1)]))),i[108]||(i[108]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"="),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"α"),s("mo",null,"∗"),s("mtext",null,"size along dim")])],-1))]),s("p",null,[i[111]||(i[111]=a("where ")),s("mjx-container",s1,[(h(),n("svg",i1,i[109]||(i[109]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 
61Q390 53 370 41T325 18T267 -2T203 -11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[110]||(i[110]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[112]||(i[112]=a(" is the value of ")),i[113]||(i[113]=s("code",null,"label_smoothing",-1)),i[114]||(i[114]=a("."))]),i[117]||(i[117]=t(`

    Extended Help

    Example

    julia
    julia> y = [1  0  0  0  1
    +)
    `,1)),s("p",null,[i[76]||(i[76]=a("Return the cross entropy loss which is used in multi-class classification tasks. The input, ")),s("mjx-container",I,[(h(),n("svg",R,i[74]||(i[74]=[t('',1)]))),i[75]||(i[75]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[77]||(i[77]=a(", is expected to be normalized (i.e. ")),i[78]||(i[78]=s("code",null,"softmax",-1)),i[79]||(i[79]=a(" output) if ")),i[80]||(i[80]=s("code",null,"logits",-1)),i[81]||(i[81]=a(" is ")),i[82]||(i[82]=s("code",null,"false",-1)),i[83]||(i[83]=a(" or ")),i[84]||(i[84]=s("code",null,"Val(false)",-1)),i[85]||(i[85]=a("."))]),i[116]||(i[116]=s("p",null,"The loss is calculated as:",-1)),s("mjx-container",O,[(h(),n("svg",N,i[86]||(i[86]=[t('',1)]))),i[87]||(i[87]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",null,"−"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),s("p",null,[i[96]||(i[96]=a("where ")),s("mjx-container",z,[(h(),n("svg",q,i[88]||(i[88]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D716",d:"M227 -11Q149 -11 95 41T40 174Q40 262 87 322Q121 367 173 396T287 430Q289 431 329 431H367Q382 426 382 411Q382 385 341 385H325H312Q191 385 154 277L150 265H327Q340 256 340 246Q340 228 320 219H138V217Q128 187 128 143Q128 77 160 52T231 26Q258 26 284 36T326 57T343 68Q350 68 354 58T358 39Q358 36 357 35Q354 31 337 21T289 0T227 -11Z",style:{"stroke-width":"3"}})])])],-1)]))),i[89]||(i[89]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"ϵ")])],-1))]),i[97]||(i[97]=a(" is added for numerical stability. 
The value of ")),s("mjx-container",G,[(h(),n("svg",X,i[90]||(i[90]=[t('',1)]))),i[91]||(i[91]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[98]||(i[98]=a(" is computed using label smoothing. If ")),i[99]||(i[99]=s("code",null,"label_smoothing",-1)),i[100]||(i[100]=a(" is ")),i[101]||(i[101]=s("code",null,"nothing",-1)),i[102]||(i[102]=a(", then no label smoothing is applied. If ")),i[103]||(i[103]=s("code",null,"label_smoothing",-1)),i[104]||(i[104]=a(" is a real number ")),s("mjx-container",J,[(h(),n("svg",U,i[92]||(i[92]=[t('',1)]))),i[93]||(i[93]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",null,"∈"),s("mo",{stretchy:"false"},"["),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",{stretchy:"false"},"]")])],-1))]),i[105]||(i[105]=a(", then the value of ")),s("mjx-container",W,[(h(),n("svg",K,i[94]||(i[94]=[t('',1)]))),i[95]||(i[95]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[106]||(i[106]=a(" is calculated as:"))]),s("mjx-container",Y,[(h(),n("svg",$,i[107]||(i[107]=[t('',1)]))),i[108]||(i[108]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"="),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"α"),s("mo",null,"∗"),s("mtext",null,"size along dim")])],-1))]),s("p",null,[i[111]||(i[111]=a("where ")),s("mjx-container",s1,[(h(),n("svg",i1,i[109]||(i[109]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 
61Q390 53 370 41T325 18T267 -2T203 -11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[110]||(i[110]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[112]||(i[112]=a(" is the value of ")),i[113]||(i[113]=s("code",null,"label_smoothing",-1)),i[114]||(i[114]=a("."))]),i[117]||(i[117]=t(`

    Extended Help

    Example

    julia
    julia> y = [1  0  0  0  1
                 0  1  0  1  0
                 0  0  1  0  0]
     3×5 Matrix{Int64}:
    @@ -66,7 +66,7 @@ import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framewo
     true
     
     julia> CrossEntropyLoss(label_smoothing=0.15)(y_model, y)  1.5776052f0
    -true

    source

    `,4))]),s("details",a1,[s("summary",null,[i[118]||(i[118]=s("a",{id:"Lux.DiceCoeffLoss",href:"#Lux.DiceCoeffLoss"},[s("span",{class:"jlbinding"},"Lux.DiceCoeffLoss")],-1)),i[119]||(i[119]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[128]||(i[128]=t('
    julia
    DiceCoeffLoss(; smooth = true, agg = mean)

    Return the Dice Coefficient loss [1] which is used in segmentation tasks. The dice coefficient is similar to the F1_score. Loss calculated as:

    ',2)),s("mjx-container",t1,[(h(),n("svg",e1,i[120]||(i[120]=[t('',1)]))),i[121]||(i[121]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"a"),s("mi",null,"g"),s("mi",null,"g"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mfrac",null,[s("mrow",null,[s("mn",null,"2"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"α")]),s("mrow",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("msup",null,[s("mi",null,"y"),s("mn",null,"2")]),s("mo",null,"+"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mn",null,"2")]),s("mo",null,"+"),s("mi",null,"α")])]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),s("p",null,[i[124]||(i[124]=a("where ")),s("mjx-container",l1,[(h(),n("svg",n1,i[122]||(i[122]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 61Q390 53 370 41T325 18T267 -2T203 
-11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[123]||(i[123]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[125]||(i[125]=a(" is the smoothing factor (")),i[126]||(i[126]=s("code",null,"smooth",-1)),i[127]||(i[127]=a(")."))]),i[129]||(i[129]=t(`

    Example

    julia
    julia> y_pred = [1.1, 2.1, 3.1];
    +true

    source

    `,4))]),s("details",a1,[s("summary",null,[i[118]||(i[118]=s("a",{id:"Lux.DiceCoeffLoss",href:"#Lux.DiceCoeffLoss"},[s("span",{class:"jlbinding"},"Lux.DiceCoeffLoss")],-1)),i[119]||(i[119]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[128]||(i[128]=t('
    julia
    DiceCoeffLoss(; smooth = true, agg = mean)

    Return the Dice Coefficient loss [1] which is used in segmentation tasks. The dice coefficient is similar to the F1_score. Loss calculated as:

    ',2)),s("mjx-container",t1,[(h(),n("svg",e1,i[120]||(i[120]=[t('',1)]))),i[121]||(i[121]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"a"),s("mi",null,"g"),s("mi",null,"g"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mfrac",null,[s("mrow",null,[s("mn",null,"2"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"α")]),s("mrow",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("msup",null,[s("mi",null,"y"),s("mn",null,"2")]),s("mo",null,"+"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mn",null,"2")]),s("mo",null,"+"),s("mi",null,"α")])]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),s("p",null,[i[124]||(i[124]=a("where ")),s("mjx-container",l1,[(h(),n("svg",n1,i[122]||(i[122]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 61Q390 53 370 41T325 18T267 -2T203 
-11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[123]||(i[123]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[125]||(i[125]=a(" is the smoothing factor (")),i[126]||(i[126]=s("code",null,"smooth",-1)),i[127]||(i[127]=a(")."))]),i[129]||(i[129]=t(`

    Example

    julia
    julia> y_pred = [1.1, 2.1, 3.1];
     
     julia> DiceCoeffLoss()(y_pred, 1:3)   0.000992391663909964
     true
    @@ -75,7 +75,7 @@ import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framewo
     true
     
     julia> DiceCoeffLoss()(reshape(y_pred, 3, 1), reshape(1:3, 3, 1))  0.000992391663909964
    -true

    References

    [1] Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. "V-Net: Fully convolutional neural networks for volumetric medical image segmentation." 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016.

    source

    `,5))]),s("details",h1,[s("summary",null,[i[130]||(i[130]=s("a",{id:"Lux.FocalLoss",href:"#Lux.FocalLoss"},[s("span",{class:"jlbinding"},"Lux.FocalLoss")],-1)),i[131]||(i[131]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[147]||(i[147]=t('
    julia
    FocalLoss(; gamma = 2, dims = 1, agg = mean, epsilon = nothing)
    ',1)),s("p",null,[i[134]||(i[134]=a("Return the focal loss [1] which can be used in classification tasks with highly imbalanced classes. It down-weights well-classified examples and focuses on hard examples. The input, ")),s("mjx-container",p1,[(h(),n("svg",r1,i[132]||(i[132]=[t('',1)]))),i[133]||(i[133]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[135]||(i[135]=a(", is expected to be normalized (i.e. ")),i[136]||(i[136]=s("code",null,"softmax",-1)),i[137]||(i[137]=a(" output)."))]),s("p",null,[i[142]||(i[142]=a("The modulating factor ")),s("mjx-container",d1,[(h(),n("svg",o1,i[138]||(i[138]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[139]||(i[139]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ")])],-1))]),i[143]||(i[143]=a(", controls the down-weighting strength. For ")),s("mjx-container",k1,[(h(),n("svg",T1,i[140]||(i[140]=[t('',1)]))),i[141]||(i[141]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ"),s("mo",null,"="),s("mn",null,"0")])],-1))]),i[144]||(i[144]=a(" this is equivalent to ")),i[145]||(i[145]=s("a",{href:"/dev/api/Lux/utilities#Lux.CrossEntropyLoss"},[s("code",null,"CrossEntropyLoss")],-1)),i[146]||(i[146]=a("."))]),i[148]||(i[148]=t(`

    Example

    julia
    julia> y = [1  0  0  0  1
    +true

    References

    [1] Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. "V-Net: Fully convolutional neural networks for volumetric medical image segmentation." 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016.

    source

    `,5))]),s("details",h1,[s("summary",null,[i[130]||(i[130]=s("a",{id:"Lux.FocalLoss",href:"#Lux.FocalLoss"},[s("span",{class:"jlbinding"},"Lux.FocalLoss")],-1)),i[131]||(i[131]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[147]||(i[147]=t('
    julia
    FocalLoss(; gamma = 2, dims = 1, agg = mean, epsilon = nothing)
    ',1)),s("p",null,[i[134]||(i[134]=a("Return the focal loss [1] which can be used in classification tasks with highly imbalanced classes. It down-weights well-classified examples and focuses on hard examples. The input, ")),s("mjx-container",p1,[(h(),n("svg",r1,i[132]||(i[132]=[t('',1)]))),i[133]||(i[133]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[135]||(i[135]=a(", is expected to be normalized (i.e. ")),i[136]||(i[136]=s("code",null,"softmax",-1)),i[137]||(i[137]=a(" output)."))]),s("p",null,[i[142]||(i[142]=a("The modulating factor ")),s("mjx-container",d1,[(h(),n("svg",o1,i[138]||(i[138]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[139]||(i[139]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ")])],-1))]),i[143]||(i[143]=a(", controls the down-weighting strength. For ")),s("mjx-container",k1,[(h(),n("svg",T1,i[140]||(i[140]=[t('',1)]))),i[141]||(i[141]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ"),s("mo",null,"="),s("mn",null,"0")])],-1))]),i[144]||(i[144]=a(" this is equivalent to ")),i[145]||(i[145]=s("a",{href:"/dev/api/Lux/utilities#Lux.CrossEntropyLoss"},[s("code",null,"CrossEntropyLoss")],-1)),i[146]||(i[146]=a("."))]),i[148]||(i[148]=t(`

    Example

    julia
    julia> y = [1  0  0  0  1
                 0  1  0  1  0
                 0  0  1  0  0]
     3×5 Matrix{Int64}:
    @@ -90,20 +90,20 @@ import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framewo
      0.665241   0.665241   0.665241   0.665241   0.665241
     
     julia> FocalLoss()(ŷ, y)  1.1277556f0
    -true

    References

    [1] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE International Conference on Computer Vision. 2017.

    source

    `,5))]),s("details",Q1,[s("summary",null,[i[149]||(i[149]=s("a",{id:"Lux.HingeLoss",href:"#Lux.HingeLoss"},[s("span",{class:"jlbinding"},"Lux.HingeLoss")],-1)),i[150]||(i[150]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[153]||(i[153]=t('
    julia
    HingeLoss(; agg = mean)

    Return the hinge loss given the prediction ŷ and the true labels y (containing 1 or -1), calculated as:

    ',2)),s("mjx-container",g1,[(h(),n("svg",m1,i[151]||(i[151]=[t('',1)]))),i[152]||(i[152]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[154]||(i[154]=t(`

    Usually used with classifiers like Support Vector Machines.

    Example

    julia
    julia> loss = HingeLoss();
    +true

    References

    [1] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE International Conference on Computer Vision. 2017.

    source

    `,5))]),s("details",Q1,[s("summary",null,[i[149]||(i[149]=s("a",{id:"Lux.HingeLoss",href:"#Lux.HingeLoss"},[s("span",{class:"jlbinding"},"Lux.HingeLoss")],-1)),i[150]||(i[150]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[153]||(i[153]=t('
    julia
    HingeLoss(; agg = mean)

    Return the hinge loss given the prediction ŷ and the true labels y (containing 1 or -1), calculated as:

    ',2)),s("mjx-container",g1,[(h(),n("svg",m1,i[151]||(i[151]=[t('',1)]))),i[152]||(i[152]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[154]||(i[154]=t(`

    Usually used with classifiers like Support Vector Machines.

    Example

    julia
    julia> loss = HingeLoss();
     
     julia> y_true = [1, -1, 1, 1];
     
     julia> y_pred = [0.1, 0.3, 1, 1.5];
     
     julia> loss(y_pred, y_true)  0.55
    -true

    source

    `,4))]),s("details",y1,[s("summary",null,[i[155]||(i[155]=s("a",{id:"Lux.HuberLoss",href:"#Lux.HuberLoss"},[s("span",{class:"jlbinding"},"Lux.HuberLoss")],-1)),i[156]||(i[156]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[165]||(i[165]=t('
    julia
    HuberLoss(; delta = 1, agg = mean)

    Returns the Huber loss, calculated as:

    ',2)),s("mjx-container",c1,[(h(),n("svg",E1,i[157]||(i[157]=[t('',1)]))),i[158]||(i[158]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"L"),s("mo",null,"="),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"{"),s("mtable",{columnalign:"left left",columnspacing:"1em",rowspacing:".2em"},[s("mtr",null,[s("mtd",null,[s("mn",null,"0.5"),s("mo",null,"∗"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mn",null,"2")])]),s("mtd",null,[s("mtext",null,"if 
"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mo",null,"≤"),s("mi",null,"δ")])]),s("mtr",null,[s("mtd",null,[s("mi",null,"δ"),s("mo",null,"∗"),s("mo",{stretchy:"false"},"("),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mo",null,"−"),s("mn",null,"0.5"),s("mo",null,"∗"),s("mi",null,"δ"),s("mo",{stretchy:"false"},")")]),s("mtd",null,[s("mtext",null,"otherwise")])])]),s("mo",{"data-mjx-texclass":"CLOSE",fence:"true",stretchy:"true",symmetric:"true"})])])],-1))]),s("p",null,[i[161]||(i[161]=a("where ")),s("mjx-container",u1,[(h(),n("svg",F1,i[159]||(i[159]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FF",d:"M195 609Q195 656 227 686T302 717Q319 716 351 709T407 697T433 690Q451 682 451 662Q451 644 438 628T403 612Q382 612 348 641T288 671T249 657T235 628Q235 584 334 463Q401 379 401 292Q401 169 340 80T205 -10H198Q127 -10 83 36T36 153Q36 286 151 382Q191 413 252 434Q252 435 245 449T230 481T214 521T201 566T195 609ZM112 130Q112 83 136 55T204 27Q233 27 256 51T291 111T309 178T316 232Q316 267 309 298T295 344T269 400L259 396Q215 381 183 342T137 256T118 179T112 130Z",style:{"stroke-width":"3"}})])])],-1)]))),i[160]||(i[160]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 
0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"δ")])],-1))]),i[162]||(i[162]=a(" is the ")),i[163]||(i[163]=s("code",null,"delta",-1)),i[164]||(i[164]=a(" parameter."))]),i[166]||(i[166]=t(`

    Example

    julia
    julia> y_model = [1.1, 2.1, 3.1];
    +true

    source

    `,4))]),s("details",y1,[s("summary",null,[i[155]||(i[155]=s("a",{id:"Lux.HuberLoss",href:"#Lux.HuberLoss"},[s("span",{class:"jlbinding"},"Lux.HuberLoss")],-1)),i[156]||(i[156]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[165]||(i[165]=t('
    julia
    HuberLoss(; delta = 1, agg = mean)

    Returns the Huber loss, calculated as:

    ',2)),s("mjx-container",c1,[(h(),n("svg",E1,i[157]||(i[157]=[t('',1)]))),i[158]||(i[158]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"L"),s("mo",null,"="),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"{"),s("mtable",{columnalign:"left left",columnspacing:"1em",rowspacing:".2em"},[s("mtr",null,[s("mtd",null,[s("mn",null,"0.5"),s("mo",null,"∗"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mn",null,"2")])]),s("mtd",null,[s("mtext",null,"if 
"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mo",null,"≤"),s("mi",null,"δ")])]),s("mtr",null,[s("mtd",null,[s("mi",null,"δ"),s("mo",null,"∗"),s("mo",{stretchy:"false"},"("),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mo",null,"−"),s("mn",null,"0.5"),s("mo",null,"∗"),s("mi",null,"δ"),s("mo",{stretchy:"false"},")")]),s("mtd",null,[s("mtext",null,"otherwise")])])]),s("mo",{"data-mjx-texclass":"CLOSE",fence:"true",stretchy:"true",symmetric:"true"})])])],-1))]),s("p",null,[i[161]||(i[161]=a("where ")),s("mjx-container",u1,[(h(),n("svg",F1,i[159]||(i[159]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FF",d:"M195 609Q195 656 227 686T302 717Q319 716 351 709T407 697T433 690Q451 682 451 662Q451 644 438 628T403 612Q382 612 348 641T288 671T249 657T235 628Q235 584 334 463Q401 379 401 292Q401 169 340 80T205 -10H198Q127 -10 83 36T36 153Q36 286 151 382Q191 413 252 434Q252 435 245 449T230 481T214 521T201 566T195 609ZM112 130Q112 83 136 55T204 27Q233 27 256 51T291 111T309 178T316 232Q316 267 309 298T295 344T269 400L259 396Q215 381 183 342T137 256T118 179T112 130Z",style:{"stroke-width":"3"}})])])],-1)]))),i[160]||(i[160]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 
0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"δ")])],-1))]),i[162]||(i[162]=a(" is the ")),i[163]||(i[163]=s("code",null,"delta",-1)),i[164]||(i[164]=a(" parameter."))]),i[166]||(i[166]=t(`

    Example

    julia
    julia> y_model = [1.1, 2.1, 3.1];
     
     julia> HuberLoss()(y_model, 1:3)  0.005000000000000009
     true
     
     julia> HuberLoss(delta=0.05)(y_model, 1:3)  0.003750000000000005
    -true

    source

    `,3))]),s("details",x1,[s("summary",null,[i[167]||(i[167]=s("a",{id:"Lux.KLDivergenceLoss",href:"#Lux.KLDivergenceLoss"},[s("span",{class:"jlbinding"},"Lux.KLDivergenceLoss")],-1)),i[168]||(i[168]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[176]||(i[176]=t('
    julia
    KLDivergenceLoss(; dims = 1, agg = mean, epsilon = nothing, label_smoothing = nothing)
    ',1)),s("p",null,[i[173]||(i[173]=a("Return the Kullback-Leibler Divergence loss between the predicted distribution ")),s("mjx-container",C1,[(h(),n("svg",L1,i[169]||(i[169]=[t('',1)]))),i[170]||(i[170]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[174]||(i[174]=a(" and the true distribution ")),s("mjx-container",f1,[(h(),n("svg",b1,i[171]||(i[171]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D466",d:"M21 287Q21 301 36 335T84 406T158 442Q199 442 224 419T250 355Q248 336 247 334Q247 331 231 288T198 191T182 105Q182 62 196 45T238 27Q261 27 281 38T312 61T339 94Q339 95 344 114T358 173T377 247Q415 397 419 404Q432 431 462 431Q475 431 483 424T494 412T496 403Q496 390 447 193T391 -23Q363 -106 294 -155T156 -205Q111 -205 77 -183T43 -117Q43 -95 50 -80T69 -58T89 -48T106 -45Q150 -45 150 -87Q150 -107 138 -122T115 -142T102 -147L99 -148Q101 -153 118 -160T152 -167H160Q177 -167 186 -165Q219 -156 247 -127T290 -65T313 -9T321 21L315 17Q309 13 296 6T270 -6Q250 -11 231 -11Q185 -11 150 11T104 82Q103 89 103 113Q103 170 138 262T173 379Q173 380 173 381Q173 390 173 393T169 400T158 404H154Q131 404 112 385T82 344T65 302T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),i[172]||(i[172]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y")])],-1))]),i[175]||(i[175]=a(":"))]),i[177]||(i[177]=t(`

    The KL divergence measures how much one probability distribution differs from another. It is always non-negative, and is zero only when the two distributions are equal.

    For epsilon and label_smoothing, see CrossEntropyLoss.

    Example

    julia
    julia> p1 = [1 0; 0 1]
    +true

    source

    `,3))]),s("details",x1,[s("summary",null,[i[167]||(i[167]=s("a",{id:"Lux.KLDivergenceLoss",href:"#Lux.KLDivergenceLoss"},[s("span",{class:"jlbinding"},"Lux.KLDivergenceLoss")],-1)),i[168]||(i[168]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[176]||(i[176]=t('
    julia
    KLDivergenceLoss(; dims = 1, agg = mean, epsilon = nothing, label_smoothing = nothing)
    ',1)),s("p",null,[i[173]||(i[173]=a("Return the Kullback-Leibler Divergence loss between the predicted distribution ")),s("mjx-container",C1,[(h(),n("svg",L1,i[169]||(i[169]=[t('',1)]))),i[170]||(i[170]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[174]||(i[174]=a(" and the true distribution ")),s("mjx-container",f1,[(h(),n("svg",b1,i[171]||(i[171]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D466",d:"M21 287Q21 301 36 335T84 406T158 442Q199 442 224 419T250 355Q248 336 247 334Q247 331 231 288T198 191T182 105Q182 62 196 45T238 27Q261 27 281 38T312 61T339 94Q339 95 344 114T358 173T377 247Q415 397 419 404Q432 431 462 431Q475 431 483 424T494 412T496 403Q496 390 447 193T391 -23Q363 -106 294 -155T156 -205Q111 -205 77 -183T43 -117Q43 -95 50 -80T69 -58T89 -48T106 -45Q150 -45 150 -87Q150 -107 138 -122T115 -142T102 -147L99 -148Q101 -153 118 -160T152 -167H160Q177 -167 186 -165Q219 -156 247 -127T290 -65T313 -9T321 21L315 17Q309 13 296 6T270 -6Q250 -11 231 -11Q185 -11 150 11T104 82Q103 89 103 113Q103 170 138 262T173 379Q173 380 173 381Q173 390 173 393T169 400T158 404H154Q131 404 112 385T82 344T65 302T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),i[172]||(i[172]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y")])],-1))]),i[175]||(i[175]=a(":"))]),i[177]||(i[177]=t(`

    The KL divergence measures how much one probability distribution differs from another. It is always non-negative, and is zero only when the two distributions are equal.

    For epsilon and label_smoothing, see CrossEntropyLoss.

    Example

    julia
    julia> p1 = [1 0; 0 1]
     2×2 Matrix{Int64}:
      1  0
      0  1
    @@ -123,44 +123,44 @@ import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framewo
     0.0
     
     julia> KLDivergenceLoss(; epsilon=0)(p1, p2)
    -Inf

    source

    `,5))]),s("details",w1,[s("summary",null,[i[178]||(i[178]=s("a",{id:"Lux.MAELoss",href:"#Lux.MAELoss"},[s("span",{class:"jlbinding"},"Lux.MAELoss")],-1)),i[179]||(i[179]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[182]||(i[182]=t('
    julia
    MAELoss(; agg = mean)

    Returns the loss corresponding to mean absolute error:

    ',2)),s("mjx-container",H1,[(h(),n("svg",v1,i[180]||(i[180]=[t('',1)]))),i[181]||(i[181]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"|"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",{"data-mjx-texclass":"CLOSE"},"|")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[183]||(i[183]=t(`

    Example

    julia
    julia> loss = MAELoss();
    +Inf

    source

    `,5))]),s("details",w1,[s("summary",null,[i[178]||(i[178]=s("a",{id:"Lux.MAELoss",href:"#Lux.MAELoss"},[s("span",{class:"jlbinding"},"Lux.MAELoss")],-1)),i[179]||(i[179]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[182]||(i[182]=t('
    julia
    MAELoss(; agg = mean)

    Returns the loss corresponding to mean absolute error:

    ',2)),s("mjx-container",H1,[(h(),n("svg",_1,i[180]||(i[180]=[t('',1)]))),i[181]||(i[181]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"|"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",{"data-mjx-texclass":"CLOSE"},"|")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[183]||(i[183]=t(`

    Example

    julia
    julia> loss = MAELoss();
     
     julia> y_model = [1.1, 1.9, 3.1];
     
     julia> loss(y_model, 1:3)  0.1
    -true

    source

    `,3))]),s("details",j1,[s("summary",null,[i[184]||(i[184]=s("a",{id:"Lux.MSELoss",href:"#Lux.MSELoss"},[s("span",{class:"jlbinding"},"Lux.MSELoss")],-1)),i[185]||(i[185]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[188]||(i[188]=t('
    julia
    MSELoss(; agg = mean)

    Returns the loss corresponding to mean squared error:

    ',2)),s("mjx-container",D1,[(h(),n("svg",B1,i[186]||(i[186]=[t('',1)]))),i[187]||(i[187]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("msup",null,[s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[189]||(i[189]=t(`

    Example

    julia
    julia> loss = MSELoss();
    +true

    source

    `,3))]),s("details",v1,[s("summary",null,[i[184]||(i[184]=s("a",{id:"Lux.MSELoss",href:"#Lux.MSELoss"},[s("span",{class:"jlbinding"},"Lux.MSELoss")],-1)),i[185]||(i[185]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[188]||(i[188]=t('
    julia
    MSELoss(; agg = mean)

    Returns the loss corresponding to mean squared error:

    ',2)),s("mjx-container",A1,[(h(),n("svg",D1,i[186]||(i[186]=[t('',1)]))),i[187]||(i[187]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("msup",null,[s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[189]||(i[189]=t(`

    Example

    julia
    julia> loss = MSELoss();
     
     julia> y_model = [1.1, 1.9, 3.1];
     
     julia> loss(y_model, 1:3)  0.01
    -true

    source

    `,3))]),s("details",M1,[s("summary",null,[i[190]||(i[190]=s("a",{id:"Lux.MSLELoss",href:"#Lux.MSLELoss"},[s("span",{class:"jlbinding"},"Lux.MSLELoss")],-1)),i[191]||(i[191]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[194]||(i[194]=t('
    julia
    MSLELoss(; agg = mean, epsilon = nothing)

    Returns the loss corresponding to mean squared logarithmic error:

    ',2)),s("mjx-container",V1,[(h(),n("svg",A1,i[192]||(i[192]=[t('',1)]))),i[193]||(i[193]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("msup",null,[s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",null,"−"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[195]||(i[195]=t(`

    epsilon is added to both y and to prevent taking the logarithm of zero. If epsilon is nothing, then we set it to eps(<type of y and ŷ>).

    Example

    julia
    julia> loss = MSLELoss();
    +true

    source

    `,3))]),s("details",j1,[s("summary",null,[i[190]||(i[190]=s("a",{id:"Lux.MSLELoss",href:"#Lux.MSLELoss"},[s("span",{class:"jlbinding"},"Lux.MSLELoss")],-1)),i[191]||(i[191]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[194]||(i[194]=t('
    julia
    MSLELoss(; agg = mean, epsilon = nothing)

    Returns the loss corresponding to mean squared logarithmic error:

    ',2)),s("mjx-container",V1,[(h(),n("svg",B1,i[192]||(i[192]=[t('',1)]))),i[193]||(i[193]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("msup",null,[s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",null,"−"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[195]||(i[195]=t(`

    epsilon is added to both y and to prevent taking the logarithm of zero. If epsilon is nothing, then we set it to eps(<type of y and ŷ>).

    Example

    julia
    julia> loss = MSLELoss();
     
     julia> loss(Float32[1.1, 2.2, 3.3], 1:3)  0.009084041f0
     true
     
     julia> loss(Float32[0.9, 1.8, 2.7], 1:3)  0.011100831f0
    -true

    source

    `,4))]),s("details",Z1,[s("summary",null,[i[196]||(i[196]=s("a",{id:"Lux.PoissonLoss",href:"#Lux.PoissonLoss"},[s("span",{class:"jlbinding"},"Lux.PoissonLoss")],-1)),i[197]||(i[197]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[207]||(i[207]=t('
    julia
    PoissonLoss(; agg = mean, epsilon = nothing)
    ',1)),s("p",null,[i[202]||(i[202]=a("Return how much the predicted distribution ")),s("mjx-container",O1,[(h(),n("svg",S1,i[198]||(i[198]=[t('',1)]))),i[199]||(i[199]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[203]||(i[203]=a(" diverges from the expected Poisson distribution ")),s("mjx-container",R1,[(h(),n("svg",N1,i[200]||(i[200]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D466",d:"M21 287Q21 301 36 335T84 406T158 442Q199 442 224 419T250 355Q248 336 247 334Q247 331 231 288T198 191T182 105Q182 62 196 45T238 27Q261 27 281 38T312 61T339 94Q339 95 344 114T358 173T377 247Q415 397 419 404Q432 431 462 431Q475 431 483 424T494 412T496 403Q496 390 447 193T391 -23Q363 -106 294 -155T156 -205Q111 -205 77 -183T43 -117Q43 -95 50 -80T69 -58T89 -48T106 -45Q150 -45 150 -87Q150 -107 138 -122T115 -142T102 -147L99 -148Q101 -153 118 -160T152 -167H160Q177 -167 186 -165Q219 -156 247 -127T290 -65T313 -9T321 21L315 17Q309 13 296 6T270 -6Q250 -11 231 -11Q185 -11 150 11T104 82Q103 89 103 113Q103 170 138 262T173 379Q173 380 173 381Q173 390 173 393T169 400T158 404H154Q131 404 112 385T82 344T65 302T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),i[201]||(i[201]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y")])],-1))]),i[204]||(i[204]=a(", calculated as:"))]),s("mjx-container",z1,[(h(),n("svg",P1,i[205]||(i[205]=[t('',1)]))),i[206]||(i[206]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[208]||(i[208]=t(`

    Example

    julia
    julia> y_model = [1, 3, 3];  # data should only take integral values
    +true

    source

    `,4))]),s("details",M1,[s("summary",null,[i[196]||(i[196]=s("a",{id:"Lux.PoissonLoss",href:"#Lux.PoissonLoss"},[s("span",{class:"jlbinding"},"Lux.PoissonLoss")],-1)),i[197]||(i[197]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[207]||(i[207]=t('
    julia
    PoissonLoss(; agg = mean, epsilon = nothing)
    ',1)),s("p",null,[i[202]||(i[202]=a("Return how much the predicted distribution ")),s("mjx-container",Z1,[(h(),n("svg",S1,i[198]||(i[198]=[t('',1)]))),i[199]||(i[199]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[203]||(i[203]=a(" diverges from the expected Poisson distribution ")),s("mjx-container",P1,[(h(),n("svg",I1,i[200]||(i[200]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D466",d:"M21 287Q21 301 36 335T84 406T158 442Q199 442 224 419T250 355Q248 336 247 334Q247 331 231 288T198 191T182 105Q182 62 196 45T238 27Q261 27 281 38T312 61T339 94Q339 95 344 114T358 173T377 247Q415 397 419 404Q432 431 462 431Q475 431 483 424T494 412T496 403Q496 390 447 193T391 -23Q363 -106 294 -155T156 -205Q111 -205 77 -183T43 -117Q43 -95 50 -80T69 -58T89 -48T106 -45Q150 -45 150 -87Q150 -107 138 -122T115 -142T102 -147L99 -148Q101 -153 118 -160T152 -167H160Q177 -167 186 -165Q219 -156 247 -127T290 -65T313 -9T321 21L315 17Q309 13 296 6T270 -6Q250 -11 231 -11Q185 -11 150 11T104 82Q103 89 103 113Q103 170 138 262T173 379Q173 380 173 381Q173 390 173 393T169 400T158 404H154Q131 404 112 385T82 344T65 302T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),i[201]||(i[201]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y")])],-1))]),i[204]||(i[204]=a(", calculated as:"))]),s("mjx-container",R1,[(h(),n("svg",O1,i[205]||(i[205]=[t('',1)]))),i[206]||(i[206]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[208]||(i[208]=t(`

    Example

    julia
    julia> y_model = [1, 3, 3];  # data should only take integral values
     
     julia> PoissonLoss()(y_model, 1:3)  0.502312852219817
    -true

    source

    `,3))]),s("details",I1,[s("summary",null,[i[209]||(i[209]=s("a",{id:"Lux.SiameseContrastiveLoss",href:"#Lux.SiameseContrastiveLoss"},[s("span",{class:"jlbinding"},"Lux.SiameseContrastiveLoss")],-1)),i[210]||(i[210]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[213]||(i[213]=t('
    julia
    SiameseContrastiveLoss(; margin = true, agg = mean)

    Return the contrastive loss [1] which can be useful for training Siamese Networks. It is given by:

    ',2)),s("mjx-container",q1,[(h(),n("svg",G1,i[211]||(i[211]=[t('',1)]))),i[212]||(i[212]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mo",{stretchy:"false"},")"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mn",null,"2")]),s("mo",null,"+"),s("mi",null,"y"),s("mo",null,"∗"),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mtext",null,"margin"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{stretchy:"false"},")"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[214]||(i[214]=t(`

    Specify margin to set the baseline for distance at which pairs are dissimilar.

    Example

    julia
    julia>= [0.5, 1.5, 2.5];
    +true

    source

    `,3))]),s("details",N1,[s("summary",null,[i[209]||(i[209]=s("a",{id:"Lux.SiameseContrastiveLoss",href:"#Lux.SiameseContrastiveLoss"},[s("span",{class:"jlbinding"},"Lux.SiameseContrastiveLoss")],-1)),i[210]||(i[210]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[213]||(i[213]=t('
    julia
    SiameseContrastiveLoss(; margin = true, agg = mean)

    Return the contrastive loss [1] which can be useful for training Siamese Networks. It is given by:

    ',2)),s("mjx-container",z1,[(h(),n("svg",q1,i[211]||(i[211]=[t('',1)]))),i[212]||(i[212]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mo",{stretchy:"false"},")"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mn",null,"2")]),s("mo",null,"+"),s("mi",null,"y"),s("mo",null,"∗"),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mtext",null,"margin"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{stretchy:"false"},")"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[214]||(i[214]=t(`

    Specify margin to set the baseline for distance at which pairs are dissimilar.

    Example

    julia
    julia>= [0.5, 1.5, 2.5];
     
     julia> SiameseContrastiveLoss()(ŷ, 1:3)  -4.833333333333333
     true
     
     julia> SiameseContrastiveLoss(margin=2)(ŷ, 1:3)  -4.0
    -true

    References

    [1] Hadsell, Raia, Sumit Chopra, and Yann LeCun. "Dimensionality reduction by learning an invariant mapping." 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR'06). Vol. 2. IEEE, 2006.

    source

    `,6))]),s("details",X1,[s("summary",null,[i[215]||(i[215]=s("a",{id:"Lux.SquaredHingeLoss",href:"#Lux.SquaredHingeLoss"},[s("span",{class:"jlbinding"},"Lux.SquaredHingeLoss")],-1)),i[216]||(i[216]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[219]||(i[219]=t('
    julia
    SquaredHingeLoss(; agg = mean)

    Return the squared hinge loss loss given the prediction and true labels y (containing 1 or -1); calculated as:

    ',2)),s("mjx-container",J1,[(h(),n("svg",U1,i[217]||(i[217]=[t('',1)]))),i[218]||(i[218]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{stretchy:"false"},")"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[220]||(i[220]=t(`

    Usually used with classifiers like Support Vector Machines.

    Example

    julia
    julia> loss = SquaredHingeLoss();
    +true

    References

    [1] Hadsell, Raia, Sumit Chopra, and Yann LeCun. "Dimensionality reduction by learning an invariant mapping." 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR'06). Vol. 2. IEEE, 2006.

    source

    `,6))]),s("details",G1,[s("summary",null,[i[215]||(i[215]=s("a",{id:"Lux.SquaredHingeLoss",href:"#Lux.SquaredHingeLoss"},[s("span",{class:"jlbinding"},"Lux.SquaredHingeLoss")],-1)),i[216]||(i[216]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[219]||(i[219]=t('
    julia
    SquaredHingeLoss(; agg = mean)

    Return the squared hinge loss loss given the prediction and true labels y (containing 1 or -1); calculated as:

    ',2)),s("mjx-container",X1,[(h(),n("svg",J1,i[217]||(i[217]=[t('',1)]))),i[218]||(i[218]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{stretchy:"false"},")"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[220]||(i[220]=t(`

    Usually used with classifiers like Support Vector Machines.

    Example

    julia
    julia> loss = SquaredHingeLoss();
     
     julia> y_true = [1, -1, 1, 1];
     
     julia> y_pred = [0.1, 0.3, 1, 1.5];
     
     julia> loss(y_pred, y_true)  0.625
    -true

    source

    `,4))]),i[298]||(i[298]=s("h2",{id:"LuxOps-Module",tabindex:"-1"},[a("LuxOps Module "),s("a",{class:"header-anchor",href:"#LuxOps-Module","aria-label":'Permalink to "LuxOps Module {#LuxOps-Module}"'},"​")],-1)),s("details",W1,[s("summary",null,[i[221]||(i[221]=s("a",{id:"Lux.LuxOps",href:"#Lux.LuxOps"},[s("span",{class:"jlbinding"},"Lux.LuxOps")],-1)),i[222]||(i[222]=a()),l(e,{type:"info",class:"jlObjectType jlModule",text:"Module"})]),i[223]||(i[223]=t('
    julia
    LuxOps

    This module is a part of Lux.jl. It contains operations that are useful in DL context. Additionally certain operations here alias Base functions to behave more sensibly with GPUArrays.

    source

    ',3))]),s("details",K1,[s("summary",null,[i[224]||(i[224]=s("a",{id:"Lux.LuxOps.eachslice",href:"#Lux.LuxOps.eachslice"},[s("span",{class:"jlbinding"},"Lux.LuxOps.eachslice")],-1)),i[225]||(i[225]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[226]||(i[226]=t('
    julia
    eachslice(x, dims::Val)

    Same as Base.eachslice but doesn't produce a SubArray for the slices if x is a GPUArray.

    Additional dispatches for RNN helpers are also provided for TimeLastIndex and BatchLastIndex.

    source

    ',4))]),s("details",Y1,[s("summary",null,[i[227]||(i[227]=s("a",{id:"Lux.LuxOps.foldl_init",href:"#Lux.LuxOps.foldl_init"},[s("span",{class:"jlbinding"},"Lux.LuxOps.foldl_init")],-1)),i[228]||(i[228]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[229]||(i[229]=t(`
    julia
    foldl_init(op, x)
    -foldl_init(op, x, init)

    Exactly same as foldl(op, x; init) in the forward pass. But, gives gradients wrt init in the backward pass.

    source

    `,3))]),s("details",$1,[s("summary",null,[i[230]||(i[230]=s("a",{id:"Lux.LuxOps.getproperty",href:"#Lux.LuxOps.getproperty"},[s("span",{class:"jlbinding"},"Lux.LuxOps.getproperty")],-1)),i[231]||(i[231]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[232]||(i[232]=t(`
    julia
    getproperty(x, ::Val{v})
    -getproperty(x, ::StaticSymbol{v})

    Similar to Base.getproperty but requires a Val (or Static.StaticSymbol). Additionally, if v is not present in x, then nothing is returned.

    source

    `,3))]),s("details",_1,[s("summary",null,[i[233]||(i[233]=s("a",{id:"Lux.LuxOps.xlogx",href:"#Lux.LuxOps.xlogx"},[s("span",{class:"jlbinding"},"Lux.LuxOps.xlogx")],-1)),i[234]||(i[234]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[235]||(i[235]=t('
    julia
    xlogx(x::Number)

    Return x * log(x) for x ≥ 0, handling x == 0 by taking the limit from above, to get zero.

    source

    ',3))]),s("details",ss,[s("summary",null,[i[236]||(i[236]=s("a",{id:"Lux.LuxOps.xlogy",href:"#Lux.LuxOps.xlogy"},[s("span",{class:"jlbinding"},"Lux.LuxOps.xlogy")],-1)),i[237]||(i[237]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[238]||(i[238]=t('
    julia
    xlogy(x::Number, y::Number)

    Return x * log(y) for y > 0, and zero when x == 0.

    source

    ',3))]),s("details",is,[s("summary",null,[i[239]||(i[239]=s("a",{id:"Lux.LuxOps.istraining",href:"#Lux.LuxOps.istraining"},[s("span",{class:"jlbinding"},"Lux.LuxOps.istraining")],-1)),i[240]||(i[240]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[241]||(i[241]=t(`
    julia
    istraining(::Val{training})
    +true

    source

    `,4))]),i[298]||(i[298]=s("h2",{id:"LuxOps-Module",tabindex:"-1"},[a("LuxOps Module "),s("a",{class:"header-anchor",href:"#LuxOps-Module","aria-label":'Permalink to "LuxOps Module {#LuxOps-Module}"'},"​")],-1)),s("details",U1,[s("summary",null,[i[221]||(i[221]=s("a",{id:"Lux.LuxOps",href:"#Lux.LuxOps"},[s("span",{class:"jlbinding"},"Lux.LuxOps")],-1)),i[222]||(i[222]=a()),l(e,{type:"info",class:"jlObjectType jlModule",text:"Module"})]),i[223]||(i[223]=t('
    julia
    LuxOps

    This module is a part of Lux.jl. It contains operations that are useful in DL context. Additionally certain operations here alias Base functions to behave more sensibly with GPUArrays.

    source

    ',3))]),s("details",W1,[s("summary",null,[i[224]||(i[224]=s("a",{id:"Lux.LuxOps.eachslice",href:"#Lux.LuxOps.eachslice"},[s("span",{class:"jlbinding"},"Lux.LuxOps.eachslice")],-1)),i[225]||(i[225]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[226]||(i[226]=t('
    julia
    eachslice(x, dims::Val)

    Same as Base.eachslice but doesn't produce a SubArray for the slices if x is a GPUArray.

    Additional dispatches for RNN helpers are also provided for TimeLastIndex and BatchLastIndex.

    source

    ',4))]),s("details",K1,[s("summary",null,[i[227]||(i[227]=s("a",{id:"Lux.LuxOps.foldl_init",href:"#Lux.LuxOps.foldl_init"},[s("span",{class:"jlbinding"},"Lux.LuxOps.foldl_init")],-1)),i[228]||(i[228]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[229]||(i[229]=t(`
    julia
    foldl_init(op, x)
    +foldl_init(op, x, init)

    Exactly same as foldl(op, x; init) in the forward pass. But, gives gradients wrt init in the backward pass.

    source

    `,3))]),s("details",Y1,[s("summary",null,[i[230]||(i[230]=s("a",{id:"Lux.LuxOps.getproperty",href:"#Lux.LuxOps.getproperty"},[s("span",{class:"jlbinding"},"Lux.LuxOps.getproperty")],-1)),i[231]||(i[231]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[232]||(i[232]=t(`
    julia
    getproperty(x, ::Val{v})
    +getproperty(x, ::StaticSymbol{v})

    Similar to Base.getproperty but requires a Val (or Static.StaticSymbol). Additionally, if v is not present in x, then nothing is returned.

    source

    `,3))]),s("details",$1,[s("summary",null,[i[233]||(i[233]=s("a",{id:"Lux.LuxOps.xlogx",href:"#Lux.LuxOps.xlogx"},[s("span",{class:"jlbinding"},"Lux.LuxOps.xlogx")],-1)),i[234]||(i[234]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[235]||(i[235]=t('
    julia
    xlogx(x::Number)

    Return x * log(x) for x ≥ 0, handling x == 0 by taking the limit from above, to get zero.

    source

    ',3))]),s("details",ss,[s("summary",null,[i[236]||(i[236]=s("a",{id:"Lux.LuxOps.xlogy",href:"#Lux.LuxOps.xlogy"},[s("span",{class:"jlbinding"},"Lux.LuxOps.xlogy")],-1)),i[237]||(i[237]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[238]||(i[238]=t('
    julia
    xlogy(x::Number, y::Number)

    Return x * log(y) for y > 0, and zero when x == 0.

    source

    ',3))]),s("details",is,[s("summary",null,[i[239]||(i[239]=s("a",{id:"Lux.LuxOps.istraining",href:"#Lux.LuxOps.istraining"},[s("span",{class:"jlbinding"},"Lux.LuxOps.istraining")],-1)),i[240]||(i[240]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[241]||(i[241]=t(`
    julia
    istraining(::Val{training})
     istraining(::StaticBool)
     istraining(::Bool)
    -istraining(st::NamedTuple)

    Returns true if training is true or if st contains a training field with value true. Else returns false.

    source

    `,3))]),s("details",as,[s("summary",null,[i[242]||(i[242]=s("a",{id:"Lux.LuxOps.multigate",href:"#Lux.LuxOps.multigate"},[s("span",{class:"jlbinding"},"Lux.LuxOps.multigate")],-1)),i[243]||(i[243]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[244]||(i[244]=t('
    julia
    multigate(x::AbstractArray, ::Val{N})

    Split up x into N equally sized chunks (along dimension 1).

    source

    ',3))]),i[299]||(i[299]=s("h2",{id:"Recursive-Operations",tabindex:"-1"},[a("Recursive Operations "),s("a",{class:"header-anchor",href:"#Recursive-Operations","aria-label":'Permalink to "Recursive Operations {#Recursive-Operations}"'},"​")],-1)),s("details",ts,[s("summary",null,[i[245]||(i[245]=s("a",{id:"Lux.recursive_map",href:"#Lux.recursive_map"},[s("span",{class:"jlbinding"},"Lux.recursive_map")],-1)),i[246]||(i[246]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[247]||(i[247]=t('
    julia
    recursive_map(f, x, args...)

    Similar to fmap(f, args...) but with restricted support for the notion of "leaf" types. However, this allows for more efficient and type stable implementations of recursive operations.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    How this works?

    For the following types it directly defines recursion rules:

    1. AbstractArray: If eltype is isbitstype, then f is applied to the array, else we recurse on the array.

    2. Tuple/NamedTuple: We recurse on the values.

    3. Number/Val/Nothing: We directly apply f.

    4. For all other types, we recurse on the fields using Functors.fmap.

    Note

    In most cases, users should gravitate towards Functors.fmap if it is being used outside of hot loops. Even for other cases, it is always recommended to verify the correctness of this implementation for specific usecases.

    source

    ',8))]),s("details",es,[s("summary",null,[i[248]||(i[248]=s("a",{id:"Lux.recursive_add!!",href:"#Lux.recursive_add!!"},[s("span",{class:"jlbinding"},"Lux.recursive_add!!")],-1)),i[249]||(i[249]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[250]||(i[250]=t('
    julia
    recursive_add!!(x, y)

    Recursively add the leaves of two nested structures x and y. In Functor language, this is equivalent to doing fmap(+, x, y), but this implementation uses type stable code for common cases.

    Any leaves of x that are arrays and allow in-place addition will be modified in place.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',5))]),s("details",ls,[s("summary",null,[i[251]||(i[251]=s("a",{id:"Lux.recursive_copyto!",href:"#Lux.recursive_copyto!"},[s("span",{class:"jlbinding"},"Lux.recursive_copyto!")],-1)),i[252]||(i[252]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[253]||(i[253]=t('
    julia
    recursive_copyto!(x, y)

    Recursively copy the leaves of two nested structures x and y. In Functor language, this is equivalent to doing fmap(copyto!, x, y), but this implementation uses type stable code for common cases. Note that any immutable leaf will lead to an error.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',4))]),s("details",ns,[s("summary",null,[i[254]||(i[254]=s("a",{id:"Lux.recursive_eltype",href:"#Lux.recursive_eltype"},[s("span",{class:"jlbinding"},"Lux.recursive_eltype")],-1)),i[255]||(i[255]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[256]||(i[256]=t('
    julia
    recursive_eltype(x, unwrap_ad_types = Val(false))

    Recursively determine the element type of a nested structure x. This is equivalent to doing fmap(Lux.Utils.eltype, x), but this implementation uses type stable code for common cases.

    For ambiguous inputs like nothing and Val types we return Bool as the eltype.

    If unwrap_ad_types is set to Val(true) then for tracing and operator overloading based ADs (ForwardDiff, ReverseDiff, Tracker), this function will return the eltype of the unwrapped value.

    source

    ',5))]),s("details",hs,[s("summary",null,[i[257]||(i[257]=s("a",{id:"Lux.recursive_make_zero",href:"#Lux.recursive_make_zero"},[s("span",{class:"jlbinding"},"Lux.recursive_make_zero")],-1)),i[258]||(i[258]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[259]||(i[259]=t('
    julia
    recursive_make_zero(x)

    Recursively create a zero value for a nested structure x. This is equivalent to doing fmap(zero, x), but this implementation uses type stable code for common cases.

    See also Lux.recursive_make_zero!!.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',5))]),s("details",ps,[s("summary",null,[i[260]||(i[260]=s("a",{id:"Lux.recursive_make_zero!!",href:"#Lux.recursive_make_zero!!"},[s("span",{class:"jlbinding"},"Lux.recursive_make_zero!!")],-1)),i[261]||(i[261]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[262]||(i[262]=t('
    julia
    recursive_make_zero!!(x)

    Recursively create a zero value for a nested structure x. Leaves that can be mutated with in-place zeroing will be modified in place.

    See also Lux.recursive_make_zero for the fully out-of-place version.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',5))]),i[300]||(i[300]=s("h2",{id:"Updating-Floating-Point-Precision",tabindex:"-1"},[a("Updating Floating Point Precision "),s("a",{class:"header-anchor",href:"#Updating-Floating-Point-Precision","aria-label":'Permalink to "Updating Floating Point Precision {#Updating-Floating-Point-Precision}"'},"​")],-1)),i[301]||(i[301]=s("p",null,"By default, Lux uses Float32 for all parameters and states. To update the precision simply pass the parameters / states / arrays into one of the following functions.",-1)),s("details",rs,[s("summary",null,[i[263]||(i[263]=s("a",{id:"Lux.f16",href:"#Lux.f16"},[s("span",{class:"jlbinding"},"Lux.f16")],-1)),i[264]||(i[264]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[265]||(i[265]=t('
    julia
    f16(m)

    Converts the eltype of m floating point values to Float16. To avoid recursion into structs mark them with Functors.@leaf.

    source

    ',3))]),s("details",ds,[s("summary",null,[i[266]||(i[266]=s("a",{id:"Lux.f32",href:"#Lux.f32"},[s("span",{class:"jlbinding"},"Lux.f32")],-1)),i[267]||(i[267]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[268]||(i[268]=t('
    julia
    f32(m)

    Converts the eltype of m floating point values to Float32. To avoid recursion into structs mark them with Functors.@leaf.

    source

    ',3))]),s("details",os,[s("summary",null,[i[269]||(i[269]=s("a",{id:"Lux.f64",href:"#Lux.f64"},[s("span",{class:"jlbinding"},"Lux.f64")],-1)),i[270]||(i[270]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[271]||(i[271]=t('
    julia
    f64(m)

    Converts the eltype of m floating point values to Float64. To avoid recursion into structs mark them with Functors.@leaf.

    source

    ',3))]),s("details",ks,[s("summary",null,[i[272]||(i[272]=s("a",{id:"Lux.bf16",href:"#Lux.bf16"},[s("span",{class:"jlbinding"},"Lux.bf16")],-1)),i[273]||(i[273]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[274]||(i[274]=t('
    julia
    bf16(m)

    Converts the eltype of m floating point values to BFloat16. To avoid recursion into structs mark them with Functors.@leaf.

    Warning

    BFloat16s.jl needs to be loaded before using this function.

    Support for BFloat16

    Most Lux operations aren't optimized for BFloat16 yet. Instead this is meant to be used together with Reactant.@compile.

    source

    ',5))]),i[302]||(i[302]=s("h2",{id:"Element-Type-Matching",tabindex:"-1"},[a("Element Type Matching "),s("a",{class:"header-anchor",href:"#Element-Type-Matching","aria-label":'Permalink to "Element Type Matching {#Element-Type-Matching}"'},"​")],-1)),s("details",Ts,[s("summary",null,[i[275]||(i[275]=s("a",{id:"Lux.match_eltype",href:"#Lux.match_eltype"},[s("span",{class:"jlbinding"},"Lux.match_eltype")],-1)),i[276]||(i[276]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[277]||(i[277]=t('
    julia
    match_eltype(layer, ps, st, args...)

    Helper function to "maybe" (see below) match the element type of args... with the element type of the layer's parameters and states. This is useful for debugging purposes, to track down accidental type-promotions inside Lux layers.

    Extended Help

    Controlling the Behavior via Preferences

    Behavior of this function is controlled via the eltype_mismatch_handling preference. The following options are supported:

    • "none": This is the default behavior. In this case, this function is a no-op, i.e., it simply returns args....

    • "warn": This option will issue a warning if the element type of args... does not match the element type of the layer's parameters and states. The warning will contain information about the layer and the element type mismatch.

    • "convert": This option is the same as "warn", but it will also convert the element type of args... to match the element type of the layer's parameters and states (for the cases listed below).

    • "error": Same as "warn", but instead of issuing a warning, it will throw an error.

    Warning

    We print the warning for type-mismatch only once.

    Element Type Conversions

    For "convert" only the following conversions are done:

    Element Type of parameters/states    Element Type of args...    Converted to
    Float64                              Integer                    Float64
    Float32                              Float64                    Float32
    Float32                              Integer                    Float32
    Float16                              Float64                    Float16
    Float16                              Float32                    Float16
    Float16                              Integer                    Float16

    source

    ',11))]),i[303]||(i[303]=s("h2",{id:"Stateful-Layer",tabindex:"-1"},[a("Stateful Layer "),s("a",{class:"header-anchor",href:"#Stateful-Layer","aria-label":'Permalink to "Stateful Layer {#Stateful-Layer}"'},"​")],-1)),s("details",Qs,[s("summary",null,[i[278]||(i[278]=s("a",{id:"Lux.StatefulLuxLayer",href:"#Lux.StatefulLuxLayer"},[s("span",{class:"jlbinding"},"Lux.StatefulLuxLayer")],-1)),i[279]||(i[279]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[280]||(i[280]=t('
    julia
    StatefulLuxLayer{FT}(model, ps, st)

    Warning

    This is not a Lux.AbstractLuxLayer

    A convenience wrapper over Lux layers that stores the parameters and states internally. This is meant to be used in the internal implementation of layers.

    Usecases

    • Internal implementation of @compact heavily uses this layer.

    • In SciML codebases where propagating state might involve boxing. For a motivating example, see the Neural ODE tutorial.

    • Facilitates Nested AD support in Lux. For more details on this feature, see the Nested AD Manual Page.

    Static Parameters

    • If FT = true then the type of the state is fixed, i.e., typeof(last(model(x, ps, st))) == typeof(st).

    • If FT = false then the type of the state might change. Note that while this works in all cases, it will introduce type instability.

    Arguments

    • model: A Lux layer

    • ps: The parameters of the layer. This can be set to nothing, if the user provides the parameters on function call

    • st: The state of the layer

    Inputs

    • x: The input to the layer

    • ps: The parameters of the layer. Optional, defaults to s.ps

    Outputs

    • y: The output of the layer

    source

    ',14))]),i[304]||(i[304]=s("h2",{id:"Compact-Layer",tabindex:"-1"},[a("Compact Layer "),s("a",{class:"header-anchor",href:"#Compact-Layer","aria-label":'Permalink to "Compact Layer {#Compact-Layer}"'},"​")],-1)),s("details",gs,[s("summary",null,[i[281]||(i[281]=s("a",{id:"Lux.@compact",href:"#Lux.@compact"},[s("span",{class:"jlbinding"},"Lux.@compact")],-1)),i[282]||(i[282]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[283]||(i[283]=t(`
    julia
    @compact(kw...) do x
    +istraining(st::NamedTuple)

    Returns true if training is true or if st contains a training field with value true. Else returns false.

    source

    `,3))]),s("details",as,[s("summary",null,[i[242]||(i[242]=s("a",{id:"Lux.LuxOps.multigate",href:"#Lux.LuxOps.multigate"},[s("span",{class:"jlbinding"},"Lux.LuxOps.multigate")],-1)),i[243]||(i[243]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[244]||(i[244]=t('
    julia
    multigate(x::AbstractArray, ::Val{N})

    Split up x into N equally sized chunks (along dimension 1).

    source

    ',3))]),i[299]||(i[299]=s("h2",{id:"Recursive-Operations",tabindex:"-1"},[a("Recursive Operations "),s("a",{class:"header-anchor",href:"#Recursive-Operations","aria-label":'Permalink to "Recursive Operations {#Recursive-Operations}"'},"​")],-1)),s("details",ts,[s("summary",null,[i[245]||(i[245]=s("a",{id:"Lux.recursive_map",href:"#Lux.recursive_map"},[s("span",{class:"jlbinding"},"Lux.recursive_map")],-1)),i[246]||(i[246]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[247]||(i[247]=t('
    julia
    recursive_map(f, x, args...)

    Similar to fmap(f, args...) but with restricted support for the notion of "leaf" types. However, this allows for more efficient and type stable implementations of recursive operations.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    How this works?

    For the following types it directly defines recursion rules:

    1. AbstractArray: If eltype is isbitstype, then f is applied to the array, else we recurse on the array.

    2. Tuple/NamedTuple: We recurse on the values.

    3. Number/Val/Nothing: We directly apply f.

    4. For all other types, we recurse on the fields using Functors.fmap.

    Note

    In most cases, users should gravitate towards Functors.fmap if it is being used outside of hot loops. Even for other cases, it is always recommended to verify the correctness of this implementation for specific usecases.

    source

    ',8))]),s("details",es,[s("summary",null,[i[248]||(i[248]=s("a",{id:"Lux.recursive_add!!",href:"#Lux.recursive_add!!"},[s("span",{class:"jlbinding"},"Lux.recursive_add!!")],-1)),i[249]||(i[249]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[250]||(i[250]=t('
    julia
    recursive_add!!(x, y)

    Recursively add the leaves of two nested structures x and y. In Functor language, this is equivalent to doing fmap(+, x, y), but this implementation uses type stable code for common cases.

    Any leaves of x that are arrays and allow in-place addition will be modified in place.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',5))]),s("details",ls,[s("summary",null,[i[251]||(i[251]=s("a",{id:"Lux.recursive_copyto!",href:"#Lux.recursive_copyto!"},[s("span",{class:"jlbinding"},"Lux.recursive_copyto!")],-1)),i[252]||(i[252]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[253]||(i[253]=t('
    julia
    recursive_copyto!(x, y)

    Recursively copy the leaves of two nested structures x and y. In Functor language, this is equivalent to doing fmap(copyto!, x, y), but this implementation uses type stable code for common cases. Note that any immutable leaf will lead to an error.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',4))]),s("details",ns,[s("summary",null,[i[254]||(i[254]=s("a",{id:"Lux.recursive_eltype",href:"#Lux.recursive_eltype"},[s("span",{class:"jlbinding"},"Lux.recursive_eltype")],-1)),i[255]||(i[255]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[256]||(i[256]=t('
    julia
    recursive_eltype(x, unwrap_ad_types = Val(false))

    Recursively determine the element type of a nested structure x. This is equivalent to doing fmap(Lux.Utils.eltype, x), but this implementation uses type stable code for common cases.

    For ambiguous inputs like nothing and Val types we return Bool as the eltype.

    If unwrap_ad_types is set to Val(true) then for tracing and operator overloading based ADs (ForwardDiff, ReverseDiff, Tracker), this function will return the eltype of the unwrapped value.

    source

    ',5))]),s("details",hs,[s("summary",null,[i[257]||(i[257]=s("a",{id:"Lux.recursive_make_zero",href:"#Lux.recursive_make_zero"},[s("span",{class:"jlbinding"},"Lux.recursive_make_zero")],-1)),i[258]||(i[258]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[259]||(i[259]=t('
    julia
    recursive_make_zero(x)

    Recursively create a zero value for a nested structure x. This is equivalent to doing fmap(zero, x), but this implementation uses type stable code for common cases.

    See also Lux.recursive_make_zero!!.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',5))]),s("details",ps,[s("summary",null,[i[260]||(i[260]=s("a",{id:"Lux.recursive_make_zero!!",href:"#Lux.recursive_make_zero!!"},[s("span",{class:"jlbinding"},"Lux.recursive_make_zero!!")],-1)),i[261]||(i[261]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[262]||(i[262]=t('
    julia
    recursive_make_zero!!(x)

    Recursively create a zero value for a nested structure x. Leaves that can be mutated with in-place zeroing will be modified in place.

    See also Lux.recursive_make_zero for the fully out-of-place version.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',5))]),i[300]||(i[300]=s("h2",{id:"Updating-Floating-Point-Precision",tabindex:"-1"},[a("Updating Floating Point Precision "),s("a",{class:"header-anchor",href:"#Updating-Floating-Point-Precision","aria-label":'Permalink to "Updating Floating Point Precision {#Updating-Floating-Point-Precision}"'},"​")],-1)),i[301]||(i[301]=s("p",null,"By default, Lux uses Float32 for all parameters and states. To update the precision simply pass the parameters / states / arrays into one of the following functions.",-1)),s("details",rs,[s("summary",null,[i[263]||(i[263]=s("a",{id:"Lux.f16",href:"#Lux.f16"},[s("span",{class:"jlbinding"},"Lux.f16")],-1)),i[264]||(i[264]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[265]||(i[265]=t('
    julia
    f16(m)

    Converts the eltype of m floating point values to Float16. To avoid recursion into structs mark them with Functors.@leaf.

    source

    ',3))]),s("details",ds,[s("summary",null,[i[266]||(i[266]=s("a",{id:"Lux.f32",href:"#Lux.f32"},[s("span",{class:"jlbinding"},"Lux.f32")],-1)),i[267]||(i[267]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[268]||(i[268]=t('
    julia
    f32(m)

    Converts the eltype of m floating point values to Float32. To avoid recursion into structs mark them with Functors.@leaf.

    source

    ',3))]),s("details",os,[s("summary",null,[i[269]||(i[269]=s("a",{id:"Lux.f64",href:"#Lux.f64"},[s("span",{class:"jlbinding"},"Lux.f64")],-1)),i[270]||(i[270]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[271]||(i[271]=t('
    julia
    f64(m)

    Converts the eltype of m floating point values to Float64. To avoid recursion into structs mark them with Functors.@leaf.

    source

    ',3))]),s("details",ks,[s("summary",null,[i[272]||(i[272]=s("a",{id:"Lux.bf16",href:"#Lux.bf16"},[s("span",{class:"jlbinding"},"Lux.bf16")],-1)),i[273]||(i[273]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[274]||(i[274]=t('
    julia
    bf16(m)

    Converts the eltype of m floating point values to BFloat16. To avoid recursion into structs mark them with Functors.@leaf.

    Warning

    BFloat16s.jl needs to be loaded before using this function.

    Support for BFloat16

    Most Lux operations aren't optimized for BFloat16 yet. Instead this is meant to be used together with Reactant.@compile.

    source

    ',5))]),i[302]||(i[302]=s("h2",{id:"Element-Type-Matching",tabindex:"-1"},[a("Element Type Matching "),s("a",{class:"header-anchor",href:"#Element-Type-Matching","aria-label":'Permalink to "Element Type Matching {#Element-Type-Matching}"'},"​")],-1)),s("details",Ts,[s("summary",null,[i[275]||(i[275]=s("a",{id:"Lux.match_eltype",href:"#Lux.match_eltype"},[s("span",{class:"jlbinding"},"Lux.match_eltype")],-1)),i[276]||(i[276]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[277]||(i[277]=t('
    julia
    match_eltype(layer, ps, st, args...)

    Helper function to "maybe" (see below) match the element type of args... with the element type of the layer's parameters and states. This is useful for debugging purposes, to track down accidental type-promotions inside Lux layers.

    Extended Help

    Controlling the Behavior via Preferences

    Behavior of this function is controlled via the eltype_mismatch_handling preference. The following options are supported:

    • "none": This is the default behavior. In this case, this function is a no-op, i.e., it simply returns args....

    • "warn": This option will issue a warning if the element type of args... does not match the element type of the layer's parameters and states. The warning will contain information about the layer and the element type mismatch.

    • "convert": This option is the same as "warn", but it will also convert the element type of args... to match the element type of the layer's parameters and states (for the cases listed below).

    • "error": Same as "warn", but instead of issuing a warning, it will throw an error.

    Warning

    We print the warning for type-mismatch only once.

    Element Type Conversions

    For "convert" only the following conversions are done:

    Element Type of parameters/states    Element Type of args...    Converted to
    Float64                              Integer                    Float64
    Float32                              Float64                    Float32
    Float32                              Integer                    Float32
    Float16                              Float64                    Float16
    Float16                              Float32                    Float16
    Float16                              Integer                    Float16

    source

    ',11))]),i[303]||(i[303]=s("h2",{id:"Stateful-Layer",tabindex:"-1"},[a("Stateful Layer "),s("a",{class:"header-anchor",href:"#Stateful-Layer","aria-label":'Permalink to "Stateful Layer {#Stateful-Layer}"'},"​")],-1)),s("details",Qs,[s("summary",null,[i[278]||(i[278]=s("a",{id:"Lux.StatefulLuxLayer",href:"#Lux.StatefulLuxLayer"},[s("span",{class:"jlbinding"},"Lux.StatefulLuxLayer")],-1)),i[279]||(i[279]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[280]||(i[280]=t('
    julia
    StatefulLuxLayer{FT}(model, ps, st)

    Warning

    This is not a Lux.AbstractLuxLayer

    A convenience wrapper over Lux layers that stores the parameters and states internally. This is meant to be used in the internal implementation of layers.

    Usecases

    • Internal implementation of @compact heavily uses this layer.

    • In SciML codebases where propagating state might involve boxing. For a motivating example, see the Neural ODE tutorial.

    • Facilitates Nested AD support in Lux. For more details on this feature, see the Nested AD Manual Page.

    Static Parameters

    • If FT = true then the type of the state is fixed, i.e., typeof(last(model(x, ps, st))) == typeof(st).

    • If FT = false then the type of the state might change. Note that while this works in all cases, it will introduce type instability.

    Arguments

    • model: A Lux layer

    • ps: The parameters of the layer. This can be set to nothing, if the user provides the parameters on function call

    • st: The state of the layer

    Inputs

    • x: The input to the layer

    • ps: The parameters of the layer. Optional, defaults to s.ps

    Outputs

    • y: The output of the layer

    source

    ',14))]),i[304]||(i[304]=s("h2",{id:"Compact-Layer",tabindex:"-1"},[a("Compact Layer "),s("a",{class:"header-anchor",href:"#Compact-Layer","aria-label":'Permalink to "Compact Layer {#Compact-Layer}"'},"​")],-1)),s("details",gs,[s("summary",null,[i[281]||(i[281]=s("a",{id:"Lux.@compact",href:"#Lux.@compact"},[s("span",{class:"jlbinding"},"Lux.@compact")],-1)),i[282]||(i[282]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[283]||(i[283]=t(`
    julia
    @compact(kw...) do x
         ...
         @return y # optional (but recommended for best performance)
     end
    @@ -276,7 +276,7 @@ import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framewo
     true

    You may also specify a name for the model, which will be used instead of the default printout, which gives a verbatim representation of the code used to construct the model:

    julia
    julia> model = @compact(w=rand(3), name="Linear(3 => 1)") do x
                @return sum(w .* x)
            end
    -Linear(3 => 1)      # 3 parameters

    This can be useful when using @compact to hierarchically construct complex models to be used inside a Chain.

    Type Stability

    If your input function f is type-stable but the generated model is not type stable, it should be treated as a bug. We will appreciate issues if you find such cases.

    Parameter Count

    Array parameters don't print the number of parameters on the side. However, they do account for the total number of parameters printed at the bottom.

    source

    `,23))]),s("details",ms,[s("summary",null,[i[284]||(i[284]=s("a",{id:"Lux.@init_fn",href:"#Lux.@init_fn"},[s("span",{class:"jlbinding"},"Lux.@init_fn")],-1)),i[285]||(i[285]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[286]||(i[286]=t(`
    julia
    @init_fn(fn, kind::Symbol = :parameter)

    Create an initializer function for a parameter or state to be used in a Compact Lux Layer created using @compact.

    Arguments

    • fn: The function to be used for initializing the parameter or state. This only takes a single argument rng.

    • kind: If set to :parameter, the initializer function will be used to initialize the parameters of the layer. If set to :state, the initializer function will be used to initialize the states of the layer.

    Examples

    julia
    julia> using Lux, Random
    +Linear(3 => 1)      # 3 parameters

    This can be useful when using @compact to hierarchically construct complex models to be used inside a Chain.

    Type Stability

    If your input function f is type-stable but the generated model is not type stable, it should be treated as a bug. We will appreciate issues if you find such cases.

    Parameter Count

    Array parameters don't print the number of parameters on the side. However, they do account for the total number of parameters printed at the bottom.

    source

    `,23))]),s("details",ms,[s("summary",null,[i[284]||(i[284]=s("a",{id:"Lux.@init_fn",href:"#Lux.@init_fn"},[s("span",{class:"jlbinding"},"Lux.@init_fn")],-1)),i[285]||(i[285]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[286]||(i[286]=t(`
    julia
    @init_fn(fn, kind::Symbol = :parameter)

    Create an initializer function for a parameter or state to be used in a Compact Lux Layer created using @compact.

    Arguments

    • fn: The function to be used for initializing the parameter or state. This only takes a single argument rng.

    • kind: If set to :parameter, the initializer function will be used to initialize the parameters of the layer. If set to :state, the initializer function will be used to initialize the states of the layer.

    Examples

    julia
    julia> using Lux, Random
     
     julia> r = @compact(w=@init_fn(rng->randn32(rng, 3, 2)),
                b=@init_fn(rng->randn32(rng, 3), :state)) do x
    @@ -292,7 +292,7 @@ import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framewo
     (3,)
     
     julia> size(r([1, 2], ps, st)[1])
    -(3,)

    source

    `,7))]),s("details",ys,[s("summary",null,[i[287]||(i[287]=s("a",{id:"Lux.@non_trainable",href:"#Lux.@non_trainable"},[s("span",{class:"jlbinding"},"Lux.@non_trainable")],-1)),i[288]||(i[288]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[289]||(i[289]=t(`
    julia
    @non_trainable(x)

    Mark a value as non-trainable. This bypasses the regular checks and places the value into the state of the layer.

    Arguments

    • x: The value to be marked as non-trainable.

    Examples

    julia
    julia> using Lux, Random
    +(3,)

    source

    `,7))]),s("details",ys,[s("summary",null,[i[287]||(i[287]=s("a",{id:"Lux.@non_trainable",href:"#Lux.@non_trainable"},[s("span",{class:"jlbinding"},"Lux.@non_trainable")],-1)),i[288]||(i[288]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[289]||(i[289]=t(`
    julia
    @non_trainable(x)

    Mark a value as non-trainable. This bypasses the regular checks and places the value into the state of the layer.

    Arguments

    • x: The value to be marked as non-trainable.

    Examples

    julia
    julia> using Lux, Random
     
     julia> r = @compact(w=ones(3), w_fixed=@non_trainable(rand(3))) do x
                @return sum(w .* x .+ w_fixed)
    @@ -312,5 +312,5 @@ import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framewo
     true
     
     julia> res isa Number
    -true

    source

    `,7))]),i[305]||(i[305]=s("h2",{id:"miscellaneous",tabindex:"-1"},[a("Miscellaneous "),s("a",{class:"header-anchor",href:"#miscellaneous","aria-label":'Permalink to "Miscellaneous"'},"​")],-1)),s("details",cs,[s("summary",null,[i[290]||(i[290]=s("a",{id:"Lux.set_dispatch_doctor_preferences!",href:"#Lux.set_dispatch_doctor_preferences!"},[s("span",{class:"jlbinding"},"Lux.set_dispatch_doctor_preferences!")],-1)),i[291]||(i[291]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[292]||(i[292]=t(`
    julia
    set_dispatch_doctor_preferences!(mode::String)
    -set_dispatch_doctor_preferences!(; luxcore::String="disable", luxlib::String="disable")

    Set the dispatch doctor preference for LuxCore and LuxLib packages.

    mode can be "disable", "warn", or "error". For details on the different modes, see the DispatchDoctor.jl documentation.

    If the preferences are already set, then no action is taken. Otherwise the preference is set. For changes to take effect, the Julia session must be restarted.

    source

    `,5))])])}const ws=p(d,[["render",Es]]);export{bs as __pageData,ws as default}; +true

    source

    `,7))]),i[305]||(i[305]=s("h2",{id:"miscellaneous",tabindex:"-1"},[a("Miscellaneous "),s("a",{class:"header-anchor",href:"#miscellaneous","aria-label":'Permalink to "Miscellaneous"'},"​")],-1)),s("details",cs,[s("summary",null,[i[290]||(i[290]=s("a",{id:"Lux.set_dispatch_doctor_preferences!",href:"#Lux.set_dispatch_doctor_preferences!"},[s("span",{class:"jlbinding"},"Lux.set_dispatch_doctor_preferences!")],-1)),i[291]||(i[291]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[292]||(i[292]=t(`
    julia
    set_dispatch_doctor_preferences!(mode::String)
    +set_dispatch_doctor_preferences!(; luxcore::String="disable", luxlib::String="disable")

    Set the dispatch doctor preference for LuxCore and LuxLib packages.

    mode can be "disable", "warn", or "error". For details on the different modes, see the DispatchDoctor.jl documentation.

    If the preferences are already set, then no action is taken. Otherwise the preference is set. For changes to take effect, the Julia session must be restarted.

    source

    `,5))])])}const ws=p(d,[["render",Es]]);export{bs as __pageData,ws as default}; diff --git a/dev/assets/api_Lux_utilities.md.CRauyyus.lean.js b/dev/assets/api_Lux_utilities.md.CRauyyus.lean.js new file mode 100644 index 0000000000..d0d272eec1 --- /dev/null +++ b/dev/assets/api_Lux_utilities.md.CRauyyus.lean.js @@ -0,0 +1 @@ +import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framework.BetCMmtc.js";const bs=JSON.parse('{"title":"Utilities","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/utilities.md","filePath":"api/Lux/utilities.md","lastUpdated":null}'),d={name:"api/Lux/utilities.md"},o={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"46.681ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 20633.1 1199","aria-hidden":"true"},x={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},C={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.631ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 11329 
1199","aria-hidden":"true"},L={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 955","aria-hidden":"true"},b={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},w={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.664ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2945.4 1000","aria-hidden":"true"},H={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},_={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 955","aria-hidden":"true"},v={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},A={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.718ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 10483.3 1000","aria-hidden":"true"},D={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},j={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},V={class:"jldocstring custom-block"},B={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},M={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 
1015","aria-hidden":"true"},Z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},S={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"5.377ex",height:"1.995ex",role:"img",focusable:"false",viewBox:"0 -666 2376.6 882","aria-hidden":"true"},P={class:"jldocstring custom-block"},I={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},R={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},O={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},N={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.184ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 10247.1 1799","aria-hidden":"true"},z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.919ex",height:"1ex",role:"img",focusable:"false",viewBox:"0 -431 406 442","aria-hidden":"true"},G={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},X={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 955","aria-hidden":"true"},J={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},U={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.664ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2945.4 
1000","aria-hidden":"true"},W={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},K={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 955","aria-hidden":"true"},Y={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},$={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"34.539ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 15266.3 1000","aria-hidden":"true"},s1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},i1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},a1={class:"jldocstring custom-block"},t1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},e1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.308ex"},xmlns:"http://www.w3.org/2000/svg",width:"28.659ex",height:"5.747ex",role:"img",focusable:"false",viewBox:"0 -1520 12667.4 2540","aria-hidden":"true"},l1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},n1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},h1={class:"jldocstring 
custom-block"},p1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},d1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},o1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},k1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},T1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"5.377ex",height:"1.995ex",role:"img",focusable:"false",viewBox:"0 -666 2376.6 882","aria-hidden":"true"},Q1={class:"jldocstring custom-block"},g1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},m1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"20.065ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 8868.8 1199","aria-hidden":"true"},y1={class:"jldocstring custom-block"},c1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},E1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.607ex",height:"5.428ex",role:"img",focusable:"false",viewBox:"0 -1449.5 17948.3 
2399","aria-hidden":"true"},u1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.023ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.005ex",height:"1.645ex",role:"img",focusable:"false",viewBox:"0 -717 444 727","aria-hidden":"true"},x1={class:"jldocstring custom-block"},C1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},L1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},f1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},b1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"1.464ex",role:"img",focusable:"false",viewBox:"0 -442 490 647","aria-hidden":"true"},w1={class:"jldocstring custom-block"},H1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},_1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.333ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 5451.1 1199","aria-hidden":"true"},v1={class:"jldocstring custom-block"},A1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},D1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"14.515ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 6415.7 1799","aria-hidden":"true"},j1={class:"jldocstring 
custom-block"},V1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},B1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"32.253ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 14255.9 1799","aria-hidden":"true"},M1={class:"jldocstring custom-block"},Z1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},S1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},P1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},I1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"1.464ex",role:"img",focusable:"false",viewBox:"0 -442 490 647","aria-hidden":"true"},R1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},O1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"18.723ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 8275.6 1199","aria-hidden":"true"},N1={class:"jldocstring custom-block"},z1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},q1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.607ex",height:"2.791ex",role:"img",focusable:"false",viewBox:"0 -883.9 17948.2 1233.4","aria-hidden":"true"},G1={class:"jldocstring 
custom-block"},X1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},J1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"21.053ex",height:"2.791ex",role:"img",focusable:"false",viewBox:"0 -883.9 9305.3 1233.4","aria-hidden":"true"},U1={class:"jldocstring custom-block"},W1={class:"jldocstring custom-block"},K1={class:"jldocstring custom-block"},Y1={class:"jldocstring custom-block"},$1={class:"jldocstring custom-block"},ss={class:"jldocstring custom-block"},is={class:"jldocstring custom-block"},as={class:"jldocstring custom-block"},ts={class:"jldocstring custom-block"},es={class:"jldocstring custom-block"},ls={class:"jldocstring custom-block"},ns={class:"jldocstring custom-block"},hs={class:"jldocstring custom-block"},ps={class:"jldocstring custom-block"},rs={class:"jldocstring custom-block"},ds={class:"jldocstring custom-block"},os={class:"jldocstring custom-block"},ks={class:"jldocstring custom-block"},Ts={class:"jldocstring custom-block"},Qs={class:"jldocstring custom-block"},gs={class:"jldocstring custom-block"},ms={class:"jldocstring custom-block"},ys={class:"jldocstring custom-block"},cs={class:"jldocstring custom-block"};function Es(us,i,Fs,xs,Cs,Ls){const e=r("Badge");return h(),n("div",null,[i[293]||(i[293]=s("h1",{id:"utilities",tabindex:"-1"},[a("Utilities "),s("a",{class:"header-anchor",href:"#utilities","aria-label":'Permalink to "Utilities"'},"​")],-1)),i[294]||(i[294]=s("h2",{id:"Training-API",tabindex:"-1"},[a("Training API "),s("a",{class:"header-anchor",href:"#Training-API","aria-label":'Permalink to "Training API {#Training-API}"'},"​")],-1)),i[295]||(i[295]=s("p",null,[a("Helper Functions making it easier to train "),s("code",null,"Lux.jl"),a(" models.")],-1)),i[296]||(i[296]=s("p",null,"Training is meant to be simple and provide extremely basic functionality. 
We provide basic building blocks which can be seamlessly composed to create complex training pipelines.",-1)),s("details",o,[s("summary",null,[i[0]||(i[0]=s("a",{id:"Lux.Training.TrainState",href:"#Lux.Training.TrainState"},[s("span",{class:"jlbinding"},"Lux.Training.TrainState")],-1)),i[1]||(i[1]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[2]||(i[2]=t("",7))]),s("details",k,[s("summary",null,[i[3]||(i[3]=s("a",{id:"Lux.Training.TrainState-Tuple{AbstractLuxLayer, Any, Any, AbstractRule}",href:"#Lux.Training.TrainState-Tuple{AbstractLuxLayer, Any, Any, AbstractRule}"},[s("span",{class:"jlbinding"},"Lux.Training.TrainState")],-1)),i[4]||(i[4]=a()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),i[5]||(i[5]=t("",7))]),s("details",T,[s("summary",null,[i[6]||(i[6]=s("a",{id:"Lux.Training.compute_gradients",href:"#Lux.Training.compute_gradients"},[s("span",{class:"jlbinding"},"Lux.Training.compute_gradients")],-1)),i[7]||(i[7]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=t("",13))]),s("details",Q,[s("summary",null,[i[9]||(i[9]=s("a",{id:"Lux.Training.apply_gradients",href:"#Lux.Training.apply_gradients"},[s("span",{class:"jlbinding"},"Lux.Training.apply_gradients")],-1)),i[10]||(i[10]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=t("",7))]),s("details",g,[s("summary",null,[i[12]||(i[12]=s("a",{id:"Lux.Training.apply_gradients!",href:"#Lux.Training.apply_gradients!"},[s("span",{class:"jlbinding"},"Lux.Training.apply_gradients!")],-1)),i[13]||(i[13]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=t("",7))]),s("details",m,[s("summary",null,[i[15]||(i[15]=s("a",{id:"Lux.Training.single_train_step",href:"#Lux.Training.single_train_step"},[s("span",{class:"jlbinding"},"Lux.Training.single_train_step")],-1)),i[16]||(i[16]=a()),l(e,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),i[17]||(i[17]=t("",8))]),s("details",y,[s("summary",null,[i[18]||(i[18]=s("a",{id:"Lux.Training.single_train_step!",href:"#Lux.Training.single_train_step!"},[s("span",{class:"jlbinding"},"Lux.Training.single_train_step!")],-1)),i[19]||(i[19]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=t("",7))]),i[297]||(i[297]=t("",4)),s("details",c,[s("summary",null,[i[21]||(i[21]=s("a",{id:"Lux.GenericLossFunction",href:"#Lux.GenericLossFunction"},[s("span",{class:"jlbinding"},"Lux.GenericLossFunction")],-1)),i[22]||(i[22]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[23]||(i[23]=t("",6))]),s("details",E,[s("summary",null,[i[24]||(i[24]=s("a",{id:"Lux.BinaryCrossEntropyLoss",href:"#Lux.BinaryCrossEntropyLoss"},[s("span",{class:"jlbinding"},"Lux.BinaryCrossEntropyLoss")],-1)),i[25]||(i[25]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[56]||(i[56]=t("",3)),s("ul",null,[s("li",null,[i[28]||(i[28]=s("p",null,[a("If "),s("code",null,"logits"),a(" is either "),s("code",null,"false"),a(" or "),s("code",null,"Val(false)"),a(":")],-1)),s("mjx-container",u,[(h(),n("svg",F,i[26]||(i[26]=[t("",1)]))),i[27]||(i[27]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",null,"−"),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))])]),s("li",null,[i[31]||(i[31]=s("p",null,[a("If "),s("code",null,"logits"),a(" is "),s("code",null,"true"),a(" or "),s("code",null,"Val(true)"),a(":")],-1)),s("mjx-container",x,[(h(),n("svg",C,i[29]||(i[29]=[t("",1)]))),i[30]||(i[30]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"l"),s("mi",null,"o"),s("mi",null,"g"),s("mi",null,"σ"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))])])]),s("p",null,[i[38]||(i[38]=a("The value of ")),s("mjx-container",L,[(h(),n("svg",f,i[32]||(i[32]=[t("",1)]))),i[33]||(i[33]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[39]||(i[39]=a(" is computed using label smoothing. If ")),i[40]||(i[40]=s("code",null,"label_smoothing",-1)),i[41]||(i[41]=a(" is ")),i[42]||(i[42]=s("code",null,"nothing",-1)),i[43]||(i[43]=a(", then no label smoothing is applied. 
If ")),i[44]||(i[44]=s("code",null,"label_smoothing",-1)),i[45]||(i[45]=a(" is a real number ")),s("mjx-container",b,[(h(),n("svg",w,i[34]||(i[34]=[t("",1)]))),i[35]||(i[35]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",null,"∈"),s("mo",{stretchy:"false"},"["),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",{stretchy:"false"},"]")])],-1))]),i[46]||(i[46]=a(", then the value of ")),s("mjx-container",H,[(h(),n("svg",_,i[36]||(i[36]=[t("",1)]))),i[37]||(i[37]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[47]||(i[47]=a(" is:"))]),s("mjx-container",v,[(h(),n("svg",A,i[48]||(i[48]=[t("",1)]))),i[49]||(i[49]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"="),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"α"),s("mo",null,"∗"),s("mn",null,"0.5")])],-1))]),s("p",null,[i[52]||(i[52]=a("where ")),s("mjx-container",D,[(h(),n("svg",j,i[50]||(i[50]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 61Q390 53 370 41T325 18T267 -2T203 -11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[51]||(i[51]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[53]||(i[53]=a(" is the value of 
")),i[54]||(i[54]=s("code",null,"label_smoothing",-1)),i[55]||(i[55]=a("."))]),i[57]||(i[57]=t("",4))]),s("details",V,[s("summary",null,[i[58]||(i[58]=s("a",{id:"Lux.BinaryFocalLoss",href:"#Lux.BinaryFocalLoss"},[s("span",{class:"jlbinding"},"Lux.BinaryFocalLoss")],-1)),i[59]||(i[59]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[70]||(i[70]=t("",1)),s("p",null,[i[62]||(i[62]=a("Return the binary focal loss [1]. The model input, ")),s("mjx-container",B,[(h(),n("svg",M,i[60]||(i[60]=[t("",1)]))),i[61]||(i[61]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[63]||(i[63]=a(", is expected to be normalized (i.e. 
softmax output)."))]),s("p",null,[i[66]||(i[66]=a("For ")),s("mjx-container",Z,[(h(),n("svg",S,i[64]||(i[64]=[t("",1)]))),i[65]||(i[65]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ"),s("mo",null,"="),s("mn",null,"0")])],-1))]),i[67]||(i[67]=a(" this is equivalent to ")),i[68]||(i[68]=s("a",{href:"/dev/api/Lux/utilities#Lux.BinaryCrossEntropyLoss"},[s("code",null,"BinaryCrossEntropyLoss")],-1)),i[69]||(i[69]=a("."))]),i[71]||(i[71]=t("",5))]),s("details",P,[s("summary",null,[i[72]||(i[72]=s("a",{id:"Lux.CrossEntropyLoss",href:"#Lux.CrossEntropyLoss"},[s("span",{class:"jlbinding"},"Lux.CrossEntropyLoss")],-1)),i[73]||(i[73]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[115]||(i[115]=t("",1)),s("p",null,[i[76]||(i[76]=a("Return the cross entropy loss which is used in multi-class classification tasks. The input, ")),s("mjx-container",I,[(h(),n("svg",R,i[74]||(i[74]=[t("",1)]))),i[75]||(i[75]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[77]||(i[77]=a(", is expected to be normalized (i.e. 
")),i[78]||(i[78]=s("code",null,"softmax",-1)),i[79]||(i[79]=a(" output) if ")),i[80]||(i[80]=s("code",null,"logits",-1)),i[81]||(i[81]=a(" is ")),i[82]||(i[82]=s("code",null,"false",-1)),i[83]||(i[83]=a(" or ")),i[84]||(i[84]=s("code",null,"Val(false)",-1)),i[85]||(i[85]=a("."))]),i[116]||(i[116]=s("p",null,"The loss is calculated as:",-1)),s("mjx-container",O,[(h(),n("svg",N,i[86]||(i[86]=[t("",1)]))),i[87]||(i[87]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",null,"−"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),s("p",null,[i[96]||(i[96]=a("where ")),s("mjx-container",z,[(h(),n("svg",q,i[88]||(i[88]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D716",d:"M227 -11Q149 -11 95 41T40 174Q40 262 87 322Q121 367 173 396T287 430Q289 431 329 431H367Q382 426 382 411Q382 385 341 385H325H312Q191 385 154 277L150 265H327Q340 256 340 246Q340 228 320 219H138V217Q128 187 128 143Q128 77 160 52T231 26Q258 26 284 36T326 57T343 68Q350 
68 354 58T358 39Q358 36 357 35Q354 31 337 21T289 0T227 -11Z",style:{"stroke-width":"3"}})])])],-1)]))),i[89]||(i[89]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"ϵ")])],-1))]),i[97]||(i[97]=a(" is added for numerical stability. The value of ")),s("mjx-container",G,[(h(),n("svg",X,i[90]||(i[90]=[t("",1)]))),i[91]||(i[91]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[98]||(i[98]=a(" is computed using label smoothing. If ")),i[99]||(i[99]=s("code",null,"label_smoothing",-1)),i[100]||(i[100]=a(" is ")),i[101]||(i[101]=s("code",null,"nothing",-1)),i[102]||(i[102]=a(", then no label smoothing is applied. 
If ")),i[103]||(i[103]=s("code",null,"label_smoothing",-1)),i[104]||(i[104]=a(" is a real number ")),s("mjx-container",J,[(h(),n("svg",U,i[92]||(i[92]=[t("",1)]))),i[93]||(i[93]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",null,"∈"),s("mo",{stretchy:"false"},"["),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",{stretchy:"false"},"]")])],-1))]),i[105]||(i[105]=a(", then the value of ")),s("mjx-container",W,[(h(),n("svg",K,i[94]||(i[94]=[t("",1)]))),i[95]||(i[95]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[106]||(i[106]=a(" is calculated as:"))]),s("mjx-container",Y,[(h(),n("svg",$,i[107]||(i[107]=[t("",1)]))),i[108]||(i[108]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"="),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"α"),s("mo",null,"∗"),s("mtext",null,"size along dim")])],-1))]),s("p",null,[i[111]||(i[111]=a("where ")),s("mjx-container",s1,[(h(),n("svg",i1,i[109]||(i[109]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 61Q390 53 370 41T325 18T267 -2T203 -11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[110]||(i[110]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[112]||(i[112]=a(" is the value of 
")),i[113]||(i[113]=s("code",null,"label_smoothing",-1)),i[114]||(i[114]=a("."))]),i[117]||(i[117]=t("",4))]),s("details",a1,[s("summary",null,[i[118]||(i[118]=s("a",{id:"Lux.DiceCoeffLoss",href:"#Lux.DiceCoeffLoss"},[s("span",{class:"jlbinding"},"Lux.DiceCoeffLoss")],-1)),i[119]||(i[119]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[128]||(i[128]=t("",2)),s("mjx-container",t1,[(h(),n("svg",e1,i[120]||(i[120]=[t("",1)]))),i[121]||(i[121]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"a"),s("mi",null,"g"),s("mi",null,"g"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mfrac",null,[s("mrow",null,[s("mn",null,"2"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"α")]),s("mrow",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("msup",null,[s("mi",null,"y"),s("mn",null,"2")]),s("mo",null,"+"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mn",null,"2")]),s("mo",null,"+"),s("mi",null,"α")])]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),s("p",null,[i[124]||(i[124]=a("where 
")),s("mjx-container",l1,[(h(),n("svg",n1,i[122]||(i[122]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 61Q390 53 370 41T325 18T267 -2T203 -11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[123]||(i[123]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[125]||(i[125]=a(" is the smoothing factor (")),i[126]||(i[126]=s("code",null,"smooth",-1)),i[127]||(i[127]=a(")."))]),i[129]||(i[129]=t("",5))]),s("details",h1,[s("summary",null,[i[130]||(i[130]=s("a",{id:"Lux.FocalLoss",href:"#Lux.FocalLoss"},[s("span",{class:"jlbinding"},"Lux.FocalLoss")],-1)),i[131]||(i[131]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[147]||(i[147]=t("",1)),s("p",null,[i[134]||(i[134]=a("Return the focal loss [1] which can be used in classification tasks with highly imbalanced classes. It down-weights well-classified examples and focuses on hard examples. 
The input, ")),s("mjx-container",p1,[(h(),n("svg",r1,i[132]||(i[132]=[t("",1)]))),i[133]||(i[133]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[135]||(i[135]=a(", is expected to be normalized (i.e. ")),i[136]||(i[136]=s("code",null,"softmax",-1)),i[137]||(i[137]=a(" output)."))]),s("p",null,[i[142]||(i[142]=a("The modulating factor ")),s("mjx-container",d1,[(h(),n("svg",o1,i[138]||(i[138]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[139]||(i[139]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ")])],-1))]),i[143]||(i[143]=a(", controls the down-weighting strength. For ")),s("mjx-container",k1,[(h(),n("svg",T1,i[140]||(i[140]=[t("",1)]))),i[141]||(i[141]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ"),s("mo",null,"="),s("mn",null,"0")])],-1))]),i[144]||(i[144]=a(" this is equivalent to ")),i[145]||(i[145]=s("a",{href:"/dev/api/Lux/utilities#Lux.CrossEntropyLoss"},[s("code",null,"CrossEntropyLoss")],-1)),i[146]||(i[146]=a("."))]),i[148]||(i[148]=t("",5))]),s("details",Q1,[s("summary",null,[i[149]||(i[149]=s("a",{id:"Lux.HingeLoss",href:"#Lux.HingeLoss"},[s("span",{class:"jlbinding"},"Lux.HingeLoss")],-1)),i[150]||(i[150]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[153]||(i[153]=t("",2)),s("mjx-container",g1,[(h(),n("svg",m1,i[151]||(i[151]=[t("",1)]))),i[152]||(i[152]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[154]||(i[154]=t("",4))]),s("details",y1,[s("summary",null,[i[155]||(i[155]=s("a",{id:"Lux.HuberLoss",href:"#Lux.HuberLoss"},[s("span",{class:"jlbinding"},"Lux.HuberLoss")],-1)),i[156]||(i[156]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[165]||(i[165]=t("",2)),s("mjx-container",c1,[(h(),n("svg",E1,i[157]||(i[157]=[t("",1)]))),i[158]||(i[158]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"L"),s("mo",null,"="),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"{"),s("mtable",{columnalign:"left 
left",columnspacing:"1em",rowspacing:".2em"},[s("mtr",null,[s("mtd",null,[s("mn",null,"0.5"),s("mo",null,"∗"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mn",null,"2")])]),s("mtd",null,[s("mtext",null,"if "),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mo",null,"≤"),s("mi",null,"δ")])]),s("mtr",null,[s("mtd",null,[s("mi",null,"δ"),s("mo",null,"∗"),s("mo",{stretchy:"false"},"("),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mo",null,"−"),s("mn",null,"0.5"),s("mo",null,"∗"),s("mi",null,"δ"),s("mo",{stretchy:"false"},")")]),s("mtd",null,[s("mtext",null,"otherwise")])])]),s("mo",{"data-mjx-texclass":"CLOSE",fence:"true",stretchy:"true",symmetric:"true"})])])],-1))]),s("p",null,[i[161]||(i[161]=a("where ")),s("mjx-container",u1,[(h(),n("svg",F1,i[159]||(i[159]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FF",d:"M195 609Q195 656 227 686T302 717Q319 716 351 709T407 697T433 690Q451 682 451 662Q451 644 438 628T403 612Q382 612 348 641T288 671T249 657T235 628Q235 584 334 463Q401 379 401 292Q401 169 340 80T205 -10H198Q127 -10 83 36T36 153Q36 286 151 382Q191 413 252 434Q252 435 245 449T230 481T214 521T201 566T195 609ZM112 130Q112 83 136 55T204 27Q233 27 256 51T291 111T309 178T316 232Q316 267 309 298T295 
344T269 400L259 396Q215 381 183 342T137 256T118 179T112 130Z",style:{"stroke-width":"3"}})])])],-1)]))),i[160]||(i[160]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"δ")])],-1))]),i[162]||(i[162]=a(" is the ")),i[163]||(i[163]=s("code",null,"delta",-1)),i[164]||(i[164]=a(" parameter."))]),i[166]||(i[166]=t("",3))]),s("details",x1,[s("summary",null,[i[167]||(i[167]=s("a",{id:"Lux.KLDivergenceLoss",href:"#Lux.KLDivergenceLoss"},[s("span",{class:"jlbinding"},"Lux.KLDivergenceLoss")],-1)),i[168]||(i[168]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[176]||(i[176]=t("",1)),s("p",null,[i[173]||(i[173]=a("Return the Kullback-Leibler Divergence loss between the predicted distribution ")),s("mjx-container",C1,[(h(),n("svg",L1,i[169]||(i[169]=[t("",1)]))),i[170]||(i[170]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[174]||(i[174]=a(" and the true distribution 
")),s("mjx-container",f1,[(h(),n("svg",b1,i[171]||(i[171]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D466",d:"M21 287Q21 301 36 335T84 406T158 442Q199 442 224 419T250 355Q248 336 247 334Q247 331 231 288T198 191T182 105Q182 62 196 45T238 27Q261 27 281 38T312 61T339 94Q339 95 344 114T358 173T377 247Q415 397 419 404Q432 431 462 431Q475 431 483 424T494 412T496 403Q496 390 447 193T391 -23Q363 -106 294 -155T156 -205Q111 -205 77 -183T43 -117Q43 -95 50 -80T69 -58T89 -48T106 -45Q150 -45 150 -87Q150 -107 138 -122T115 -142T102 -147L99 -148Q101 -153 118 -160T152 -167H160Q177 -167 186 -165Q219 -156 247 -127T290 -65T313 -9T321 21L315 17Q309 13 296 6T270 -6Q250 -11 231 -11Q185 -11 150 11T104 82Q103 89 103 113Q103 170 138 262T173 379Q173 380 173 381Q173 390 173 393T169 400T158 404H154Q131 404 112 385T82 344T65 302T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),i[172]||(i[172]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y")])],-1))]),i[175]||(i[175]=a(":"))]),i[177]||(i[177]=t("",5))]),s("details",w1,[s("summary",null,[i[178]||(i[178]=s("a",{id:"Lux.MAELoss",href:"#Lux.MAELoss"},[s("span",{class:"jlbinding"},"Lux.MAELoss")],-1)),i[179]||(i[179]=a()),l(e,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),i[182]||(i[182]=t("",2)),s("mjx-container",H1,[(h(),n("svg",_1,i[180]||(i[180]=[t("",1)]))),i[181]||(i[181]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"|"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",{"data-mjx-texclass":"CLOSE"},"|")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[183]||(i[183]=t("",3))]),s("details",v1,[s("summary",null,[i[184]||(i[184]=s("a",{id:"Lux.MSELoss",href:"#Lux.MSELoss"},[s("span",{class:"jlbinding"},"Lux.MSELoss")],-1)),i[185]||(i[185]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[188]||(i[188]=t("",2)),s("mjx-container",A1,[(h(),n("svg",D1,i[186]||(i[186]=[t("",1)]))),i[187]||(i[187]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("msup",null,[s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[189]||(i[189]=t("",3))]),s("details",j1,[s("summary",null,[i[190]||(i[190]=s("a",{id:"Lux.MSLELoss",href:"#Lux.MSLELoss"},[s("span",{class:"jlbinding"},"Lux.MSLELoss")],-1)),i[191]||(i[191]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[194]||(i[194]=t("",2)),s("mjx-container",V1,[(h(),n("svg",B1,i[192]||(i[192]=[t("",1)]))),i[193]||(i[193]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("msup",null,[s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",null,"−"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[195]||(i[195]=t("",4))]),s("details",M1,[s("summary",null,[i[196]||(i[196]=s("a",{id:"Lux.PoissonLoss",href:"#Lux.PoissonLoss"},[s("span",{class:"jlbinding"},"Lux.PoissonLoss")],-1)),i[197]||(i[197]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[207]||(i[207]=t("",1)),s("p",null,[i[202]||(i[202]=a("Return how much the predicted distribution ")),s("mjx-container",Z1,[(h(),n("svg",S1,i[198]||(i[198]=[t("",1)]))),i[199]||(i[199]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[203]||(i[203]=a(" diverges from the expected Poisson distribution ")),s("mjx-container",P1,[(h(),n("svg",I1,i[200]||(i[200]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D466",d:"M21 287Q21 301 36 335T84 406T158 442Q199 442 224 419T250 355Q248 336 247 334Q247 331 231 288T198 191T182 105Q182 62 196 45T238 27Q261 27 281 38T312 61T339 94Q339 95 344 114T358 173T377 247Q415 397 419 404Q432 431 462 431Q475 431 483 424T494 412T496 403Q496 390 447 193T391 -23Q363 -106 294 -155T156 -205Q111 -205 77 -183T43 -117Q43 -95 50 -80T69 -58T89 -48T106 -45Q150 -45 150 -87Q150 -107 138 -122T115 -142T102 -147L99 -148Q101 -153 118 -160T152 -167H160Q177 -167 186 -165Q219 -156 247 -127T290 -65T313 -9T321 21L315 17Q309 13 296 6T270 -6Q250 -11 231 -11Q185 -11 150 11T104 82Q103 89 103 113Q103 170 138 262T173 379Q173 380 173 381Q173 390 173 393T169 400T158 404H154Q131 404 112 385T82 344T65 302T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),i[201]||(i[201]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y")])],-1))]),i[204]||(i[204]=a(", calculated 
as:"))]),s("mjx-container",R1,[(h(),n("svg",O1,i[205]||(i[205]=[t("",1)]))),i[206]||(i[206]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[208]||(i[208]=t("",3))]),s("details",N1,[s("summary",null,[i[209]||(i[209]=s("a",{id:"Lux.SiameseContrastiveLoss",href:"#Lux.SiameseContrastiveLoss"},[s("span",{class:"jlbinding"},"Lux.SiameseContrastiveLoss")],-1)),i[210]||(i[210]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[213]||(i[213]=t("",2)),s("mjx-container",z1,[(h(),n("svg",q1,i[211]||(i[211]=[t("",1)]))),i[212]||(i[212]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mo",{stretchy:"false"},")"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mn",null,"2")]),s("mo",null,"+"),s("mi",null,"y"),s("mo",null,"∗"),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mtext",null,"margin"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{stretchy:"false"},")"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[214]||(i[214]=t("",6))]),s("details",G1,[s("summary",null,[i[215]||(i[215]=s("a",{id:"Lux.SquaredHingeLoss",href:"#Lux.SquaredHingeLoss"},[s("span",{class:"jlbinding"},"Lux.SquaredHingeLoss")],-1)),i[216]||(i[216]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[219]||(i[219]=t("",2)),s("mjx-container",X1,[(h(),n("svg",J1,i[217]||(i[217]=[t("",1)]))),i[218]||(i[218]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{stretchy:"false"},")"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[220]||(i[220]=t("",4))]),i[298]||(i[298]=s("h2",{id:"LuxOps-Module",tabindex:"-1"},[a("LuxOps Module "),s("a",{class:"header-anchor",href:"#LuxOps-Module","aria-label":'Permalink to "LuxOps Module {#LuxOps-Module}"'},"​")],-1)),s("details",U1,[s("summary",null,[i[221]||(i[221]=s("a",{id:"Lux.LuxOps",href:"#Lux.LuxOps"},[s("span",{class:"jlbinding"},"Lux.LuxOps")],-1)),i[222]||(i[222]=a()),l(e,{type:"info",class:"jlObjectType jlModule",text:"Module"})]),i[223]||(i[223]=t("",3))]),s("details",W1,[s("summary",null,[i[224]||(i[224]=s("a",{id:"Lux.LuxOps.eachslice",href:"#Lux.LuxOps.eachslice"},[s("span",{class:"jlbinding"},"Lux.LuxOps.eachslice")],-1)),i[225]||(i[225]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[226]||(i[226]=t("",4))]),s("details",K1,[s("summary",null,[i[227]||(i[227]=s("a",{id:"Lux.LuxOps.foldl_init",href:"#Lux.LuxOps.foldl_init"},[s("span",{class:"jlbinding"},"Lux.LuxOps.foldl_init")],-1)),i[228]||(i[228]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[229]||(i[229]=t("",3))]),s("details",Y1,[s("summary",null,[i[230]||(i[230]=s("a",{id:"Lux.LuxOps.getproperty",href:"#Lux.LuxOps.getproperty"},[s("span",{class:"jlbinding"},"Lux.LuxOps.getproperty")],-1)),i[231]||(i[231]=a()),l(e,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),i[232]||(i[232]=t("",3))]),s("details",$1,[s("summary",null,[i[233]||(i[233]=s("a",{id:"Lux.LuxOps.xlogx",href:"#Lux.LuxOps.xlogx"},[s("span",{class:"jlbinding"},"Lux.LuxOps.xlogx")],-1)),i[234]||(i[234]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[235]||(i[235]=t("",3))]),s("details",ss,[s("summary",null,[i[236]||(i[236]=s("a",{id:"Lux.LuxOps.xlogy",href:"#Lux.LuxOps.xlogy"},[s("span",{class:"jlbinding"},"Lux.LuxOps.xlogy")],-1)),i[237]||(i[237]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[238]||(i[238]=t("",3))]),s("details",is,[s("summary",null,[i[239]||(i[239]=s("a",{id:"Lux.LuxOps.istraining",href:"#Lux.LuxOps.istraining"},[s("span",{class:"jlbinding"},"Lux.LuxOps.istraining")],-1)),i[240]||(i[240]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[241]||(i[241]=t("",3))]),s("details",as,[s("summary",null,[i[242]||(i[242]=s("a",{id:"Lux.LuxOps.multigate",href:"#Lux.LuxOps.multigate"},[s("span",{class:"jlbinding"},"Lux.LuxOps.multigate")],-1)),i[243]||(i[243]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[244]||(i[244]=t("",3))]),i[299]||(i[299]=s("h2",{id:"Recursive-Operations",tabindex:"-1"},[a("Recursive Operations "),s("a",{class:"header-anchor",href:"#Recursive-Operations","aria-label":'Permalink to "Recursive Operations {#Recursive-Operations}"'},"​")],-1)),s("details",ts,[s("summary",null,[i[245]||(i[245]=s("a",{id:"Lux.recursive_map",href:"#Lux.recursive_map"},[s("span",{class:"jlbinding"},"Lux.recursive_map")],-1)),i[246]||(i[246]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[247]||(i[247]=t("",8))]),s("details",es,[s("summary",null,[i[248]||(i[248]=s("a",{id:"Lux.recursive_add!!",href:"#Lux.recursive_add!!"},[s("span",{class:"jlbinding"},"Lux.recursive_add!!")],-1)),i[249]||(i[249]=a()),l(e,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),i[250]||(i[250]=t("",5))]),s("details",ls,[s("summary",null,[i[251]||(i[251]=s("a",{id:"Lux.recursive_copyto!",href:"#Lux.recursive_copyto!"},[s("span",{class:"jlbinding"},"Lux.recursive_copyto!")],-1)),i[252]||(i[252]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[253]||(i[253]=t("",4))]),s("details",ns,[s("summary",null,[i[254]||(i[254]=s("a",{id:"Lux.recursive_eltype",href:"#Lux.recursive_eltype"},[s("span",{class:"jlbinding"},"Lux.recursive_eltype")],-1)),i[255]||(i[255]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[256]||(i[256]=t("",5))]),s("details",hs,[s("summary",null,[i[257]||(i[257]=s("a",{id:"Lux.recursive_make_zero",href:"#Lux.recursive_make_zero"},[s("span",{class:"jlbinding"},"Lux.recursive_make_zero")],-1)),i[258]||(i[258]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[259]||(i[259]=t("",5))]),s("details",ps,[s("summary",null,[i[260]||(i[260]=s("a",{id:"Lux.recursive_make_zero!!",href:"#Lux.recursive_make_zero!!"},[s("span",{class:"jlbinding"},"Lux.recursive_make_zero!!")],-1)),i[261]||(i[261]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[262]||(i[262]=t("",5))]),i[300]||(i[300]=s("h2",{id:"Updating-Floating-Point-Precision",tabindex:"-1"},[a("Updating Floating Point Precision "),s("a",{class:"header-anchor",href:"#Updating-Floating-Point-Precision","aria-label":'Permalink to "Updating Floating Point Precision {#Updating-Floating-Point-Precision}"'},"​")],-1)),i[301]||(i[301]=s("p",null,"By default, Lux uses Float32 for all parameters and states. 
To update the precision simply pass the parameters / states / arrays into one of the following functions.",-1)),s("details",rs,[s("summary",null,[i[263]||(i[263]=s("a",{id:"Lux.f16",href:"#Lux.f16"},[s("span",{class:"jlbinding"},"Lux.f16")],-1)),i[264]||(i[264]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[265]||(i[265]=t("",3))]),s("details",ds,[s("summary",null,[i[266]||(i[266]=s("a",{id:"Lux.f32",href:"#Lux.f32"},[s("span",{class:"jlbinding"},"Lux.f32")],-1)),i[267]||(i[267]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[268]||(i[268]=t("",3))]),s("details",os,[s("summary",null,[i[269]||(i[269]=s("a",{id:"Lux.f64",href:"#Lux.f64"},[s("span",{class:"jlbinding"},"Lux.f64")],-1)),i[270]||(i[270]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[271]||(i[271]=t("",3))]),s("details",ks,[s("summary",null,[i[272]||(i[272]=s("a",{id:"Lux.bf16",href:"#Lux.bf16"},[s("span",{class:"jlbinding"},"Lux.bf16")],-1)),i[273]||(i[273]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[274]||(i[274]=t("",5))]),i[302]||(i[302]=s("h2",{id:"Element-Type-Matching",tabindex:"-1"},[a("Element Type Matching "),s("a",{class:"header-anchor",href:"#Element-Type-Matching","aria-label":'Permalink to "Element Type Matching {#Element-Type-Matching}"'},"​")],-1)),s("details",Ts,[s("summary",null,[i[275]||(i[275]=s("a",{id:"Lux.match_eltype",href:"#Lux.match_eltype"},[s("span",{class:"jlbinding"},"Lux.match_eltype")],-1)),i[276]||(i[276]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[277]||(i[277]=t("",11))]),i[303]||(i[303]=s("h2",{id:"Stateful-Layer",tabindex:"-1"},[a("Stateful Layer "),s("a",{class:"header-anchor",href:"#Stateful-Layer","aria-label":'Permalink to "Stateful Layer 
{#Stateful-Layer}"'},"​")],-1)),s("details",Qs,[s("summary",null,[i[278]||(i[278]=s("a",{id:"Lux.StatefulLuxLayer",href:"#Lux.StatefulLuxLayer"},[s("span",{class:"jlbinding"},"Lux.StatefulLuxLayer")],-1)),i[279]||(i[279]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[280]||(i[280]=t("",14))]),i[304]||(i[304]=s("h2",{id:"Compact-Layer",tabindex:"-1"},[a("Compact Layer "),s("a",{class:"header-anchor",href:"#Compact-Layer","aria-label":'Permalink to "Compact Layer {#Compact-Layer}"'},"​")],-1)),s("details",gs,[s("summary",null,[i[281]||(i[281]=s("a",{id:"Lux.@compact",href:"#Lux.@compact"},[s("span",{class:"jlbinding"},"Lux.@compact")],-1)),i[282]||(i[282]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[283]||(i[283]=t("",23))]),s("details",ms,[s("summary",null,[i[284]||(i[284]=s("a",{id:"Lux.@init_fn",href:"#Lux.@init_fn"},[s("span",{class:"jlbinding"},"Lux.@init_fn")],-1)),i[285]||(i[285]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[286]||(i[286]=t("",7))]),s("details",ys,[s("summary",null,[i[287]||(i[287]=s("a",{id:"Lux.@non_trainable",href:"#Lux.@non_trainable"},[s("span",{class:"jlbinding"},"Lux.@non_trainable")],-1)),i[288]||(i[288]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[289]||(i[289]=t("",7))]),i[305]||(i[305]=s("h2",{id:"miscellaneous",tabindex:"-1"},[a("Miscellaneous "),s("a",{class:"header-anchor",href:"#miscellaneous","aria-label":'Permalink to "Miscellaneous"'},"​")],-1)),s("details",cs,[s("summary",null,[i[290]||(i[290]=s("a",{id:"Lux.set_dispatch_doctor_preferences!",href:"#Lux.set_dispatch_doctor_preferences!"},[s("span",{class:"jlbinding"},"Lux.set_dispatch_doctor_preferences!")],-1)),i[291]||(i[291]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[292]||(i[292]=t("",5))])])}const ws=p(d,[["render",Es]]);export{bs as __pageData,ws as default}; diff --git a/dev/assets/api_Lux_utilities.md.DvX6-akN.lean.js 
b/dev/assets/api_Lux_utilities.md.DvX6-akN.lean.js deleted file mode 100644 index 1c69609fb5..0000000000 --- a/dev/assets/api_Lux_utilities.md.DvX6-akN.lean.js +++ /dev/null @@ -1,316 +0,0 @@ -import{_ as p,c as n,j as s,a,G as l,a2 as t,B as r,o as h}from"./chunks/framework.I-x9Gl6h.js";const bs=JSON.parse('{"title":"Utilities","description":"","frontmatter":{},"headers":[],"relativePath":"api/Lux/utilities.md","filePath":"api/Lux/utilities.md","lastUpdated":null}'),d={name:"api/Lux/utilities.md"},o={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},u={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"46.681ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 20633.1 1199","aria-hidden":"true"},x={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},C={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.631ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 11329 1199","aria-hidden":"true"},L={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 
955","aria-hidden":"true"},b={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},w={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.664ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2945.4 1000","aria-hidden":"true"},H={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},v={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 955","aria-hidden":"true"},j={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},D={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.718ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 10483.3 1000","aria-hidden":"true"},B={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},M={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},V={class:"jldocstring custom-block"},A={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Z={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},O={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},S={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"5.377ex",height:"1.995ex",role:"img",focusable:"false",viewBox:"0 -666 2376.6 
882","aria-hidden":"true"},R={class:"jldocstring custom-block"},N={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},z={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},P={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},I={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.184ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 10247.1 1799","aria-hidden":"true"},q={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},G={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.919ex",height:"1ex",role:"img",focusable:"false",viewBox:"0 -431 406 442","aria-hidden":"true"},X={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},J={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 955","aria-hidden":"true"},U={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},W={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"6.664ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 2945.4 1000","aria-hidden":"true"},K={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.161ex",role:"img",focusable:"false",viewBox:"0 -750 490 
955","aria-hidden":"true"},$={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},_={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"34.539ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 15266.3 1000","aria-hidden":"true"},s1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},i1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},a1={class:"jldocstring custom-block"},t1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},e1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.308ex"},xmlns:"http://www.w3.org/2000/svg",width:"28.659ex",height:"5.747ex",role:"img",focusable:"false",viewBox:"0 -1520 12667.4 2540","aria-hidden":"true"},l1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},n1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.448ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 640 453","aria-hidden":"true"},h1={class:"jldocstring custom-block"},p1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 
1015","aria-hidden":"true"},d1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},o1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},k1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},T1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"5.377ex",height:"1.995ex",role:"img",focusable:"false",viewBox:"0 -666 2376.6 882","aria-hidden":"true"},Q1={class:"jldocstring custom-block"},g1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},m1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"20.065ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 8868.8 1199","aria-hidden":"true"},y1={class:"jldocstring custom-block"},c1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},E1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.148ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.607ex",height:"5.428ex",role:"img",focusable:"false",viewBox:"0 -1449.5 17948.3 2399","aria-hidden":"true"},u1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.023ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.005ex",height:"1.645ex",role:"img",focusable:"false",viewBox:"0 -717 444 727","aria-hidden":"true"},x1={class:"jldocstring 
custom-block"},C1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},L1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},f1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},b1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"1.464ex",role:"img",focusable:"false",viewBox:"0 -442 490 647","aria-hidden":"true"},w1={class:"jldocstring custom-block"},H1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},v1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.333ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 5451.1 1199","aria-hidden":"true"},j1={class:"jldocstring custom-block"},D1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},B1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"14.515ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 6415.7 1799","aria-hidden":"true"},M1={class:"jldocstring custom-block"},V1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},A1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-1.469ex"},xmlns:"http://www.w3.org/2000/svg",width:"32.253ex",height:"4.07ex",role:"img",focusable:"false",viewBox:"0 -1149.5 14255.9 1799","aria-hidden":"true"},Z1={class:"jldocstring 
custom-block"},O1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},S1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"2.296ex",role:"img",focusable:"false",viewBox:"0 -810 490 1015","aria-hidden":"true"},R1={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},N1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.109ex",height:"1.464ex",role:"img",focusable:"false",viewBox:"0 -442 490 647","aria-hidden":"true"},z1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},P1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"18.723ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 8275.6 1199","aria-hidden":"true"},I1={class:"jldocstring custom-block"},q1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},G1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.607ex",height:"2.791ex",role:"img",focusable:"false",viewBox:"0 -883.9 17948.2 1233.4","aria-hidden":"true"},X1={class:"jldocstring custom-block"},J1={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},U1={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"21.053ex",height:"2.791ex",role:"img",focusable:"false",viewBox:"0 -883.9 9305.3 1233.4","aria-hidden":"true"},W1={class:"jldocstring custom-block"},K1={class:"jldocstring custom-block"},Y1={class:"jldocstring 
custom-block"},$1={class:"jldocstring custom-block"},_1={class:"jldocstring custom-block"},ss={class:"jldocstring custom-block"},is={class:"jldocstring custom-block"},as={class:"jldocstring custom-block"},ts={class:"jldocstring custom-block"},es={class:"jldocstring custom-block"},ls={class:"jldocstring custom-block"},ns={class:"jldocstring custom-block"},hs={class:"jldocstring custom-block"},ps={class:"jldocstring custom-block"},rs={class:"jldocstring custom-block"},ds={class:"jldocstring custom-block"},os={class:"jldocstring custom-block"},ks={class:"jldocstring custom-block"},Ts={class:"jldocstring custom-block"},Qs={class:"jldocstring custom-block"},gs={class:"jldocstring custom-block"},ms={class:"jldocstring custom-block"},ys={class:"jldocstring custom-block"},cs={class:"jldocstring custom-block"};function Es(us,i,Fs,xs,Cs,Ls){const e=r("Badge");return h(),n("div",null,[i[293]||(i[293]=s("h1",{id:"utilities",tabindex:"-1"},[a("Utilities "),s("a",{class:"header-anchor",href:"#utilities","aria-label":'Permalink to "Utilities"'},"​")],-1)),i[294]||(i[294]=s("h2",{id:"Training-API",tabindex:"-1"},[a("Training API "),s("a",{class:"header-anchor",href:"#Training-API","aria-label":'Permalink to "Training API {#Training-API}"'},"​")],-1)),i[295]||(i[295]=s("p",null,[a("Helper Functions making it easier to train "),s("code",null,"Lux.jl"),a(" models.")],-1)),i[296]||(i[296]=s("p",null,"Training is meant to be simple and provide extremely basic functionality. We provide basic building blocks which can be seamlessly composed to create complex training pipelines.",-1)),s("details",o,[s("summary",null,[i[0]||(i[0]=s("a",{id:"Lux.Training.TrainState",href:"#Lux.Training.TrainState"},[s("span",{class:"jlbinding"},"Lux.Training.TrainState")],-1)),i[1]||(i[1]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[2]||(i[2]=t('
    julia
    TrainState

    Training State containing:

    Internal fields:

    Warning

    Constructing this object directly shouldn't be considered a stable API. Use the version with the Optimisers API.

    source

    ',7))]),s("details",k,[s("summary",null,[i[3]||(i[3]=s("a",{id:"Lux.Training.TrainState-Tuple{AbstractLuxLayer, Any, Any, AbstractRule}",href:"#Lux.Training.TrainState-Tuple{AbstractLuxLayer, Any, Any, AbstractRule}"},[s("span",{class:"jlbinding"},"Lux.Training.TrainState")],-1)),i[4]||(i[4]=a()),l(e,{type:"info",class:"jlObjectType jlMethod",text:"Method"})]),i[5]||(i[5]=t('
    julia
    TrainState(model::Lux.AbstractLuxLayer, ps, st, optimizer::Optimisers.AbstractRule)

    Constructor for TrainState.

    Arguments

    Returns

    TrainState object.

    source
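    A minimal sketch of constructing a TrainState under the assumptions that Lux, Optimisers, and Random are loaded; the model, learning rate, and sizes here are illustrative, not part of the documented API:

    ```julia
    using Lux, Optimisers, Random

    # A small illustrative model; Lux.setup returns (parameters, states).
    model = Dense(4 => 2)
    ps, st = Lux.setup(Random.default_rng(), model)

    # Bundle model, parameters, states, and an Optimisers rule into a TrainState.
    ts = Training.TrainState(model, ps, st, Adam(0.001f0))
    ```
    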

    ',7))]),s("details",T,[s("summary",null,[i[6]||(i[6]=s("a",{id:"Lux.Training.compute_gradients",href:"#Lux.Training.compute_gradients"},[s("span",{class:"jlbinding"},"Lux.Training.compute_gradients")],-1)),i[7]||(i[7]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=t(`
    julia
    compute_gradients(ad::AbstractADType, objective_function::Function, data,
    -    ts::TrainState)

    Compute the gradients of the objective function wrt parameters stored in ts.

    Backends & AD Packages

    Supported BackendsPackages Needed
    AutoZygoteZygote.jl
    AutoReverseDiff(; compile)ReverseDiff.jl
    AutoTrackerTracker.jl
    AutoEnzymeEnzyme.jl

    Arguments

    Return

    A 4-Tuple containing:

    Known Limitations

    Aliased Gradients

    grads returned by this function might be aliased by the implementation of the gradient backend. For example, if you cache the grads from step i, the new gradients returned in step i + 1 might be aliased by the old gradients. If you want to prevent this, simply use copy(grads) or deepcopy(grads) to make a copy of the gradients.

    source
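    As a sketch of the documented 4-tuple return, assuming Zygote is loaded (which enables the AutoZygote backend) and continuing from a previously constructed ts; mse_loss here is a hypothetical objective returning (loss, updated_states, stats):

    ```julia
    using Zygote  # loading Zygote makes the AutoZygote backend functional

    # Objective must return (loss, updated states, stats NamedTuple).
    function mse_loss(model, ps, st, (x, y))
        ŷ, st_ = model(x, ps, st)
        return sum(abs2, ŷ .- y), st_, (;)
    end

    x, y = rand(Float32, 4, 8), rand(Float32, 2, 8)

    # Returns the documented 4-tuple: gradients, loss, stats, updated TrainState.
    grads, loss, stats, ts = Training.compute_gradients(AutoZygote(), mse_loss, (x, y), ts)
    ```
    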

    `,13))]),s("details",Q,[s("summary",null,[i[9]||(i[9]=s("a",{id:"Lux.Training.apply_gradients",href:"#Lux.Training.apply_gradients"},[s("span",{class:"jlbinding"},"Lux.Training.apply_gradients")],-1)),i[10]||(i[10]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=t('
    julia
    apply_gradients(ts::TrainState, grads)

    Update the parameters stored in ts using the gradients grads.

    Arguments

    Returns

    Updated TrainState object.

    source

    ',7))]),s("details",g,[s("summary",null,[i[12]||(i[12]=s("a",{id:"Lux.Training.apply_gradients!",href:"#Lux.Training.apply_gradients!"},[s("span",{class:"jlbinding"},"Lux.Training.apply_gradients!")],-1)),i[13]||(i[13]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=t('
    julia
    apply_gradients!(ts::TrainState, grads)

    Update the parameters stored in ts using the gradients grads. This is an in-place version of apply_gradients.

    Arguments

    Returns

    Updated TrainState object.

    source

    ',7))]),s("details",m,[s("summary",null,[i[15]||(i[15]=s("a",{id:"Lux.Training.single_train_step",href:"#Lux.Training.single_train_step"},[s("span",{class:"jlbinding"},"Lux.Training.single_train_step")],-1)),i[16]||(i[16]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[17]||(i[17]=t('
    julia
    single_train_step(backend, obj_fn::F, data, ts::TrainState; return_gradients=True())

    Perform a single training step. Computes the gradients using compute_gradients and updates the parameters using apply_gradients. All backends supported via compute_gradients are supported here.

    In most cases you should use single_train_step! instead of this function.

    Keyword Arguments

    Return

    Returned values are the same as single_train_step!.

    source

    ',8))]),s("details",y,[s("summary",null,[i[18]||(i[18]=s("a",{id:"Lux.Training.single_train_step!",href:"#Lux.Training.single_train_step!"},[s("span",{class:"jlbinding"},"Lux.Training.single_train_step!")],-1)),i[19]||(i[19]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=t('
    julia
    single_train_step!(backend, obj_fn::F, data, ts::TrainState; return_gradients=True())

    Perform a single training step. Computes the gradients using compute_gradients and updates the parameters using apply_gradients!. All backends supported via compute_gradients are supported here.

    Keyword Arguments

    Return

    Returned values are the same as compute_gradients. Note that despite the !, only the parameters in ts are updated in place. Users should use the returned ts object for further training steps, otherwise there is no caching and performance will be suboptimal (and absolutely terrible for backends like AutoReactant).

    source
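    The caching note above suggests a loop shaped like the following sketch, which rebinds ts every step; mse_loss, x, and y are assumed to be defined as in a compute_gradients setup:

    ```julia
    # Reuse the returned ts each iteration so backend-specific caches are kept warm.
    for epoch in 1:100
        _, loss, _, ts = Training.single_train_step!(AutoZygote(), mse_loss, (x, y), ts)
    end
    ```
    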

    ',7))]),i[297]||(i[297]=t('

    Loss Functions

    Loss Functions Objects take 2 forms of inputs:

    1. ŷ and y where ŷ is the predicted output and y is the target output.

    2. model, ps, st, (x, y) where model is the model, ps are the parameters, st are the states and (x, y) are the input and target pair. Then it returns the loss, updated states, and an empty named tuple. This makes them compatible with the Training API.

    Warning

    When using ChainRules.jl compatible AD (like Zygote), we only compute the gradients wrt the inputs and drop any gradients wrt the targets.

    ',4)),s("details",c,[s("summary",null,[i[21]||(i[21]=s("a",{id:"Lux.GenericLossFunction",href:"#Lux.GenericLossFunction"},[s("span",{class:"jlbinding"},"Lux.GenericLossFunction")],-1)),i[22]||(i[22]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[23]||(i[23]=t(`
    julia
    GenericLossFunction(loss_fn; agg = mean)

    Takes any function loss_fn that maps 2 number inputs to a single number output. Additionally, array inputs are efficiently broadcasted and aggregated using agg.

    julia
    julia> mseloss = GenericLossFunction((ŷ, y) -> abs2(ŷ - y));
    -
    -julia> y_model = [1.1, 1.9, 3.1];
    -
    -julia> mseloss(y_model, 1:3) ≈ 0.01
    -true

    Special Note

    This function takes any of the LossFunctions.jl public functions into the Lux Losses API with efficient aggregation.

    source

    `,6))]),s("details",E,[s("summary",null,[i[24]||(i[24]=s("a",{id:"Lux.BinaryCrossEntropyLoss",href:"#Lux.BinaryCrossEntropyLoss"},[s("span",{class:"jlbinding"},"Lux.BinaryCrossEntropyLoss")],-1)),i[25]||(i[25]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[56]||(i[56]=t(`
    julia
    BinaryCrossEntropyLoss(; agg = mean, epsilon = nothing,
    -    label_smoothing::Union{Nothing, Real}=nothing,
    -    logits::Union{Bool, Val}=Val(false))

    Binary Cross Entropy Loss with optional label smoothing and fused logit computation.

    Returns the binary cross entropy loss computed as:

    `,3)),s("ul",null,[s("li",null,[i[28]||(i[28]=s("p",null,[a("If "),s("code",null,"logits"),a(" is either "),s("code",null,"false"),a(" or "),s("code",null,"Val(false)"),a(":")],-1)),s("mjx-container",u,[(h(),n("svg",F,i[26]||(i[26]=[t('',1)]))),i[27]||(i[27]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",null,"−"),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))])]),s("li",null,[i[31]||(i[31]=s("p",null,[a("If 
"),s("code",null,"logits"),a(" is "),s("code",null,"true"),a(" or "),s("code",null,"Val(true)"),a(":")],-1)),s("mjx-container",x,[(h(),n("svg",C,i[29]||(i[29]=[t('',1)]))),i[30]||(i[30]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"l"),s("mi",null,"o"),s("mi",null,"g"),s("mi",null,"σ"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))])])]),s("p",null,[i[38]||(i[38]=a("The value of ")),s("mjx-container",L,[(h(),n("svg",f,i[32]||(i[32]=[t('',1)]))),i[33]||(i[33]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[39]||(i[39]=a(" is computed using label smoothing. If ")),i[40]||(i[40]=s("code",null,"label_smoothing",-1)),i[41]||(i[41]=a(" is ")),i[42]||(i[42]=s("code",null,"nothing",-1)),i[43]||(i[43]=a(", then no label smoothing is applied. If ")),i[44]||(i[44]=s("code",null,"label_smoothing",-1)),i[45]||(i[45]=a(" is a real number ")),s("mjx-container",b,[(h(),n("svg",w,i[34]||(i[34]=[t('',1)]))),i[35]||(i[35]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",null,"∈"),s("mo",{stretchy:"false"},"["),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",{stretchy:"false"},"]")])],-1))]),i[46]||(i[46]=a(", then the value of ")),s("mjx-container",H,[(h(),n("svg",v,i[36]||(i[36]=[t('',1)]))),i[37]||(i[37]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[47]||(i[47]=a(" 
is:"))]),s("mjx-container",j,[(h(),n("svg",D,i[48]||(i[48]=[t('',1)]))),i[49]||(i[49]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"="),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"α"),s("mo",null,"∗"),s("mn",null,"0.5")])],-1))]),s("p",null,[i[52]||(i[52]=a("where ")),s("mjx-container",B,[(h(),n("svg",M,i[50]||(i[50]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 61Q390 53 370 41T325 18T267 -2T203 -11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[51]||(i[51]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[53]||(i[53]=a(" is the value of ")),i[54]||(i[54]=s("code",null,"label_smoothing",-1)),i[55]||(i[55]=a("."))]),i[57]||(i[57]=t(`

    Extended Help

    Example

    julia
    julia> bce = BinaryCrossEntropyLoss();
    -
    -julia> y_bin = Bool[1, 0, 1];
    -
    -julia> y_model = Float32[2, -1, pi]
    -3-element Vector{Float32}:
    -  2.0
    - -1.0
    -  3.1415927
    -
    -julia> logitbce = BinaryCrossEntropyLoss(; logits=Val(true));
    -
    -julia> logitbce(y_model, y_bin) ≈ 0.160832f0
    -true
    -
    -julia> bce(sigmoid.(y_model), y_bin) ≈ 0.16083185f0
    -true
    -
    -julia> bce_ls = BinaryCrossEntropyLoss(label_smoothing=0.1);
    -
    -julia> bce_ls(sigmoid.(y_model), y_bin) > bce(sigmoid.(y_model), y_bin)
    -true
    -
    -julia> logitbce_ls = BinaryCrossEntropyLoss(label_smoothing=0.1, logits=Val(true));
    -
    -julia> logitbce_ls(y_model, y_bin) > logitbce(y_model, y_bin)
    -true

    source

    `,4))]),s("details",V,[s("summary",null,[i[58]||(i[58]=s("a",{id:"Lux.BinaryFocalLoss",href:"#Lux.BinaryFocalLoss"},[s("span",{class:"jlbinding"},"Lux.BinaryFocalLoss")],-1)),i[59]||(i[59]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[70]||(i[70]=t('
    julia
    BinaryFocalLoss(; gamma = 2, agg = mean, epsilon = nothing)
    ',1)),s("p",null,[i[62]||(i[62]=a("Return the binary focal loss [1]. The model input, ")),s("mjx-container",A,[(h(),n("svg",Z,i[60]||(i[60]=[t('',1)]))),i[61]||(i[61]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[63]||(i[63]=a(", is expected to be normalized (i.e. softmax output)."))]),s("p",null,[i[66]||(i[66]=a("For ")),s("mjx-container",O,[(h(),n("svg",S,i[64]||(i[64]=[t('',1)]))),i[65]||(i[65]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ"),s("mo",null,"="),s("mn",null,"0")])],-1))]),i[67]||(i[67]=a(" this is equivalent to ")),i[68]||(i[68]=s("a",{href:"/dev/api/Lux/utilities#Lux.BinaryCrossEntropyLoss"},[s("code",null,"BinaryCrossEntropyLoss")],-1)),i[69]||(i[69]=a("."))]),i[71]||(i[71]=t(`

    Example

    julia
    julia> y = [0  1  0
    -            1  0  1];
    -
    -julia> ŷ = [0.268941  0.5  0.268941
    -            0.731059  0.5  0.731059];
    -
    -julia> BinaryFocalLoss()(ŷ, y)  0.0728675615927385
    -true
    -
    -julia> BinaryFocalLoss(gamma=0)(ŷ, y)  BinaryCrossEntropyLoss()(ŷ, y)
    -true

    References

[1] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE International Conference on Computer Vision. 2017.

    source

    `,5))]),s("details",R,[s("summary",null,[i[72]||(i[72]=s("a",{id:"Lux.CrossEntropyLoss",href:"#Lux.CrossEntropyLoss"},[s("span",{class:"jlbinding"},"Lux.CrossEntropyLoss")],-1)),i[73]||(i[73]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[115]||(i[115]=t(`
    julia
    CrossEntropyLoss(;
    -    agg=mean, epsilon=nothing, dims=1, logits::Union{Bool, Val}=Val(false),
    -    label_smoothing::Union{Nothing, Real}=nothing
    -)
    `,1)),s("p",null,[i[76]||(i[76]=a("Return the cross entropy loss which is used in multi-class classification tasks. The input, ")),s("mjx-container",N,[(h(),n("svg",z,i[74]||(i[74]=[t('',1)]))),i[75]||(i[75]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[77]||(i[77]=a(", is expected to be normalized (i.e. ")),i[78]||(i[78]=s("code",null,"softmax",-1)),i[79]||(i[79]=a(" output) if ")),i[80]||(i[80]=s("code",null,"logits",-1)),i[81]||(i[81]=a(" is ")),i[82]||(i[82]=s("code",null,"false",-1)),i[83]||(i[83]=a(" or ")),i[84]||(i[84]=s("code",null,"Val(false)",-1)),i[85]||(i[85]=a("."))]),i[116]||(i[116]=s("p",null,"The loss is calculated as:",-1)),s("mjx-container",P,[(h(),n("svg",I,i[86]||(i[86]=[t('',1)]))),i[87]||(i[87]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",null,"−"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),s("p",null,[i[96]||(i[96]=a("where ")),s("mjx-container",q,[(h(),n("svg",G,i[88]||(i[88]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D716",d:"M227 -11Q149 -11 95 41T40 174Q40 262 87 322Q121 367 173 396T287 430Q289 431 329 431H367Q382 426 382 411Q382 385 341 385H325H312Q191 385 154 277L150 265H327Q340 256 340 246Q340 228 320 219H138V217Q128 187 128 143Q128 77 160 52T231 26Q258 26 284 36T326 57T343 68Q350 68 354 58T358 39Q358 36 357 35Q354 31 337 21T289 0T227 -11Z",style:{"stroke-width":"3"}})])])],-1)]))),i[89]||(i[89]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"ϵ")])],-1))]),i[97]||(i[97]=a(" is added for numerical stability. 
The value of ")),s("mjx-container",X,[(h(),n("svg",J,i[90]||(i[90]=[t('',1)]))),i[91]||(i[91]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[98]||(i[98]=a(" is computed using label smoothing. If ")),i[99]||(i[99]=s("code",null,"label_smoothing",-1)),i[100]||(i[100]=a(" is ")),i[101]||(i[101]=s("code",null,"nothing",-1)),i[102]||(i[102]=a(", then no label smoothing is applied. If ")),i[103]||(i[103]=s("code",null,"label_smoothing",-1)),i[104]||(i[104]=a(" is a real number ")),s("mjx-container",U,[(h(),n("svg",W,i[92]||(i[92]=[t('',1)]))),i[93]||(i[93]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",null,"∈"),s("mo",{stretchy:"false"},"["),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",{stretchy:"false"},"]")])],-1))]),i[105]||(i[105]=a(", then the value of ")),s("mjx-container",K,[(h(),n("svg",Y,i[94]||(i[94]=[t('',1)]))),i[95]||(i[95]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])])])],-1))]),i[106]||(i[106]=a(" is calculated as:"))]),s("mjx-container",$,[(h(),n("svg",_,i[107]||(i[107]=[t('',1)]))),i[108]||(i[108]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"~")])]),s("mo",null,"="),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"∗"),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"α"),s("mo",null,"∗"),s("mtext",null,"size along dim")])],-1))]),s("p",null,[i[111]||(i[111]=a("where ")),s("mjx-container",s1,[(h(),n("svg",i1,i[109]||(i[109]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 
61Q390 53 370 41T325 18T267 -2T203 -11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[110]||(i[110]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[112]||(i[112]=a(" is the value of ")),i[113]||(i[113]=s("code",null,"label_smoothing",-1)),i[114]||(i[114]=a("."))]),i[117]||(i[117]=t(`

    Extended Help

    Example

    julia
    julia> y = [1  0  0  0  1
    -            0  1  0  1  0
    -            0  0  1  0  0]
    -3×5 Matrix{Int64}:
    - 1  0  0  0  1
    - 0  1  0  1  0
    - 0  0  1  0  0
    -
    -julia> y_model = softmax(reshape(-7:7, 3, 5) .* 1f0)
    -3×5 Matrix{Float32}:
    - 0.0900306  0.0900306  0.0900306  0.0900306  0.0900306
    - 0.244728   0.244728   0.244728   0.244728   0.244728
    - 0.665241   0.665241   0.665241   0.665241   0.665241
    -
    -julia> CrossEntropyLoss()(y_model, y)  1.6076053f0
    -true
    -
    -julia> 5 * 1.6076053f0 CrossEntropyLoss(; agg=sum)(y_model, y)
    -true
    -
    -julia> CrossEntropyLoss(label_smoothing=0.15)(y_model, y)  1.5776052f0
    -true

    source

    `,4))]),s("details",a1,[s("summary",null,[i[118]||(i[118]=s("a",{id:"Lux.DiceCoeffLoss",href:"#Lux.DiceCoeffLoss"},[s("span",{class:"jlbinding"},"Lux.DiceCoeffLoss")],-1)),i[119]||(i[119]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[128]||(i[128]=t('
    julia
    DiceCoeffLoss(; smooth = true, agg = mean)

Return the Dice Coefficient loss [1] which is used in segmentation tasks. The Dice coefficient is similar to the F1 score. The loss is calculated as:

    ',2)),s("mjx-container",t1,[(h(),n("svg",e1,i[120]||(i[120]=[t('',1)]))),i[121]||(i[121]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"a"),s("mi",null,"g"),s("mi",null,"g"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mfrac",null,[s("mrow",null,[s("mn",null,"2"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"α")]),s("mrow",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("msup",null,[s("mi",null,"y"),s("mn",null,"2")]),s("mo",null,"+"),s("mo",{"data-mjx-texclass":"OP"},"∑"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mn",null,"2")]),s("mo",null,"+"),s("mi",null,"α")])]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),s("p",null,[i[124]||(i[124]=a("where ")),s("mjx-container",l1,[(h(),n("svg",n1,i[122]||(i[122]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FC",d:"M34 156Q34 270 120 356T309 442Q379 442 421 402T478 304Q484 275 485 237V208Q534 282 560 374Q564 388 566 390T582 393Q603 393 603 385Q603 376 594 346T558 261T497 161L486 147L487 123Q489 67 495 47T514 26Q528 28 540 37T557 60Q559 67 562 68T577 70Q597 70 597 62Q597 56 591 43Q579 19 556 5T512 -10H505Q438 -10 414 62L411 69L400 61Q390 53 370 41T325 18T267 -2T203 
-11Q124 -11 79 39T34 156ZM208 26Q257 26 306 47T379 90L403 112Q401 255 396 290Q382 405 304 405Q235 405 183 332Q156 292 139 224T121 120Q121 71 146 49T208 26Z",style:{"stroke-width":"3"}})])])],-1)]))),i[123]||(i[123]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"α")])],-1))]),i[125]||(i[125]=a(" is the smoothing factor (")),i[126]||(i[126]=s("code",null,"smooth",-1)),i[127]||(i[127]=a(")."))]),i[129]||(i[129]=t(`

    Example

    julia
    julia> y_pred = [1.1, 2.1, 3.1];
    -
    -julia> DiceCoeffLoss()(y_pred, 1:3)   0.000992391663909964
    -true
    -
    -julia> 1 - DiceCoeffLoss()(y_pred, 1:3)   0.99900760833609
    -true
    -
    -julia> DiceCoeffLoss()(reshape(y_pred, 3, 1), reshape(1:3, 3, 1))  0.000992391663909964
    -true

    References

[1] Milletari, Fausto, Nassir Navab, and Seyed-Ahmad Ahmadi. "V-net: Fully convolutional neural networks for volumetric medical image segmentation." 2016 Fourth International Conference on 3D Vision (3DV). IEEE, 2016.

    source

    `,5))]),s("details",h1,[s("summary",null,[i[130]||(i[130]=s("a",{id:"Lux.FocalLoss",href:"#Lux.FocalLoss"},[s("span",{class:"jlbinding"},"Lux.FocalLoss")],-1)),i[131]||(i[131]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[147]||(i[147]=t('
    julia
    FocalLoss(; gamma = 2, dims = 1, agg = mean, epsilon = nothing)
    ',1)),s("p",null,[i[134]||(i[134]=a("Return the focal loss [1] which can be used in classification tasks with highly imbalanced classes. It down-weights well-classified examples and focuses on hard examples. The input, ")),s("mjx-container",p1,[(h(),n("svg",r1,i[132]||(i[132]=[t('',1)]))),i[133]||(i[133]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[135]||(i[135]=a(", is expected to be normalized (i.e. ")),i[136]||(i[136]=s("code",null,"softmax",-1)),i[137]||(i[137]=a(" output)."))]),s("p",null,[i[142]||(i[142]=a("The modulating factor ")),s("mjx-container",d1,[(h(),n("svg",o1,i[138]||(i[138]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[139]||(i[139]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ")])],-1))]),i[143]||(i[143]=a(", controls the down-weighting strength. For ")),s("mjx-container",k1,[(h(),n("svg",T1,i[140]||(i[140]=[t('',1)]))),i[141]||(i[141]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"γ"),s("mo",null,"="),s("mn",null,"0")])],-1))]),i[144]||(i[144]=a(" this is equivalent to ")),i[145]||(i[145]=s("a",{href:"/dev/api/Lux/utilities#Lux.CrossEntropyLoss"},[s("code",null,"CrossEntropyLoss")],-1)),i[146]||(i[146]=a("."))]),i[148]||(i[148]=t(`

    Example

    julia
    julia> y = [1  0  0  0  1
    -            0  1  0  1  0
    -            0  0  1  0  0]
    -3×5 Matrix{Int64}:
    - 1  0  0  0  1
    - 0  1  0  1  0
    - 0  0  1  0  0
    -
    -julia> ŷ = softmax(reshape(-7:7, 3, 5) .* 1f0)
    -3×5 Matrix{Float32}:
    - 0.0900306  0.0900306  0.0900306  0.0900306  0.0900306
    - 0.244728   0.244728   0.244728   0.244728   0.244728
    - 0.665241   0.665241   0.665241   0.665241   0.665241
    -
    -julia> FocalLoss()(ŷ, y)  1.1277556f0
    -true

    References

[1] Lin, Tsung-Yi, et al. "Focal loss for dense object detection." Proceedings of the IEEE International Conference on Computer Vision. 2017.

    source

    `,5))]),s("details",Q1,[s("summary",null,[i[149]||(i[149]=s("a",{id:"Lux.HingeLoss",href:"#Lux.HingeLoss"},[s("span",{class:"jlbinding"},"Lux.HingeLoss")],-1)),i[150]||(i[150]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[153]||(i[153]=t('
    julia
    HingeLoss(; agg = mean)

Return the hinge loss given the prediction ŷ and true labels y (containing 1 or -1), calculated as:

    ',2)),s("mjx-container",g1,[(h(),n("svg",m1,i[151]||(i[151]=[t('',1)]))),i[152]||(i[152]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[154]||(i[154]=t(`

    Usually used with classifiers like Support Vector Machines.

    Example

    julia
    julia> loss = HingeLoss();
    -
    -julia> y_true = [1, -1, 1, 1];
    -
    -julia> y_pred = [0.1, 0.3, 1, 1.5];
    -
    -julia> loss(y_pred, y_true)  0.55
    -true

    source

    `,4))]),s("details",y1,[s("summary",null,[i[155]||(i[155]=s("a",{id:"Lux.HuberLoss",href:"#Lux.HuberLoss"},[s("span",{class:"jlbinding"},"Lux.HuberLoss")],-1)),i[156]||(i[156]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[165]||(i[165]=t('
    julia
    HuberLoss(; delta = 1, agg = mean)

    Returns the Huber loss, calculated as:

    ',2)),s("mjx-container",c1,[(h(),n("svg",E1,i[157]||(i[157]=[t('',1)]))),i[158]||(i[158]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"L"),s("mo",null,"="),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"{"),s("mtable",{columnalign:"left left",columnspacing:"1em",rowspacing:".2em"},[s("mtr",null,[s("mtd",null,[s("mn",null,"0.5"),s("mo",null,"∗"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mn",null,"2")])]),s("mtd",null,[s("mtext",null,"if 
"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mo",null,"≤"),s("mi",null,"δ")])]),s("mtr",null,[s("mtd",null,[s("mi",null,"δ"),s("mo",null,"∗"),s("mo",{stretchy:"false"},"("),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"y"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mo",null,"−"),s("mn",null,"0.5"),s("mo",null,"∗"),s("mi",null,"δ"),s("mo",{stretchy:"false"},")")]),s("mtd",null,[s("mtext",null,"otherwise")])])]),s("mo",{"data-mjx-texclass":"CLOSE",fence:"true",stretchy:"true",symmetric:"true"})])])],-1))]),s("p",null,[i[161]||(i[161]=a("where ")),s("mjx-container",u1,[(h(),n("svg",F1,i[159]||(i[159]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D6FF",d:"M195 609Q195 656 227 686T302 717Q319 716 351 709T407 697T433 690Q451 682 451 662Q451 644 438 628T403 612Q382 612 348 641T288 671T249 657T235 628Q235 584 334 463Q401 379 401 292Q401 169 340 80T205 -10H198Q127 -10 83 36T36 153Q36 286 151 382Q191 413 252 434Q252 435 245 449T230 481T214 521T201 566T195 609ZM112 130Q112 83 136 55T204 27Q233 27 256 51T291 111T309 178T316 232Q316 267 309 298T295 344T269 400L259 396Q215 381 183 342T137 256T118 179T112 130Z",style:{"stroke-width":"3"}})])])],-1)]))),i[160]||(i[160]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 
0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"δ")])],-1))]),i[162]||(i[162]=a(" is the ")),i[163]||(i[163]=s("code",null,"delta",-1)),i[164]||(i[164]=a(" parameter."))]),i[166]||(i[166]=t(`

    Example

    julia
    julia> y_model = [1.1, 2.1, 3.1];
    -
    -julia> HuberLoss()(y_model, 1:3)  0.005000000000000009
    -true
    -
    -julia> HuberLoss(delta=0.05)(y_model, 1:3)  0.003750000000000005
    -true

    source

    `,3))]),s("details",x1,[s("summary",null,[i[167]||(i[167]=s("a",{id:"Lux.KLDivergenceLoss",href:"#Lux.KLDivergenceLoss"},[s("span",{class:"jlbinding"},"Lux.KLDivergenceLoss")],-1)),i[168]||(i[168]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[176]||(i[176]=t('
    julia
    KLDivergenceLoss(; dims = 1, agg = mean, epsilon = nothing, label_smoothing = nothing)
    ',1)),s("p",null,[i[173]||(i[173]=a("Return the Kullback-Leibler Divergence loss between the predicted distribution ")),s("mjx-container",C1,[(h(),n("svg",L1,i[169]||(i[169]=[t('',1)]))),i[170]||(i[170]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[174]||(i[174]=a(" and the true distribution ")),s("mjx-container",f1,[(h(),n("svg",b1,i[171]||(i[171]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D466",d:"M21 287Q21 301 36 335T84 406T158 442Q199 442 224 419T250 355Q248 336 247 334Q247 331 231 288T198 191T182 105Q182 62 196 45T238 27Q261 27 281 38T312 61T339 94Q339 95 344 114T358 173T377 247Q415 397 419 404Q432 431 462 431Q475 431 483 424T494 412T496 403Q496 390 447 193T391 -23Q363 -106 294 -155T156 -205Q111 -205 77 -183T43 -117Q43 -95 50 -80T69 -58T89 -48T106 -45Q150 -45 150 -87Q150 -107 138 -122T115 -142T102 -147L99 -148Q101 -153 118 -160T152 -167H160Q177 -167 186 -165Q219 -156 247 -127T290 -65T313 -9T321 21L315 17Q309 13 296 6T270 -6Q250 -11 231 -11Q185 -11 150 11T104 82Q103 89 103 113Q103 170 138 262T173 379Q173 380 173 381Q173 390 173 393T169 400T158 404H154Q131 404 112 385T82 344T65 302T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),i[172]||(i[172]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y")])],-1))]),i[175]||(i[175]=a(":"))]),i[177]||(i[177]=t(`

The KL divergence measures how different one probability distribution is from another. It is always non-negative, and zero only when the two distributions are equal.

    For epsilon and label_smoothing, see CrossEntropyLoss.

    Example

    julia
    julia> p1 = [1 0; 0 1]
    -2×2 Matrix{Int64}:
    - 1  0
    - 0  1
    -
    -julia> p2 = fill(0.5, 2, 2)
    -2×2 Matrix{Float64}:
    - 0.5  0.5
    - 0.5  0.5
    -
    -julia> KLDivergenceLoss()(p2, p1)  log(2)
    -true
    -
    -julia> KLDivergenceLoss(; agg=sum)(p2, p1)  2 * log(2)
    -true
    -
    -julia> KLDivergenceLoss(; epsilon=0)(p2, p2)
    -0.0
    -
    -julia> KLDivergenceLoss(; epsilon=0)(p1, p2)
    -Inf

    source

    `,5))]),s("details",w1,[s("summary",null,[i[178]||(i[178]=s("a",{id:"Lux.MAELoss",href:"#Lux.MAELoss"},[s("span",{class:"jlbinding"},"Lux.MAELoss")],-1)),i[179]||(i[179]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[182]||(i[182]=t('
    julia
    MAELoss(; agg = mean)

    Returns the loss corresponding to mean absolute error:

    ',2)),s("mjx-container",H1,[(h(),n("svg",v1,i[180]||(i[180]=[t('',1)]))),i[181]||(i[181]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"|"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",{"data-mjx-texclass":"CLOSE"},"|")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[183]||(i[183]=t(`

    Example

    julia
    julia> loss = MAELoss();
    -
    -julia> y_model = [1.1, 1.9, 3.1];
    -
    -julia> loss(y_model, 1:3)  0.1
    -true

    source

    `,3))]),s("details",j1,[s("summary",null,[i[184]||(i[184]=s("a",{id:"Lux.MSELoss",href:"#Lux.MSELoss"},[s("span",{class:"jlbinding"},"Lux.MSELoss")],-1)),i[185]||(i[185]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[188]||(i[188]=t('
    julia
    MSELoss(; agg = mean)

    Returns the loss corresponding to mean squared error:

    ',2)),s("mjx-container",D1,[(h(),n("svg",B1,i[186]||(i[186]=[t('',1)]))),i[187]||(i[187]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("msup",null,[s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[189]||(i[189]=t(`

    Example

    julia
    julia> loss = MSELoss();
    -
    -julia> y_model = [1.1, 1.9, 3.1];
    -
    -julia> loss(y_model, 1:3)  0.01
    -true

    source

    `,3))]),s("details",M1,[s("summary",null,[i[190]||(i[190]=s("a",{id:"Lux.MSLELoss",href:"#Lux.MSLELoss"},[s("span",{class:"jlbinding"},"Lux.MSLELoss")],-1)),i[191]||(i[191]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[194]||(i[194]=t('
    julia
    MSLELoss(; agg = mean, epsilon = nothing)

    Returns the loss corresponding to mean squared logarithmic error:

    ',2)),s("mjx-container",V1,[(h(),n("svg",A1,i[192]||(i[192]=[t('',1)]))),i[193]||(i[193]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("msup",null,[s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",null,"−"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mi",null,"y"),s("mo",null,"+"),s("mi",null,"ϵ"),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")]),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[195]||(i[195]=t(`

    epsilon is added to both y and ŷ to prevent taking the logarithm of zero. If epsilon is nothing, then we set it to eps(<type of y and ŷ>).

    Example

    julia
    julia> loss = MSLELoss();
    -
    -julia> loss(Float32[1.1, 2.2, 3.3], 1:3)  0.009084041f0
    -true
    -
    -julia> loss(Float32[0.9, 1.8, 2.7], 1:3)  0.011100831f0
    -true

    source

    `,4))]),s("details",Z1,[s("summary",null,[i[196]||(i[196]=s("a",{id:"Lux.PoissonLoss",href:"#Lux.PoissonLoss"},[s("span",{class:"jlbinding"},"Lux.PoissonLoss")],-1)),i[197]||(i[197]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[207]||(i[207]=t('
    julia
    PoissonLoss(; agg = mean, epsilon = nothing)
    ',1)),s("p",null,[i[202]||(i[202]=a("Return how much the predicted distribution ")),s("mjx-container",O1,[(h(),n("svg",S1,i[198]||(i[198]=[t('',1)]))),i[199]||(i[199]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])])])],-1))]),i[203]||(i[203]=a(" diverges from the expected Poisson distribution ")),s("mjx-container",R1,[(h(),n("svg",N1,i[200]||(i[200]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D466",d:"M21 287Q21 301 36 335T84 406T158 442Q199 442 224 419T250 355Q248 336 247 334Q247 331 231 288T198 191T182 105Q182 62 196 45T238 27Q261 27 281 38T312 61T339 94Q339 95 344 114T358 173T377 247Q415 397 419 404Q432 431 462 431Q475 431 483 424T494 412T496 403Q496 390 447 193T391 -23Q363 -106 294 -155T156 -205Q111 -205 77 -183T43 -117Q43 -95 50 -80T69 -58T89 -48T106 -45Q150 -45 150 -87Q150 -107 138 -122T115 -142T102 -147L99 -148Q101 -153 118 -160T152 -167H160Q177 -167 186 -165Q219 -156 247 -127T290 -65T313 -9T321 21L315 17Q309 13 296 6T270 -6Q250 -11 231 -11Q185 -11 150 11T104 82Q103 89 103 113Q103 170 138 262T173 379Q173 380 173 381Q173 390 173 393T169 400T158 404H154Q131 404 112 385T82 344T65 302T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),i[201]||(i[201]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y")])],-1))]),i[204]||(i[204]=a(", calculated as:"))]),s("mjx-container",z1,[(h(),n("svg",P1,i[205]||(i[205]=[t('',1)]))),i[206]||(i[206]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",null,"−"),s("mi",null,"y"),s("mo",null,"∗"),s("mi",null,"log"),s("mo",{"data-mjx-texclass":"NONE"},"⁡"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mo",{stretchy:"false"},")"),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[208]||(i[208]=t(`

    Example

    julia
    julia> y_model = [1, 3, 3];  # data should only take integral values
    -
    -julia> PoissonLoss()(y_model, 1:3)  0.502312852219817
    -true

    source

    `,3))]),s("details",I1,[s("summary",null,[i[209]||(i[209]=s("a",{id:"Lux.SiameseContrastiveLoss",href:"#Lux.SiameseContrastiveLoss"},[s("span",{class:"jlbinding"},"Lux.SiameseContrastiveLoss")],-1)),i[210]||(i[210]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[213]||(i[213]=t('
    julia
    SiameseContrastiveLoss(; margin = true, agg = mean)

    Return the contrastive loss [1] which can be useful for training Siamese Networks. It is given by:

    ',2)),s("mjx-container",q1,[(h(),n("svg",G1,i[211]||(i[211]=[t('',1)]))),i[212]||(i[212]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{stretchy:"false"},"("),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mo",{stretchy:"false"},")"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("mn",null,"2")]),s("mo",null,"+"),s("mi",null,"y"),s("mo",null,"∗"),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mtext",null,"margin"),s("mo",null,"−"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{stretchy:"false"},")"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[214]||(i[214]=t(`

    Specify margin to set the baseline for distance at which pairs are dissimilar.

    Example

    julia
    julia>= [0.5, 1.5, 2.5];
    -
    -julia> SiameseContrastiveLoss()(ŷ, 1:3)  -4.833333333333333
    -true
    -
    -julia> SiameseContrastiveLoss(margin=2)(ŷ, 1:3)  -4.0
    -true

    References

    [1] Hadsell, Raia, Sumit Chopra, and Yann LeCun. "Dimensionality reduction by learning an invariant mapping." 2006 IEEE computer society conference on computer vision and pattern recognition (CVPR'06). Vol. 2. IEEE, 2006.

    source

    `,6))]),s("details",X1,[s("summary",null,[i[215]||(i[215]=s("a",{id:"Lux.SquaredHingeLoss",href:"#Lux.SquaredHingeLoss"},[s("span",{class:"jlbinding"},"Lux.SquaredHingeLoss")],-1)),i[216]||(i[216]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[219]||(i[219]=t('
    julia
    SquaredHingeLoss(; agg = mean)

    Return the squared hinge loss given the prediction ŷ and true labels y (containing 1 or -1), calculated as:

    ',2)),s("mjx-container",J1,[(h(),n("svg",U1,i[217]||(i[217]=[t('',1)]))),i[218]||(i[218]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"agg"),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"("),s("mo",{"data-mjx-texclass":"OP",movablelimits:"true"},"max"),s("mo",{stretchy:"false"},"("),s("mn",null,"0"),s("mo",null,","),s("mn",null,"1"),s("mo",null,"−"),s("mi",null,"y"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"y"),s("mo",{stretchy:"false"},"^")])]),s("msup",null,[s("mo",{stretchy:"false"},")"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"CLOSE"},")")])])],-1))]),i[220]||(i[220]=t(`

    Usually used with classifiers like Support Vector Machines.

    Example

    julia
    julia> loss = SquaredHingeLoss();
    -
    -julia> y_true = [1, -1, 1, 1];
    -
    -julia> y_pred = [0.1, 0.3, 1, 1.5];
    -
    -julia> loss(y_pred, y_true)  0.625
    -true

    source

    `,4))]),i[298]||(i[298]=s("h2",{id:"LuxOps-Module",tabindex:"-1"},[a("LuxOps Module "),s("a",{class:"header-anchor",href:"#LuxOps-Module","aria-label":'Permalink to "LuxOps Module {#LuxOps-Module}"'},"​")],-1)),s("details",W1,[s("summary",null,[i[221]||(i[221]=s("a",{id:"Lux.LuxOps",href:"#Lux.LuxOps"},[s("span",{class:"jlbinding"},"Lux.LuxOps")],-1)),i[222]||(i[222]=a()),l(e,{type:"info",class:"jlObjectType jlModule",text:"Module"})]),i[223]||(i[223]=t('
    julia
    LuxOps

    This module is a part of Lux.jl. It contains operations that are useful in a DL context. Additionally, certain operations here alias Base functions to behave more sensibly with GPUArrays.

    source

    ',3))]),s("details",K1,[s("summary",null,[i[224]||(i[224]=s("a",{id:"Lux.LuxOps.eachslice",href:"#Lux.LuxOps.eachslice"},[s("span",{class:"jlbinding"},"Lux.LuxOps.eachslice")],-1)),i[225]||(i[225]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[226]||(i[226]=t('
    julia
    eachslice(x, dims::Val)

    Same as Base.eachslice but doesn't produce a SubArray for the slices if x is a GPUArray.

    Additional dispatches for RNN helpers are also provided for TimeLastIndex and BatchLastIndex.

    source

    ',4))]),s("details",Y1,[s("summary",null,[i[227]||(i[227]=s("a",{id:"Lux.LuxOps.foldl_init",href:"#Lux.LuxOps.foldl_init"},[s("span",{class:"jlbinding"},"Lux.LuxOps.foldl_init")],-1)),i[228]||(i[228]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[229]||(i[229]=t(`
    julia
    foldl_init(op, x)
    -foldl_init(op, x, init)

    Exactly the same as foldl(op, x; init) in the forward pass, but gives gradients with respect to init in the backward pass.

    source

    `,3))]),s("details",$1,[s("summary",null,[i[230]||(i[230]=s("a",{id:"Lux.LuxOps.getproperty",href:"#Lux.LuxOps.getproperty"},[s("span",{class:"jlbinding"},"Lux.LuxOps.getproperty")],-1)),i[231]||(i[231]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[232]||(i[232]=t(`
    julia
    getproperty(x, ::Val{v})
    -getproperty(x, ::StaticSymbol{v})

    Similar to Base.getproperty but requires a Val (or Static.StaticSymbol). Additionally, if v is not present in x, then nothing is returned.

    source

    `,3))]),s("details",_1,[s("summary",null,[i[233]||(i[233]=s("a",{id:"Lux.LuxOps.xlogx",href:"#Lux.LuxOps.xlogx"},[s("span",{class:"jlbinding"},"Lux.LuxOps.xlogx")],-1)),i[234]||(i[234]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[235]||(i[235]=t('
    julia
    xlogx(x::Number)

    Return x * log(x) for x ≥ 0, handling x == 0 by taking the limit from above, to get zero.

    source

    ',3))]),s("details",ss,[s("summary",null,[i[236]||(i[236]=s("a",{id:"Lux.LuxOps.xlogy",href:"#Lux.LuxOps.xlogy"},[s("span",{class:"jlbinding"},"Lux.LuxOps.xlogy")],-1)),i[237]||(i[237]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[238]||(i[238]=t('
    julia
    xlogy(x::Number, y::Number)

    Return x * log(y) for y > 0, and zero when x == 0.

    source

    ',3))]),s("details",is,[s("summary",null,[i[239]||(i[239]=s("a",{id:"Lux.LuxOps.istraining",href:"#Lux.LuxOps.istraining"},[s("span",{class:"jlbinding"},"Lux.LuxOps.istraining")],-1)),i[240]||(i[240]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[241]||(i[241]=t(`
    julia
    istraining(::Val{training})
    -istraining(::StaticBool)
    -istraining(::Bool)
    -istraining(st::NamedTuple)

    Returns true if training is true or if st contains a training field with value true. Otherwise, returns false.

    source

    `,3))]),s("details",as,[s("summary",null,[i[242]||(i[242]=s("a",{id:"Lux.LuxOps.multigate",href:"#Lux.LuxOps.multigate"},[s("span",{class:"jlbinding"},"Lux.LuxOps.multigate")],-1)),i[243]||(i[243]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[244]||(i[244]=t('
    julia
    multigate(x::AbstractArray, ::Val{N})

    Split up x into N equally sized chunks (along dimension 1).

    source

    ',3))]),i[299]||(i[299]=s("h2",{id:"Recursive-Operations",tabindex:"-1"},[a("Recursive Operations "),s("a",{class:"header-anchor",href:"#Recursive-Operations","aria-label":'Permalink to "Recursive Operations {#Recursive-Operations}"'},"​")],-1)),s("details",ts,[s("summary",null,[i[245]||(i[245]=s("a",{id:"Lux.recursive_map",href:"#Lux.recursive_map"},[s("span",{class:"jlbinding"},"Lux.recursive_map")],-1)),i[246]||(i[246]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[247]||(i[247]=t('
    julia
    recursive_map(f, x, args...)

    Similar to fmap(f, args...) but with restricted support for the notion of "leaf" types. However, this allows for more efficient and type stable implementations of recursive operations.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    How does this work?

    For the following types it directly defines recursion rules:

    1. AbstractArray: If eltype is isbitstype, then f is applied to the array, else we recurse on the array.

    2. Tuple/NamedTuple: We recurse on the values.

    3. Number/Val/Nothing: We directly apply f.

    4. For all other types, we recurse on the fields using Functors.fmap.

    Note

    In most cases, users should gravitate towards Functors.fmap if it is being used outside of hot loops. Even for other cases, it is always recommended to verify the correctness of this implementation for specific usecases.

    source

    ',8))]),s("details",es,[s("summary",null,[i[248]||(i[248]=s("a",{id:"Lux.recursive_add!!",href:"#Lux.recursive_add!!"},[s("span",{class:"jlbinding"},"Lux.recursive_add!!")],-1)),i[249]||(i[249]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[250]||(i[250]=t('
    julia
    recursive_add!!(x, y)

    Recursively add the leaves of two nested structures x and y. In Functor language, this is equivalent to doing fmap(+, x, y), but this implementation uses type stable code for common cases.

    Any leaves of x that are arrays and allow in-place addition will be modified in place.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',5))]),s("details",ls,[s("summary",null,[i[251]||(i[251]=s("a",{id:"Lux.recursive_copyto!",href:"#Lux.recursive_copyto!"},[s("span",{class:"jlbinding"},"Lux.recursive_copyto!")],-1)),i[252]||(i[252]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[253]||(i[253]=t('
    julia
    recursive_copyto!(x, y)

    Recursively copy the leaves of two nested structures x and y. In Functor language, this is equivalent to doing fmap(copyto!, x, y), but this implementation uses type stable code for common cases. Note that any immutable leaf will lead to an error.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',4))]),s("details",ns,[s("summary",null,[i[254]||(i[254]=s("a",{id:"Lux.recursive_eltype",href:"#Lux.recursive_eltype"},[s("span",{class:"jlbinding"},"Lux.recursive_eltype")],-1)),i[255]||(i[255]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[256]||(i[256]=t('
    julia
    recursive_eltype(x, unwrap_ad_types = Val(false))

    Recursively determine the element type of a nested structure x. This is equivalent to doing fmap(Lux.Utils.eltype, x), but this implementation uses type stable code for common cases.

    For ambiguous inputs like nothing and Val types we return Bool as the eltype.

    If unwrap_ad_types is set to Val(true) then for tracing and operator overloading based ADs (ForwardDiff, ReverseDiff, Tracker), this function will return the eltype of the unwrapped value.

    source

    ',5))]),s("details",hs,[s("summary",null,[i[257]||(i[257]=s("a",{id:"Lux.recursive_make_zero",href:"#Lux.recursive_make_zero"},[s("span",{class:"jlbinding"},"Lux.recursive_make_zero")],-1)),i[258]||(i[258]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[259]||(i[259]=t('
    julia
    recursive_make_zero(x)

    Recursively create a zero value for a nested structure x. This is equivalent to doing fmap(zero, x), but this implementation uses type stable code for common cases.

    See also Lux.recursive_make_zero!!.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',5))]),s("details",ps,[s("summary",null,[i[260]||(i[260]=s("a",{id:"Lux.recursive_make_zero!!",href:"#Lux.recursive_make_zero!!"},[s("span",{class:"jlbinding"},"Lux.recursive_make_zero!!")],-1)),i[261]||(i[261]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[262]||(i[262]=t('
    julia
    recursive_make_zero!!(x)

    Recursively create a zero value for a nested structure x. Leaves that can be mutated with in-place zeroing will be modified in place.

    See also Lux.recursive_make_zero for fully out-of-place version.

    Deprecation Warning

    Starting Lux v1.3.0, this function is deprecated in favor of Functors.fmap. Functors v0.5 made significant strides towards improving the performance of fmap and hence this function has been deprecated. Users are encouraged to use Functors.fmap instead.

    source

    ',5))]),i[300]||(i[300]=s("h2",{id:"Updating-Floating-Point-Precision",tabindex:"-1"},[a("Updating Floating Point Precision "),s("a",{class:"header-anchor",href:"#Updating-Floating-Point-Precision","aria-label":'Permalink to "Updating Floating Point Precision {#Updating-Floating-Point-Precision}"'},"​")],-1)),i[301]||(i[301]=s("p",null,"By default, Lux uses Float32 for all parameters and states. To update the precision simply pass the parameters / states / arrays into one of the following functions.",-1)),s("details",rs,[s("summary",null,[i[263]||(i[263]=s("a",{id:"Lux.f16",href:"#Lux.f16"},[s("span",{class:"jlbinding"},"Lux.f16")],-1)),i[264]||(i[264]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[265]||(i[265]=t('
    julia
    f16(m)

    Converts the floating point values of m to Float16. To avoid recursion into structs, mark them with Functors.@leaf.

    source

    ',3))]),s("details",ds,[s("summary",null,[i[266]||(i[266]=s("a",{id:"Lux.f32",href:"#Lux.f32"},[s("span",{class:"jlbinding"},"Lux.f32")],-1)),i[267]||(i[267]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[268]||(i[268]=t('
    julia
    f32(m)

    Converts the floating point values of m to Float32. To avoid recursion into structs, mark them with Functors.@leaf.

    source

    ',3))]),s("details",os,[s("summary",null,[i[269]||(i[269]=s("a",{id:"Lux.f64",href:"#Lux.f64"},[s("span",{class:"jlbinding"},"Lux.f64")],-1)),i[270]||(i[270]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[271]||(i[271]=t('
    julia
    f64(m)

    Converts the floating point values of m to Float64. To avoid recursion into structs, mark them with Functors.@leaf.

    source

    ',3))]),s("details",ks,[s("summary",null,[i[272]||(i[272]=s("a",{id:"Lux.bf16",href:"#Lux.bf16"},[s("span",{class:"jlbinding"},"Lux.bf16")],-1)),i[273]||(i[273]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[274]||(i[274]=t('
    julia
    bf16(m)

    Converts the floating point values of m to BFloat16. To avoid recursion into structs, mark them with Functors.@leaf.

    Warning

    BFloat16s.jl needs to be loaded before using this function.

    Support for BFloat16

    Most Lux operations aren't optimized for BFloat16 yet. Instead this is meant to be used together with Reactant.@compile.

    source

    ',5))]),i[302]||(i[302]=s("h2",{id:"Element-Type-Matching",tabindex:"-1"},[a("Element Type Matching "),s("a",{class:"header-anchor",href:"#Element-Type-Matching","aria-label":'Permalink to "Element Type Matching {#Element-Type-Matching}"'},"​")],-1)),s("details",Ts,[s("summary",null,[i[275]||(i[275]=s("a",{id:"Lux.match_eltype",href:"#Lux.match_eltype"},[s("span",{class:"jlbinding"},"Lux.match_eltype")],-1)),i[276]||(i[276]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[277]||(i[277]=t('
    julia
    match_eltype(layer, ps, st, args...)

    Helper function to "maybe" (see below) match the element type of args... with the element type of the layer's parameters and states. This is useful for debugging purposes, to track down accidental type-promotions inside Lux layers.

    Extended Help

    Controlling the Behavior via Preferences

    Behavior of this function is controlled via the eltype_mismatch_handling preference. The following options are supported:

    Warning

    We print the warning for type-mismatch only once.

    Element Type Conversions

    For "convert" only the following conversions are done:

    Element Type of parameters/statesElement Type of args...Converted to
    Float64IntegerFloat64
    Float32Float64Float32
    Float32IntegerFloat32
    Float16Float64Float16
    Float16Float32Float16
    Float16IntegerFloat16

    source

    ',11))]),i[303]||(i[303]=s("h2",{id:"Stateful-Layer",tabindex:"-1"},[a("Stateful Layer "),s("a",{class:"header-anchor",href:"#Stateful-Layer","aria-label":'Permalink to "Stateful Layer {#Stateful-Layer}"'},"​")],-1)),s("details",Qs,[s("summary",null,[i[278]||(i[278]=s("a",{id:"Lux.StatefulLuxLayer",href:"#Lux.StatefulLuxLayer"},[s("span",{class:"jlbinding"},"Lux.StatefulLuxLayer")],-1)),i[279]||(i[279]=a()),l(e,{type:"info",class:"jlObjectType jlType",text:"Type"})]),i[280]||(i[280]=t('
    julia
    StatefulLuxLayer{FT}(model, ps, st)

    Warning

    This is not a Lux.AbstractLuxLayer

    A convenience wrapper over Lux layers that stores the parameters and states internally. This is meant to be used in the internal implementation of layers.

    Usecases

    Static Parameters

    Arguments

    Inputs

    Outputs

    source

    ',14))]),i[304]||(i[304]=s("h2",{id:"Compact-Layer",tabindex:"-1"},[a("Compact Layer "),s("a",{class:"header-anchor",href:"#Compact-Layer","aria-label":'Permalink to "Compact Layer {#Compact-Layer}"'},"​")],-1)),s("details",gs,[s("summary",null,[i[281]||(i[281]=s("a",{id:"Lux.@compact",href:"#Lux.@compact"},[s("span",{class:"jlbinding"},"Lux.@compact")],-1)),i[282]||(i[282]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[283]||(i[283]=t(`
    julia
    @compact(kw...) do x
    -    ...
    -    @return y # optional (but recommended for best performance)
    -end
    -@compact(kw...) do x, p
    -    ...
    -    @return y # optional (but recommended for best performance)
    -end
    -@compact(forward::Function; name=nothing, dispatch=nothing, parameters...)

    Creates a layer by specifying some parameters, in the form of keywords, and (usually as a do block) a function for the forward pass. You may think of @compact as a specialized let block creating local variables that are trainable in Lux. Declared variable names may be used within the body of the forward function. Note that unlike typical Lux models, the forward function doesn't need to explicitly manage states.

    Defining the version with p allows you to access the parameters in the forward pass. This is useful when using it with SciML tools which require passing in the parameters explicitly.

    Reserved Kwargs:

    1. name: The name of the layer.

    2. dispatch: The constructed layer has the type Lux.CompactLuxLayer{dispatch} which can be used for custom dispatches.

    Tip

    Check the Lux tutorials for more examples of using @compact.

    If you are passing in kwargs by splatting them, they will be passed as-is to the function body. This means that if your splatted kwargs contain a Lux layer, it won't be registered in the CompactLuxLayer. Additionally, all of the device functions treat these kwargs as leaves.

    Special Syntax

    Extended Help

    Examples

    Here is a linear model:

    julia
    julia> using Lux, Random
    -
    -julia> r = @compact(w=ones(3)) do x
    -           @return w .* x
    -       end
    -@compact(
    -    w = 3-element Vector{Float64},
    -) do x
    -    return w .* x
    -end       # Total: 3 parameters,
    -          #        plus 0 states.
    -
    -julia> ps, st = Lux.setup(Xoshiro(0), r);
    -
    -julia> r([1, 2, 3], ps, st)  # x is set to [1, 1, 1].
    -([1.0, 2.0, 3.0], NamedTuple())

    Here is a linear model with bias and activation:

    julia
    julia> d_in = 5
    -5
    -
    -julia> d_out = 3
    -3
    -
    -julia> d = @compact(W=ones(d_out, d_in), b=zeros(d_out), act=relu) do x
    -           y = W * x
    -           @return act.(y .+ b)
    -       end
    -@compact(
    -    W = 3×5 Matrix{Float64},
    -    b = 3-element Vector{Float64},
    -    act = relu,
    -) do x
    -    y = W * x
    -    return act.(y .+ b)
    -end       # Total: 18 parameters,
    -          #        plus 1 states.
    -
    -julia> ps, st = Lux.setup(Xoshiro(0), d);
    -
    -julia> d(ones(5, 2), ps, st)[1] # 3×2 Matrix as output.
    -3×2 Matrix{Float64}:
    - 5.0  5.0
    - 5.0  5.0
    - 5.0  5.0
    -
    -julia> ps_dense = (; weight=ps.W, bias=ps.b);
    -
    -julia> first(d([1, 2, 3, 4, 5], ps, st)) 
    -       first(Dense(d_in => d_out, relu)([1, 2, 3, 4, 5], ps_dense, NamedTuple())) # Equivalent to a dense layer
    -true

    Finally, here is a simple MLP. We can train this model just like any Lux model:

    julia
    julia> n_in = 1;
    -
    -julia> n_out = 1;
    -
    -julia> nlayers = 3;
    -
    -julia> model = @compact(w1=Dense(n_in, 128),
    -           w2=[Dense(128, 128) for i in 1:nlayers], w3=Dense(128, n_out), act=relu) do x
    -           embed = act.(w1(x))
    -           for w in w2
    -               embed = act.(w(embed))
    -           end
    -           out = w3(embed)
    -           @return out
    -       end
    -@compact(
    -    w1 = Dense(1 => 128),               # 256 parameters
    -    w2 = NamedTuple(
    -        1 = Dense(128 => 128),          # 16_512 parameters
    -        2 = Dense(128 => 128),          # 16_512 parameters
    -        3 = Dense(128 => 128),          # 16_512 parameters
    -    ),
    -    w3 = Dense(128 => 1),               # 129 parameters
    -    act = relu,
    -) do x
    -    embed = act.(w1(x))
    -    for w = w2
    -        embed = act.(w(embed))
    -    end
    -    out = w3(embed)
    -    return out
    -end       # Total: 49_921 parameters,
    -          #        plus 1 states.
    -
    -julia> ps, st = Lux.setup(Xoshiro(0), model);
    -
    -julia> size(first(model(randn(n_in, 32), ps, st)))  # 1×32 Matrix as output.
    -(1, 32)
    -
    -julia> using Optimisers, Zygote
    -
    -julia> x_data = collect(-2.0f0:0.1f0:2.0f0)';
    -
    -julia> y_data = 2 .* x_data .- x_data .^ 3;
    -
    -julia> optim = Optimisers.setup(Adam(), ps);
    -
    -julia> loss_initial = sum(abs2, first(model(x_data, ps, st)) .- y_data);
    -
    -julia> for epoch in 1:1000
    -           loss, gs = Zygote.withgradient(
    -               ps -> sum(abs2, first(model(x_data, ps, st)) .- y_data), ps)
    -           Optimisers.update!(optim, ps, gs[1])
    -       end;
    -
    -julia> loss_final = sum(abs2, first(model(x_data, ps, st)) .- y_data);
    -
    -julia> loss_initial > loss_final
    -true

    You may also specify a name for the model, which will be used instead of the default printout, which gives a verbatim representation of the code used to construct the model:

    julia
    julia> model = @compact(w=rand(3), name="Linear(3 => 1)") do x
    -           @return sum(w .* x)
    -       end
    -Linear(3 => 1)      # 3 parameters

    This can be useful when using @compact to hierarchically construct complex models to be used inside a Chain.

    Type Stability

    If your input function f is type-stable but the generated model is not type stable, it should be treated as a bug. We will appreciate issues if you find such cases.

    Parameter Count

    Array parameters don't print the number of parameters on the side. However, they do count toward the total number of parameters printed at the bottom.

    source

    `,23))]),s("details",ms,[s("summary",null,[i[284]||(i[284]=s("a",{id:"Lux.@init_fn",href:"#Lux.@init_fn"},[s("span",{class:"jlbinding"},"Lux.@init_fn")],-1)),i[285]||(i[285]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[286]||(i[286]=t(`
    julia
    @init_fn(fn, kind::Symbol = :parameter)

    Create an initializer function for a parameter or state to be used for in a Compact Lux Layer created using @compact.

    Arguments

    Examples

    julia
    julia> using Lux, Random
    -
    -julia> r = @compact(w=@init_fn(rng->randn32(rng, 3, 2)),
    -           b=@init_fn(rng->randn32(rng, 3), :state)) do x
    -           @return w * x .+ b
    -       end;
    -
    -julia> ps, st = Lux.setup(Xoshiro(0), r);
    -
    -julia> size(ps.w)
    -(3, 2)
    -
    -julia> size(st.b)
    -(3,)
    -
    -julia> size(r([1, 2], ps, st)[1])
    -(3,)

    source

    `,7))]),s("details",ys,[s("summary",null,[i[287]||(i[287]=s("a",{id:"Lux.@non_trainable",href:"#Lux.@non_trainable"},[s("span",{class:"jlbinding"},"Lux.@non_trainable")],-1)),i[288]||(i[288]=a()),l(e,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),i[289]||(i[289]=t(`
    julia
    @non_trainable(x)

    Mark a value as non-trainable. This bypasses the regular checks and places the value into the state of the layer.

    Arguments

    Examples

    julia
    julia> using Lux, Random
    -
    -julia> r = @compact(w=ones(3), w_fixed=@non_trainable(rand(3))) do x
    -           @return sum(w .* x .+ w_fixed)
    -       end;
    -
    -julia> ps, st = Lux.setup(Xoshiro(0), r);
    -
    -julia> size(ps.w)
    -(3,)
    -
    -julia> size(st.w_fixed)
    -(3,)
    -
    -julia> res, st_ = r([1, 2, 3], ps, st);
    -
    -julia> st_.w_fixed == st.w_fixed
    -true
    -
    -julia> res isa Number
    -true

    source

    `,7))]),i[305]||(i[305]=s("h2",{id:"miscellaneous",tabindex:"-1"},[a("Miscellaneous "),s("a",{class:"header-anchor",href:"#miscellaneous","aria-label":'Permalink to "Miscellaneous"'},"​")],-1)),s("details",cs,[s("summary",null,[i[290]||(i[290]=s("a",{id:"Lux.set_dispatch_doctor_preferences!",href:"#Lux.set_dispatch_doctor_preferences!"},[s("span",{class:"jlbinding"},"Lux.set_dispatch_doctor_preferences!")],-1)),i[291]||(i[291]=a()),l(e,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[292]||(i[292]=t(`
    julia
    set_dispatch_doctor_preferences!(mode::String)
    -set_dispatch_doctor_preferences!(; luxcore::String="disable", luxlib::String="disable")

    Set the dispatch doctor preference for LuxCore and LuxLib packages.

    mode can be "disable", "warn", or "error". For details on the different modes, see the DispatchDoctor.jl documentation.

    If the preferences are already set, then no action is taken. Otherwise the preference is set. For changes to take effect, the Julia session must be restarted.

    source

    `,5))])])}const ws=p(d,[["render",Es]]);export{bs as __pageData,ws as default}; diff --git a/dev/assets/api_NN_Primitives_ActivationFunctions.md.DNcaJ2dy.js b/dev/assets/api_NN_Primitives_ActivationFunctions.md.Cjp_vPbj.js similarity index 99% rename from dev/assets/api_NN_Primitives_ActivationFunctions.md.DNcaJ2dy.js rename to dev/assets/api_NN_Primitives_ActivationFunctions.md.Cjp_vPbj.js index c10667c1a6..4f88fd4b36 100644 --- a/dev/assets/api_NN_Primitives_ActivationFunctions.md.DNcaJ2dy.js +++ b/dev/assets/api_NN_Primitives_ActivationFunctions.md.Cjp_vPbj.js @@ -1,4 +1,4 @@ -import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framework.I-x9Gl6h.js";const H=JSON.parse('{"title":"Activation Functions","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/ActivationFunctions.md","filePath":"api/NN_Primitives/ActivationFunctions.md","lastUpdated":null}'),E={name:"api/NN_Primitives/ActivationFunctions.md"},r={class:"jldocstring custom-block"},d={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},N={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},S={class:"jldocstring custom-block"},q={class:"jldocstring custom-block"},M={class:"jldocstring custom-block"},O={class:"jldocstring custom-block"},P={class:"jldocstring custom-block"};function 
R(U,s,V,I,z,$){const n=t("Badge");return e(),k("div",null,[s[81]||(s[81]=i("h1",{id:"NNlib-ActivationFunctions-API",tabindex:"-1"},[a("Activation Functions "),i("a",{class:"header-anchor",href:"#NNlib-ActivationFunctions-API","aria-label":'Permalink to "Activation Functions {#NNlib-ActivationFunctions-API}"'},"​")],-1)),s[82]||(s[82]=i("p",null,[a("Non-linearities that go between layers of your model. Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call "),i("code",null,"σ.(xs)"),a(", "),i("code",null,"relu.(xs)"),a(" and so on.")],-1)),i("details",r,[i("summary",null,[s[0]||(s[0]=i("a",{id:"NNlib.celu",href:"#NNlib.celu"},[i("span",{class:"jlbinding"},"NNlib.celu")],-1)),s[1]||(s[1]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[2]||(s[2]=h(`
    julia
    celu(x, α=1) = x  0 ? x : α * (exp(x/α) - 1)

    Activation function from "Continuously Differentiable Exponential Linear Units".

    julia
    julia> lineplot(celu, -2, 2, height=7)
    +import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framework.BetCMmtc.js";const G=JSON.parse('{"title":"Activation Functions","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/ActivationFunctions.md","filePath":"api/NN_Primitives/ActivationFunctions.md","lastUpdated":null}'),E={name:"api/NN_Primitives/ActivationFunctions.md"},r={class:"jldocstring custom-block"},d={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},N={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},S={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},P={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},V={class:"jldocstring custom-block"},I={class:"jldocstring custom-block"};function R(q,s,M,O,U,z){const n=t("Badge");return e(),k("div",null,[s[81]||(s[81]=i("h1",{id:"NNlib-ActivationFunctions-API",tabindex:"-1"},[a("Activation Functions "),i("a",{class:"header-anchor",href:"#NNlib-ActivationFunctions-API","aria-label":'Permalink to "Activation Functions {#NNlib-ActivationFunctions-API}"'},"​")],-1)),s[82]||(s[82]=i("p",null,[a("Non-linearities that go between layers of your model. Note that, unless otherwise stated, activation functions operate on scalars. 
To apply them to an array you can call "),i("code",null,"σ.(xs)"),a(", "),i("code",null,"relu.(xs)"),a(" and so on.")],-1)),i("details",r,[i("summary",null,[s[0]||(s[0]=i("a",{id:"NNlib.celu",href:"#NNlib.celu"},[i("span",{class:"jlbinding"},"NNlib.celu")],-1)),s[1]||(s[1]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[2]||(s[2]=h(`
    julia
    celu(x, α=1) = x  0 ? x : α * (exp(x/α) - 1)

    Activation function from "Continuously Differentiable Exponential Linear Units".

    julia
    julia> lineplot(celu, -2, 2, height=7)
                ┌────────────────────────────────────────┐        
              2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉│ celu(x)
                │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠉⠀⠀⠀⠀│        
    @@ -193,7 +193,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
             0 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠑⠢⢄⣀⣀⣇⣀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│           
               └────────────────────────────────────────┘           
     -5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀           
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",v,[i("summary",null,[s[33]||(s[33]=i("a",{id:"NNlib.logsigmoid",href:"#NNlib.logsigmoid"},[i("span",{class:"jlbinding"},"NNlib.logsigmoid")],-1)),s[34]||(s[34]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[35]||(s[35]=h(`
    julia
    logσ(x)

    Return log(σ(x)) which is computed in a numerically stable way.

    julia
    julia> lineplot(logsigmoid, -5, 5, height=7)
    +          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",A,[i("summary",null,[s[33]||(s[33]=i("a",{id:"NNlib.logsigmoid",href:"#NNlib.logsigmoid"},[i("span",{class:"jlbinding"},"NNlib.logsigmoid")],-1)),s[34]||(s[34]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[35]||(s[35]=h(`
    julia
    logσ(x)

    Return log(σ(x)) which is computed in a numerically stable way.

    julia
    julia> lineplot(logsigmoid, -5, 5, height=7)
                ┌────────────────────────────────────────┐        
              0 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡧⠤⠔⠒⠒⠒⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│ logσ(x)
                │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠊⠉⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    @@ -204,7 +204,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
             -6 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
                └────────────────────────────────────────┘        
     -5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀        
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",f,[i("summary",null,[s[36]||(s[36]=i("a",{id:"NNlib.logσ",href:"#NNlib.logσ"},[i("span",{class:"jlbinding"},"NNlib.logσ")],-1)),s[37]||(s[37]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[38]||(s[38]=h(`
    julia
    logσ(x)

    Return log(σ(x)) which is computed in a numerically stable way.

    julia
    julia> lineplot(logsigmoid, -5, 5, height=7)
    +           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",v,[i("summary",null,[s[36]||(s[36]=i("a",{id:"NNlib.logσ",href:"#NNlib.logσ"},[i("span",{class:"jlbinding"},"NNlib.logσ")],-1)),s[37]||(s[37]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[38]||(s[38]=h(`
    julia
    logσ(x)

    Return log(σ(x)) which is computed in a numerically stable way.

    julia
    julia> lineplot(logsigmoid, -5, 5, height=7)
                ┌────────────────────────────────────────┐        
              0 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡧⠤⠔⠒⠒⠒⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│ logσ(x)
                │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠊⠉⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    @@ -215,7 +215,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
             -6 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
                └────────────────────────────────────────┘        
     -5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀        
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",j,[i("summary",null,[s[39]||(s[39]=i("a",{id:"NNlib.mish",href:"#NNlib.mish"},[i("span",{class:"jlbinding"},"NNlib.mish")],-1)),s[40]||(s[40]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[41]||(s[41]=h(`
    julia
    mish(x) = x * tanh(softplus(x))

    Activation function from "Mish: A Self Regularized Non-Monotonic Neural Activation Function".

    julia
    julia> lineplot(mish, -5, 5, height=7)
    +           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",f,[i("summary",null,[s[39]||(s[39]=i("a",{id:"NNlib.mish",href:"#NNlib.mish"},[i("span",{class:"jlbinding"},"NNlib.mish")],-1)),s[40]||(s[40]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[41]||(s[41]=h(`
    julia
    mish(x) = x * tanh(softplus(x))

    Activation function from "Mish: A Self Regularized Non-Monotonic Neural Activation Function".

    julia
    julia> lineplot(mish, -5, 5, height=7)
                ┌────────────────────────────────────────┐        
              5 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋│ mish(x)
                │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠒⠁⠀⠀⠀│        
    @@ -226,7 +226,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
             -1 │⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
                └────────────────────────────────────────┘        
     -5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀        
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",m,[i("summary",null,[s[42]||(s[42]=i("a",{id:"NNlib.relu",href:"#NNlib.relu"},[i("span",{class:"jlbinding"},"NNlib.relu")],-1)),s[43]||(s[43]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[44]||(s[44]=h(`
    julia
    relu(x) = max(0, x)

    Rectified Linear Unit activation function.

    julia
    julia> lineplot(relu, -2, 2, height=7)
    +           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",j,[i("summary",null,[s[42]||(s[42]=i("a",{id:"NNlib.relu",href:"#NNlib.relu"},[i("span",{class:"jlbinding"},"NNlib.relu")],-1)),s[43]||(s[43]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[44]||(s[44]=h(`
    julia
    relu(x) = max(0, x)

    Rectified Linear Unit activation function.

    julia
    julia> lineplot(relu, -2, 2, height=7)
               ┌────────────────────────────────────────┐        
             2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠋│ relu(x)
               │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠊⠁⠀⠀│        
    @@ -237,7 +237,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
             0 │⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣇⠔⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
               └────────────────────────────────────────┘        
     -2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀        
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",x,[i("summary",null,[s[45]||(s[45]=i("a",{id:"NNlib.relu6",href:"#NNlib.relu6"},[i("span",{class:"jlbinding"},"NNlib.relu6")],-1)),s[46]||(s[46]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[47]||(s[47]=h(`
    julia
    relu6(x) = min(max(0, x), 6)

    Rectified Linear Unit activation function capped at 6. See "Convolutional Deep Belief Networks" from CIFAR-10.

    julia
    julia> lineplot(relu6, -10, 10, height=7)
    +          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",m,[i("summary",null,[s[45]||(s[45]=i("a",{id:"NNlib.relu6",href:"#NNlib.relu6"},[i("span",{class:"jlbinding"},"NNlib.relu6")],-1)),s[46]||(s[46]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[47]||(s[47]=h(`
    julia
    relu6(x) = min(max(0, x), 6)

    Rectified Linear Unit activation function capped at 6. See "Convolutional Deep Belief Networks" from CIFAR-10.

    julia
    julia> lineplot(relu6, -10, 10, height=7)
               ┌────────────────────────────────────────┐         
             6 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠎⠉⠉⠉⠉⠉⠉⠉⠉│ relu6(x)
               │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    @@ -248,7 +248,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
             0 │⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⡧⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
               └────────────────────────────────────────┘         
     -10⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠀         
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",A,[i("summary",null,[s[48]||(s[48]=i("a",{id:"NNlib.rrelu",href:"#NNlib.rrelu"},[i("span",{class:"jlbinding"},"NNlib.rrelu")],-1)),s[49]||(s[49]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[50]||(s[50]=h(`
    julia
    rrelu(x, lo=1/8, hi=1/3) = max(a*x, x)
    +          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",x,[i("summary",null,[s[48]||(s[48]=i("a",{id:"NNlib.rrelu",href:"#NNlib.rrelu"},[i("span",{class:"jlbinding"},"NNlib.rrelu")],-1)),s[49]||(s[49]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[50]||(s[50]=h(`
    julia
    rrelu(x, lo=1/8, hi=1/3) = max(a*x, x)
     # where \`a\` is randomly sampled from uniform distribution \`U(lo, hi)\`

    Randomized Leaky Rectified Linear Unit activation function. See "Empirical Evaluation of Rectified Activations" You can also specify the bound explicitly, e.g. rrelu(x, 0.0, 1.0).

    julia
    julia> lineplot(rrelu, -20, 10, height=7)
                 ┌────────────────────────────────────────┐         
              10 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋│ rrelu(x)
    @@ -280,7 +280,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
                ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        
     
     julia> selu(-10f0)
    --1.7580194f0

    source

    `,4))]),i("details",N,[i("summary",null,[s[54]||(s[54]=i("a",{id:"NNlib.sigmoid",href:"#NNlib.sigmoid"},[i("span",{class:"jlbinding"},"NNlib.sigmoid")],-1)),s[55]||(s[55]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[56]||(s[56]=h(`
    julia
    σ(x) = 1 / (1 + exp(-x))

    Classic sigmoid activation function. Unicode σ can be entered as \\sigma then tab, in many editors. The ascii name sigmoid is also exported.

    See also sigmoid_fast.

    julia
    julia> using UnicodePlots
    +-1.7580194f0

    source

    `,4))]),i("details",_,[i("summary",null,[s[54]||(s[54]=i("a",{id:"NNlib.sigmoid",href:"#NNlib.sigmoid"},[i("span",{class:"jlbinding"},"NNlib.sigmoid")],-1)),s[55]||(s[55]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[56]||(s[56]=h(`
    julia
    σ(x) = 1 / (1 + exp(-x))

    Classic sigmoid activation function. Unicode σ can be entered as \\sigma then tab, in many editors. The ascii name sigmoid is also exported.

    See also sigmoid_fast.

    julia
    julia> using UnicodePlots
     
     julia> lineplot(sigmoid, -5, 5, height=7)
               ┌────────────────────────────────────────┐     
    @@ -296,7 +296,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
               ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     
     
     julia> sigmoid === σ
    -true

    source

    `,5))]),i("details",L,[i("summary",null,[s[57]||(s[57]=i("a",{id:"NNlib.σ",href:"#NNlib.σ"},[i("span",{class:"jlbinding"},"NNlib.σ")],-1)),s[58]||(s[58]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[59]||(s[59]=h(`
    julia
    σ(x) = 1 / (1 + exp(-x))

    Classic sigmoid activation function. Unicode σ can be entered as \\sigma then tab, in many editors. The ascii name sigmoid is also exported.

    See also sigmoid_fast.

    julia
    julia> using UnicodePlots
    +true

    source

    `,5))]),i("details",N,[i("summary",null,[s[57]||(s[57]=i("a",{id:"NNlib.σ",href:"#NNlib.σ"},[i("span",{class:"jlbinding"},"NNlib.σ")],-1)),s[58]||(s[58]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[59]||(s[59]=h(`
    julia
    σ(x) = 1 / (1 + exp(-x))

    Classic sigmoid activation function. Unicode σ can be entered as \\sigma then tab, in many editors. The ascii name sigmoid is also exported.

    See also sigmoid_fast.

    julia
    julia> using UnicodePlots
     
     julia> lineplot(sigmoid, -5, 5, height=7)
               ┌────────────────────────────────────────┐     
    @@ -312,7 +312,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
               ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     
     
     julia> sigmoid === σ
    -true

    source

    `,5))]),i("details",w,[i("summary",null,[s[60]||(s[60]=i("a",{id:"NNlib.softplus",href:"#NNlib.softplus"},[i("span",{class:"jlbinding"},"NNlib.softplus")],-1)),s[61]||(s[61]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[62]||(s[62]=h(`
    julia
    softplus(x) = log(exp(x) + 1)

    See "Deep Sparse Rectifier Neural Networks", JMLR 2011.

    julia
    julia> lineplot(softplus, -3, 3, height=7)
    +true

    source

    `,5))]),i("details",T,[i("summary",null,[s[60]||(s[60]=i("a",{id:"NNlib.softplus",href:"#NNlib.softplus"},[i("span",{class:"jlbinding"},"NNlib.softplus")],-1)),s[61]||(s[61]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[62]||(s[62]=h(`
    julia
    softplus(x) = log(exp(x) + 1)

    See "Deep Sparse Rectifier Neural Networks", JMLR 2011.

    julia
    julia> lineplot(softplus, -3, 3, height=7)
               ┌────────────────────────────────────────┐            
             4 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│ softplus(x)
               │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠│            
    @@ -339,7 +339,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
               ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀            
     
     julia> softplus(16f0)
    -16.0f0

    source

    `,4))]),i("details",T,[i("summary",null,[s[63]||(s[63]=i("a",{id:"NNlib.softshrink",href:"#NNlib.softshrink"},[i("span",{class:"jlbinding"},"NNlib.softshrink")],-1)),s[64]||(s[64]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[65]||(s[65]=h(`
    julia
    softshrink(x, λ=0.5) =
    +16.0f0

    source

    `,4))]),i("details",S,[i("summary",null,[s[63]||(s[63]=i("a",{id:"NNlib.softshrink",href:"#NNlib.softshrink"},[i("span",{class:"jlbinding"},"NNlib.softshrink")],-1)),s[64]||(s[64]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[65]||(s[65]=h(`
    julia
    softshrink(x, λ=0.5) =
         (x  λ ? x - λ : (-λ  x ? x + λ : 0))

    See "Softshrink Activation Function".

    julia
    julia> lineplot(softshrink, -2, 2, height=7)
                ┌────────────────────────────────────────┐              
              2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀│ softshrink(x)
    @@ -367,7 +367,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
                ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀
     
     julia> softshrink.((-10f0, 10f0))
    -(-9.5f0, 9.5f0)

    source

    `,4))]),i("details",S,[i("summary",null,[s[66]||(s[66]=i("a",{id:"NNlib.softsign",href:"#NNlib.softsign"},[i("span",{class:"jlbinding"},"NNlib.softsign")],-1)),s[67]||(s[67]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[68]||(s[68]=h(`
    julia
    softsign(x) = x / (1 + |x|)

    See "Quadratic Polynomials Learn Better Image Features" (2009).

    julia
    julia> lineplot(softsign, -5, 5, height=7)
    +(-9.5f0, 9.5f0)

    source

    `,4))]),i("details",L,[i("summary",null,[s[66]||(s[66]=i("a",{id:"NNlib.softsign",href:"#NNlib.softsign"},[i("span",{class:"jlbinding"},"NNlib.softsign")],-1)),s[67]||(s[67]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[68]||(s[68]=h(`
    julia
    softsign(x) = x / (1 + |x|)

    See "Quadratic Polynomials Learn Better Image Features" (2009).

    julia
    julia> lineplot(softsign, -5, 5, height=7)
                ┌────────────────────────────────────────┐            
              1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⣀⣀⠤⠤⠤⠤⠤│ softsign(x)
                │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⣀⡤⠖⠒⠋⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    @@ -397,7 +397,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
     0.5f0
     
     julia> softsign(100f0)
    -0.990099f0

    source

    `,4))]),i("details",q,[i("summary",null,[s[69]||(s[69]=i("a",{id:"NNlib.swish",href:"#NNlib.swish"},[i("span",{class:"jlbinding"},"NNlib.swish")],-1)),s[70]||(s[70]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[71]||(s[71]=h(`
    julia
    swish(x) = x * σ(x)

    Self-gated activation function. See "Swish: a Self-Gated Activation Function".

    julia
    julia> lineplot(swish, -2, 2, height=7)
    +0.990099f0

    source

    `,4))]),i("details",P,[i("summary",null,[s[69]||(s[69]=i("a",{id:"NNlib.swish",href:"#NNlib.swish"},[i("span",{class:"jlbinding"},"NNlib.swish")],-1)),s[70]||(s[70]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[71]||(s[71]=h(`
    julia
    swish(x) = x * σ(x)

    Self-gated activation function. See "Swish: a Self-Gated Activation Function".

    julia
    julia> lineplot(swish, -2, 2, height=7)
                ┌────────────────────────────────────────┐         
              2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤│ swish(x)
                │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋⠁⠀│         
    @@ -408,7 +408,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
             -1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
                └────────────────────────────────────────┘         
     -2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀         
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",M,[i("summary",null,[s[72]||(s[72]=i("a",{id:"NNlib.hardswish",href:"#NNlib.hardswish"},[i("span",{class:"jlbinding"},"NNlib.hardswish")],-1)),s[73]||(s[73]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[74]||(s[74]=h(`
    julia
    hardswish(x) = x * hardσ(x)

    Hard-Swish activation function. See "Searching for MobileNetV3".

    julia
    julia> lineplot(hardswish, -2, 5, height = 7)
    +           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",w,[i("summary",null,[s[72]||(s[72]=i("a",{id:"NNlib.hardswish",href:"#NNlib.hardswish"},[i("span",{class:"jlbinding"},"NNlib.hardswish")],-1)),s[73]||(s[73]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[74]||(s[74]=h(`
    julia
    hardswish(x) = x * hardσ(x)

    Hard-Swish activation function. See "Searching for MobileNetV3".

    julia
    julia> lineplot(hardswish, -2, 5, height = 7)
                ┌────────────────────────────────────────┐             
              5 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠔⠒⠉│ hardswish(x)
                │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠔⠒⠉⠁⠀⠀⠀⠀│             
    @@ -438,7 +438,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
     
     julia> hardswish.(-5:5)'
     1×11 adjoint(::Vector{Float64}) with eltype Float64:
    - -0.0  -0.0  -0.0  -0.333333  -0.333333  0.0  0.666667  1.66667  3.0  4.0  5.0

    source

    `,4))]),i("details",O,[i("summary",null,[s[75]||(s[75]=i("a",{id:"NNlib.tanhshrink",href:"#NNlib.tanhshrink"},[i("span",{class:"jlbinding"},"NNlib.tanhshrink")],-1)),s[76]||(s[76]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[77]||(s[77]=h(`
    julia
    tanhshrink(x) = x - tanh(x)

    See "Tanhshrink Activation Function".

    julia
    julia> lineplot(tanhshrink, -3, 3, height=7)
    + -0.0  -0.0  -0.0  -0.333333  -0.333333  0.0  0.666667  1.66667  3.0  4.0  5.0

    source

    `,4))]),i("details",V,[i("summary",null,[s[75]||(s[75]=i("a",{id:"NNlib.tanhshrink",href:"#NNlib.tanhshrink"},[i("span",{class:"jlbinding"},"NNlib.tanhshrink")],-1)),s[76]||(s[76]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[77]||(s[77]=h(`
    julia
    tanhshrink(x) = x - tanh(x)

    See "Tanhshrink Activation Function".

    julia
    julia> lineplot(tanhshrink, -3, 3, height=7)
                ┌────────────────────────────────────────┐              
              3 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│ tanhshrink(x)
                │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠤⠖⠊│              
    @@ -452,7 +452,7 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
                ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀              
     
     julia> tanhshrink.((-10f0, 10f0))
    -(-9.0f0, 9.0f0)

    source

    `,4))]),i("details",P,[i("summary",null,[s[78]||(s[78]=i("a",{id:"NNlib.trelu",href:"#NNlib.trelu"},[i("span",{class:"jlbinding"},"NNlib.trelu")],-1)),s[79]||(s[79]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[80]||(s[80]=h(`
    julia
    trelu(x, theta=1) = x > theta ? x : 0

    Threshold gated rectified linear activation function. See "Zero-bias autoencoders and the benefits of co-adapting features"

    julia
    julia> lineplot(trelu, -2, 4, height=7)
    +(-9.0f0, 9.0f0)

    source

    `,4))]),i("details",I,[i("summary",null,[s[78]||(s[78]=i("a",{id:"NNlib.trelu",href:"#NNlib.trelu"},[i("span",{class:"jlbinding"},"NNlib.trelu")],-1)),s[79]||(s[79]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[80]||(s[80]=h(`
    julia
    trelu(x, theta=1) = x > theta ? x : 0

    Threshold gated rectified linear activation function. See "Zero-bias autoencoders and the benefits of co-adapting features"

    julia
    julia> lineplot(trelu, -2, 4, height=7)
               ┌────────────────────────────────────────┐         
             4 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋│ trelu(x)
               │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀│         
    @@ -463,4 +463,4 @@ import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framewo
             0 │⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣇⣀⣀⣀⣀⣀⣀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
               └────────────────────────────────────────┘         
     -2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀4⠀         
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))])])}const J=p(E,[["render",R]]);export{H as __pageData,J as default}; + ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))])])}const H=p(E,[["render",R]]);export{G as __pageData,H as default}; diff --git a/dev/assets/api_NN_Primitives_ActivationFunctions.md.Cjp_vPbj.lean.js b/dev/assets/api_NN_Primitives_ActivationFunctions.md.Cjp_vPbj.lean.js new file mode 100644 index 0000000000..e5bc54703d --- /dev/null +++ b/dev/assets/api_NN_Primitives_ActivationFunctions.md.Cjp_vPbj.lean.js @@ -0,0 +1 @@ +import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framework.BetCMmtc.js";const G=JSON.parse('{"title":"Activation Functions","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/ActivationFunctions.md","filePath":"api/NN_Primitives/ActivationFunctions.md","lastUpdated":null}'),E={name:"api/NN_Primitives/ActivationFunctions.md"},r={class:"jldocstring custom-block"},d={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},N={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},S={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},P={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},V={class:"jldocstring custom-block"},I={class:"jldocstring custom-block"};function R(q,s,M,O,U,z){const n=t("Badge");return e(),k("div",null,[s[81]||(s[81]=i("h1",{id:"NNlib-ActivationFunctions-API",tabindex:"-1"},[a("Activation Functions 
"),i("a",{class:"header-anchor",href:"#NNlib-ActivationFunctions-API","aria-label":'Permalink to "Activation Functions {#NNlib-ActivationFunctions-API}"'},"​")],-1)),s[82]||(s[82]=i("p",null,[a("Non-linearities that go between layers of your model. Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call "),i("code",null,"σ.(xs)"),a(", "),i("code",null,"relu.(xs)"),a(" and so on.")],-1)),i("details",r,[i("summary",null,[s[0]||(s[0]=i("a",{id:"NNlib.celu",href:"#NNlib.celu"},[i("span",{class:"jlbinding"},"NNlib.celu")],-1)),s[1]||(s[1]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[2]||(s[2]=h("",4))]),i("details",d,[i("summary",null,[s[3]||(s[3]=i("a",{id:"NNlib.elu",href:"#NNlib.elu"},[i("span",{class:"jlbinding"},"NNlib.elu")],-1)),s[4]||(s[4]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=h("",4))]),i("details",g,[i("summary",null,[s[6]||(s[6]=i("a",{id:"NNlib.gelu",href:"#NNlib.gelu"},[i("span",{class:"jlbinding"},"NNlib.gelu")],-1)),s[7]||(s[7]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=h("",4))]),i("details",y,[i("summary",null,[s[9]||(s[9]=i("a",{id:"NNlib.hardsigmoid",href:"#NNlib.hardsigmoid"},[i("span",{class:"jlbinding"},"NNlib.hardsigmoid")],-1)),s[10]||(s[10]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=h("",4))]),i("details",F,[i("summary",null,[s[12]||(s[12]=i("a",{id:"NNlib.hardσ",href:"#NNlib.hardσ"},[i("span",{class:"jlbinding"},"NNlib.hardσ")],-1)),s[13]||(s[13]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[14]||(s[14]=h("",4))]),i("details",o,[i("summary",null,[s[15]||(s[15]=i("a",{id:"NNlib.sigmoid_fast",href:"#NNlib.sigmoid_fast"},[i("span",{class:"jlbinding"},"NNlib.sigmoid_fast")],-1)),s[16]||(s[16]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[17]||(s[17]=h("",5))]),i("details",c,[i("summary",null,[s[18]||(s[18]=i("a",{id:"NNlib.hardtanh",href:"#NNlib.hardtanh"},[i("span",{class:"jlbinding"},"NNlib.hardtanh")],-1)),s[19]||(s[19]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[20]||(s[20]=h("",5))]),i("details",C,[i("summary",null,[s[21]||(s[21]=i("a",{id:"NNlib.tanh_fast",href:"#NNlib.tanh_fast"},[i("span",{class:"jlbinding"},"NNlib.tanh_fast")],-1)),s[22]||(s[22]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[23]||(s[23]=h("",7))]),i("details",u,[i("summary",null,[s[24]||(s[24]=i("a",{id:"NNlib.leakyrelu",href:"#NNlib.leakyrelu"},[i("span",{class:"jlbinding"},"NNlib.leakyrelu")],-1)),s[25]||(s[25]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[26]||(s[26]=h("",4))]),i("details",b,[i("summary",null,[s[27]||(s[27]=i("a",{id:"NNlib.lisht",href:"#NNlib.lisht"},[i("span",{class:"jlbinding"},"NNlib.lisht")],-1)),s[28]||(s[28]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[29]||(s[29]=h("",4))]),i("details",B,[i("summary",null,[s[30]||(s[30]=i("a",{id:"NNlib.logcosh",href:"#NNlib.logcosh"},[i("span",{class:"jlbinding"},"NNlib.logcosh")],-1)),s[31]||(s[31]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[32]||(s[32]=h("",4))]),i("details",A,[i("summary",null,[s[33]||(s[33]=i("a",{id:"NNlib.logsigmoid",href:"#NNlib.logsigmoid"},[i("span",{class:"jlbinding"},"NNlib.logsigmoid")],-1)),s[34]||(s[34]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[35]||(s[35]=h("",4))]),i("details",v,[i("summary",null,[s[36]||(s[36]=i("a",{id:"NNlib.logσ",href:"#NNlib.logσ"},[i("span",{class:"jlbinding"},"NNlib.logσ")],-1)),s[37]||(s[37]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[38]||(s[38]=h("",4))]),i("details",f,[i("summary",null,[s[39]||(s[39]=i("a",{id:"NNlib.mish",href:"#NNlib.mish"},[i("span",{class:"jlbinding"},"NNlib.mish")],-1)),s[40]||(s[40]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[41]||(s[41]=h("",4))]),i("details",j,[i("summary",null,[s[42]||(s[42]=i("a",{id:"NNlib.relu",href:"#NNlib.relu"},[i("span",{class:"jlbinding"},"NNlib.relu")],-1)),s[43]||(s[43]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[44]||(s[44]=h("",4))]),i("details",m,[i("summary",null,[s[45]||(s[45]=i("a",{id:"NNlib.relu6",href:"#NNlib.relu6"},[i("span",{class:"jlbinding"},"NNlib.relu6")],-1)),s[46]||(s[46]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[47]||(s[47]=h("",4))]),i("details",x,[i("summary",null,[s[48]||(s[48]=i("a",{id:"NNlib.rrelu",href:"#NNlib.rrelu"},[i("span",{class:"jlbinding"},"NNlib.rrelu")],-1)),s[49]||(s[49]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[50]||(s[50]=h("",4))]),i("details",D,[i("summary",null,[s[51]||(s[51]=i("a",{id:"NNlib.selu",href:"#NNlib.selu"},[i("span",{class:"jlbinding"},"NNlib.selu")],-1)),s[52]||(s[52]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[53]||(s[53]=h("",4))]),i("details",_,[i("summary",null,[s[54]||(s[54]=i("a",{id:"NNlib.sigmoid",href:"#NNlib.sigmoid"},[i("span",{class:"jlbinding"},"NNlib.sigmoid")],-1)),s[55]||(s[55]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[56]||(s[56]=h("",5))]),i("details",N,[i("summary",null,[s[57]||(s[57]=i("a",{id:"NNlib.σ",href:"#NNlib.σ"},[i("span",{class:"jlbinding"},"NNlib.σ")],-1)),s[58]||(s[58]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[59]||(s[59]=h("",5))]),i("details",T,[i("summary",null,[s[60]||(s[60]=i("a",{id:"NNlib.softplus",href:"#NNlib.softplus"},[i("span",{class:"jlbinding"},"NNlib.softplus")],-1)),s[61]||(s[61]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[62]||(s[62]=h("",4))]),i("details",S,[i("summary",null,[s[63]||(s[63]=i("a",{id:"NNlib.softshrink",href:"#NNlib.softshrink"},[i("span",{class:"jlbinding"},"NNlib.softshrink")],-1)),s[64]||(s[64]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[65]||(s[65]=h("",4))]),i("details",L,[i("summary",null,[s[66]||(s[66]=i("a",{id:"NNlib.softsign",href:"#NNlib.softsign"},[i("span",{class:"jlbinding"},"NNlib.softsign")],-1)),s[67]||(s[67]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[68]||(s[68]=h("",4))]),i("details",P,[i("summary",null,[s[69]||(s[69]=i("a",{id:"NNlib.swish",href:"#NNlib.swish"},[i("span",{class:"jlbinding"},"NNlib.swish")],-1)),s[70]||(s[70]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[71]||(s[71]=h("",4))]),i("details",w,[i("summary",null,[s[72]||(s[72]=i("a",{id:"NNlib.hardswish",href:"#NNlib.hardswish"},[i("span",{class:"jlbinding"},"NNlib.hardswish")],-1)),s[73]||(s[73]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[74]||(s[74]=h("",4))]),i("details",V,[i("summary",null,[s[75]||(s[75]=i("a",{id:"NNlib.tanhshrink",href:"#NNlib.tanhshrink"},[i("span",{class:"jlbinding"},"NNlib.tanhshrink")],-1)),s[76]||(s[76]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[77]||(s[77]=h("",4))]),i("details",I,[i("summary",null,[s[78]||(s[78]=i("a",{id:"NNlib.trelu",href:"#NNlib.trelu"},[i("span",{class:"jlbinding"},"NNlib.trelu")],-1)),s[79]||(s[79]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[80]||(s[80]=h("",4))])])}const H=p(E,[["render",R]]);export{G as __pageData,H as default}; diff --git 
a/dev/assets/api_NN_Primitives_ActivationFunctions.md.DNcaJ2dy.lean.js b/dev/assets/api_NN_Primitives_ActivationFunctions.md.DNcaJ2dy.lean.js deleted file mode 100644 index c10667c1a6..0000000000 --- a/dev/assets/api_NN_Primitives_ActivationFunctions.md.DNcaJ2dy.lean.js +++ /dev/null @@ -1,466 +0,0 @@ -import{_ as p,c as k,j as i,a,G as l,a2 as h,B as t,o as e}from"./chunks/framework.I-x9Gl6h.js";const H=JSON.parse('{"title":"Activation Functions","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/ActivationFunctions.md","filePath":"api/NN_Primitives/ActivationFunctions.md","lastUpdated":null}'),E={name:"api/NN_Primitives/ActivationFunctions.md"},r={class:"jldocstring custom-block"},d={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},N={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},S={class:"jldocstring custom-block"},q={class:"jldocstring custom-block"},M={class:"jldocstring custom-block"},O={class:"jldocstring custom-block"},P={class:"jldocstring custom-block"};function R(U,s,V,I,z,$){const n=t("Badge");return e(),k("div",null,[s[81]||(s[81]=i("h1",{id:"NNlib-ActivationFunctions-API",tabindex:"-1"},[a("Activation Functions "),i("a",{class:"header-anchor",href:"#NNlib-ActivationFunctions-API","aria-label":'Permalink to "Activation Functions 
{#NNlib-ActivationFunctions-API}"'},"​")],-1)),s[82]||(s[82]=i("p",null,[a("Non-linearities that go between layers of your model. Note that, unless otherwise stated, activation functions operate on scalars. To apply them to an array you can call "),i("code",null,"σ.(xs)"),a(", "),i("code",null,"relu.(xs)"),a(" and so on.")],-1)),i("details",r,[i("summary",null,[s[0]||(s[0]=i("a",{id:"NNlib.celu",href:"#NNlib.celu"},[i("span",{class:"jlbinding"},"NNlib.celu")],-1)),s[1]||(s[1]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[2]||(s[2]=h(`
    julia
celu(x, α=1) = x ≥ 0 ? x : α * (exp(x/α) - 1)

    Activation function from "Continuously Differentiable Exponential Linear Units".

    julia
    julia> lineplot(celu, -2, 2, height=7)
    -           ┌────────────────────────────────────────┐        
    -         2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉│ celu(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠉⠀⠀⠀⠀│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⡤⠖⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀│        
    -   f(x)    │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⣀⠤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡤⡧⠶⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⠤⠔⠒⠋⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -        -1 │⠤⠤⠤⠤⠔⠒⠒⠒⠊⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           └────────────────────────────────────────┘        
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀        
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        
    -
    -julia> celu(-10f0)
    --0.9999546f0

    source
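For readers cross-checking the formula outside Julia, a minimal Python sketch of the same definition (the name `celu` and the `α` default mirror the docstring above; this is an illustration, not part of NNlib):

```python
import math

def celu(x, alpha=1.0):
    # celu(x, α) = x if x >= 0, else α * (exp(x/α) - 1)
    return x if x >= 0 else alpha * (math.exp(x / alpha) - 1.0)

# For alpha = 1 and large negative x this saturates at -alpha,
# e.g. celu(-10.0) is close to the -0.9999546 shown above.
```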

    `,4))]),i("details",d,[i("summary",null,[s[3]||(s[3]=i("a",{id:"NNlib.elu",href:"#NNlib.elu"},[i("span",{class:"jlbinding"},"NNlib.elu")],-1)),s[4]||(s[4]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=h(`
    julia
    elu(x, α=1) = x > 0 ? x : α * (exp(x) - 1)

    Exponential Linear Unit activation function. See "Fast and Accurate Deep Network Learning by Exponential Linear Units". You can also specify the coefficient explicitly, e.g. elu(x, 1).

    julia
    julia> lineplot(elu, -2, 2, height=7)
    -           ┌────────────────────────────────────────┐       
    -         2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉│ elu(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠉⠀⠀⠀⠀│       
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⡤⠖⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀│       
    -   f(x)    │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⣀⠤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│       
    -           │⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡤⡧⠶⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│       
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⠤⠔⠒⠋⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│       
    -        -1 │⠤⠤⠤⠤⠔⠒⠒⠒⠊⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│       
    -           └────────────────────────────────────────┘       
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀       
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀       
    -
    -julia> elu(-10f0)
    --0.9999546f0
    -
    -julia> elu(-10f0, 2)
    --1.9999092f0

    source

    `,4))]),i("details",g,[i("summary",null,[s[6]||(s[6]=i("a",{id:"NNlib.gelu",href:"#NNlib.gelu"},[i("span",{class:"jlbinding"},"NNlib.gelu")],-1)),s[7]||(s[7]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=h(`
    julia
gelu(x) = 0.5x * (1 + tanh(√(2/π) * (x + 0.044715x^3)))

    Activation function from "Gaussian Error Linear Units".

    julia
    julia> lineplot(gelu, -2, 2, height=7)
    -           ┌────────────────────────────────────────┐        
    -         2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊│ gelu(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠊⠁⠀⠀⠀│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀│        
    -   f(x)    │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⣀⡠⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⣤⣤⣤⣤⣤⣤⣤⣤⡤⠤⠤⠤⠤⠤⠤⠤⣤⣤⣤⡤⡧⠶⠶⠭⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠉⠉⠉⠉⠉⠉⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -        -1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           └────────────────────────────────────────┘        
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀        
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        
    -
    -julia> lineplot(gelu, -5, 0, height=7);
    -
    -julia> lineplot!(ans, swish)
    -             ┌────────────────────────────────────────┐         
    -           0 │⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠒⠒⠤⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸│ gelu(x) 
    -             │⠑⠒⠢⠤⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇│ swish(x)
    -             │⠀⠀⠀⠀⠀⠈⠉⠒⠤⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⢆⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⠁│         
    -   f(x)      │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠒⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⢄⠀⠀⠀⠀⠀⠀⠀⠀⢠⡇⠀│         
    -             │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠓⣄⠀⠀⠀⠀⠀⢠⡞⠀⠀│         
    -             │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠓⢄⣀⣀⡤⢣⠃⠀⠀│         
    -        -0.2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⠇⠀⠀⠀│         
    -             └────────────────────────────────────────┘         
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀0⠀         
    -             ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source
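The tanh approximation above is easy to reproduce numerically; a Python sketch, assuming only the formula as stated (not an NNlib API):

```python
import math

def gelu(x):
    # gelu(x) = 0.5x * (1 + tanh(sqrt(2/pi) * (x + 0.044715 * x^3)))
    inner = math.sqrt(2.0 / math.pi) * (x + 0.044715 * x**3)
    return 0.5 * x * (1.0 + math.tanh(inner))

# gelu behaves like the identity for large positive x
# and decays to 0 for large negative x.
```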

    `,4))]),i("details",y,[i("summary",null,[s[9]||(s[9]=i("a",{id:"NNlib.hardsigmoid",href:"#NNlib.hardsigmoid"},[i("span",{class:"jlbinding"},"NNlib.hardsigmoid")],-1)),s[10]||(s[10]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=h(`
    julia
    hardσ(x) = max(0, min(1, (x + 3) / 6))

    Piecewise linear approximation of sigmoid.

    julia
    julia> lineplot(hardsigmoid, -5, 5, height=7)
    -          ┌────────────────────────────────────────┐         
    -        1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋⠉⠉⠉⠉⠉⠉⠉⠉│ hardσ(x)
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⣀⡤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⡠⠔⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⡗⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠊⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠋⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -        0 │⣀⣀⣀⣀⣀⣀⣀⣀⣀⠤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          └────────────────────────────────────────┘         
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀         
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀         
    -
    -julia> lineplot(sigmoid, -5, 5, height=7)
    -          ┌────────────────────────────────────────┐     
    -        1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⡠⠤⠖⠒⠒⠋⠉⠉⠉⠉⠉⠉│ σ(x)
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⣀⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⡏⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡔⠋⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠊⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -        0 │⣀⣀⣀⣀⣀⣀⣀⠤⠤⠤⠒⠊⠉⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          └────────────────────────────────────────┘     
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀     
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source
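The piecewise-linear formula can be sanity-checked against the exact sigmoid outside Julia; a small Python sketch (the name `hard_sigmoid` is ours, purely illustrative):

```python
import math

def hard_sigmoid(x):
    # hardσ(x) = max(0, min(1, (x + 3) / 6))
    return max(0.0, min(1.0, (x + 3.0) / 6.0))

def sigmoid(x):
    # exact logistic sigmoid, for comparison
    return 1.0 / (1.0 + math.exp(-x))

# Both saturate at 0 and 1, but the hard version clips
# exactly at x = -3 and x = 3 instead of asymptotically.
```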

    `,4))]),i("details",F,[i("summary",null,[s[12]||(s[12]=i("a",{id:"NNlib.hardσ",href:"#NNlib.hardσ"},[i("span",{class:"jlbinding"},"NNlib.hardσ")],-1)),s[13]||(s[13]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[14]||(s[14]=h(`
    julia
    hardσ(x) = max(0, min(1, (x + 3) / 6))

    Piecewise linear approximation of sigmoid.

    julia
    julia> lineplot(hardsigmoid, -5, 5, height=7)
    -          ┌────────────────────────────────────────┐         
    -        1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋⠉⠉⠉⠉⠉⠉⠉⠉│ hardσ(x)
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⣀⡤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⡠⠔⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⡗⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠊⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠋⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -        0 │⣀⣀⣀⣀⣀⣀⣀⣀⣀⠤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          └────────────────────────────────────────┘         
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀         
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀         
    -
    -julia> lineplot(sigmoid, -5, 5, height=7)
    -          ┌────────────────────────────────────────┐     
    -        1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⡠⠤⠖⠒⠒⠋⠉⠉⠉⠉⠉⠉│ σ(x)
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⣀⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⡏⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡔⠋⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠊⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -        0 │⣀⣀⣀⣀⣀⣀⣀⠤⠤⠤⠒⠊⠉⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          └────────────────────────────────────────┘     
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀     
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",o,[i("summary",null,[s[15]||(s[15]=i("a",{id:"NNlib.sigmoid_fast",href:"#NNlib.sigmoid_fast"},[i("span",{class:"jlbinding"},"NNlib.sigmoid_fast")],-1)),s[16]||(s[16]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[17]||(s[17]=h(`
    julia
    sigmoid_fast(x)

This is a faster, and very slightly less accurate, version of sigmoid. For x::Float32 it is roughly 3 times faster, with a maximum error of 2 eps instead of 1.

    See also tanh_fast.

    julia
    julia> sigmoid(0.2f0)
    -0.54983395f0
    -
    -julia> sigmoid_fast(0.2f0)
    -0.54983395f0
    -
    -julia> hardσ(0.2f0)
    -0.53333336f0

    source

    `,5))]),i("details",c,[i("summary",null,[s[18]||(s[18]=i("a",{id:"NNlib.hardtanh",href:"#NNlib.hardtanh"},[i("span",{class:"jlbinding"},"NNlib.hardtanh")],-1)),s[19]||(s[19]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[20]||(s[20]=h(`
    julia
    hardtanh(x) = max(-1, min(1, x))

    Segment-wise linear approximation of tanh, much cheaper to compute. See "Large Scale Machine Learning".

    See also tanh_fast.

    julia
    julia> lineplot(hardtanh, -2, 2, height=7)
    -           ┌────────────────────────────────────────┐            
    -         1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⠔⠋⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│ hardtanh(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⣀⡤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⢀⡤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -   f(x)    │⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡤⡷⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│            
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠖⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠖⠋⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -        -1 │⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⠔⠋⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -           └────────────────────────────────────────┘            
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀            
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x
    -
    -julia> lineplot(tanh, -2, 2, height=7)
    -           ┌────────────────────────────────────────┐        
    -         1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠤⠤⠒⠒⠒⠊⠉⠉⠉│ tanh(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⢀⡠⠔⠊⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⢀⡤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -   f(x)    │⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡤⡷⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠔⠊⠁⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -        -1 │⣀⣀⣀⡠⠤⠤⠤⠖⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           └────────────────────────────────────────┘        
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀        
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,5))]),i("details",C,[i("summary",null,[s[21]||(s[21]=i("a",{id:"NNlib.tanh_fast",href:"#NNlib.tanh_fast"},[i("span",{class:"jlbinding"},"NNlib.tanh_fast")],-1)),s[22]||(s[22]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[23]||(s[23]=h(`
    julia
    tanh_fast(x)

This is a faster but slightly less accurate version of tanh.

    Where Julia's tanh function has an error under 2 eps, this may be wrong by 5 eps, a reduction by less than one decimal digit.

    For x::Float32 this is usually about 10 times faster, with a smaller speedup for x::Float64. For any other number types, it just calls tanh.

    See also sigmoid_fast.

    julia
    julia> tanh(0.5f0)
    -0.46211717f0
    -
    -julia> tanh_fast(0.5f0)
    -0.46211714f0
    -
    -julia> hardtanh(0.5f0)
    -0.5f0

    source

    `,7))]),i("details",u,[i("summary",null,[s[24]||(s[24]=i("a",{id:"NNlib.leakyrelu",href:"#NNlib.leakyrelu"},[i("span",{class:"jlbinding"},"NNlib.leakyrelu")],-1)),s[25]||(s[25]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[26]||(s[26]=h(`
    julia
    leakyrelu(x, a=0.01) = max(a*x, x)

    Leaky Rectified Linear Unit activation function. You can also specify the coefficient explicitly, e.g. leakyrelu(x, 0.01).

    julia
    julia> lineplot(x -> leakyrelu(x, 0.5), -2, 2, height=7)
    -           ┌────────────────────────────────────────┐       
    -         2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉│ #42(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠉⠀⠀⠀⠀│       
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⡤⠖⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀│       
    -   f(x)    │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⣀⠤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│       
    -           │⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⣤⣤⡤⡧⠶⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│       
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⠤⠤⠒⠒⠋⠉⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│       
    -        -1 │⣀⣀⠤⠤⠒⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│       
    -           └────────────────────────────────────────┘       
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀       
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀       
    -
    -julia> leakyrelu(-10f0, 0.2)
    --2.0f0
    -
    -julia> leakyrelu(-10f0, 0.02)
    --0.5f0

    source
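The `max(a*x, x)` formulation works because for 0 < a < 1 the larger of the two branches is always the correct one: for x ≥ 0 it picks x, for x < 0 it picks a*x. A Python sketch under that assumption (illustrative, not NNlib):

```python
def leakyrelu(x, a=0.01):
    # leakyrelu(x, a) = max(a*x, x); assumes 0 < a < 1
    return max(a * x, x)

# Positive inputs pass through unchanged; negative inputs
# are scaled by the leak coefficient a.
```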

    `,4))]),i("details",b,[i("summary",null,[s[27]||(s[27]=i("a",{id:"NNlib.lisht",href:"#NNlib.lisht"},[i("span",{class:"jlbinding"},"NNlib.lisht")],-1)),s[28]||(s[28]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[29]||(s[29]=h(`
    julia
    lisht(x) = x * tanh(x)

    Activation function from "LiSHT: Non-Parametric Linearly Scaled Hyperbolic Tangent ..."

    julia
    julia> lineplot(lisht, -2, 2, height=7)
    -          ┌────────────────────────────────────────┐         
    -        2 │⠢⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔│ lisht(x)
    -          │⠀⠈⠑⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠊⠁⠀│         
    -          │⠀⠀⠀⠀⠈⠣⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠁⠀⠀⠀⠀│         
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠑⢆⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠢⡄⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⠔⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⢄⡀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⢀⡠⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -        0 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⠦⣄⣀⣀⣇⣀⣀⠤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          └────────────────────────────────────────┘         
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀         
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀         
    -
    -julia> lineplot!(ans, logcosh)
    -          ┌────────────────────────────────────────┐           
    -        2 │⠢⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔│ lisht(x)  
    -          │⠀⠈⠑⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠊⠁⠀│ logcosh(x)
    -          │⠢⣄⠀⠀⠈⠣⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠁⠀⠀⣀⠔│           
    -   f(x)   │⠀⠈⠑⠢⣀⠀⠀⠑⢆⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⣀⠔⠊⠁⠀│           
    -          │⠀⠀⠀⠀⠀⠉⠢⢄⡀⠉⠢⡄⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⠔⠋⠀⡠⠔⠋⠁⠀⠀⠀⠀│           
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⠦⣌⡓⢄⡀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⢀⡠⠖⣁⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀│           
    -        0 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⠪⠷⣦⣄⣀⣀⣇⣀⣀⣤⠶⠕⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│           
    -          └────────────────────────────────────────┘           
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀           
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",B,[i("summary",null,[s[30]||(s[30]=i("a",{id:"NNlib.logcosh",href:"#NNlib.logcosh"},[i("span",{class:"jlbinding"},"NNlib.logcosh")],-1)),s[31]||(s[31]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[32]||(s[32]=h(`
    julia
    logcosh(x)

    Return log(cosh(x)) which is computed in a numerically stable way.

    julia
    julia> lineplot(logcosh, -5, 5, height=7)
    -          ┌────────────────────────────────────────┐           
    -        5 │⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│ logcosh(x)
    -          │⠉⠢⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠋│           
    -          │⠀⠀⠀⠑⠢⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠊⠁⠀⠀│           
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠑⠦⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠊⠁⠀⠀⠀⠀⠀│           
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⠦⡀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀│           
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⠦⡀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⢀⡤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│           
    -        0 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠑⠢⢄⣀⣀⣇⣀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│           
    -          └────────────────────────────────────────┘           
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀           
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",v,[i("summary",null,[s[33]||(s[33]=i("a",{id:"NNlib.logsigmoid",href:"#NNlib.logsigmoid"},[i("span",{class:"jlbinding"},"NNlib.logsigmoid")],-1)),s[34]||(s[34]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[35]||(s[35]=h(`
    julia
    logσ(x)

    Return log(σ(x)) which is computed in a numerically stable way.

    julia
    julia> lineplot(logsigmoid, -5, 5, height=7)
    -           ┌────────────────────────────────────────┐        
    -         0 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡧⠤⠔⠒⠒⠒⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│ logσ(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠊⠉⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -   f(x)    │⠀⠀⠀⠀⠀⠀⢀⡤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⠀⠀⠀⣀⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⡤⠖⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -        -6 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           └────────────────────────────────────────┘        
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀        
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",f,[i("summary",null,[s[36]||(s[36]=i("a",{id:"NNlib.logσ",href:"#NNlib.logσ"},[i("span",{class:"jlbinding"},"NNlib.logσ")],-1)),s[37]||(s[37]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[38]||(s[38]=h(`
    julia
    logσ(x)

    Return log(σ(x)) which is computed in a numerically stable way.

    julia
    julia> lineplot(logsigmoid, -5, 5, height=7)
    -           ┌────────────────────────────────────────┐        
    -         0 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡧⠤⠔⠒⠒⠒⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│ logσ(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠊⠉⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -   f(x)    │⠀⠀⠀⠀⠀⠀⢀⡤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⠀⠀⠀⣀⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⡤⠖⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -        -6 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           └────────────────────────────────────────┘        
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀        
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",j,[i("summary",null,[s[39]||(s[39]=i("a",{id:"NNlib.mish",href:"#NNlib.mish"},[i("span",{class:"jlbinding"},"NNlib.mish")],-1)),s[40]||(s[40]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[41]||(s[41]=h(`
    julia
    mish(x) = x * tanh(softplus(x))

    Activation function from "Mish: A Self Regularized Non-Monotonic Neural Activation Function".

    julia
    julia> lineplot(mish, -5, 5, height=7)
    -           ┌────────────────────────────────────────┐        
    -         5 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋│ mish(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠒⠁⠀⠀⠀│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠔⠋⠁⠀⠀⠀⠀⠀⠀│        
    -   f(x)    │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⢀⡠⠖⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡤⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣧⣔⣊⣁⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀│        
    -        -1 │⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           └────────────────────────────────────────┘        
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀        
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source
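Since mish composes tanh with softplus, a direct Python transcription of the formula is short (illustrative sketch only; `softplus` here is our helper, not NNlib's):

```python
import math

def softplus(x):
    # softplus(x) = log(1 + exp(x)); log1p keeps accuracy when exp(x) is small
    return math.log1p(math.exp(x))

def mish(x):
    # mish(x) = x * tanh(softplus(x))
    return x * math.tanh(softplus(x))

# mish is approximately the identity for large positive x and
# approaches 0 from below for large negative x.
```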

    `,4))]),i("details",m,[i("summary",null,[s[42]||(s[42]=i("a",{id:"NNlib.relu",href:"#NNlib.relu"},[i("span",{class:"jlbinding"},"NNlib.relu")],-1)),s[43]||(s[43]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[44]||(s[44]=h(`
    julia
    relu(x) = max(0, x)

    Rectified Linear Unit activation function.

    julia
    julia> lineplot(relu, -2, 2, height=7)
    -          ┌────────────────────────────────────────┐        
    -        2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠋│ relu(x)
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠊⠁⠀⠀│        
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠊⠁⠀⠀⠀⠀⠀│        
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀│        
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⡠⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⡠⠖⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -        0 │⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣇⠔⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -          └────────────────────────────────────────┘        
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀        
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",x,[i("summary",null,[s[45]||(s[45]=i("a",{id:"NNlib.relu6",href:"#NNlib.relu6"},[i("span",{class:"jlbinding"},"NNlib.relu6")],-1)),s[46]||(s[46]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[47]||(s[47]=h(`
    julia
    relu6(x) = min(max(0, x), 6)

Rectified Linear Unit activation function capped at 6. See "Convolutional Deep Belief Networks on CIFAR-10".

    julia
    julia> lineplot(relu6, -10, 10, height=7)
    -          ┌────────────────────────────────────────┐         
    -        6 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠎⠉⠉⠉⠉⠉⠉⠉⠉│ relu6(x)
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⡤⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⡠⠎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⡔⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -        0 │⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⡧⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          └────────────────────────────────────────┘         
    --10⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠀         
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",A,[i("summary",null,[s[48]||(s[48]=i("a",{id:"NNlib.rrelu",href:"#NNlib.rrelu"},[i("span",{class:"jlbinding"},"NNlib.rrelu")],-1)),s[49]||(s[49]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[50]||(s[50]=h(`
    julia
    rrelu(x, lo=1/8, hi=1/3) = max(a*x, x)
    -# where \`a\` is randomly sampled from uniform distribution \`U(lo, hi)\`

Randomized Leaky Rectified Linear Unit activation function. See "Empirical Evaluation of Rectified Activations". You can also specify the bound explicitly, e.g. rrelu(x, 0.0, 1.0).

    julia
    julia> lineplot(rrelu, -20, 10, height=7)
    -            ┌────────────────────────────────────────┐         
    -         10 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋│ rrelu(x)
    -            │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀│         
    -            │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀│         
    -   f(x)     │⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡤⠤⣤⣤⢤⣤⣤⠤⠤⠤⢼⠮⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│         
    -            │⣰⢀⣆⡄⣄⡄⡠⡰⠦⠷⡜⢢⠷⠳⠢⠊⠉⠉⠀⠀⠁⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -            │⠃⠉⠙⠘⠃⠈⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -        -10 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -            └────────────────────────────────────────┘         
    --20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠀         
    -            ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀         
    -
    -julia> extrema(rrelu.(fill(-10f0, 1000)))
    -(-3.3316886f0, -1.2548422f0)

    source

    `,4))]),i("details",D,[i("summary",null,[s[51]||(s[51]=i("a",{id:"NNlib.selu",href:"#NNlib.selu"},[i("span",{class:"jlbinding"},"NNlib.selu")],-1)),s[52]||(s[52]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[53]||(s[53]=h(`
    julia
selu(x) = λ * (x ≥ 0 ? x : α * (exp(x) - 1))
    -
    -λ ≈ 1.05070...
    -α ≈ 1.67326...

    Scaled exponential linear units. See "Self-Normalizing Neural Networks".

    julia
    julia> lineplot(selu, -3, 2, height=7)
    -           ┌────────────────────────────────────────┐        
    -         3 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│ selu(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠤⠒│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⢀⣀⠤⠖⠊⠉⠀⠀⠀⠀│        
    -   f(x)    │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⣀⡠⠤⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           │⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⣉⠭⠛⡏⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│        
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⡤⠤⠒⠊⠉⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -        -2 │⠤⠤⠖⠒⠒⠒⠒⠒⠒⠒⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│        
    -           └────────────────────────────────────────┘        
    --3⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀        
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀        
    -
    -julia> selu(-10f0)
    --1.7580194f0

    source
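The truncated constants above come from the "Self-Normalizing Neural Networks" paper; a Python sketch using the full-precision values (illustrative, not the NNlib implementation):

```python
import math

# Full-precision SELU constants from Klambauer et al.
SELU_LAMBDA = 1.0507009873554805
SELU_ALPHA = 1.6732632423543772

def selu(x):
    # selu(x) = λ * (x if x >= 0 else α * (exp(x) - 1))
    return SELU_LAMBDA * (x if x >= 0 else SELU_ALPHA * (math.exp(x) - 1.0))

# Negative inputs saturate at -λ*α ≈ -1.7581,
# consistent with the selu(-10f0) example above.
```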

    `,4))]),i("details",N,[i("summary",null,[s[54]||(s[54]=i("a",{id:"NNlib.sigmoid",href:"#NNlib.sigmoid"},[i("span",{class:"jlbinding"},"NNlib.sigmoid")],-1)),s[55]||(s[55]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[56]||(s[56]=h(`
    julia
    σ(x) = 1 / (1 + exp(-x))

    Classic sigmoid activation function. Unicode σ can be entered as \\sigma then tab, in many editors. The ascii name sigmoid is also exported.

    See also sigmoid_fast.

    julia
    julia> using UnicodePlots
    -
    -julia> lineplot(sigmoid, -5, 5, height=7)
    -          ┌────────────────────────────────────────┐     
    -        1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⡠⠤⠖⠒⠒⠋⠉⠉⠉⠉⠉⠉│ σ(x)
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⣀⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⡏⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡔⠋⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠊⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -        0 │⣀⣀⣀⣀⣀⣀⣀⠤⠤⠤⠒⠊⠉⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          └────────────────────────────────────────┘     
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀     
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     
    -
    -julia> sigmoid === σ
    -true

    source

    `,5))]),i("details",L,[i("summary",null,[s[57]||(s[57]=i("a",{id:"NNlib.σ",href:"#NNlib.σ"},[i("span",{class:"jlbinding"},"NNlib.σ")],-1)),s[58]||(s[58]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[59]||(s[59]=h(`
    julia
    σ(x) = 1 / (1 + exp(-x))

    Classic sigmoid activation function. Unicode σ can be entered as \\sigma then tab, in many editors. The ascii name sigmoid is also exported.

    See also sigmoid_fast.

    julia
    julia> using UnicodePlots
    -
    -julia> lineplot(sigmoid, -5, 5, height=7)
    -          ┌────────────────────────────────────────┐     
    -        1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⡠⠤⠖⠒⠒⠋⠉⠉⠉⠉⠉⠉│ σ(x)
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⣀⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⡏⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡔⠋⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠊⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -        0 │⣀⣀⣀⣀⣀⣀⣀⠤⠤⠤⠒⠊⠉⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│     
    -          └────────────────────────────────────────┘     
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀     
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀     
    -
    -julia> sigmoid === σ
    -true

    source

    `,5))]),i("details",w,[i("summary",null,[s[60]||(s[60]=i("a",{id:"NNlib.softplus",href:"#NNlib.softplus"},[i("span",{class:"jlbinding"},"NNlib.softplus")],-1)),s[61]||(s[61]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[62]||(s[62]=h(`
    julia
    softplus(x) = log(exp(x) + 1)

    See "Deep Sparse Rectifier Neural Networks", JMLR 2011.

    julia
    julia> lineplot(softplus, -3, 3, height=7)
    -          ┌────────────────────────────────────────┐            
    -        4 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│ softplus(x)
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠│            
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀│            
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠔⠊⠁⠀⠀⠀⠀⠀│            
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⣀⡠⠤⠒⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⡧⠤⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -        0 │⣀⣀⣀⣀⣀⣀⣀⡠⠤⠤⠤⠤⠔⠒⠒⠚⠉⠉⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -          └────────────────────────────────────────┘            
    --3⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀3⠀            
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀            
    -
    -julia> lineplot!(ans, relu)
    -          ┌────────────────────────────────────────┐            
    -        4 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│ softplus(x)
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠│ relu(x)    
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⡴⠞⠋⠁│            
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣤⡴⠞⠋⠁⠀⠀⠀⠀│            
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⣀⡠⢤⡲⠝⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀│            
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⡧⠤⠒⠊⣉⠥⠚⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -        0 │⣀⣀⣀⣀⣀⣀⣀⣠⣤⣤⣤⣤⣔⣒⣒⣚⣉⣉⣁⣀⣇⠴⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -          └────────────────────────────────────────┘            
    --3⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀3⠀            
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀            
    -
    -julia> softplus(16f0)
    -16.0f0

    source

    `,4))]),i("details",T,[i("summary",null,[s[63]||(s[63]=i("a",{id:"NNlib.softshrink",href:"#NNlib.softshrink"},[i("span",{class:"jlbinding"},"NNlib.softshrink")],-1)),s[64]||(s[64]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[65]||(s[65]=h(`
    julia
    softshrink(x, λ=0.5) =
    -    (x  λ ? x - λ : (-λ  x ? x + λ : 0))

    See "Softshrink Activation Function".

    julia
    julia> lineplot(softshrink, -2, 2, height=7)
    -           ┌────────────────────────────────────────┐              
    -         2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀│ softshrink(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⡤⠔⠒⠉⠁│              
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⣀⡤⠤⠒⠋⠁⠀⠀⠀⠀⠀⠀│              
    -   f(x)    │⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⣤⡤⠤⠤⠤⠤⠤⠤⡧⠤⠤⠤⠤⠶⠮⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│              
    -           │⠀⠀⠀⠀⠀⠀⢀⣀⠤⠖⠒⠉⠁⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│              
    -           │⠀⣀⠤⠔⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│              
    -        -2 │⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│              
    -           └────────────────────────────────────────┘              
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀              
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀              
    -
    -julia> lineplot!(ans, tanhshrink)
    -           ┌────────────────────────────────────────┐              
    -         2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀│ softshrink(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⡤⠔⠒⣉⡡│ tanhshrink(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⣀⡤⠤⣒⣋⠥⠤⠒⠊⠉⠁⠀│              
    -   f(x)    │⠤⠤⠤⠤⠤⠤⠤⠤⠤⣤⣤⣤⣤⡤⠤⠤⠤⠤⠤⠤⡷⠶⠶⠶⠶⠶⠾⠿⠯⠭⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤│              
    -           │⠀⢀⣀⡠⠤⠖⢒⣋⠭⠗⠒⠉⠁⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│              
    -           │⠊⣉⠤⠔⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│              
    -        -2 │⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│              
    -           └────────────────────────────────────────┘              
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀              
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀
    -
    -julia> softshrink.((-10f0, 10f0))
    -(-9.5f0, 9.5f0)

    source

    `,4))]),i("details",S,[i("summary",null,[s[66]||(s[66]=i("a",{id:"NNlib.softsign",href:"#NNlib.softsign"},[i("span",{class:"jlbinding"},"NNlib.softsign")],-1)),s[67]||(s[67]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[68]||(s[68]=h(`
    julia
    softsign(x) = x / (1 + |x|)

    See "Quadratic Polynomials Learn Better Image Features" (2009).

    julia
    julia> lineplot(softsign, -5, 5, height=7)
    -           ┌────────────────────────────────────────┐            
    -         1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⣀⣀⠤⠤⠤⠤⠤│ softsign(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⣀⡤⠖⠒⠋⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⡔⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -   f(x)    │⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡯⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│            
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠁⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⠤⠤⠒⠋⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -        -1 │⠒⠒⠒⠒⠒⠊⠉⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -           └────────────────────────────────────────┘            
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀            
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀            
    -
    -julia> lineplot!(ans, tanh)
    -           ┌────────────────────────────────────────┐            
    -         1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡤⠖⠊⠉⠉⠉⣉⣉⣉⣉⣉⠭⠭⠭⠭⠭│ softsign(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⡔⣃⡤⠖⠒⠋⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│ tanh(x)    
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣧⡞⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -   f(x)    │⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡯⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│            
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡴⠃⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⠤⠤⠒⢋⠕⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -        -1 │⣒⣒⣒⣒⣒⣊⣉⣉⣉⣉⣁⣀⣀⡠⠤⠒⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│            
    -           └────────────────────────────────────────┘            
    --5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀            
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀            
    -
    -julia> softsign(1f0)
    -0.5f0
    -
    -julia> softsign(100f0)
    -0.990099f0

    source

    `,4))]),i("details",q,[i("summary",null,[s[69]||(s[69]=i("a",{id:"NNlib.swish",href:"#NNlib.swish"},[i("span",{class:"jlbinding"},"NNlib.swish")],-1)),s[70]||(s[70]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[71]||(s[71]=h(`
    julia
    swish(x) = x * σ(x)

    Self-gated activation function. See "Swish: a Self-Gated Activation Function".

    julia
    julia> lineplot(swish, -2, 2, height=7)
    -           ┌────────────────────────────────────────┐         
    -         2 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤│ swish(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋⠁⠀│         
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠖⠋⠁⠀⠀⠀⠀⠀│         
    -   f(x)    │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⢀⣀⡤⠔⠊⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -           │⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⣤⣤⡤⡧⠴⠶⠯⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│         
    -           │⠉⠑⠒⠒⠒⠒⠒⠒⠒⠒⠒⠒⠉⠉⠉⠉⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -        -1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -           └────────────────────────────────────────┘         
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀         
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))]),i("details",M,[i("summary",null,[s[72]||(s[72]=i("a",{id:"NNlib.hardswish",href:"#NNlib.hardswish"},[i("span",{class:"jlbinding"},"NNlib.hardswish")],-1)),s[73]||(s[73]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[74]||(s[74]=h(`
    julia
    hardswish(x) = x * hardσ(x)

    Hard-Swish activation function. See "Searching for MobileNetV3".

    julia
    julia> lineplot(hardswish, -2, 5, height = 7)
    -           ┌────────────────────────────────────────┐             
    -         5 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠔⠒⠉│ hardswish(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠔⠒⠉⠁⠀⠀⠀⠀│             
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠖⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│             
    -   f(x)    │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│             
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⢀⣀⠤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│             
    -           │⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣇⣤⣤⣖⣚⣉⣁⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀│             
    -        -1 │⠉⠒⠒⠒⠒⠉⠉⠉⠉⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│             
    -           └────────────────────────────────────────┘             
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀             
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀             
    -
    -julia> lineplot(hardswish, -4, 0, height = 7);
    -
    -julia> lineplot!(ans, swish)
    -             ┌────────────────────────────────────────┐             
    -           0 │⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⢣⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡜│ hardswish(x)
    -             │⠒⠒⠢⠤⢄⣀⡀⠀⠀⠀⠀⠱⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠎⠀│ swish(x)    
    -             │⠀⠀⠀⠀⠀⠀⠈⠉⠑⠒⠦⢄⣘⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡴⠃⠀⠀│             
    -   f(x)      │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠑⡖⠦⢄⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⢔⠏⠁⠀⠀⠀│             
    -             │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠣⣄⠀⠉⠑⠒⠦⠤⢄⣀⣀⣀⣀⡠⠤⠖⣊⠕⠁⠀⠀⠀⠀⠀│             
    -             │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠓⠤⡀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠁⠀⠀⠀⠀⠀⠀⠀│             
    -        -0.4 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠒⠢⠤⠤⠔⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│             
    -             └────────────────────────────────────────┘             
    --4⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀0⠀             
    -             ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀             
    -
    -julia> hardswish.(-5:5)'
    -1×11 adjoint(::Vector{Float64}) with eltype Float64:
    - -0.0  -0.0  -0.0  -0.333333  -0.333333  0.0  0.666667  1.66667  3.0  4.0  5.0

    source

    `,4))]),i("details",O,[i("summary",null,[s[75]||(s[75]=i("a",{id:"NNlib.tanhshrink",href:"#NNlib.tanhshrink"},[i("span",{class:"jlbinding"},"NNlib.tanhshrink")],-1)),s[76]||(s[76]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[77]||(s[77]=h(`
    julia
    tanhshrink(x) = x - tanh(x)

    See "Tanhshrink Activation Function".

    julia
    julia> lineplot(tanhshrink, -3, 3, height=7)
    -           ┌────────────────────────────────────────┐              
    -         3 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│ tanhshrink(x)
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠤⠖⠊│              
    -           │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⣀⡠⠤⠒⠊⠉⠁⠀⠀⠀⠀│              
    -   f(x)    │⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⣤⡤⠤⠤⠤⠤⠤⠤⡷⠶⠶⠶⠶⠶⠮⠭⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│              
    -           │⠀⠀⠀⠀⠀⣀⡠⠴⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│              
    -           │⡠⠴⠒⠊⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│              
    -        -3 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│              
    -           └────────────────────────────────────────┘              
    --3⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀3⠀              
    -           ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀              
    -
    -julia> tanhshrink.((-10f0, 10f0))
    -(-9.0f0, 9.0f0)

    source

    `,4))]),i("details",P,[i("summary",null,[s[78]||(s[78]=i("a",{id:"NNlib.trelu",href:"#NNlib.trelu"},[i("span",{class:"jlbinding"},"NNlib.trelu")],-1)),s[79]||(s[79]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[80]||(s[80]=h(`
    julia
    trelu(x, theta=1) = x > theta ? x : 0

Threshold gated rectified linear activation function. See "Zero-bias autoencoders and the benefits of co-adapting features".

    julia
    julia> lineplot(trelu, -2, 4, height=7)
    -          ┌────────────────────────────────────────┐         
    -        4 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋│ trelu(x)
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀│         
    -   f(x)   │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠴⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⣠⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⡏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -        0 │⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣇⣀⣀⣀⣀⣀⣀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│         
    -          └────────────────────────────────────────┘         
    --2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀4⠀         
    -          ⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀

    source

    `,4))])])}const J=p(E,[["render",R]]);export{H as __pageData,J as default}; diff --git a/dev/assets/api_NN_Primitives_LuxLib.md.CPEqKhMV.lean.js b/dev/assets/api_NN_Primitives_LuxLib.md.CPEqKhMV.lean.js deleted file mode 100644 index 9b88bc3413..0000000000 --- a/dev/assets/api_NN_Primitives_LuxLib.md.CPEqKhMV.lean.js +++ /dev/null @@ -1,10 +0,0 @@ -import{_ as r,c as l,j as t,a as s,G as n,a2 as e,B as d,o}from"./chunks/framework.I-x9Gl6h.js";const st=JSON.parse('{"title":"LuxLib","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/LuxLib.md","filePath":"api/NN_Primitives/LuxLib.md","lastUpdated":null}'),p={name:"api/NN_Primitives/LuxLib.md"},h={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},x={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.07ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 11080.9 891","aria-hidden":"true"},L={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},v={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},w={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 
899","aria-hidden":"true"},F={class:"jldocstring custom-block"},C={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},A={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},j={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},H={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},D={class:"jldocstring custom-block"},M={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},B={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"22.72ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 10042 891","aria-hidden":"true"},V={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},P={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},I={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Z={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},N={class:"jldocstring custom-block"},z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},O={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.294ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 572 
453","aria-hidden":"true"},S={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},R={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.76ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.034ex",height:"6.063ex",role:"img",focusable:"false",viewBox:"0 -1460 11064.9 2680","aria-hidden":"true"},G={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},U={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},J={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},X={class:"jldocstring custom-block"};function K(W,i,$,Y,_,tt){const a=d("Badge");return o(),l("div",null,[i[144]||(i[144]=t("h1",{id:"LuxLib-API",tabindex:"-1"},[s("LuxLib "),t("a",{class:"header-anchor",href:"#LuxLib-API","aria-label":'Permalink to "LuxLib {#LuxLib-API}"'},"​")],-1)),i[145]||(i[145]=t("p",null,"Backend for Lux.jl",-1)),i[146]||(i[146]=t("h2",{id:"Apply-Activation",tabindex:"-1"},[s("Apply Activation "),t("a",{class:"header-anchor",href:"#Apply-Activation","aria-label":'Permalink to "Apply Activation {#Apply-Activation}"'},"​")],-1)),t("details",h,[t("summary",null,[i[0]||(i[0]=t("a",{id:"LuxLib.API.fast_activation",href:"#LuxLib.API.fast_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fast_activation")],-1)),i[1]||(i[1]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[2]||(i[2]=e('
    julia
    fast_activation::F, x::AbstractArray) where {F}

Compute σ.(x) with the best possible implementation available. On CPUs we unroll the loop and use LoopVectorization.jl to vectorize the computation. On GPUs we simply use broadcasting.

    Note

    This function doesn't replace σ with NNlib.fast_act(σ, ...), that needs to be done by the user if needed.

    Arguments

    Returns

    source

    ',8))]),t("details",u,[t("summary",null,[i[3]||(i[3]=t("a",{id:"LuxLib.API.fast_activation!!",href:"#LuxLib.API.fast_activation!!"},[t("span",{class:"jlbinding"},"LuxLib.API.fast_activation!!")],-1)),i[4]||(i[4]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[5]||(i[5]=e('
    julia
    fast_activation!!::F, x::AbstractArray) where {F}

    Compute σ.(x) with the best possible implementation available. If it is possible to rewrite x in-place, it does so. If x is an immutable array, it falls back to the generic implementation.

    Note

    This function doesn't replace σ with NNlib.fast_act(σ, ...), that needs to be done by the user if needed.

    Load SLEEFPirates.jl to get faster activations

Certain activation functions are replaced with specialized implementations from SLEEFPirates.jl for FP32. This might lead to faster performance but can cause a slight decrease in accuracy (in the floating point limit).

    Arguments

    Returns

    source

    ',9))]),i[147]||(i[147]=t("h2",{id:"Batched-Operations",tabindex:"-1"},[s("Batched Operations "),t("a",{class:"header-anchor",href:"#Batched-Operations","aria-label":'Permalink to "Batched Operations {#Batched-Operations}"'},"​")],-1)),t("details",m,[t("summary",null,[i[6]||(i[6]=t("a",{id:"LuxLib.API.batched_matmul",href:"#LuxLib.API.batched_matmul"},[t("span",{class:"jlbinding"},"LuxLib.API.batched_matmul")],-1)),i[7]||(i[7]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=e('
    julia
    batched_matmul(x, y)

    Computes the batched matrix multiplication of x and y. For more details see the NNlib documentation on NNlib.batched_mul. This function is mostly a wrapper around batched_mul but attempts to be faster on CPUs.

    Load LoopVectorization.jl to get faster batched matrix multiplication

    On CPUs loading LoopVectorization adds faster implementations of batched matrix multiplication.

    source

    ',4))]),i[148]||(i[148]=t("h2",{id:"Bias-Activation",tabindex:"-1"},[s("Bias Activation "),t("a",{class:"header-anchor",href:"#Bias-Activation","aria-label":'Permalink to "Bias Activation {#Bias-Activation}"'},"​")],-1)),t("details",Q,[t("summary",null,[i[9]||(i[9]=t("a",{id:"LuxLib.API.bias_activation",href:"#LuxLib.API.bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.bias_activation")],-1)),i[10]||(i[10]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=e('
    julia
    bias_activation(σ, x, bias)

    Applies the activation function σ elementwise to the result of broadcasted addition of x and bias along the penultimate dimension. A vector x is treated as a matrix with a single last dimension.

    Arguments

    See also bias_activation!!, fast_activation.

    source

    ',6))]),t("details",k,[t("summary",null,[i[12]||(i[12]=t("a",{id:"LuxLib.API.bias_activation!!",href:"#LuxLib.API.bias_activation!!"},[t("span",{class:"jlbinding"},"LuxLib.API.bias_activation!!")],-1)),i[13]||(i[13]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=e('
    julia
    bias_activation!!(σ, x, bias)

Same as bias_activation but might update x in-place if possible. Users should not rely on x being mutated; it is recommended to call it as y = bias_activation!!(σ, x, bias). If x is updated in-place, y aliases x.

    See also bias_activation, fast_activation!!.

    source

    ',4))]),i[149]||(i[149]=t("h2",{id:"Convolutional-Layers",tabindex:"-1"},[s("Convolutional Layers "),t("a",{class:"header-anchor",href:"#Convolutional-Layers","aria-label":'Permalink to "Convolutional Layers {#Convolutional-Layers}"'},"​")],-1)),t("details",c,[t("summary",null,[i[15]||(i[15]=t("a",{id:"LuxLib.API.fused_conv_bias_activation",href:"#LuxLib.API.fused_conv_bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fused_conv_bias_activation")],-1)),i[16]||(i[16]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[17]||(i[17]=e(`
    julia
    fused_conv_bias_activation::F, weight::AbstractArray, x::AbstractArray,
    -    b::Optional{<:AbstractVector}, cdims::ConvDims) where {F}

Computes σ.(conv(x, weight, cdims) .+ b) (b is not broadcast exactly like this; rather, it is reshaped and broadcast along the penultimate dimension) with the best possible implementation available. This operation fuses operations into a single kernel if possible, and minimizes reallocations by reusing the output buffer for multiple operations.

    Arguments

    Notes on implementation

    source

    `,7))]),i[150]||(i[150]=t("h2",{id:"dropout",tabindex:"-1"},[s("Dropout "),t("a",{class:"header-anchor",href:"#dropout","aria-label":'Permalink to "Dropout"'},"​")],-1)),t("details",g,[t("summary",null,[i[18]||(i[18]=t("a",{id:"LuxLib.API.alpha_dropout",href:"#LuxLib.API.alpha_dropout"},[t("span",{class:"jlbinding"},"LuxLib.API.alpha_dropout")],-1)),i[19]||(i[19]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=e('
    julia
    alpha_dropout(rng::AbstractRNG, x, p, training)\nalpha_dropout(rng::AbstractRNG, x, p, training, α, A, B)

Alpha Dropout: a dropout variant that ensures the mean and variance of the output remain the same as those of the input. For details see [1]. Use the second call signature to avoid recomputing the constants for a fixed dropout probability.

    Arguments

    Returns

    References

    [1] Klambauer, Günter, et al. "Self-normalizing neural networks." Advances in neural information processing systems 30 (2017).

    source

    ',9))]),t("details",T,[t("summary",null,[i[21]||(i[21]=t("a",{id:"LuxLib.API.dropout",href:"#LuxLib.API.dropout"},[t("span",{class:"jlbinding"},"LuxLib.API.dropout")],-1)),i[22]||(i[22]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[23]||(i[23]=e(`
    julia
    dropout(rng::AbstractRNG, x, p, training, invp, dims)
    -dropout(rng::AbstractRNG, x, mask, p, training, update_mask::Union{Val, StaticBool},
    -    invp, dims)

Dropout: A Simple Way to Prevent Neural Networks from Overfitting. For details see [1].

    Arguments

    Returns

    References

    [1] Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." The journal of machine learning research 15.1 (2014): 1929-1958.

    source

    `,9))]),i[151]||(i[151]=t("h2",{id:"Fully-Connected-Layers",tabindex:"-1"},[s("Fully Connected Layers "),t("a",{class:"header-anchor",href:"#Fully-Connected-Layers","aria-label":'Permalink to "Fully Connected Layers {#Fully-Connected-Layers}"'},"​")],-1)),t("details",b,[t("summary",null,[i[24]||(i[24]=t("a",{id:"LuxLib.API.fused_dense_bias_activation",href:"#LuxLib.API.fused_dense_bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fused_dense_bias_activation")],-1)),i[25]||(i[25]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[26]||(i[26]=e('
    julia
    fused_dense_bias_activation::F, weight::AbstractMatrix, x::AbstractMatrix,\n    b::Optional{<:AbstractVector}) where {F}

    Compute σ.(weight * x .+ b) with the best possible implementation available. Currently this implementation attempts to minimize reallocations by reusing the output buffer for multiple operations.

    Arguments

    Notes on implementation

Load Octavian.jl

Loading `Octavian.jl` enables a polyalgorithm that uses different backends based on the input sizes.

    source

    ',9))]),i[152]||(i[152]=t("h2",{id:"normalization",tabindex:"-1"},[s("Normalization "),t("a",{class:"header-anchor",href:"#normalization","aria-label":'Permalink to "Normalization"'},"​")],-1)),t("details",y,[t("summary",null,[i[27]||(i[27]=t("a",{id:"LuxLib.API.batchnorm",href:"#LuxLib.API.batchnorm"},[t("span",{class:"jlbinding"},"LuxLib.API.batchnorm")],-1)),i[28]||(i[28]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[49]||(i[49]=e(`
    julia
    batchnorm(x, scale, bias, running_mean, running_var, training,
    -    σ=identity, momentum = 0.1f0, epsilon = eps(eltype(x)) ^ (5 // 7))

    Batch Normalization. For details see [1].

    `,2)),t("p",null,[i[31]||(i[31]=s("Batch Normalization computes the mean and variance for each ")),t("mjx-container",x,[(o(),l("svg",f,i[29]||(i[29]=[e('',1)]))),i[30]||(i[30]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mi",null,"N")])])],-1))]),i[32]||(i[32]=s(" input slice and normalises the input accordingly."))]),i[50]||(i[50]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[47]||(i[47]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[35]||(i[35]=t("code",null,"scale",-1)),i[36]||(i[36]=s(": Scale factor (")),t("mjx-container",L,[(o(),l("svg",v,i[33]||(i[33]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 
249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[34]||(i[34]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[37]||(i[37]=s(") (can be ")),i[38]||(i[38]=t("code",null,"nothing",-1)),i[39]||(i[39]=s(")"))])]),t("li",null,[t("p",null,[i[42]||(i[42]=t("code",null,"bias",-1)),i[43]||(i[43]=s(": Bias factor (")),t("mjx-container",E,[(o(),l("svg",w,i[40]||(i[40]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[41]||(i[41]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[44]||(i[44]=s(") (can be ")),i[45]||(i[45]=t("code",null,"nothing",-1)),i[46]||(i[46]=s(")"))])]),i[48]||(i[48]=e("
  • running_mean: Running mean (can be nothing)

  • running_var: Running variance (can be nothing)

  • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

  • σ: Activation function (default: identity)

  • momentum: Momentum for updating running mean and variance (default: 0.1f0)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

  • ",6))]),i[51]||(i[51]=t("p",null,[t("strong",null,"Returns")],-1)),i[52]||(i[52]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(". And a Named Tuple containing the updated running mean and variance.")],-1)),i[53]||(i[53]=t("p",null,[t("strong",null,"References")],-1)),i[54]||(i[54]=t("p",null,'[1] Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." International conference on machine learning. PMLR, 2015.',-1)),i[55]||(i[55]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/lib/LuxLib/src/api/batchnorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",F,[t("summary",null,[i[56]||(i[56]=t("a",{id:"LuxLib.API.groupnorm",href:"#LuxLib.API.groupnorm"},[t("span",{class:"jlbinding"},"LuxLib.API.groupnorm")],-1)),i[57]||(i[57]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[74]||(i[74]=e(`
    julia
    groupnorm(x, scale, bias, groups::Int, σ::F=identity,
    -    epsilon=eps(eltype(x)) ^ (5 // 7))

    Group Normalization. For details see [1].

    This op is similar to batch normalization, but statistics are shared across equally-sized groups of channels and not shared across batch dimension. Thus, group normalization does not depend on the batch composition and does not require maintaining internal state for storing statistics.

    Arguments

    `,4)),t("ul",null,[i[72]||(i[72]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[60]||(i[60]=t("code",null,"scale",-1)),i[61]||(i[61]=s(": Scale factor (")),t("mjx-container",C,[(o(),l("svg",A,i[58]||(i[58]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[59]||(i[59]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[62]||(i[62]=s(") (can be ")),i[63]||(i[63]=t("code",null,"nothing",-1)),i[64]||(i[64]=s(")"))])]),t("li",null,[t("p",null,[i[67]||(i[67]=t("code",null,"bias",-1)),i[68]||(i[68]=s(": Bias factor (")),t("mjx-container",j,[(o(),l("svg",H,i[65]||(i[65]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 
383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[66]||(i[66]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[69]||(i[69]=s(") (can be ")),i[70]||(i[70]=t("code",null,"nothing",-1)),i[71]||(i[71]=s(")"))])]),i[73]||(i[73]=e("
  • groups: Number of groups

  • σ: Activation function (default: identity)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

  • ",3))]),i[75]||(i[75]=t("p",null,[t("strong",null,"Returns")],-1)),i[76]||(i[76]=t("p",null,"The normalized array is returned.",-1)),i[77]||(i[77]=t("p",null,[t("strong",null,"References")],-1)),i[78]||(i[78]=t("p",null,'[1] Wu, Yuxin, and Kaiming He. "Group normalization." Proceedings of the European conference on computer vision (ECCV). 2018.',-1)),i[79]||(i[79]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/lib/LuxLib/src/api/groupnorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",D,[t("summary",null,[i[80]||(i[80]=t("a",{id:"LuxLib.API.instancenorm",href:"#LuxLib.API.instancenorm"},[t("span",{class:"jlbinding"},"LuxLib.API.instancenorm")],-1)),i[81]||(i[81]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[102]||(i[102]=e(`
    julia
    instancenorm(x, scale, bias, training, act, epsilon = eps(eltype(x)) ^ (5 // 7))
    -instancenorm(x, scale, bias, running_mean, running_var, training, act, momentum,
    -    epsilon = eps(eltype(x)) ^ (5 // 7))

    Instance Normalization. For details see [1].

    `,2)),t("p",null,[i[84]||(i[84]=s("Instance Normalization computes the mean and variance for each ")),t("mjx-container",M,[(o(),l("svg",B,i[82]||(i[82]=[e('',1)]))),i[83]||(i[83]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("mn",null,"1")])],-1))]),i[85]||(i[85]=s(" input slice and normalises the input accordingly."))]),i[103]||(i[103]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[100]||(i[100]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized (must be atleast 3D)")])],-1)),t("li",null,[t("p",null,[i[88]||(i[88]=t("code",null,"scale",-1)),i[89]||(i[89]=s(": Scale factor (")),t("mjx-container",V,[(o(),l("svg",P,i[86]||(i[86]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 
31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[87]||(i[87]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[90]||(i[90]=s(") (can be ")),i[91]||(i[91]=t("code",null,"nothing",-1)),i[92]||(i[92]=s(")"))])]),t("li",null,[t("p",null,[i[95]||(i[95]=t("code",null,"bias",-1)),i[96]||(i[96]=s(": Bias factor (")),t("mjx-container",I,[(o(),l("svg",Z,i[93]||(i[93]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[94]||(i[94]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[97]||(i[97]=s(") (can be ")),i[98]||(i[98]=t("code",null,"nothing",-1)),i[99]||(i[99]=s(")"))])]),i[101]||(i[101]=e("
  • running_mean: Running mean (can be nothing)

  • running_var: Running variance (can be nothing)

  • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

  • σ: Activation function (default: identity)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

  • momentum: Momentum for updating running mean and variance (default: 0.1f0)

  • ",6))]),i[104]||(i[104]=t("p",null,[t("strong",null,"Returns")],-1)),i[105]||(i[105]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(". And a Named Tuple containing the updated running mean and variance.")],-1)),i[106]||(i[106]=t("p",null,[t("strong",null,"References")],-1)),i[107]||(i[107]=t("p",null,'[1] Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. "Instance normalization: The missing ingredient for fast stylization." arXiv preprint arXiv:1607.08022 (2016).',-1)),i[108]||(i[108]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/lib/LuxLib/src/api/instancenorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",N,[t("summary",null,[i[109]||(i[109]=t("a",{id:"LuxLib.API.layernorm",href:"#LuxLib.API.layernorm"},[t("span",{class:"jlbinding"},"LuxLib.API.layernorm")],-1)),i[110]||(i[110]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[133]||(i[133]=e(`
    julia
    layernorm(x::AbstractArray{xT, N}, scale, bias, σ = identity, dims=1:(N - 1),
    -    epsilon = eps(eltype(x)) ^ (5 / 7)) where {xT, N}

    Layer Normalization. For details see [1].

    `,2)),t("p",null,[i[113]||(i[113]=s("Given an input array ")),t("mjx-container",z,[(o(),l("svg",O,i[111]||(i[111]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D465",d:"M52 289Q59 331 106 386T222 442Q257 442 286 424T329 379Q371 442 430 442Q467 442 494 420T522 361Q522 332 508 314T481 292T458 288Q439 288 427 299T415 328Q415 374 465 391Q454 404 425 404Q412 404 406 402Q368 386 350 336Q290 115 290 78Q290 50 306 38T341 26Q378 26 414 59T463 140Q466 150 469 151T485 153H489Q504 153 504 145Q504 144 502 134Q486 77 440 33T333 -11Q263 -11 227 52Q186 -10 133 -10H127Q78 -10 57 16T35 71Q35 103 54 123T99 143Q142 143 142 101Q142 81 130 66T107 46T94 41L91 40Q91 39 97 36T113 29T132 26Q168 26 194 71Q203 87 217 139T245 247T261 313Q266 340 266 352Q266 380 251 392T217 404Q177 404 142 372T93 290Q91 281 88 280T72 278H58Q52 284 52 289Z",style:{"stroke-width":"3"}})])])],-1)]))),i[112]||(i[112]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"x")])],-1))]),i[114]||(i[114]=s(", this layer computes"))]),t("mjx-container",S,[(o(),l("svg",R,i[115]||(i[115]=[e('',1)]))),i[116]||(i[116]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"y"),t("mo",null,"="),t("mfrac",null,[t("mrow",null,[t("mi",null,"x"),t("mo",null,"−"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",{mathvariant:"double-struck"},"E")]),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]")]),t("msqrt",null,[t("mi",null,"V"),t("mi",null,"a"),t("mi",null,"r"),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]"),t("mo",null,"+"),t("mi",null,"ϵ")])]),t("mo",null,"∗"),t("mi",null,"γ"),t("mo",null,"+"),t("mi",null,"β")])],-1))]),i[134]||(i[134]=t("p",null,[s("and applies the activation function "),t("code",null,"σ"),s(" elementwise to "),t("code",null,"y"),s(".")],-1)),i[135]||(i[135]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[131]||(i[131]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[119]||(i[119]=t("code",null,"scale",-1)),i[120]||(i[120]=s(": Scale factor (")),t("mjx-container",G,[(o(),l("svg",U,i[117]||(i[117]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[118]||(i[118]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[121]||(i[121]=s(") (can be ")),i[122]||(i[122]=t("code",null,"nothing",-1)),i[123]||(i[123]=s(")"))])]),t("li",null,[t("p",null,[i[126]||(i[126]=t("code",null,"bias",-1)),i[127]||(i[127]=s(": Bias factor (")),t("mjx-container",J,[(o(),l("svg",q,i[124]||(i[124]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[125]||(i[125]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[128]||(i[128]=s(") (can be 
")),i[129]||(i[129]=t("code",null,"nothing",-1)),i[130]||(i[130]=s(")"))])]),i[132]||(i[132]=e("
  • σ: Activation function (default: identity)

  • dims: Dimensions along which the mean and std of x is computed. If nothing is passed, the dims are inferred based on the dimensions of scale and bias. For example, if x is N dimensional and scale and bias are M dimensional, then the dims will be 1:(N - M).

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

  • ",3))]),i[136]||(i[136]=t("p",null,[t("strong",null,"Returns")],-1)),i[137]||(i[137]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(".")],-1)),i[138]||(i[138]=t("p",null,[t("strong",null,"References")],-1)),i[139]||(i[139]=t("p",null,'[1] Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer normalization." arXiv preprint arXiv:1607.06450 (2016).',-1)),i[140]||(i[140]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/lib/LuxLib/src/api/layernorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),i[153]||(i[153]=t("h2",{id:"Helper-Functions",tabindex:"-1"},[s("Helper Functions "),t("a",{class:"header-anchor",href:"#Helper-Functions","aria-label":'Permalink to "Helper Functions {#Helper-Functions}"'},"​")],-1)),t("details",X,[t("summary",null,[i[141]||(i[141]=t("a",{id:"LuxLib.internal_operation_mode",href:"#LuxLib.internal_operation_mode"},[t("span",{class:"jlbinding"},"LuxLib.internal_operation_mode")],-1)),i[142]||(i[142]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[143]||(i[143]=e(`
    julia
    internal_operation_mode(xs::Tuple)
    -internal_operation_mode(x::AbstractArray)

    Returns the internal operation mode for the given array(s). This is useful to define custom implementations using different backends like simple Julia broadcasting, Kernel Abstractions, Loop Vectorization, etc.

    Currently supported modes are:

    source

    `,5))])])}const et=r(p,[["render",K]]);export{st as __pageData,et as default}; diff --git a/dev/assets/api_NN_Primitives_LuxLib.md.CPEqKhMV.js b/dev/assets/api_NN_Primitives_LuxLib.md.CY-wCYUU.js similarity index 94% rename from dev/assets/api_NN_Primitives_LuxLib.md.CPEqKhMV.js rename to dev/assets/api_NN_Primitives_LuxLib.md.CY-wCYUU.js index 9b88bc3413..2c6d7f935c 100644 --- a/dev/assets/api_NN_Primitives_LuxLib.md.CPEqKhMV.js +++ b/dev/assets/api_NN_Primitives_LuxLib.md.CY-wCYUU.js @@ -1,10 +1,10 @@ -import{_ as r,c as l,j as t,a as s,G as n,a2 as e,B as d,o}from"./chunks/framework.I-x9Gl6h.js";const st=JSON.parse('{"title":"LuxLib","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/LuxLib.md","filePath":"api/NN_Primitives/LuxLib.md","lastUpdated":null}'),p={name:"api/NN_Primitives/LuxLib.md"},h={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},x={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.07ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 11080.9 891","aria-hidden":"true"},L={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},v={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 
657","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},w={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},F={class:"jldocstring custom-block"},C={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},A={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},j={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},H={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},D={class:"jldocstring custom-block"},M={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},B={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"22.72ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 10042 891","aria-hidden":"true"},V={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},P={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},I={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Z={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 
899","aria-hidden":"true"},N={class:"jldocstring custom-block"},z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},O={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.294ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 572 453","aria-hidden":"true"},S={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},R={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.76ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.034ex",height:"6.063ex",role:"img",focusable:"false",viewBox:"0 -1460 11064.9 2680","aria-hidden":"true"},G={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},U={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},J={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},X={class:"jldocstring custom-block"};function K(W,i,$,Y,_,tt){const a=d("Badge");return o(),l("div",null,[i[144]||(i[144]=t("h1",{id:"LuxLib-API",tabindex:"-1"},[s("LuxLib "),t("a",{class:"header-anchor",href:"#LuxLib-API","aria-label":'Permalink to "LuxLib {#LuxLib-API}"'},"​")],-1)),i[145]||(i[145]=t("p",null,"Backend for Lux.jl",-1)),i[146]||(i[146]=t("h2",{id:"Apply-Activation",tabindex:"-1"},[s("Apply Activation "),t("a",{class:"header-anchor",href:"#Apply-Activation","aria-label":'Permalink to "Apply Activation 
{#Apply-Activation}"'},"​")],-1)),t("details",h,[t("summary",null,[i[0]||(i[0]=t("a",{id:"LuxLib.API.fast_activation",href:"#LuxLib.API.fast_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fast_activation")],-1)),i[1]||(i[1]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[2]||(i[2]=e('
    julia
    fast_activation::F, x::AbstractArray) where {F}

    Compute σ.(x) with the best possible implementation available. On CPUs we unroll the loop and use LoopVectorization.jl to vectorize the computation. On GPUs we simply use broadcasting.

    Note

    This function doesn't replace σ with NNlib.fast_act(σ, ...), that needs to be done by the user if needed.

    Arguments

    Returns

    source

    ',8))]),t("details",u,[t("summary",null,[i[3]||(i[3]=t("a",{id:"LuxLib.API.fast_activation!!",href:"#LuxLib.API.fast_activation!!"},[t("span",{class:"jlbinding"},"LuxLib.API.fast_activation!!")],-1)),i[4]||(i[4]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[5]||(i[5]=e('
    julia
    fast_activation!!::F, x::AbstractArray) where {F}

    Compute σ.(x) with the best possible implementation available. If it is possible to rewrite x in-place, it does so. If x is an immutable array, it falls back to the generic implementation.

    Note

    This function doesn't replace σ with NNlib.fast_act(σ, ...), that needs to be done by the user if needed.

    Load SLEEFPirates.jl to get faster activations

    Certain activation functions are replaced with specialized implementations from SLEEFPirates.jl for FP32. This might lead to faster performance but can cause a slight decrease in accuracy (in the floating point limit).

    Arguments

    Returns

    source

    ',9))]),i[147]||(i[147]=t("h2",{id:"Batched-Operations",tabindex:"-1"},[s("Batched Operations "),t("a",{class:"header-anchor",href:"#Batched-Operations","aria-label":'Permalink to "Batched Operations {#Batched-Operations}"'},"​")],-1)),t("details",m,[t("summary",null,[i[6]||(i[6]=t("a",{id:"LuxLib.API.batched_matmul",href:"#LuxLib.API.batched_matmul"},[t("span",{class:"jlbinding"},"LuxLib.API.batched_matmul")],-1)),i[7]||(i[7]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=e('
    julia
    batched_matmul(x, y)

    Computes the batched matrix multiplication of x and y. For more details see the NNlib documentation on NNlib.batched_mul. This function is mostly a wrapper around batched_mul but attempts to be faster on CPUs.

    Load LoopVectorization.jl to get faster batched matrix multiplication

    On CPUs loading LoopVectorization adds faster implementations of batched matrix multiplication.

    source

    ',4))]),i[148]||(i[148]=t("h2",{id:"Bias-Activation",tabindex:"-1"},[s("Bias Activation "),t("a",{class:"header-anchor",href:"#Bias-Activation","aria-label":'Permalink to "Bias Activation {#Bias-Activation}"'},"​")],-1)),t("details",Q,[t("summary",null,[i[9]||(i[9]=t("a",{id:"LuxLib.API.bias_activation",href:"#LuxLib.API.bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.bias_activation")],-1)),i[10]||(i[10]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=e('
    julia
    bias_activation(σ, x, bias)

    Applies the activation function σ elementwise to the result of broadcasted addition of x and bias along the penultimate dimension. A vector x is treated as a matrix with a single last dimension.

    Arguments

    See also bias_activation!!, fast_activation.

    source

    ',6))]),t("details",k,[t("summary",null,[i[12]||(i[12]=t("a",{id:"LuxLib.API.bias_activation!!",href:"#LuxLib.API.bias_activation!!"},[t("span",{class:"jlbinding"},"LuxLib.API.bias_activation!!")],-1)),i[13]||(i[13]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=e('
    julia
    bias_activation!!(σ, x, bias)

    Same as bias_activation but might update x in-place if possible. Users should not rely on x being mutated; it is recommended to use it as y = bias_activation!!(σ, x, bias). If x is updated in-place, y aliases x.

    See also bias_activation, fast_activation!!.

    source

    ',4))]),i[149]||(i[149]=t("h2",{id:"Convolutional-Layers",tabindex:"-1"},[s("Convolutional Layers "),t("a",{class:"header-anchor",href:"#Convolutional-Layers","aria-label":'Permalink to "Convolutional Layers {#Convolutional-Layers}"'},"​")],-1)),t("details",c,[t("summary",null,[i[15]||(i[15]=t("a",{id:"LuxLib.API.fused_conv_bias_activation",href:"#LuxLib.API.fused_conv_bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fused_conv_bias_activation")],-1)),i[16]||(i[16]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[17]||(i[17]=e(`
    julia
    fused_conv_bias_activation::F, weight::AbstractArray, x::AbstractArray,
    -    b::Optional{<:AbstractVector}, cdims::ConvDims) where {F}

    Computes σ.(conv(x, weight, cdims) .+ b) (b is not exactly broadcasted like this; rather, it is reshaped and broadcasted to the penultimate dimension) with the best possible implementation available. This operation fuses operations into a single kernel if possible, and minimizes reallocations by reusing the output buffer for multiple operations.

    Arguments

    Notes on implementation

    source

    `,7))]),i[150]||(i[150]=t("h2",{id:"dropout",tabindex:"-1"},[s("Dropout "),t("a",{class:"header-anchor",href:"#dropout","aria-label":'Permalink to "Dropout"'},"​")],-1)),t("details",g,[t("summary",null,[i[18]||(i[18]=t("a",{id:"LuxLib.API.alpha_dropout",href:"#LuxLib.API.alpha_dropout"},[t("span",{class:"jlbinding"},"LuxLib.API.alpha_dropout")],-1)),i[19]||(i[19]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=e('
    julia
alpha_dropout(rng::AbstractRNG, x, p, training)
alpha_dropout(rng::AbstractRNG, x, p, training, α, A, B)

Alpha Dropout: a dropout variant that ensures the mean and variance of the output remain the same as those of the input. For details see [1]. Use the second call signature to avoid recomputing the constants for a fixed dropout probability.

    Arguments

    Returns

    References

    [1] Klambauer, Günter, et al. "Self-normalizing neural networks." Advances in neural information processing systems 30 (2017).

    source

    ',9))]),t("details",T,[t("summary",null,[i[21]||(i[21]=t("a",{id:"LuxLib.API.dropout",href:"#LuxLib.API.dropout"},[t("span",{class:"jlbinding"},"LuxLib.API.dropout")],-1)),i[22]||(i[22]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[23]||(i[23]=e(`
    julia
    dropout(rng::AbstractRNG, x, p, training, invp, dims)
    +import{_ as r,c as l,j as t,a as s,G as n,a2 as e,B as d,o}from"./chunks/framework.BetCMmtc.js";const st=JSON.parse('{"title":"LuxLib","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/LuxLib.md","filePath":"api/NN_Primitives/LuxLib.md","lastUpdated":null}'),p={name:"api/NN_Primitives/LuxLib.md"},h={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},x={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},L={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.07ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 11080.9 891","aria-hidden":"true"},f={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},v={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},w={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},C={class:"jldocstring custom-block"},A={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 
657","aria-hidden":"true"},j={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},D={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},H={class:"jldocstring custom-block"},_={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},V={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"22.72ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 10042 891","aria-hidden":"true"},M={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},P={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},B={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},I={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},S={class:"jldocstring custom-block"},N={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Z={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.294ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 572 453","aria-hidden":"true"},z={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 
0",position:"relative"}},R={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.76ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.034ex",height:"6.063ex",role:"img",focusable:"false",viewBox:"0 -1460 11064.9 2680","aria-hidden":"true"},O={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},G={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},U={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},J={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},q={class:"jldocstring custom-block"};function X(K,i,W,$,Y,tt){const a=d("Badge");return o(),l("div",null,[i[144]||(i[144]=t("h1",{id:"LuxLib-API",tabindex:"-1"},[s("LuxLib "),t("a",{class:"header-anchor",href:"#LuxLib-API","aria-label":'Permalink to "LuxLib {#LuxLib-API}"'},"​")],-1)),i[145]||(i[145]=t("p",null,"Backend for Lux.jl",-1)),i[146]||(i[146]=t("h2",{id:"Apply-Activation",tabindex:"-1"},[s("Apply Activation "),t("a",{class:"header-anchor",href:"#Apply-Activation","aria-label":'Permalink to "Apply Activation {#Apply-Activation}"'},"​")],-1)),t("details",h,[t("summary",null,[i[0]||(i[0]=t("a",{id:"LuxLib.API.fast_activation",href:"#LuxLib.API.fast_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fast_activation")],-1)),i[1]||(i[1]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[2]||(i[2]=e('
    julia
fast_activation(σ::F, x::AbstractArray) where {F}

Compute σ.(x) with the best possible implementation available. On CPUs we unroll the loop and use LoopVectorization.jl to vectorize the computation. On GPUs we simply use broadcasting.

    Note

This function doesn't replace σ with NNlib.fast_act(σ, ...); that needs to be done by the user if needed.

    Arguments

    • σ: Activation function

    • x: Input array

    Returns

    • Output Array with the same size as x

    source
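For illustration, a minimal usage sketch (the 4×3 input and relu are arbitrary choices, and LuxLib/NNlib are assumed to be loaded):

```julia
using LuxLib, NNlib

x = randn(Float32, 4, 3)
y = fast_activation(relu, x)  # equivalent to relu.(x), but may dispatch to a faster path

# the result matches plain broadcasting
@assert y ≈ relu.(x)
```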

    ',8))]),t("details",u,[t("summary",null,[i[3]||(i[3]=t("a",{id:"LuxLib.API.fast_activation!!",href:"#LuxLib.API.fast_activation!!"},[t("span",{class:"jlbinding"},"LuxLib.API.fast_activation!!")],-1)),i[4]||(i[4]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[5]||(i[5]=e('
    julia
fast_activation!!(σ::F, x::AbstractArray) where {F}

    Compute σ.(x) with the best possible implementation available. If it is possible to rewrite x in-place, it does so. If x is an immutable array, it falls back to the generic implementation.

    Note

This function doesn't replace σ with NNlib.fast_act(σ, ...); that needs to be done by the user if needed.

    Load SLEEFPirates.jl to get faster activations

Certain activation functions are replaced with specialized implementations from SLEEFPirates.jl for FP32. This might lead to faster performance but can cause a slight decrease in accuracy (at the level of floating-point round-off).

    Arguments

    • σ: Activation function

    • x: Input array

    Returns

    • Output Array with the same size as x

    source
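Because mutation is opportunistic, callers should always bind the return value rather than reusing x. A minimal sketch (input size and relu are arbitrary choices):

```julia
using LuxLib, NNlib

x = randn(Float32, 4, 3)
y = fast_activation!!(relu, x)  # x may or may not have been overwritten; rely only on y

@assert size(y) == size(x)
@assert all(y .>= 0)  # relu output is non-negative
```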

    ',9))]),i[147]||(i[147]=t("h2",{id:"Batched-Operations",tabindex:"-1"},[s("Batched Operations "),t("a",{class:"header-anchor",href:"#Batched-Operations","aria-label":'Permalink to "Batched Operations {#Batched-Operations}"'},"​")],-1)),t("details",m,[t("summary",null,[i[6]||(i[6]=t("a",{id:"LuxLib.API.batched_matmul",href:"#LuxLib.API.batched_matmul"},[t("span",{class:"jlbinding"},"LuxLib.API.batched_matmul")],-1)),i[7]||(i[7]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=e('
    julia
    batched_matmul(x, y)

    Computes the batched matrix multiplication of x and y. For more details see the NNlib documentation on NNlib.batched_mul. This function is mostly a wrapper around batched_mul but attempts to be faster on CPUs.

    Load LoopVectorization.jl to get faster batched matrix multiplication

    On CPUs loading LoopVectorization adds faster implementations of batched matrix multiplication.

    source
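As a sketch of the batched semantics (the shapes here are arbitrary), each slice along the last dimension is multiplied independently:

```julia
using LuxLib

x = randn(Float32, 2, 3, 4)  # four 2×3 matrices
y = randn(Float32, 3, 5, 4)  # four 3×5 matrices
z = batched_matmul(x, y)     # four 2×5 products stacked along dim 3

@assert size(z) == (2, 5, 4)
@assert z[:, :, 1] ≈ x[:, :, 1] * y[:, :, 1]
```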

    ',4))]),i[148]||(i[148]=t("h2",{id:"Bias-Activation",tabindex:"-1"},[s("Bias Activation "),t("a",{class:"header-anchor",href:"#Bias-Activation","aria-label":'Permalink to "Bias Activation {#Bias-Activation}"'},"​")],-1)),t("details",Q,[t("summary",null,[i[9]||(i[9]=t("a",{id:"LuxLib.API.bias_activation",href:"#LuxLib.API.bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.bias_activation")],-1)),i[10]||(i[10]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=e('
    julia
    bias_activation(σ, x, bias)

    Applies the activation function σ elementwise to the result of broadcasted addition of x and bias along the penultimate dimension. A vector x is treated as a matrix with a single last dimension.

    Arguments

    • σ: Activation function

    • x: Input to be transformed

    • bias: Bias to be added. Can be nothing.

    See also bias_activation!!, fast_activation.

    source
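A minimal sketch of the penultimate-dimension broadcast (sizes and tanh are illustrative choices):

```julia
using LuxLib, NNlib

x = randn(Float32, 4, 3, 2)  # bias of length 3 matches the penultimate dimension
bias = randn(Float32, 3)
y = bias_activation(tanh, x, bias)

# equivalent to reshaping the bias and broadcasting manually
@assert y ≈ tanh.(x .+ reshape(bias, 1, 3, 1))
```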

    ',6))]),t("details",T,[t("summary",null,[i[12]||(i[12]=t("a",{id:"LuxLib.API.bias_activation!!",href:"#LuxLib.API.bias_activation!!"},[t("span",{class:"jlbinding"},"LuxLib.API.bias_activation!!")],-1)),i[13]||(i[13]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=e('
    julia
    bias_activation!!(σ, x, bias)

Same as bias_activation but might update x in-place if possible. Users should not rely on x being mutated; it is recommended to call it as y = bias_activation!!(σ, x, bias). If x is updated in-place, y aliases x.

    See also bias_activation, fast_activation!!.

    source

    ',4))]),i[149]||(i[149]=t("h2",{id:"Convolutional-Layers",tabindex:"-1"},[s("Convolutional Layers "),t("a",{class:"header-anchor",href:"#Convolutional-Layers","aria-label":'Permalink to "Convolutional Layers {#Convolutional-Layers}"'},"​")],-1)),t("details",k,[t("summary",null,[i[15]||(i[15]=t("a",{id:"LuxLib.API.fused_conv_bias_activation",href:"#LuxLib.API.fused_conv_bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fused_conv_bias_activation")],-1)),i[16]||(i[16]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[17]||(i[17]=e(`
    julia
fused_conv_bias_activation(σ::F, weight::AbstractArray, x::AbstractArray,
    +    b::Optional{<:AbstractVector}, cdims::ConvDims) where {F}

Computes σ.(conv(x, weight, cdims) .+ b) (b is not broadcast exactly like this; rather, it is reshaped and broadcast along the penultimate dimension) with the best possible implementation available. This operation fuses the computation into a single kernel if possible, and minimizes reallocations by reusing the output buffer for multiple operations.

    Arguments

    • σ: Activation function

    • weight: Weight tensor

    • x: Input tensor

    • b: Bias tensor (can be nothing)

    • cdims: ConvDims object

    Notes on implementation

    • For CUDA Arrays, this uses fused CUDNN kernels when the activation is identity or relu. For other activations, it tries to fuse the operations on the Julia side.

• If any of the inputs don't support setindexing (i.e., immutable arrays), we fall back to the generic non-mutating implementation.

• Maximum memory reuse and operation fusion is guaranteed for ChainRules-compatible AD backends or backends that support mutation. Backends like Tracker and ReverseDiff fall back to the generic implementation.

• For mixed-precision inputs on GPU, we type-promote the inputs to the highest precision, with a warning.

    source
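A hedged usage sketch (the WHCN shapes and relu are arbitrary choices; DenseConvDims comes from NNlib, with its default stride of 1 and no padding):

```julia
using LuxLib, NNlib

x = randn(Float32, 8, 8, 3, 2)       # 8×8 input, 3 channels, batch of 2 (WHCN)
weight = randn(Float32, 3, 3, 3, 4)  # 3×3 kernel, 3 input → 4 output channels
b = randn(Float32, 4)
cdims = DenseConvDims(x, weight)

y = fused_conv_bias_activation(relu, weight, x, b, cdims)
@assert size(y) == (6, 6, 4, 2)      # valid convolution: 8 - 3 + 1 = 6
```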

    `,7))]),i[150]||(i[150]=t("h2",{id:"dropout",tabindex:"-1"},[s("Dropout "),t("a",{class:"header-anchor",href:"#dropout","aria-label":'Permalink to "Dropout"'},"​")],-1)),t("details",c,[t("summary",null,[i[18]||(i[18]=t("a",{id:"LuxLib.API.alpha_dropout",href:"#LuxLib.API.alpha_dropout"},[t("span",{class:"jlbinding"},"LuxLib.API.alpha_dropout")],-1)),i[19]||(i[19]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=e('
    julia
alpha_dropout(rng::AbstractRNG, x, p, training)
alpha_dropout(rng::AbstractRNG, x, p, training, α, A, B)

Alpha Dropout: a dropout variant that ensures the mean and variance of the output remain the same as those of the input. For details see [1]. Use the second call signature to avoid recomputing the constants for a fixed dropout probability.

    Arguments

    • rng: Random number generator

    • x: Input Array

    • p: Probability of an element to be dropped out

• training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context.

• α: -1.7580993408473766. Computed in the limit as x tends to -∞, where selu(x) → -λβ = α

    • A: Scaling factor for the mean

    • B: Scaling factor for the variance

    Returns

    • Output Array after applying alpha dropout

    • Updated state for the random number generator

    References

    [1] Klambauer, Günter, et al. "Self-normalizing neural networks." Advances in neural information processing systems 30 (2017).

    source
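A minimal training-mode sketch (the shape and p are arbitrary choices; note that the updated RNG state is returned alongside the output):

```julia
using LuxLib, Random

rng = Random.default_rng()
x = randn(rng, Float32, 4, 3)
y, rng_new = alpha_dropout(rng, x, 0.5f0, Val(true))

@assert size(y) == size(x)
```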

    ',9))]),t("details",g,[t("summary",null,[i[21]||(i[21]=t("a",{id:"LuxLib.API.dropout",href:"#LuxLib.API.dropout"},[t("span",{class:"jlbinding"},"LuxLib.API.dropout")],-1)),i[22]||(i[22]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[23]||(i[23]=e(`
    julia
    dropout(rng::AbstractRNG, x, p, training, invp, dims)
     dropout(rng::AbstractRNG, x, mask, p, training, update_mask::Union{Val, StaticBool},
    -    invp, dims)

Dropout: a simple way to prevent neural networks from overfitting. For details see [1].

    Arguments

    • rng: Random number generator

    • x: Input Array

• mask: Dropout mask. If not provided, it is constructed automatically

    • p: Probability of an element to be dropped out

    • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

• update_mask: If Val(true) or True(), the mask is generated and used; otherwise, the provided mask is used directly

    • invp: Inverse multiplied to the mask. Calculated as invp = 1 / (1 - p).

    Returns

    • Output Array after applying dropout

    • Dropout Mask (if training == false, the returned value is meaningless)

    • Updated state for the random number generator

    References

    [1] Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." The journal of machine learning research 15.1 (2014): 1929-1958.

    source
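A minimal training-mode sketch of the first call signature (the shape, p, and dims are arbitrary choices):

```julia
using LuxLib, Random

rng = Random.default_rng()
x = randn(rng, Float32, 4, 3)
p = 0.5f0
y, mask, rng_new = dropout(rng, x, p, Val(true), 1 / (1 - p), :)  # invp = 1 / (1 - p)

@assert size(y) == size(x) == size(mask)
```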

    `,9))]),i[151]||(i[151]=t("h2",{id:"Fully-Connected-Layers",tabindex:"-1"},[s("Fully Connected Layers "),t("a",{class:"header-anchor",href:"#Fully-Connected-Layers","aria-label":'Permalink to "Fully Connected Layers {#Fully-Connected-Layers}"'},"​")],-1)),t("details",b,[t("summary",null,[i[24]||(i[24]=t("a",{id:"LuxLib.API.fused_dense_bias_activation",href:"#LuxLib.API.fused_dense_bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fused_dense_bias_activation")],-1)),i[25]||(i[25]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[26]||(i[26]=e('
    julia
fused_dense_bias_activation(σ::F, weight::AbstractMatrix, x::AbstractMatrix,
    b::Optional{<:AbstractVector}) where {F}

    Compute σ.(weight * x .+ b) with the best possible implementation available. Currently this implementation attempts to minimize reallocations by reusing the output buffer for multiple operations.

    Arguments

    • σ: Activation function

    • weight: Weight matrix

    • x: Input matrix

    • b: Bias vector (can be nothing)

    Notes on implementation

• If any of the inputs don't support setindexing (i.e., immutable arrays), we fall back to the generic non-mutating implementation.

• Maximum memory reuse and operation fusion is guaranteed for ChainRules-compatible AD backends or backends that support mutation. Backends like Tracker and ReverseDiff fall back to the generic implementation.

    • For CUDA Arrays, this uses a special fused implementation via cuBLASLt.

    • For small CPU Arrays, we use LoopVectorization.jl. On x86_64 we use Octavian for medium sized matrices. This is overridden if special BLAS implementations are loaded (currently MKL, AppleAccelerate, and BLISBLAS).

!!! tip "Load Octavian.jl"

Loading `Octavian.jl` enables a polyalgorithm that uses different backends based on the input sizes.

    source
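A minimal sketch of the fused dense path (the sizes are illustrative); the result matches the unfused broadcast form:

```julia
using LuxLib, NNlib

weight = randn(Float32, 8, 4)
x = randn(Float32, 4, 16)  # 16 samples, 4 features each
b = randn(Float32, 8)

y = fused_dense_bias_activation(relu, weight, x, b)
@assert y ≈ relu.(weight * x .+ b)
```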

    ',9))]),i[152]||(i[152]=t("h2",{id:"normalization",tabindex:"-1"},[s("Normalization "),t("a",{class:"header-anchor",href:"#normalization","aria-label":'Permalink to "Normalization"'},"​")],-1)),t("details",y,[t("summary",null,[i[27]||(i[27]=t("a",{id:"LuxLib.API.batchnorm",href:"#LuxLib.API.batchnorm"},[t("span",{class:"jlbinding"},"LuxLib.API.batchnorm")],-1)),i[28]||(i[28]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[49]||(i[49]=e(`
    julia
    batchnorm(x, scale, bias, running_mean, running_var, training,
    -    σ=identity, momentum = 0.1f0, epsilon = eps(eltype(x)) ^ (5 // 7))

    Batch Normalization. For details see [1].

    `,2)),t("p",null,[i[31]||(i[31]=s("Batch Normalization computes the mean and variance for each ")),t("mjx-container",x,[(o(),l("svg",f,i[29]||(i[29]=[e('',1)]))),i[30]||(i[30]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mi",null,"N")])])],-1))]),i[32]||(i[32]=s(" input slice and normalises the input accordingly."))]),i[50]||(i[50]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[47]||(i[47]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[35]||(i[35]=t("code",null,"scale",-1)),i[36]||(i[36]=s(": Scale factor (")),t("mjx-container",L,[(o(),l("svg",v,i[33]||(i[33]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 
249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[34]||(i[34]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[37]||(i[37]=s(") (can be ")),i[38]||(i[38]=t("code",null,"nothing",-1)),i[39]||(i[39]=s(")"))])]),t("li",null,[t("p",null,[i[42]||(i[42]=t("code",null,"bias",-1)),i[43]||(i[43]=s(": Bias factor (")),t("mjx-container",E,[(o(),l("svg",w,i[40]||(i[40]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[41]||(i[41]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[44]||(i[44]=s(") (can be ")),i[45]||(i[45]=t("code",null,"nothing",-1)),i[46]||(i[46]=s(")"))])]),i[48]||(i[48]=e("
  • running_mean: Running mean (can be nothing)

  • running_var: Running variance (can be nothing)

  • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

  • σ: Activation function (default: identity)

  • momentum: Momentum for updating running mean and variance (default: 0.1f0)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

  • ",6))]),i[51]||(i[51]=t("p",null,[t("strong",null,"Returns")],-1)),i[52]||(i[52]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(". And a Named Tuple containing the updated running mean and variance.")],-1)),i[53]||(i[53]=t("p",null,[t("strong",null,"References")],-1)),i[54]||(i[54]=t("p",null,'[1] Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." International conference on machine learning. PMLR, 2015.',-1)),i[55]||(i[55]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/lib/LuxLib/src/api/batchnorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",F,[t("summary",null,[i[56]||(i[56]=t("a",{id:"LuxLib.API.groupnorm",href:"#LuxLib.API.groupnorm"},[t("span",{class:"jlbinding"},"LuxLib.API.groupnorm")],-1)),i[57]||(i[57]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[74]||(i[74]=e(`
    julia
    groupnorm(x, scale, bias, groups::Int, σ::F=identity,
    -    epsilon=eps(eltype(x)) ^ (5 // 7))

    Group Normalization. For details see [1].

    This op is similar to batch normalization, but statistics are shared across equally-sized groups of channels and not shared across batch dimension. Thus, group normalization does not depend on the batch composition and does not require maintaining internal state for storing statistics.

    Arguments

    `,4)),t("ul",null,[i[72]||(i[72]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[60]||(i[60]=t("code",null,"scale",-1)),i[61]||(i[61]=s(": Scale factor (")),t("mjx-container",C,[(o(),l("svg",A,i[58]||(i[58]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[59]||(i[59]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[62]||(i[62]=s(") (can be ")),i[63]||(i[63]=t("code",null,"nothing",-1)),i[64]||(i[64]=s(")"))])]),t("li",null,[t("p",null,[i[67]||(i[67]=t("code",null,"bias",-1)),i[68]||(i[68]=s(": Bias factor (")),t("mjx-container",j,[(o(),l("svg",H,i[65]||(i[65]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 
383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[66]||(i[66]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[69]||(i[69]=s(") (can be ")),i[70]||(i[70]=t("code",null,"nothing",-1)),i[71]||(i[71]=s(")"))])]),i[73]||(i[73]=e("
  • groups: Number of groups

  • σ: Activation function (default: identity)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

  • ",3))]),i[75]||(i[75]=t("p",null,[t("strong",null,"Returns")],-1)),i[76]||(i[76]=t("p",null,"The normalized array is returned.",-1)),i[77]||(i[77]=t("p",null,[t("strong",null,"References")],-1)),i[78]||(i[78]=t("p",null,'[1] Wu, Yuxin, and Kaiming He. "Group normalization." Proceedings of the European conference on computer vision (ECCV). 2018.',-1)),i[79]||(i[79]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/lib/LuxLib/src/api/groupnorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",D,[t("summary",null,[i[80]||(i[80]=t("a",{id:"LuxLib.API.instancenorm",href:"#LuxLib.API.instancenorm"},[t("span",{class:"jlbinding"},"LuxLib.API.instancenorm")],-1)),i[81]||(i[81]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[102]||(i[102]=e(`
    julia
    instancenorm(x, scale, bias, training, act, epsilon = eps(eltype(x)) ^ (5 // 7))
    +    invp, dims)

Dropout: a simple way to prevent neural networks from overfitting. For details see [1].

    Arguments

    • rng: Random number generator

    • x: Input Array

• mask: Dropout mask. If not provided, it is constructed automatically

    • p: Probability of an element to be dropped out

    • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

• update_mask: If Val(true) or True(), the mask is generated and used; otherwise, the provided mask is used directly

    • invp: Inverse multiplied to the mask. Calculated as invp = 1 / (1 - p).

    Returns

    • Output Array after applying dropout

    • Dropout Mask (if training == false, the returned value is meaningless)

    • Updated state for the random number generator

    References

    [1] Srivastava, Nitish, et al. "Dropout: a simple way to prevent neural networks from overfitting." The journal of machine learning research 15.1 (2014): 1929-1958.

    source

    `,9))]),i[151]||(i[151]=t("h2",{id:"Fully-Connected-Layers",tabindex:"-1"},[s("Fully Connected Layers "),t("a",{class:"header-anchor",href:"#Fully-Connected-Layers","aria-label":'Permalink to "Fully Connected Layers {#Fully-Connected-Layers}"'},"​")],-1)),t("details",b,[t("summary",null,[i[24]||(i[24]=t("a",{id:"LuxLib.API.fused_dense_bias_activation",href:"#LuxLib.API.fused_dense_bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fused_dense_bias_activation")],-1)),i[25]||(i[25]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[26]||(i[26]=e('
    julia
fused_dense_bias_activation(σ::F, weight::AbstractMatrix, x::AbstractMatrix,
    b::Optional{<:AbstractVector}) where {F}

    Compute σ.(weight * x .+ b) with the best possible implementation available. Currently this implementation attempts to minimize reallocations by reusing the output buffer for multiple operations.

    Arguments

    • σ: Activation function

    • weight: Weight matrix

    • x: Input matrix

    • b: Bias vector (can be nothing)

    Notes on implementation

• If any of the inputs don't support setindexing (i.e., immutable arrays), we fall back to the generic non-mutating implementation.

• Maximum memory reuse and operation fusion is guaranteed for ChainRules-compatible AD backends or backends that support mutation. Backends like Tracker and ReverseDiff fall back to the generic implementation.

    • For CUDA Arrays, this uses a special fused implementation via cuBLASLt.

    • For small CPU Arrays, we use LoopVectorization.jl. On x86_64 we use Octavian for medium sized matrices. This is overridden if special BLAS implementations are loaded (currently MKL, AppleAccelerate, and BLISBLAS).

!!! tip "Load Octavian.jl"

Loading `Octavian.jl` enables a polyalgorithm that uses different backends based on the input sizes.

    source

    ',9))]),i[152]||(i[152]=t("h2",{id:"normalization",tabindex:"-1"},[s("Normalization "),t("a",{class:"header-anchor",href:"#normalization","aria-label":'Permalink to "Normalization"'},"​")],-1)),t("details",y,[t("summary",null,[i[27]||(i[27]=t("a",{id:"LuxLib.API.batchnorm",href:"#LuxLib.API.batchnorm"},[t("span",{class:"jlbinding"},"LuxLib.API.batchnorm")],-1)),i[28]||(i[28]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[49]||(i[49]=e(`
    julia
    batchnorm(x, scale, bias, running_mean, running_var, training,
        σ=identity, momentum = 0.1f0, epsilon = eps(eltype(x)) ^ (5 // 7))

    Batch Normalization. For details see [1].

    `,2)),t("p",null,[i[31]||(i[31]=s("Batch Normalization computes the mean and variance for each ")),t("mjx-container",x,[(o(),l("svg",L,i[29]||(i[29]=[e('',1)]))),i[30]||(i[30]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mi",null,"N")])])],-1))]),i[32]||(i[32]=s(" input slice and normalises the input accordingly."))]),i[50]||(i[50]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[47]||(i[47]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[35]||(i[35]=t("code",null,"scale",-1)),i[36]||(i[36]=s(": Scale factor (")),t("mjx-container",f,[(o(),l("svg",E,i[33]||(i[33]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 
249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[34]||(i[34]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[37]||(i[37]=s(") (can be ")),i[38]||(i[38]=t("code",null,"nothing",-1)),i[39]||(i[39]=s(")"))])]),t("li",null,[t("p",null,[i[42]||(i[42]=t("code",null,"bias",-1)),i[43]||(i[43]=s(": Bias factor (")),t("mjx-container",v,[(o(),l("svg",w,i[40]||(i[40]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[41]||(i[41]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[44]||(i[44]=s(") (can be ")),i[45]||(i[45]=t("code",null,"nothing",-1)),i[46]||(i[46]=s(")"))])]),i[48]||(i[48]=e("
  • running_mean: Running mean (can be nothing)

  • running_var: Running variance (can be nothing)

  • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

  • σ: Activation function (default: identity)

  • momentum: Momentum for updating running mean and variance (default: 0.1f0)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))
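  A training-mode call might look like the following sketch (sizes are illustrative; the second return value holds the updated running statistics):

  ```julia
  using LuxLib, Random

  rng = Random.default_rng()
  x = randn(rng, Float32, 4, 3, 8)   # features × channels × batch
  scale = ones(Float32, 3)
  bias = zeros(Float32, 3)
  rmean = zeros(Float32, 3)
  rvar = ones(Float32, 3)

  # Normalize with tanh activation; stats is a NamedTuple of updated running mean/var
  y, stats = batchnorm(x, scale, bias, rmean, rvar, Val(true), tanh, 0.1f0)
  @assert size(y) == size(x)
  ```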

  • ",6))]),i[51]||(i[51]=t("p",null,[t("strong",null,"Returns")],-1)),i[52]||(i[52]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(". And a Named Tuple containing the updated running mean and variance.")],-1)),i[53]||(i[53]=t("p",null,[t("strong",null,"References")],-1)),i[54]||(i[54]=t("p",null,'[1] Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." International conference on machine learning. PMLR, 2015.',-1)),i[55]||(i[55]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/lib/LuxLib/src/api/batchnorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",C,[t("summary",null,[i[56]||(i[56]=t("a",{id:"LuxLib.API.groupnorm",href:"#LuxLib.API.groupnorm"},[t("span",{class:"jlbinding"},"LuxLib.API.groupnorm")],-1)),i[57]||(i[57]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[74]||(i[74]=e(`
    julia
    groupnorm(x, scale, bias, groups::Int, σ::F=identity,
        epsilon=eps(eltype(x)) ^ (5 // 7))

    Group Normalization. For details see [1].

    This op is similar to batch normalization, but statistics are shared across equally-sized groups of channels rather than across the batch dimension. Thus, group normalization does not depend on the batch composition and does not require maintaining internal state for storing statistics.

    Arguments

    `,4)),t("ul",null,[i[72]||(i[72]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[60]||(i[60]=t("code",null,"scale",-1)),i[61]||(i[61]=s(": Scale factor (")),t("mjx-container",A,[(o(),l("svg",F,i[58]||(i[58]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[59]||(i[59]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[62]||(i[62]=s(") (can be ")),i[63]||(i[63]=t("code",null,"nothing",-1)),i[64]||(i[64]=s(")"))])]),t("li",null,[t("p",null,[i[67]||(i[67]=t("code",null,"bias",-1)),i[68]||(i[68]=s(": Bias factor (")),t("mjx-container",j,[(o(),l("svg",D,i[65]||(i[65]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 
383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[66]||(i[66]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[69]||(i[69]=s(") (can be ")),i[70]||(i[70]=t("code",null,"nothing",-1)),i[71]||(i[71]=s(")"))])]),i[73]||(i[73]=e("
  • groups: Number of groups

  • σ: Activation function (default: identity)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))
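  As a sketch, with an illustrative 4D input whose channel count is divisible by the number of groups:

  ```julia
  using LuxLib, Random

  rng = Random.default_rng()
  x = randn(rng, Float32, 8, 8, 6, 4)   # W × H × channels × batch
  scale = ones(Float32, 6)
  bias = zeros(Float32, 6)

  # 3 groups of 2 channels each, with tanh activation
  y = groupnorm(x, scale, bias, 3, tanh)
  @assert size(y) == size(x)
  ```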

  • ",3))]),i[75]||(i[75]=t("p",null,[t("strong",null,"Returns")],-1)),i[76]||(i[76]=t("p",null,"The normalized array is returned.",-1)),i[77]||(i[77]=t("p",null,[t("strong",null,"References")],-1)),i[78]||(i[78]=t("p",null,'[1] Wu, Yuxin, and Kaiming He. "Group normalization." Proceedings of the European conference on computer vision (ECCV). 2018.',-1)),i[79]||(i[79]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/lib/LuxLib/src/api/groupnorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",H,[t("summary",null,[i[80]||(i[80]=t("a",{id:"LuxLib.API.instancenorm",href:"#LuxLib.API.instancenorm"},[t("span",{class:"jlbinding"},"LuxLib.API.instancenorm")],-1)),i[81]||(i[81]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[102]||(i[102]=e(`
    julia
    instancenorm(x, scale, bias, training, act, epsilon = eps(eltype(x)) ^ (5 // 7))
     instancenorm(x, scale, bias, running_mean, running_var, training, act, momentum,
        epsilon = eps(eltype(x)) ^ (5 // 7))

    Instance Normalization. For details see [1].

    `,2)),t("p",null,[i[84]||(i[84]=s("Instance Normalization computes the mean and variance for each ")),t("mjx-container",M,[(o(),l("svg",B,i[82]||(i[82]=[e('',1)]))),i[83]||(i[83]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("mn",null,"1")])],-1))]),i[85]||(i[85]=s(" input slice and normalises the input accordingly."))]),i[103]||(i[103]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[100]||(i[100]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized (must be atleast 3D)")])],-1)),t("li",null,[t("p",null,[i[88]||(i[88]=t("code",null,"scale",-1)),i[89]||(i[89]=s(": Scale factor (")),t("mjx-container",V,[(o(),l("svg",P,i[86]||(i[86]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 
31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[87]||(i[87]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[90]||(i[90]=s(") (can be ")),i[91]||(i[91]=t("code",null,"nothing",-1)),i[92]||(i[92]=s(")"))])]),t("li",null,[t("p",null,[i[95]||(i[95]=t("code",null,"bias",-1)),i[96]||(i[96]=s(": Bias factor (")),t("mjx-container",I,[(o(),l("svg",Z,i[93]||(i[93]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[94]||(i[94]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[97]||(i[97]=s(") (can be ")),i[98]||(i[98]=t("code",null,"nothing",-1)),i[99]||(i[99]=s(")"))])]),i[101]||(i[101]=e("
  • running_mean: Running mean (can be nothing)

  • running_var: Running variance (can be nothing)

  • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

  • σ: Activation function (default: identity)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

  • momentum: Momentum for updating running mean and variance (default: 0.1f0)
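  A sketch of the stateless variant (no running statistics; the input must be at least 3D and sizes here are illustrative):

  ```julia
  using LuxLib, Random

  rng = Random.default_rng()
  x = randn(rng, Float32, 8, 3, 4)   # features × channels × batch
  scale = ones(Float32, 3)
  bias = zeros(Float32, 3)

  # Training mode with tanh activation; the NamedTuple of stats is discarded here
  y, _ = instancenorm(x, scale, bias, Val(true), tanh)
  @assert size(y) == size(x)
  ```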

  • ",6))]),i[104]||(i[104]=t("p",null,[t("strong",null,"Returns")],-1)),i[105]||(i[105]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(". And a Named Tuple containing the updated running mean and variance.")],-1)),i[106]||(i[106]=t("p",null,[t("strong",null,"References")],-1)),i[107]||(i[107]=t("p",null,'[1] Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. "Instance normalization: The missing ingredient for fast stylization." arXiv preprint arXiv:1607.08022 (2016).',-1)),i[108]||(i[108]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/lib/LuxLib/src/api/instancenorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",N,[t("summary",null,[i[109]||(i[109]=t("a",{id:"LuxLib.API.layernorm",href:"#LuxLib.API.layernorm"},[t("span",{class:"jlbinding"},"LuxLib.API.layernorm")],-1)),i[110]||(i[110]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[133]||(i[133]=e(`
    julia
    layernorm(x::AbstractArray{xT, N}, scale, bias, σ = identity, dims=1:(N - 1),
        epsilon = eps(eltype(x)) ^ (5 / 7)) where {xT, N}

    Layer Normalization. For details see [1].

    `,2)),t("p",null,[i[113]||(i[113]=s("Given an input array ")),t("mjx-container",z,[(o(),l("svg",O,i[111]||(i[111]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D465",d:"M52 289Q59 331 106 386T222 442Q257 442 286 424T329 379Q371 442 430 442Q467 442 494 420T522 361Q522 332 508 314T481 292T458 288Q439 288 427 299T415 328Q415 374 465 391Q454 404 425 404Q412 404 406 402Q368 386 350 336Q290 115 290 78Q290 50 306 38T341 26Q378 26 414 59T463 140Q466 150 469 151T485 153H489Q504 153 504 145Q504 144 502 134Q486 77 440 33T333 -11Q263 -11 227 52Q186 -10 133 -10H127Q78 -10 57 16T35 71Q35 103 54 123T99 143Q142 143 142 101Q142 81 130 66T107 46T94 41L91 40Q91 39 97 36T113 29T132 26Q168 26 194 71Q203 87 217 139T245 247T261 313Q266 340 266 352Q266 380 251 392T217 404Q177 404 142 372T93 290Q91 281 88 280T72 278H58Q52 284 52 289Z",style:{"stroke-width":"3"}})])])],-1)]))),i[112]||(i[112]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"x")])],-1))]),i[114]||(i[114]=s(", this layer computes"))]),t("mjx-container",S,[(o(),l("svg",R,i[115]||(i[115]=[e('',1)]))),i[116]||(i[116]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"y"),t("mo",null,"="),t("mfrac",null,[t("mrow",null,[t("mi",null,"x"),t("mo",null,"−"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",{mathvariant:"double-struck"},"E")]),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]")]),t("msqrt",null,[t("mi",null,"V"),t("mi",null,"a"),t("mi",null,"r"),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]"),t("mo",null,"+"),t("mi",null,"ϵ")])]),t("mo",null,"∗"),t("mi",null,"γ"),t("mo",null,"+"),t("mi",null,"β")])],-1))]),i[134]||(i[134]=t("p",null,[s("and applies the activation function "),t("code",null,"σ"),s(" elementwise to "),t("code",null,"y"),s(".")],-1)),i[135]||(i[135]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[131]||(i[131]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[119]||(i[119]=t("code",null,"scale",-1)),i[120]||(i[120]=s(": Scale factor (")),t("mjx-container",G,[(o(),l("svg",U,i[117]||(i[117]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[118]||(i[118]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[121]||(i[121]=s(") (can be ")),i[122]||(i[122]=t("code",null,"nothing",-1)),i[123]||(i[123]=s(")"))])]),t("li",null,[t("p",null,[i[126]||(i[126]=t("code",null,"bias",-1)),i[127]||(i[127]=s(": Bias factor (")),t("mjx-container",J,[(o(),l("svg",q,i[124]||(i[124]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[125]||(i[125]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[128]||(i[128]=s(") (can be 
")),i[129]||(i[129]=t("code",null,"nothing",-1)),i[130]||(i[130]=s(")"))])]),i[132]||(i[132]=e("
  • σ: Activation function (default: identity)

  • dims: Dimensions along which the mean and std of x is computed. If nothing is passed, the dims are inferred based on the dimensions of scale and bias. For example, if x is N dimensional and scale and bias are M dimensional, then the dims will be 1:(N - M).

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))
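  A sketch using the default dims=1:(N - 1), with scale and bias shaped to broadcast over the normalized dimensions (shapes here are illustrative):

  ```julia
  using LuxLib, Random

  rng = Random.default_rng()
  x = randn(rng, Float32, 4, 6, 2)
  scale = ones(Float32, 4, 6, 1)   # broadcastable over dims 1:2
  bias = zeros(Float32, 4, 6, 1)

  # Normalizes over the first N - 1 dims, then applies tanh elementwise
  y = layernorm(x, scale, bias, tanh)
  @assert size(y) == size(x)
  ```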

  • ",3))]),i[136]||(i[136]=t("p",null,[t("strong",null,"Returns")],-1)),i[137]||(i[137]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(".")],-1)),i[138]||(i[138]=t("p",null,[t("strong",null,"References")],-1)),i[139]||(i[139]=t("p",null,'[1] Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer normalization." arXiv preprint arXiv:1607.06450 (2016).',-1)),i[140]||(i[140]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/521fefded8398091ed0b63c9cbce688d85d12571/lib/LuxLib/src/api/layernorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),i[153]||(i[153]=t("h2",{id:"Helper-Functions",tabindex:"-1"},[s("Helper Functions "),t("a",{class:"header-anchor",href:"#Helper-Functions","aria-label":'Permalink to "Helper Functions {#Helper-Functions}"'},"​")],-1)),t("details",X,[t("summary",null,[i[141]||(i[141]=t("a",{id:"LuxLib.internal_operation_mode",href:"#LuxLib.internal_operation_mode"},[t("span",{class:"jlbinding"},"LuxLib.internal_operation_mode")],-1)),i[142]||(i[142]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[143]||(i[143]=e(`
    julia
    internal_operation_mode(xs::Tuple)
    internal_operation_mode(x::AbstractArray)

    Returns the internal operation mode for the given array(s). This is useful to define custom implementations using different backends like simple Julia broadcasting, Kernel Abstractions, Loop Vectorization, etc.

    Currently supported modes are:

    • GenericBroadcastOp: This is the fallback for most types. For the following types this is the preferred mode:

      • Arrays with fast_scalar_indexing set to false.

      • Static Arrays

      • ReverseDiff Arrays

      • Tracker Arrays

      • ForwardDiff.Dual Arrays

      • Complex Arrays

    • GPUBroadcastOp{dev}: GPU Arrays, where dev is obtained from get_device_type(xs). Dispatches for this mode should preferably use KernelAbstractions or specialized vendor implementations.

    • LoopedArrayOp: CPU arrays that can be optimized using SIMD loops, ideally via LoopVectorization.jl or Polyester.jl.

    source
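    A minimal sketch of querying the mode for a plain CPU array (the exact mode returned depends on the array type and which packages are loaded, so no particular result is asserted here):

    ```julia
    using LuxLib

    x = rand(Float32, 4, 4)
    mode = internal_operation_mode(x)   # mode for a single array
    modes = internal_operation_mode((x, x'))   # mode for a tuple of arrays
    println(typeof(mode))
    ```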

    `,5))])])}const et=r(p,[["render",K]]);export{st as __pageData,et as default}; + epsilon = eps(eltype(x)) ^ (5 // 7))

    Instance Normalization. For details see [1].

    `,2)),t("p",null,[i[84]||(i[84]=s("Instance Normalization computes the mean and variance for each ")),t("mjx-container",_,[(o(),l("svg",V,i[82]||(i[82]=[e('',1)]))),i[83]||(i[83]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("mn",null,"1")])],-1))]),i[85]||(i[85]=s(" input slice and normalises the input accordingly."))]),i[103]||(i[103]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[100]||(i[100]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized (must be atleast 3D)")])],-1)),t("li",null,[t("p",null,[i[88]||(i[88]=t("code",null,"scale",-1)),i[89]||(i[89]=s(": Scale factor (")),t("mjx-container",M,[(o(),l("svg",P,i[86]||(i[86]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 
31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[87]||(i[87]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[90]||(i[90]=s(") (can be ")),i[91]||(i[91]=t("code",null,"nothing",-1)),i[92]||(i[92]=s(")"))])]),t("li",null,[t("p",null,[i[95]||(i[95]=t("code",null,"bias",-1)),i[96]||(i[96]=s(": Bias factor (")),t("mjx-container",B,[(o(),l("svg",I,i[93]||(i[93]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[94]||(i[94]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[97]||(i[97]=s(") (can be ")),i[98]||(i[98]=t("code",null,"nothing",-1)),i[99]||(i[99]=s(")"))])]),i[101]||(i[101]=e("
  • running_mean: Running mean (can be nothing)

  • running_var: Running variance (can be nothing)

  • training: Set to Val(true) or True() if running in training mode. Can be set to nothing to automatically determine if the function is being called within an autodiff context

  • σ: Activation function (default: identity)

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

  • momentum: Momentum for updating running mean and variance (default: 0.1f0)

  • ",6))]),i[104]||(i[104]=t("p",null,[t("strong",null,"Returns")],-1)),i[105]||(i[105]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(". And a Named Tuple containing the updated running mean and variance.")],-1)),i[106]||(i[106]=t("p",null,[t("strong",null,"References")],-1)),i[107]||(i[107]=t("p",null,'[1] Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. "Instance normalization: The missing ingredient for fast stylization." arXiv preprint arXiv:1607.08022 (2016).',-1)),i[108]||(i[108]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/lib/LuxLib/src/api/instancenorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",S,[t("summary",null,[i[109]||(i[109]=t("a",{id:"LuxLib.API.layernorm",href:"#LuxLib.API.layernorm"},[t("span",{class:"jlbinding"},"LuxLib.API.layernorm")],-1)),i[110]||(i[110]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[133]||(i[133]=e(`
    julia
    layernorm(x::AbstractArray{xT, N}, scale, bias, σ = identity, dims=1:(N - 1),
        epsilon = eps(eltype(x)) ^ (5 / 7)) where {xT, N}

    Layer Normalization. For details see [1].

    `,2)),t("p",null,[i[113]||(i[113]=s("Given an input array ")),t("mjx-container",N,[(o(),l("svg",Z,i[111]||(i[111]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D465",d:"M52 289Q59 331 106 386T222 442Q257 442 286 424T329 379Q371 442 430 442Q467 442 494 420T522 361Q522 332 508 314T481 292T458 288Q439 288 427 299T415 328Q415 374 465 391Q454 404 425 404Q412 404 406 402Q368 386 350 336Q290 115 290 78Q290 50 306 38T341 26Q378 26 414 59T463 140Q466 150 469 151T485 153H489Q504 153 504 145Q504 144 502 134Q486 77 440 33T333 -11Q263 -11 227 52Q186 -10 133 -10H127Q78 -10 57 16T35 71Q35 103 54 123T99 143Q142 143 142 101Q142 81 130 66T107 46T94 41L91 40Q91 39 97 36T113 29T132 26Q168 26 194 71Q203 87 217 139T245 247T261 313Q266 340 266 352Q266 380 251 392T217 404Q177 404 142 372T93 290Q91 281 88 280T72 278H58Q52 284 52 289Z",style:{"stroke-width":"3"}})])])],-1)]))),i[112]||(i[112]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"x")])],-1))]),i[114]||(i[114]=s(", this layer computes"))]),t("mjx-container",z,[(o(),l("svg",R,i[115]||(i[115]=[e('',1)]))),i[116]||(i[116]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"y"),t("mo",null,"="),t("mfrac",null,[t("mrow",null,[t("mi",null,"x"),t("mo",null,"−"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",{mathvariant:"double-struck"},"E")]),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]")]),t("msqrt",null,[t("mi",null,"V"),t("mi",null,"a"),t("mi",null,"r"),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]"),t("mo",null,"+"),t("mi",null,"ϵ")])]),t("mo",null,"∗"),t("mi",null,"γ"),t("mo",null,"+"),t("mi",null,"β")])],-1))]),i[134]||(i[134]=t("p",null,[s("and applies the activation function "),t("code",null,"σ"),s(" elementwise to "),t("code",null,"y"),s(".")],-1)),i[135]||(i[135]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[131]||(i[131]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[119]||(i[119]=t("code",null,"scale",-1)),i[120]||(i[120]=s(": Scale factor (")),t("mjx-container",O,[(o(),l("svg",G,i[117]||(i[117]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[118]||(i[118]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[121]||(i[121]=s(") (can be ")),i[122]||(i[122]=t("code",null,"nothing",-1)),i[123]||(i[123]=s(")"))])]),t("li",null,[t("p",null,[i[126]||(i[126]=t("code",null,"bias",-1)),i[127]||(i[127]=s(": Bias factor (")),t("mjx-container",U,[(o(),l("svg",J,i[124]||(i[124]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[125]||(i[125]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[128]||(i[128]=s(") (can be 
")),i[129]||(i[129]=t("code",null,"nothing",-1)),i[130]||(i[130]=s(")"))])]),i[132]||(i[132]=e("
  • σ: Activation function (default: identity)

  • dims: Dimensions along which the mean and std of x are computed. If nothing is passed, the dims are inferred based on the dimensions of scale and bias. For example, if x is N dimensional and scale and bias are M dimensional, then the dims will be 1:(N - M).

  • epsilon: Value added to the denominator for numerical stability (default: eps(eltype(x)) ^ (5 / 7))

  • ",3))]),i[136]||(i[136]=t("p",null,[t("strong",null,"Returns")],-1)),i[137]||(i[137]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(".")],-1)),i[138]||(i[138]=t("p",null,[t("strong",null,"References")],-1)),i[139]||(i[139]=t("p",null,'[1] Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer normalization." arXiv preprint arXiv:1607.06450 (2016).',-1)),i[140]||(i[140]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/lib/LuxLib/src/api/layernorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),i[153]||(i[153]=t("h2",{id:"Helper-Functions",tabindex:"-1"},[s("Helper Functions "),t("a",{class:"header-anchor",href:"#Helper-Functions","aria-label":'Permalink to "Helper Functions {#Helper-Functions}"'},"​")],-1)),t("details",q,[t("summary",null,[i[141]||(i[141]=t("a",{id:"LuxLib.internal_operation_mode",href:"#LuxLib.internal_operation_mode"},[t("span",{class:"jlbinding"},"LuxLib.internal_operation_mode")],-1)),i[142]||(i[142]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[143]||(i[143]=e(`
    julia
    internal_operation_mode(xs::Tuple)
    +internal_operation_mode(x::AbstractArray)

    Returns the internal operation mode for the given array(s). This is useful to define custom implementations using different backends like simple Julia broadcasting, Kernel Abstractions, Loop Vectorization, etc.

    Currently supported modes are:

    source

    `,5))])])}const et=r(p,[["render",X]]);export{st as __pageData,et as default}; diff --git a/dev/assets/api_NN_Primitives_LuxLib.md.CY-wCYUU.lean.js b/dev/assets/api_NN_Primitives_LuxLib.md.CY-wCYUU.lean.js new file mode 100644 index 0000000000..fc66c7bf07 --- /dev/null +++ b/dev/assets/api_NN_Primitives_LuxLib.md.CY-wCYUU.lean.js @@ -0,0 +1 @@ +import{_ as r,c as l,j as t,a as s,G as n,a2 as e,B as d,o}from"./chunks/framework.BetCMmtc.js";const st=JSON.parse('{"title":"LuxLib","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/LuxLib.md","filePath":"api/NN_Primitives/LuxLib.md","lastUpdated":null}'),p={name:"api/NN_Primitives/LuxLib.md"},h={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},k={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},x={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},L={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.07ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 11080.9 891","aria-hidden":"true"},f={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},v={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},w={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 
899","aria-hidden":"true"},C={class:"jldocstring custom-block"},A={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},j={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},D={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},H={class:"jldocstring custom-block"},_={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},V={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.471ex"},xmlns:"http://www.w3.org/2000/svg",width:"22.72ex",height:"2.016ex",role:"img",focusable:"false",viewBox:"0 -683 10042 891","aria-hidden":"true"},M={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},P={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},B={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},I={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},S={class:"jldocstring custom-block"},N={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Z={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.294ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 572 
453","aria-hidden":"true"},z={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},R={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.76ex"},xmlns:"http://www.w3.org/2000/svg",width:"25.034ex",height:"6.063ex",role:"img",focusable:"false",viewBox:"0 -1460 11064.9 2680","aria-hidden":"true"},O={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},G={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.489ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.229ex",height:"1.486ex",role:"img",focusable:"false",viewBox:"0 -441 543 657","aria-hidden":"true"},U={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},J={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.281ex",height:"2.034ex",role:"img",focusable:"false",viewBox:"0 -705 566 899","aria-hidden":"true"},q={class:"jldocstring custom-block"};function X(K,i,W,$,Y,tt){const a=d("Badge");return o(),l("div",null,[i[144]||(i[144]=t("h1",{id:"LuxLib-API",tabindex:"-1"},[s("LuxLib "),t("a",{class:"header-anchor",href:"#LuxLib-API","aria-label":'Permalink to "LuxLib {#LuxLib-API}"'},"​")],-1)),i[145]||(i[145]=t("p",null,"Backend for Lux.jl",-1)),i[146]||(i[146]=t("h2",{id:"Apply-Activation",tabindex:"-1"},[s("Apply Activation "),t("a",{class:"header-anchor",href:"#Apply-Activation","aria-label":'Permalink to "Apply Activation {#Apply-Activation}"'},"​")],-1)),t("details",h,[t("summary",null,[i[0]||(i[0]=t("a",{id:"LuxLib.API.fast_activation",href:"#LuxLib.API.fast_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fast_activation")],-1)),i[1]||(i[1]=s()),n(a,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),i[2]||(i[2]=e("",8))]),t("details",u,[t("summary",null,[i[3]||(i[3]=t("a",{id:"LuxLib.API.fast_activation!!",href:"#LuxLib.API.fast_activation!!"},[t("span",{class:"jlbinding"},"LuxLib.API.fast_activation!!")],-1)),i[4]||(i[4]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[5]||(i[5]=e("",9))]),i[147]||(i[147]=t("h2",{id:"Batched-Operations",tabindex:"-1"},[s("Batched Operations "),t("a",{class:"header-anchor",href:"#Batched-Operations","aria-label":'Permalink to "Batched Operations {#Batched-Operations}"'},"​")],-1)),t("details",m,[t("summary",null,[i[6]||(i[6]=t("a",{id:"LuxLib.API.batched_matmul",href:"#LuxLib.API.batched_matmul"},[t("span",{class:"jlbinding"},"LuxLib.API.batched_matmul")],-1)),i[7]||(i[7]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[8]||(i[8]=e("",4))]),i[148]||(i[148]=t("h2",{id:"Bias-Activation",tabindex:"-1"},[s("Bias Activation "),t("a",{class:"header-anchor",href:"#Bias-Activation","aria-label":'Permalink to "Bias Activation {#Bias-Activation}"'},"​")],-1)),t("details",Q,[t("summary",null,[i[9]||(i[9]=t("a",{id:"LuxLib.API.bias_activation",href:"#LuxLib.API.bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.bias_activation")],-1)),i[10]||(i[10]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[11]||(i[11]=e("",6))]),t("details",T,[t("summary",null,[i[12]||(i[12]=t("a",{id:"LuxLib.API.bias_activation!!",href:"#LuxLib.API.bias_activation!!"},[t("span",{class:"jlbinding"},"LuxLib.API.bias_activation!!")],-1)),i[13]||(i[13]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[14]||(i[14]=e("",4))]),i[149]||(i[149]=t("h2",{id:"Convolutional-Layers",tabindex:"-1"},[s("Convolutional Layers "),t("a",{class:"header-anchor",href:"#Convolutional-Layers","aria-label":'Permalink to "Convolutional Layers 
{#Convolutional-Layers}"'},"​")],-1)),t("details",k,[t("summary",null,[i[15]||(i[15]=t("a",{id:"LuxLib.API.fused_conv_bias_activation",href:"#LuxLib.API.fused_conv_bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fused_conv_bias_activation")],-1)),i[16]||(i[16]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[17]||(i[17]=e("",7))]),i[150]||(i[150]=t("h2",{id:"dropout",tabindex:"-1"},[s("Dropout "),t("a",{class:"header-anchor",href:"#dropout","aria-label":'Permalink to "Dropout"'},"​")],-1)),t("details",c,[t("summary",null,[i[18]||(i[18]=t("a",{id:"LuxLib.API.alpha_dropout",href:"#LuxLib.API.alpha_dropout"},[t("span",{class:"jlbinding"},"LuxLib.API.alpha_dropout")],-1)),i[19]||(i[19]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[20]||(i[20]=e("",9))]),t("details",g,[t("summary",null,[i[21]||(i[21]=t("a",{id:"LuxLib.API.dropout",href:"#LuxLib.API.dropout"},[t("span",{class:"jlbinding"},"LuxLib.API.dropout")],-1)),i[22]||(i[22]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[23]||(i[23]=e("",9))]),i[151]||(i[151]=t("h2",{id:"Fully-Connected-Layers",tabindex:"-1"},[s("Fully Connected Layers "),t("a",{class:"header-anchor",href:"#Fully-Connected-Layers","aria-label":'Permalink to "Fully Connected Layers {#Fully-Connected-Layers}"'},"​")],-1)),t("details",b,[t("summary",null,[i[24]||(i[24]=t("a",{id:"LuxLib.API.fused_dense_bias_activation",href:"#LuxLib.API.fused_dense_bias_activation"},[t("span",{class:"jlbinding"},"LuxLib.API.fused_dense_bias_activation")],-1)),i[25]||(i[25]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[26]||(i[26]=e("",9))]),i[152]||(i[152]=t("h2",{id:"normalization",tabindex:"-1"},[s("Normalization "),t("a",{class:"header-anchor",href:"#normalization","aria-label":'Permalink to 
"Normalization"'},"​")],-1)),t("details",y,[t("summary",null,[i[27]||(i[27]=t("a",{id:"LuxLib.API.batchnorm",href:"#LuxLib.API.batchnorm"},[t("span",{class:"jlbinding"},"LuxLib.API.batchnorm")],-1)),i[28]||(i[28]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[49]||(i[49]=e("",2)),t("p",null,[i[31]||(i[31]=s("Batch Normalization computes the mean and variance for each ")),t("mjx-container",x,[(o(),l("svg",L,i[29]||(i[29]=[e("",1)]))),i[30]||(i[30]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mi",null,"N")])])],-1))]),i[32]||(i[32]=s(" input slice and normalises the input accordingly."))]),i[50]||(i[50]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[47]||(i[47]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[35]||(i[35]=t("code",null,"scale",-1)),i[36]||(i[36]=s(": Scale factor (")),t("mjx-container",f,[(o(),l("svg",E,i[33]||(i[33]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 
166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[34]||(i[34]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[37]||(i[37]=s(") (can be ")),i[38]||(i[38]=t("code",null,"nothing",-1)),i[39]||(i[39]=s(")"))])]),t("li",null,[t("p",null,[i[42]||(i[42]=t("code",null,"bias",-1)),i[43]||(i[43]=s(": Bias factor (")),t("mjx-container",v,[(o(),l("svg",w,i[40]||(i[40]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 
431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[41]||(i[41]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[44]||(i[44]=s(") (can be ")),i[45]||(i[45]=t("code",null,"nothing",-1)),i[46]||(i[46]=s(")"))])]),i[48]||(i[48]=e("",6))]),i[51]||(i[51]=t("p",null,[t("strong",null,"Returns")],-1)),i[52]||(i[52]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(". And a Named Tuple containing the updated running mean and variance.")],-1)),i[53]||(i[53]=t("p",null,[t("strong",null,"References")],-1)),i[54]||(i[54]=t("p",null,'[1] Ioffe, Sergey, and Christian Szegedy. "Batch normalization: Accelerating deep network training by reducing internal covariate shift." International conference on machine learning. 
PMLR, 2015.',-1)),i[55]||(i[55]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/lib/LuxLib/src/api/batchnorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",C,[t("summary",null,[i[56]||(i[56]=t("a",{id:"LuxLib.API.groupnorm",href:"#LuxLib.API.groupnorm"},[t("span",{class:"jlbinding"},"LuxLib.API.groupnorm")],-1)),i[57]||(i[57]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[74]||(i[74]=e("",4)),t("ul",null,[i[72]||(i[72]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[60]||(i[60]=t("code",null,"scale",-1)),i[61]||(i[61]=s(": Scale factor (")),t("mjx-container",A,[(o(),l("svg",F,i[58]||(i[58]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[59]||(i[59]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[62]||(i[62]=s(") (can be 
")),i[63]||(i[63]=t("code",null,"nothing",-1)),i[64]||(i[64]=s(")"))])]),t("li",null,[t("p",null,[i[67]||(i[67]=t("code",null,"bias",-1)),i[68]||(i[68]=s(": Bias factor (")),t("mjx-container",j,[(o(),l("svg",D,i[65]||(i[65]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[66]||(i[66]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[69]||(i[69]=s(") (can be ")),i[70]||(i[70]=t("code",null,"nothing",-1)),i[71]||(i[71]=s(")"))])]),i[73]||(i[73]=e("",3))]),i[75]||(i[75]=t("p",null,[t("strong",null,"Returns")],-1)),i[76]||(i[76]=t("p",null,"The normalized array is returned.",-1)),i[77]||(i[77]=t("p",null,[t("strong",null,"References")],-1)),i[78]||(i[78]=t("p",null,'[1] Wu, Yuxin, and Kaiming He. "Group normalization." 
Proceedings of the European conference on computer vision (ECCV). 2018.',-1)),i[79]||(i[79]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/lib/LuxLib/src/api/groupnorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",H,[t("summary",null,[i[80]||(i[80]=t("a",{id:"LuxLib.API.instancenorm",href:"#LuxLib.API.instancenorm"},[t("span",{class:"jlbinding"},"LuxLib.API.instancenorm")],-1)),i[81]||(i[81]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[102]||(i[102]=e("",2)),t("p",null,[i[84]||(i[84]=s("Instance Normalization computes the mean and variance for each ")),t("mjx-container",_,[(o(),l("svg",V,i[82]||(i[82]=[e("",1)]))),i[83]||(i[83]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("msub",null,[t("mi",null,"D"),t("mn",null,"1")]),t("mo",null,"×"),t("mo",null,"."),t("mo",null,"."),t("mo",null,"."),t("mo",null,"×"),t("msub",null,[t("mi",null,"D"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",null,"N"),t("mo",null,"−"),t("mn",null,"2")])]),t("mo",null,"×"),t("mn",null,"1"),t("mo",null,"×"),t("mn",null,"1")])],-1))]),i[85]||(i[85]=s(" input slice and normalises the input accordingly."))]),i[103]||(i[103]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[100]||(i[100]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized (must be atleast 3D)")])],-1)),t("li",null,[t("p",null,[i[88]||(i[88]=t("code",null,"scale",-1)),i[89]||(i[89]=s(": Scale factor 
(")),t("mjx-container",M,[(o(),l("svg",P,i[86]||(i[86]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[87]||(i[87]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[90]||(i[90]=s(") (can be ")),i[91]||(i[91]=t("code",null,"nothing",-1)),i[92]||(i[92]=s(")"))])]),t("li",null,[t("p",null,[i[95]||(i[95]=t("code",null,"bias",-1)),i[96]||(i[96]=s(": Bias factor (")),t("mjx-container",B,[(o(),l("svg",I,i[93]||(i[93]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 
593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[94]||(i[94]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[97]||(i[97]=s(") (can be ")),i[98]||(i[98]=t("code",null,"nothing",-1)),i[99]||(i[99]=s(")"))])]),i[101]||(i[101]=e("",6))]),i[104]||(i[104]=t("p",null,[t("strong",null,"Returns")],-1)),i[105]||(i[105]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(". And a Named Tuple containing the updated running mean and variance.")],-1)),i[106]||(i[106]=t("p",null,[t("strong",null,"References")],-1)),i[107]||(i[107]=t("p",null,'[1] Ulyanov, Dmitry, Andrea Vedaldi, and Victor Lempitsky. "Instance normalization: The missing ingredient for fast stylization." 
arXiv preprint arXiv:1607.08022 (2016).',-1)),i[108]||(i[108]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/lib/LuxLib/src/api/instancenorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),t("details",S,[t("summary",null,[i[109]||(i[109]=t("a",{id:"LuxLib.API.layernorm",href:"#LuxLib.API.layernorm"},[t("span",{class:"jlbinding"},"LuxLib.API.layernorm")],-1)),i[110]||(i[110]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[133]||(i[133]=e("",2)),t("p",null,[i[113]||(i[113]=s("Given an input array ")),t("mjx-container",N,[(o(),l("svg",Z,i[111]||(i[111]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D465",d:"M52 289Q59 331 106 386T222 442Q257 442 286 424T329 379Q371 442 430 442Q467 442 494 420T522 361Q522 332 508 314T481 292T458 288Q439 288 427 299T415 328Q415 374 465 391Q454 404 425 404Q412 404 406 402Q368 386 350 336Q290 115 290 78Q290 50 306 38T341 26Q378 26 414 59T463 140Q466 150 469 151T485 153H489Q504 153 504 145Q504 144 502 134Q486 77 440 33T333 -11Q263 -11 227 52Q186 -10 133 -10H127Q78 -10 57 16T35 71Q35 103 54 123T99 143Q142 143 142 101Q142 81 130 66T107 46T94 41L91 40Q91 39 97 36T113 29T132 26Q168 26 194 71Q203 87 217 139T245 247T261 313Q266 340 266 352Q266 380 251 392T217 404Q177 404 142 372T93 290Q91 281 88 280T72 278H58Q52 284 52 289Z",style:{"stroke-width":"3"}})])])],-1)]))),i[112]||(i[112]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"x")])],-1))]),i[114]||(i[114]=s(", this layer computes"))]),t("mjx-container",z,[(o(),l("svg",R,i[115]||(i[115]=[e("",1)]))),i[116]||(i[116]=t("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[t("mi",null,"y"),t("mo",null,"="),t("mfrac",null,[t("mrow",null,[t("mi",null,"x"),t("mo",null,"−"),t("mrow",{"data-mjx-texclass":"ORD"},[t("mi",{mathvariant:"double-struck"},"E")]),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]")]),t("msqrt",null,[t("mi",null,"V"),t("mi",null,"a"),t("mi",null,"r"),t("mo",{stretchy:"false"},"["),t("mi",null,"x"),t("mo",{stretchy:"false"},"]"),t("mo",null,"+"),t("mi",null,"ϵ")])]),t("mo",null,"∗"),t("mi",null,"γ"),t("mo",null,"+"),t("mi",null,"β")])],-1))]),i[134]||(i[134]=t("p",null,[s("and applies the activation function "),t("code",null,"σ"),s(" elementwise to "),t("code",null,"y"),s(".")],-1)),i[135]||(i[135]=t("p",null,[t("strong",null,"Arguments")],-1)),t("ul",null,[i[131]||(i[131]=t("li",null,[t("p",null,[t("code",null,"x"),s(": Input to be Normalized")])],-1)),t("li",null,[t("p",null,[i[119]||(i[119]=t("code",null,"scale",-1)),i[120]||(i[120]=s(": Scale factor (")),t("mjx-container",O,[(o(),l("svg",G,i[117]||(i[117]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FE",d:"M31 249Q11 249 11 258Q11 275 26 304T66 365T129 418T206 441Q233 441 239 440Q287 
429 318 386T371 255Q385 195 385 170Q385 166 386 166L398 193Q418 244 443 300T486 391T508 430Q510 431 524 431H537Q543 425 543 422Q543 418 522 378T463 251T391 71Q385 55 378 6T357 -100Q341 -165 330 -190T303 -216Q286 -216 286 -188Q286 -138 340 32L346 51L347 69Q348 79 348 100Q348 257 291 317Q251 355 196 355Q148 355 108 329T51 260Q49 251 47 251Q45 249 31 249Z",style:{"stroke-width":"3"}})])])],-1)]))),i[118]||(i[118]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"γ")])],-1))]),i[121]||(i[121]=s(") (can be ")),i[122]||(i[122]=t("code",null,"nothing",-1)),i[123]||(i[123]=s(")"))])]),t("li",null,[t("p",null,[i[126]||(i[126]=t("code",null,"bias",-1)),i[127]||(i[127]=s(": Bias factor (")),t("mjx-container",U,[(o(),l("svg",J,i[124]||(i[124]=[t("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[t("g",{"data-mml-node":"math"},[t("g",{"data-mml-node":"mi"},[t("path",{"data-c":"1D6FD",d:"M29 -194Q23 -188 23 -186Q23 -183 102 134T186 465Q208 533 243 584T309 658Q365 705 429 705H431Q493 705 533 667T573 570Q573 465 469 396L482 383Q533 332 533 252Q533 139 448 65T257 -10Q227 -10 203 -2T165 17T143 40T131 59T126 65L62 -188Q60 -194 42 -194H29ZM353 431Q392 431 427 419L432 422Q436 426 439 429T449 439T461 453T472 471T484 495T493 524T501 560Q503 569 503 593Q503 611 502 616Q487 667 426 667Q384 667 347 643T286 582T247 514T224 455Q219 439 186 308T152 168Q151 163 151 147Q151 99 173 68Q204 26 260 26Q302 26 349 51T425 137Q441 171 449 214T457 279Q457 337 422 372Q380 358 347 358H337Q258 358 258 389Q258 396 261 403Q275 431 353 
431Z",style:{"stroke-width":"3"}})])])],-1)]))),i[125]||(i[125]=t("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[t("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[t("mi",null,"β")])],-1))]),i[128]||(i[128]=s(") (can be ")),i[129]||(i[129]=t("code",null,"nothing",-1)),i[130]||(i[130]=s(")"))])]),i[132]||(i[132]=e("",3))]),i[136]||(i[136]=t("p",null,[t("strong",null,"Returns")],-1)),i[137]||(i[137]=t("p",null,[s("Normalized Array of same size as "),t("code",null,"x"),s(".")],-1)),i[138]||(i[138]=t("p",null,[t("strong",null,"References")],-1)),i[139]||(i[139]=t("p",null,'[1] Ba, Jimmy Lei, Jamie Ryan Kiros, and Geoffrey E. Hinton. "Layer normalization." arXiv preprint arXiv:1607.06450 (2016).',-1)),i[140]||(i[140]=t("p",null,[t("a",{href:"https://github.com/LuxDL/Lux.jl/blob/8a1cb65caa52dd70b95645780749072e6a3d28c7/lib/LuxLib/src/api/layernorm.jl#L1",target:"_blank",rel:"noreferrer"},"source")],-1))]),i[153]||(i[153]=t("h2",{id:"Helper-Functions",tabindex:"-1"},[s("Helper Functions "),t("a",{class:"header-anchor",href:"#Helper-Functions","aria-label":'Permalink to "Helper Functions {#Helper-Functions}"'},"​")],-1)),t("details",q,[t("summary",null,[i[141]||(i[141]=t("a",{id:"LuxLib.internal_operation_mode",href:"#LuxLib.internal_operation_mode"},[t("span",{class:"jlbinding"},"LuxLib.internal_operation_mode")],-1)),i[142]||(i[142]=s()),n(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),i[143]||(i[143]=e("",5))])])}const et=r(p,[["render",X]]);export{st as __pageData,et as default}; diff --git a/dev/assets/api_NN_Primitives_NNlib.md.DHUiCckb.js b/dev/assets/api_NN_Primitives_NNlib.md.-JRexgX5.js similarity index 98% 
rename from dev/assets/api_NN_Primitives_NNlib.md.DHUiCckb.js rename to dev/assets/api_NN_Primitives_NNlib.md.-JRexgX5.js index d7146387c6..df4729c88d 100644 --- a/dev/assets/api_NN_Primitives_NNlib.md.DHUiCckb.js +++ b/dev/assets/api_NN_Primitives_NNlib.md.-JRexgX5.js @@ -1,5 +1,5 @@ -import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framework.I-x9Gl6h.js";const Zi=JSON.parse('{"title":"NNlib","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/NNlib.md","filePath":"api/NN_Primitives/NNlib.md","lastUpdated":null}'),d={name:"api/NN_Primitives/NNlib.md"},r={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},N={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},H={class:"jldocstring custom-block"},V={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},M={class:"jldocstring custom-block"},z={class:"jldocstring custom-block"},Z={class:"jldocstring custom-block"},I={class:"jldocstring custom-block"},O={class:"jldocstring custom-block"},P={class:"jldocstring custom-block"},S={class:"jldocstring custom-block"},q={class:"jldocstring custom-block"},R={class:"jldocstring custom-block"},W={class:"jldocstring custom-block"},G={class:"jldocstring 
custom-block"},U={class:"jldocstring custom-block"},J={class:"jldocstring custom-block"},K={class:"jldocstring custom-block"},X={class:"jldocstring custom-block"},$={class:"jldocstring custom-block"},Y={class:"jldocstring custom-block"},ss={class:"jldocstring custom-block"},is={class:"jldocstring custom-block"},as={class:"jldocstring custom-block"},ts={class:"jldocstring custom-block"},ns={class:"jldocstring custom-block"},ls={class:"jldocstring custom-block"},es={class:"jldocstring custom-block"},hs={class:"jldocstring custom-block"},ps={class:"jldocstring custom-block"},ks={class:"jldocstring custom-block"},ds={class:"jldocstring custom-block"},rs={class:"jldocstring custom-block"},os={class:"jldocstring custom-block"},gs={class:"jldocstring custom-block"},ys={class:"jldocstring custom-block"},Es={class:"jldocstring custom-block"},Fs={class:"jldocstring custom-block"},cs={class:"jldocstring custom-block"},Cs={class:"jldocstring custom-block"},us={class:"jldocstring custom-block"},ms={class:"jldocstring custom-block"},bs={class:"jldocstring custom-block"},Qs={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Ts={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.912ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.451ex",height:"2.869ex",role:"img",focusable:"false",viewBox:"0 -864.9 10365.1 1267.9","aria-hidden":"true"},Bs={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},fs={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 683","aria-hidden":"true"},vs={class:"jldocstring custom-block"},Ns={class:"jldocstring custom-block"},xs={class:"jldocstring 
custom-block"},js={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},As={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.817ex",height:"1.441ex",role:"img",focusable:"false",viewBox:"0 -626 361 637","aria-hidden":"true"},ws={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Ds={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.466ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.956ex",height:"2.036ex",role:"img",focusable:"false",viewBox:"0 -694 6168.4 900","aria-hidden":"true"},Ls={class:"jldocstring custom-block"},Hs={class:"jldocstring custom-block"},Vs={class:"jldocstring custom-block"},_s={class:"jldocstring custom-block"},Ms={class:"jldocstring custom-block"},zs={class:"jldocstring custom-block"},Zs={class:"jldocstring custom-block"},Is={class:"jldocstring custom-block"},Os={class:"jldocstring custom-block"},Ps={class:"jldocstring custom-block"},Ss={class:"jldocstring custom-block"},qs={class:"jldocstring custom-block"},Rs={class:"jldocstring custom-block"},Ws={class:"jldocstring custom-block"},Gs={class:"jldocstring custom-block"},Us={class:"jldocstring custom-block"},Js={class:"jldocstring custom-block"},Ks={class:"jldocstring custom-block"},Xs={class:"jldocstring custom-block"},$s={class:"jldocstring custom-block"},Ys={class:"jldocstring custom-block"},si={class:"jldocstring custom-block"},ii={class:"jldocstring custom-block"},ai={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ti={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.797ex"},xmlns:"http://www.w3.org/2000/svg",width:"65.233ex",height:"2.969ex",role:"img",focusable:"false",viewBox:"0 -960 28832.9 
1312.1","aria-hidden":"true"},ni={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},li={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 683","aria-hidden":"true"},ei={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},hi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.407ex",height:"1.027ex",role:"img",focusable:"false",viewBox:"0 -443 622 454","aria-hidden":"true"},pi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ki={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.312ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.661ex",height:"1.907ex",role:"img",focusable:"false",viewBox:"0 -705 5596.1 843","aria-hidden":"true"},di={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ri={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.986ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 878 453","aria-hidden":"true"},oi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},gi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.817ex",height:"1.441ex",role:"img",focusable:"false",viewBox:"0 -626 361 637","aria-hidden":"true"},yi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Ei={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.466ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.956ex",height:"2.036ex",role:"img",focusable:"false",viewBox:"0 -694 6168.4 
900","aria-hidden":"true"},Fi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ci={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.023ex"},xmlns:"http://www.w3.org/2000/svg",width:"7.565ex",height:"2.066ex",role:"img",focusable:"false",viewBox:"0 -903 3343.8 913","aria-hidden":"true"},Ci={class:"jldocstring custom-block"},ui={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},mi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.912ex"},xmlns:"http://www.w3.org/2000/svg",width:"22.372ex",height:"2.869ex",role:"img",focusable:"false",viewBox:"0 -864.9 9888.3 1267.9","aria-hidden":"true"},bi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Qi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 683","aria-hidden":"true"},Ti={class:"jldocstring custom-block"},Bi={class:"jldocstring custom-block"},fi={class:"jldocstring custom-block"},vi={class:"jldocstring custom-block"},Ni={class:"jldocstring custom-block"},xi={class:"jldocstring custom-block"},ji={class:"jldocstring custom-block"},Ai={class:"jldocstring custom-block"},wi={class:"jldocstring custom-block"};function Di(Li,s,Hi,Vi,_i,Mi){const n=k("Badge");return h(),e("div",null,[s[372]||(s[372]=t('

    NNlib

    Neural Network Primitives with custom bindings for different accelerator backends in Julia.

    Reexport of NNlib

    Lux doesn't re-export all of NNlib for now. Directly loading NNlib is the recommended approach for accessing these functions.

    Attention

    ',4)),i("details",r,[i("summary",null,[s[0]||(s[0]=i("a",{id:"NNlib.dot_product_attention",href:"#NNlib.dot_product_attention"},[i("span",{class:"jlbinding"},"NNlib.dot_product_attention")],-1)),s[1]||(s[1]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[2]||(s[2]=t(`
    julia
    dot_product_attention(query, key, value, [bias]; [fdrop, mask, nheads])

    Multihead dot product attention used in transformer architectures.

    The input arrays must have the first two dimensions given by the number of features and the sequence length, then an arbitrary number of batch dimensions or none.

    Returns the attention output array of size (v_dim, q_len, batch_size...) and the attention scores of size (kv_len, q_len, nheads, batch_size...).

    See also dot_product_attention_scores if you only need the attention scores.

    Arguments

    Examples

    julia
    q, k, v = rand(10, 20, 2), rand(10, 30, 2), rand(20, 30, 2)
    -y, α = dot_product_attention(q, k, v)

    source

    `,10))]),i("details",o,[i("summary",null,[s[3]||(s[3]=i("a",{id:"NNlib.dot_product_attention_scores",href:"#NNlib.dot_product_attention_scores"},[i("span",{class:"jlbinding"},"NNlib.dot_product_attention_scores")],-1)),s[4]||(s[4]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=t('
    julia
    dot_product_attention_scores(query, key, [bias]; [fdrop, mask])

    Return the attention scores for the dot_product_attention. Input arrays must have dimensions (num_features ÷ nheads, nheads, sequence_length, batch_size).

    See dot_product_attention for more details.

    source

    ',4))]),i("details",g,[i("summary",null,[s[6]||(s[6]=i("a",{id:"NNlib.make_causal_mask",href:"#NNlib.make_causal_mask"},[i("span",{class:"jlbinding"},"NNlib.make_causal_mask")],-1)),s[7]||(s[7]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=t('
    julia
    make_causal_mask(x, dims=2)

    Return a boolean square matrix m of the same type as x and of side size(x, dims). Its elements are set such that m[i, j] == i ≤ j.

    Can be used to mask the attention scores in dot_product_attention.

    source

    ',4))]),s[373]||(s[373]=i("h2",{id:"softmax",tabindex:"-1"},[a("Softmax "),i("a",{class:"header-anchor",href:"#softmax","aria-label":'Permalink to "Softmax"'},"​")],-1)),i("details",y,[i("summary",null,[s[9]||(s[9]=i("a",{id:"NNlib.softmax",href:"#NNlib.softmax"},[i("span",{class:"jlbinding"},"NNlib.softmax")],-1)),s[10]||(s[10]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=t(`
    julia
    softmax(x; dims = 1)

    Softmax turns input array x into probability distributions that sum to 1 along the dimensions specified by dims. It is semantically equivalent to the following:

    softmax(x; dims = 1) = exp.(x) ./ sum(exp.(x), dims = dims)

    with additional manipulations enhancing numerical stability.

    For a matrix input x it will by default (dims = 1) treat it as a batch of vectors, with each column independent. Keyword dims = 2 will instead treat rows independently, and so on.

    See also logsoftmax.

    Examples

    julia
    julia> softmax([1, 2, 3])
    +import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framework.BetCMmtc.js";const Ii=JSON.parse('{"title":"NNlib","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/NNlib.md","filePath":"api/NN_Primitives/NNlib.md","lastUpdated":null}'),d={name:"api/NN_Primitives/NNlib.md"},r={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},N={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},V={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},H={class:"jldocstring custom-block"},S={class:"jldocstring custom-block"},M={class:"jldocstring custom-block"},I={class:"jldocstring custom-block"},P={class:"jldocstring custom-block"},z={class:"jldocstring custom-block"},R={class:"jldocstring custom-block"},Z={class:"jldocstring custom-block"},O={class:"jldocstring custom-block"},q={class:"jldocstring custom-block"},W={class:"jldocstring custom-block"},G={class:"jldocstring custom-block"},U={class:"jldocstring custom-block"},J={class:"jldocstring custom-block"},K={class:"jldocstring custom-block"},X={class:"jldocstring custom-block"},$={class:"jldocstring custom-block"},Y={class:"jldocstring custom-block"},ss={class:"jldocstring custom-block"},is={class:"jldocstring 
custom-block"},as={class:"jldocstring custom-block"},ts={class:"jldocstring custom-block"},ns={class:"jldocstring custom-block"},ls={class:"jldocstring custom-block"},es={class:"jldocstring custom-block"},hs={class:"jldocstring custom-block"},ps={class:"jldocstring custom-block"},ks={class:"jldocstring custom-block"},ds={class:"jldocstring custom-block"},rs={class:"jldocstring custom-block"},os={class:"jldocstring custom-block"},gs={class:"jldocstring custom-block"},Es={class:"jldocstring custom-block"},ys={class:"jldocstring custom-block"},Fs={class:"jldocstring custom-block"},cs={class:"jldocstring custom-block"},Cs={class:"jldocstring custom-block"},us={class:"jldocstring custom-block"},ms={class:"jldocstring custom-block"},Ts={class:"jldocstring custom-block"},bs={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Qs={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.912ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.451ex",height:"2.869ex",role:"img",focusable:"false",viewBox:"0 -864.9 10365.1 1267.9","aria-hidden":"true"},Bs={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},_s={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 683","aria-hidden":"true"},fs={class:"jldocstring custom-block"},vs={class:"jldocstring custom-block"},Ns={class:"jldocstring custom-block"},As={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},xs={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.817ex",height:"1.441ex",role:"img",focusable:"false",viewBox:"0 -626 361 
637","aria-hidden":"true"},js={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Ds={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.466ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.956ex",height:"2.036ex",role:"img",focusable:"false",viewBox:"0 -694 6168.4 900","aria-hidden":"true"},ws={class:"jldocstring custom-block"},Vs={class:"jldocstring custom-block"},Ls={class:"jldocstring custom-block"},Hs={class:"jldocstring custom-block"},Ss={class:"jldocstring custom-block"},Ms={class:"jldocstring custom-block"},Is={class:"jldocstring custom-block"},Ps={class:"jldocstring custom-block"},zs={class:"jldocstring custom-block"},Rs={class:"jldocstring custom-block"},Zs={class:"jldocstring custom-block"},Os={class:"jldocstring custom-block"},qs={class:"jldocstring custom-block"},Ws={class:"jldocstring custom-block"},Gs={class:"jldocstring custom-block"},Us={class:"jldocstring custom-block"},Js={class:"jldocstring custom-block"},Ks={class:"jldocstring custom-block"},Xs={class:"jldocstring custom-block"},$s={class:"jldocstring custom-block"},Ys={class:"jldocstring custom-block"},si={class:"jldocstring custom-block"},ii={class:"jldocstring custom-block"},ai={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ti={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.797ex"},xmlns:"http://www.w3.org/2000/svg",width:"65.233ex",height:"2.969ex",role:"img",focusable:"false",viewBox:"0 -960 28832.9 1312.1","aria-hidden":"true"},ni={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},li={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 
683","aria-hidden":"true"},ei={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},hi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.407ex",height:"1.027ex",role:"img",focusable:"false",viewBox:"0 -443 622 454","aria-hidden":"true"},pi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ki={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.312ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.661ex",height:"1.907ex",role:"img",focusable:"false",viewBox:"0 -705 5596.1 843","aria-hidden":"true"},di={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ri={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.986ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 878 453","aria-hidden":"true"},oi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},gi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.817ex",height:"1.441ex",role:"img",focusable:"false",viewBox:"0 -626 361 637","aria-hidden":"true"},Ei={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},yi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.466ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.956ex",height:"2.036ex",role:"img",focusable:"false",viewBox:"0 -694 6168.4 900","aria-hidden":"true"},Fi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ci={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.023ex"},xmlns:"http://www.w3.org/2000/svg",width:"7.565ex",height:"2.066ex",role:"img",focusable:"false",viewBox:"0 -903 3343.8 913","aria-hidden":"true"},Ci={class:"jldocstring 
custom-block"},ui={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},mi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.912ex"},xmlns:"http://www.w3.org/2000/svg",width:"22.372ex",height:"2.869ex",role:"img",focusable:"false",viewBox:"0 -864.9 9888.3 1267.9","aria-hidden":"true"},Ti={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},bi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 683","aria-hidden":"true"},Qi={class:"jldocstring custom-block"},Bi={class:"jldocstring custom-block"},_i={class:"jldocstring custom-block"},fi={class:"jldocstring custom-block"},vi={class:"jldocstring custom-block"},Ni={class:"jldocstring custom-block"},Ai={class:"jldocstring custom-block"},xi={class:"jldocstring custom-block"},ji={class:"jldocstring custom-block"};function Di(wi,s,Vi,Li,Hi,Si){const n=k("Badge");return h(),e("div",null,[s[372]||(s[372]=t('

    NNlib

    Neural Network Primitives with custom bindings for different accelerator backends in Julia.

    Reexport of NNlib

    Lux doesn't re-export all of NNlib for now. Directly loading NNlib is the recommended approach for accessing these functions.

    Attention

    ',4)),i("details",r,[i("summary",null,[s[0]||(s[0]=i("a",{id:"NNlib.dot_product_attention",href:"#NNlib.dot_product_attention"},[i("span",{class:"jlbinding"},"NNlib.dot_product_attention")],-1)),s[1]||(s[1]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[2]||(s[2]=t(`
    julia
    dot_product_attention(query, key, value, [bias]; [fdrop, mask, nheads])

    Multihead dot product attention used in transformer architectures.

    The input arrays must have the first two dimensions given by the number of features and the sequence length, then an arbitrary number of batch dimensions or none.

    Returns the attention output array of size (v_dim, q_len, batch_size...) and the attention scores of size (kv_len, q_len, nheads, batch_size...).

    See also dot_product_attention_scores if you only need the attention scores.

    Arguments

    • query: Query array of size (qk_dim, q_len, batch_size...).

    • key: Key array of size (qk_dim, kv_len, batch_size...).

    • value: Value array of size (v_dim, kv_len, batch_size...).

    • bias: Either nothing or an array broadcastable to size (kv_len, q_len, nheads, batch_size). It will be added to the attention scores before applying the softmax. Default nothing.

    • fdrop: A dropout function or layer to be applied on the attention scores right after the softmax. Default identity (no dropout).

    • mask: Either nothing or a boolean array broadcastable to size (kv_len, q_len, nheads, batch_size). The mask is applied to the attention scores just before the softmax. See make_causal_mask for creating causal masks. Default nothing.

    • nheads: Number of heads to split the input arrays into. Default 1.

    Examples

    julia
    q, k, v = rand(10, 20, 2), rand(10, 30, 2), rand(20, 30, 2)
    +y, α = dot_product_attention(q, k, v)

    source

    `,10))]),i("details",o,[i("summary",null,[s[3]||(s[3]=i("a",{id:"NNlib.dot_product_attention_scores",href:"#NNlib.dot_product_attention_scores"},[i("span",{class:"jlbinding"},"NNlib.dot_product_attention_scores")],-1)),s[4]||(s[4]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=t('
    julia
    dot_product_attention_scores(query, key, [bias]; [fdrop, mask])

    Return the attention scores for the dot_product_attention. Input arrays must have dimensions (num_features ÷ nheads, nheads, sequence_length, batch_size).

    See dot_product_attention for more details.

    source

    ',4))]),i("details",g,[i("summary",null,[s[6]||(s[6]=i("a",{id:"NNlib.make_causal_mask",href:"#NNlib.make_causal_mask"},[i("span",{class:"jlbinding"},"NNlib.make_causal_mask")],-1)),s[7]||(s[7]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=t('
    julia
    make_causal_mask(x, dims=2)

    Return a boolean square matrix m of the same type as x and of side size(x, dims). Its elements are set such that m[i, j] == i ≤ j.

    Can be used to mask the attention scores in dot_product_attention.

    source
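    A minimal sketch of the rule `m[i, j] == i ≤ j` above (assuming NNlib is loaded; the array `x` is only a hypothetical input used to set the mask's side length):

    ```julia
    using NNlib

    x = rand(Float32, 3, 4)   # side length is taken from dims = 2, i.e. 4
    m = make_causal_mask(x)   # 4×4 boolean matrix

    # m[i, j] is true exactly when i ≤ j, so query position j may only
    # attend to key positions i ≤ j when m is passed as `mask`.
    @assert m == [i <= j for i in 1:4, j in 1:4]
    ```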

    ',4))]),s[373]||(s[373]=i("h2",{id:"softmax",tabindex:"-1"},[a("Softmax "),i("a",{class:"header-anchor",href:"#softmax","aria-label":'Permalink to "Softmax"'},"​")],-1)),i("details",E,[i("summary",null,[s[9]||(s[9]=i("a",{id:"NNlib.softmax",href:"#NNlib.softmax"},[i("span",{class:"jlbinding"},"NNlib.softmax")],-1)),s[10]||(s[10]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=t(`
    julia
    softmax(x; dims = 1)

    Softmax turns input array x into probability distributions that sum to 1 along the dimensions specified by dims. It is semantically equivalent to the following:

    softmax(x; dims = 1) = exp.(x) ./ sum(exp.(x), dims = dims)

    with additional manipulations enhancing numerical stability.

    For a matrix input x it will by default (dims = 1) treat it as a batch of vectors, with each column independent. Keyword dims = 2 will instead treat rows independently, and so on.

    See also logsoftmax.

    Examples

    julia
    julia> softmax([1, 2, 3])
     3-element Vector{Float64}:
      0.09003057317038046
      0.24472847105479764
    @@ -23,7 +23,7 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
     (7, 13)
     
     julia> Dense(4 => 7, softmax)(x)
    -ERROR: \`softmax(x)\` called with a number, but it expects an array.

    source

    `,11))]),i("details",E,[i("summary",null,[s[12]||(s[12]=i("a",{id:"NNlib.logsoftmax",href:"#NNlib.logsoftmax"},[i("span",{class:"jlbinding"},"NNlib.logsoftmax")],-1)),s[13]||(s[13]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[14]||(s[14]=t('
    julia
    logsoftmax(x; dims = 1)

    Computes the log of softmax in a more numerically stable way than directly taking log.(softmax(xs)). Commonly used in computing cross entropy loss.

    It is semantically equivalent to the following:

    logsoftmax(x; dims = 1) = x .- log.(sum(exp.(x), dims = dims))

    See also softmax.

    source

    ',6))]),s[374]||(s[374]=i("h2",{id:"pooling",tabindex:"-1"},[a("Pooling "),i("a",{class:"header-anchor",href:"#pooling","aria-label":'Permalink to "Pooling"'},"​")],-1)),i("details",F,[i("summary",null,[s[15]||(s[15]=i("a",{id:"NNlib.PoolDims",href:"#NNlib.PoolDims"},[i("span",{class:"jlbinding"},"NNlib.PoolDims")],-1)),s[16]||(s[16]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=t(`
    julia
    PoolDims(x_size::NTuple{M}, k::Union{NTuple{L, Int}, Int};
    +ERROR: \`softmax(x)\` called with a number, but it expects an array.

    source

    `,11))]),i("details",y,[i("summary",null,[s[12]||(s[12]=i("a",{id:"NNlib.logsoftmax",href:"#NNlib.logsoftmax"},[i("span",{class:"jlbinding"},"NNlib.logsoftmax")],-1)),s[13]||(s[13]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[14]||(s[14]=t('
    julia
    logsoftmax(x; dims = 1)

    Computes the log of softmax in a more numerically stable way than directly taking log.(softmax(xs)). Commonly used in computing cross entropy loss.

    It is semantically equivalent to the following:

    logsoftmax(x; dims = 1) = x .- log.(sum(exp.(x), dims = dims))

    See also softmax.

    source
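    A small numerical check of why the stable form matters — a sketch assuming NNlib is loaded, with inputs chosen so that the naive `log.(softmax(x))` would overflow in `exp`:

    ```julia
    using NNlib

    x = [1000.0, 1001.0, 1002.0]   # exp.(x) overflows to Inf
    y = logsoftmax(x)

    # Agrees with the max-shifted form x .- m .- log(sum(exp.(x .- m))), m = maximum(x)
    m = maximum(x)
    @assert isapprox(y, x .- m .- log(sum(exp.(x .- m))); atol=1e-10)

    # exp.(logsoftmax(x)) is still a valid probability vector
    @assert isapprox(sum(exp.(y)), 1.0)
    ```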

    ',6))]),s[374]||(s[374]=i("h2",{id:"pooling",tabindex:"-1"},[a("Pooling "),i("a",{class:"header-anchor",href:"#pooling","aria-label":'Permalink to "Pooling"'},"​")],-1)),i("details",F,[i("summary",null,[s[15]||(s[15]=i("a",{id:"NNlib.PoolDims",href:"#NNlib.PoolDims"},[i("span",{class:"jlbinding"},"NNlib.PoolDims")],-1)),s[16]||(s[16]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=t(`
    julia
    PoolDims(x_size::NTuple{M}, k::Union{NTuple{L, Int}, Int};
              stride=k, padding=0, dilation=1)  where {M, L}

    Dimensions for a "pooling" operation that can have an arbitrary input size, kernel size, stride, dilation, and channel count. Used to dispatch onto efficient implementations at compile-time.

    source
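A brief usage sketch; `NNlib.output_size` is an internal helper and is shown only to illustrate what `PoolDims` encodes (treat that call as an assumption, not public API):

```julia
using NNlib

# 8×8 spatial size, 3 channels, batch of 2; 2×2 pooling window, default stride = k
pdims = PoolDims((8, 8, 3, 2), 2)
NNlib.output_size(pdims)   # spatial size after pooling, here (4, 4)
```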

    `,3))]),i("details",c,[i("summary",null,[s[18]||(s[18]=i("a",{id:"NNlib.maxpool",href:"#NNlib.maxpool"},[i("span",{class:"jlbinding"},"NNlib.maxpool")],-1)),s[19]||(s[19]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[20]||(s[20]=t('
    julia
    maxpool(x, k::NTuple{N, Integer}; pad=0, stride=k)

    Perform max pool operation with window size k on input tensor x.

    Arguments:

    • x and k: Expects ndims(x) ∈ 3:5, and always length(k) == ndims(x) - 2

    • pad: See pad_zeros for details.

    • stride: Either a tuple with the same length as k, or one integer for all directions. Default is k.

    source
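A minimal 1-D sketch, assuming NNlib is loaded:

```julia
using NNlib

x = reshape(Float32.(1:6), 6, 1, 1)   # (width, channels, batch), so ndims(x) == 3
maxpool(x, (2,))                      # windows (1,2), (3,4), (5,6) → 3×1×1 array with 2, 4, 6
```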

    ',5))]),i("details",C,[i("summary",null,[s[21]||(s[21]=i("a",{id:"NNlib.meanpool",href:"#NNlib.meanpool"},[i("span",{class:"jlbinding"},"NNlib.meanpool")],-1)),s[22]||(s[22]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[23]||(s[23]=t('
    julia
    meanpool(x, k::NTuple{N, Integer}; pad=0, stride=k)

    Perform mean pool operation with window size k on input tensor x.

    Arguments:

    • x and k: Expects ndims(x) ∈ 3:5, and always length(k) == ndims(x) - 2

    • pad: See pad_zeros for details.

    • stride: Either a tuple with the same length as k, or one integer for all directions. Default is k.

    source

    ',5))]),i("details",u,[i("summary",null,[s[24]||(s[24]=i("a",{id:"NNlib.lpnormpool",href:"#NNlib.lpnormpool"},[i("span",{class:"jlbinding"},"NNlib.lpnormpool")],-1)),s[25]||(s[25]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[26]||(s[26]=t('
    julia
    lpnormpool(x, p::Real, k::NTuple{N, Integer}; pad=0, stride=k)

    Perform Lp pool operation with value of the Lp norm p and window size k on input tensor x, also known as LPPool in pytorch. This pooling operator is from the paper Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks.

    Arguments:

    • x and k: Expects ndims(x) ∈ 3:5, and always length(k) == ndims(x) - 2

    • p is restricted to 0 < p < Inf.

    • pad: See pad_zeros for details.

    • stride: Either a tuple with the same length as k, or one integer for all directions. Default is k.

    For all elements x in a size k window, lpnormpool computes (∑ᵢ xᵢ^p)^(1 / p) as an element of the output.

    Thus lpnormpool(x, 1, k) ./ prod(k) ≈ meanpool(x, k) and lpnormpool(x, 2, k).^2 ./ prod(k) ≈ meanpool(x.^2, k).

    source
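The identities above can be checked on a small nonnegative input (a sketch; for signed inputs the L1 identity holds for |x|):

```julia
using NNlib

x = rand(Float32, 8, 1, 1)                                  # nonnegative entries
lpnormpool(x, 1, (2,)) ./ 2 ≈ meanpool(x, (2,))             # true
lpnormpool(x, 2, (2,)).^2 ./ 2 ≈ meanpool(x.^2, (2,))       # true
```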

    ',7))]),s[375]||(s[375]=i("h2",{id:"padding",tabindex:"-1"},[a("Padding "),i("a",{class:"header-anchor",href:"#padding","aria-label":'Permalink to "Padding"'},"​")],-1)),i("details",m,[i("summary",null,[s[27]||(s[27]=i("a",{id:"NNlib.pad_reflect",href:"#NNlib.pad_reflect"},[i("span",{class:"jlbinding"},"NNlib.pad_reflect")],-1)),s[28]||(s[28]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[29]||(s[29]=t(`
    julia
    pad_reflect(x, pad::Tuple; [dims])
     pad_reflect(x, pad::Int; [dims])

    Pad the array x reflecting its values across the border.

    pad can be a tuple of integers (l1, r1, ..., ln, rn) of some length 2n that specifies the left and right padding size for each of the dimensions in dims. If dims is not given, it defaults to the first n dimensions.

    If pad is an integer, it is applied on both sides on every dimension in dims. In this case, dims defaults to the first ndims(x)-2 dimensions (i.e. excludes the channel and batch dimension).

    See also pad_repeat, pad_symmetric, pad_circular, and pad_constant.
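The four padding modes are easiest to contrast on a tiny 1-D input; a sketch assuming the semantics described in these docstrings:

```julia
using NNlib

v = reshape([1, 2, 3, 4], 4, 1, 1)     # 1-D data, 1 channel, batch of 1
vec(pad_reflect(v, 2; dims=1))         # [3, 2, 1, 2, 3, 4, 3, 2]  border values not repeated
vec(pad_symmetric(v, 2; dims=1))       # [2, 1, 1, 2, 3, 4, 4, 3]  border values repeated
vec(pad_circular(v, 2; dims=1))        # [3, 4, 1, 2, 3, 4, 1, 2]  wraps around
vec(pad_repeat(v, 2; dims=1))          # [1, 1, 1, 2, 3, 4, 4, 4]  edge value extended
```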

    julia
    julia> r = reshape(1:9, 3, 3)
     3×3 reshape(::UnitRange{Int64}, 3, 3) with eltype Int64:
    @@ -38,7 +38,7 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
      5  2  5  8  5  2
      6  3  6  9  6  3
      5  2  5  8  5  2
     4  1  4  7  4  1

    source

    `,7))]),i("details",T,[i("summary",null,[s[30]||(s[30]=i("a",{id:"NNlib.pad_symmetric",href:"#NNlib.pad_symmetric"},[i("span",{class:"jlbinding"},"NNlib.pad_symmetric")],-1)),s[31]||(s[31]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[32]||(s[32]=t(`
    julia
    pad_symmetric(x, pad::Tuple; [dims])
     pad_symmetric(x, pad::Int; [dims])

    Pad the array x reflecting its values symmetrically across the border, i.e. the border values of x are present in the padding values, in contrast to pad_reflect.

    pad can be a tuple of integers (l1, r1, ..., ln, rn) of some length 2n that specifies the left and right padding size for each of the dimensions in dims. If dims is not given, it defaults to the first n dimensions.

    If pad is an integer, it is applied on both sides on every dimension in dims. In this case, dims defaults to the first ndims(x)-2 dimensions (i.e. excludes the channel and batch dimension).

    See also pad_repeat, pad_reflect, pad_circular, and pad_constant.

    julia
    julia> r = reshape(1:9, 3, 3)
     3×3 reshape(::UnitRange{Int64}, 3, 3) with eltype Int64:
      1  4  7
    @@ -52,7 +52,7 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
      2  2  5  8  8  5
      3  3  6  9  9  6
      3  3  6  9  9  6
     2  2  5  8  8  5

    source

    `,7))]),i("details",b,[i("summary",null,[s[33]||(s[33]=i("a",{id:"NNlib.pad_circular",href:"#NNlib.pad_circular"},[i("span",{class:"jlbinding"},"NNlib.pad_circular")],-1)),s[34]||(s[34]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[35]||(s[35]=t(`
    julia
    pad_circular(x, pad::Tuple; [dims])
     pad_circular(x, pad::Int; [dims])

    Pad the array x "circularly" across the border by wrapping around values from the opposite side of x.

    pad can be a tuple of integers (l1, r1, ..., ln, rn) of some length 2n that specifies the left and right padding size for each of the dimensions in dims. If dims is not given, it defaults to the first n dimensions.

    If pad is an integer, it is applied on both sides on every dimension in dims. In this case, dims defaults to the first ndims(x)-2 dimensions (i.e. excludes the channel and batch dimension).

    The pad length on either side in any dimension must not exceed the size of x in that dimension, i.e. pad_circular is not able to create arbitrarily sized tilings of x.

    See also pad_repeat, pad_reflect, pad_symmetric, and pad_constant.

    julia
    julia> r = reshape(1:9, 3, 3)
     3×3 reshape(::UnitRange{Int64}, 3, 3) with eltype Int64:
      1  4  7
    @@ -66,7 +66,7 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
      8  2  5  8  2  5
      9  3  6  9  3  6
      7  1  4  7  1  4
     8  2  5  8  2  5

    source

    `,8))]),i("details",Q,[i("summary",null,[s[36]||(s[36]=i("a",{id:"NNlib.pad_repeat",href:"#NNlib.pad_repeat"},[i("span",{class:"jlbinding"},"NNlib.pad_repeat")],-1)),s[37]||(s[37]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[38]||(s[38]=t(`
    julia
    pad_repeat(x, pad::Tuple; [dims])
     pad_repeat(x, pad::Int; [dims])

    Pad the array x repeating the values on the border.

    pad can be a tuple of integers (l1, r1, ..., ln, rn) of some length 2n that specifies the left and right padding size for each of the dimensions in dims. If dims is not given, it defaults to the first n dimensions.

    If pad is an integer, it is applied on both sides on every dimension in dims. In this case, dims defaults to the first ndims(x)-2 dimensions (i.e. excludes the channel and batch dimension).

    See also pad_reflect, pad_symmetric, pad_circular, and pad_constant.

    julia
    julia> r = reshape(1:9, 3, 3)
     3×3 reshape(::UnitRange{Int64}, 3, 3) with eltype Int64:
      1  4  7
    @@ -147,8 +147,8 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
     julia> pad_constant(r, (2,1, 3), dims = (1,2)) # padding must always be either the same length as dims, or double it
     ERROR: ArgumentError: Could not parse padding (2, 1, 3) and dims (1, 2)
     Stacktrace:
    [...]

    source

    `,7))]),i("details",_,[i("summary",null,[s[42]||(s[42]=i("a",{id:"NNlib.pad_zeros",href:"#NNlib.pad_zeros"},[i("span",{class:"jlbinding"},"NNlib.pad_zeros")],-1)),s[43]||(s[43]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[44]||(s[44]=t(`
    julia
    pad_zeros(x, pad::Tuple; [dims])
    pad_zeros(x, pad::Int; [dims])

    Pad the array x with zeros. Equivalent to pad_constant with the constant equal to 0.

    source

    `,3))]),s[376]||(s[376]=i("h2",{id:"convolution",tabindex:"-1"},[a("Convolution "),i("a",{class:"header-anchor",href:"#convolution","aria-label":'Permalink to "Convolution"'},"​")],-1)),i("details",f,[i("summary",null,[s[45]||(s[45]=i("a",{id:"NNlib.conv",href:"#NNlib.conv"},[i("span",{class:"jlbinding"},"NNlib.conv")],-1)),s[46]||(s[46]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[47]||(s[47]=t('
    julia
    conv(x, w; stride = 1, pad = 0, dilation = 1, flipped = false, groups = 1)

    Apply convolution filter w to input x. x and w are 3d/4d/5d tensors in 1d/2d/3d convolutions respectively. x and w may have real or complex element types.

    source
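A small sketch of the flipping convention; the expected relationship depends only on reverse(w) == -w for this particular kernel:

```julia
using NNlib

x = reshape(Float32.(1:5), 5, 1, 1)       # (width, channels, batch)
w = reshape(Float32[1, 0, -1], 3, 1, 1)   # length-3 kernel

y  = conv(x, w)                 # kernel flipped (true convolution); size(y) == (3, 1, 1)
yc = conv(x, w; flipped=true)   # cross-correlation style, no flip
y == -yc                        # true here, because reversing this kernel negates it
```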

    ',3))]),i("details",v,[i("summary",null,[s[48]||(s[48]=i("a",{id:"NNlib.ConvDims",href:"#NNlib.ConvDims"},[i("span",{class:"jlbinding"},"NNlib.ConvDims")],-1)),s[49]||(s[49]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[50]||(s[50]=t('
    julia
    ConvDims

    Type system-level information about convolution dimensions. Critical for things like im2col!() to generate efficient code, and helpful to reduce the number of kwargs getting passed around.

    source

    ',3))]),i("details",N,[i("summary",null,[s[51]||(s[51]=i("a",{id:"NNlib.depthwiseconv",href:"#NNlib.depthwiseconv"},[i("span",{class:"jlbinding"},"NNlib.depthwiseconv")],-1)),s[52]||(s[52]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[53]||(s[53]=t('
    julia
    depthwiseconv(x, w; stride=1, pad=0, dilation=1, flipped=false)

    Depthwise convolution operation with filter w on input x. x and w are 3d/4d/5d tensors in 1d/2d/3d convolutions respectively.

    source

    ',3))]),i("details",A,[i("summary",null,[s[54]||(s[54]=i("a",{id:"NNlib.DepthwiseConvDims",href:"#NNlib.DepthwiseConvDims"},[i("span",{class:"jlbinding"},"NNlib.DepthwiseConvDims")],-1)),s[55]||(s[55]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[56]||(s[56]=t('
    julia
    DepthwiseConvDims

    Concrete subclass of ConvDims for a depthwise convolution. Differs primarily due to characterization by C_in, C_mult, rather than C_in, C_out. Useful to be separate from DenseConvDims primarily for channel calculation differences.

    source

    ',3))]),i("details",x,[i("summary",null,[s[57]||(s[57]=i("a",{id:"NNlib.DenseConvDims",href:"#NNlib.DenseConvDims"},[i("span",{class:"jlbinding"},"NNlib.DenseConvDims")],-1)),s[58]||(s[58]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[59]||(s[59]=t('
    julia
    DenseConvDims

    Concrete subclass of ConvDims for a normal, dense, conv2d/conv3d.

    source

    ',3))]),i("details",j,[i("summary",null,[s[60]||(s[60]=i("a",{id:"NNlib.unfold",href:"#NNlib.unfold"},[i("span",{class:"jlbinding"},"NNlib.unfold")],-1)),s[61]||(s[61]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[62]||(s[62]=t(`
    julia
    unfold(x, kernel_size; stride = 1, pad = 0, dilation = 0, flipped = true)

    Places sliding windows of x into a container tensor of size (num_windows, window_size, batchsize). The window size is prod(spatial dims of kernel) * input_channels. The number of sliding windows will match those of convolution (conv) with the same kernel_size and arguments. Note that by default conv flips the spatial dimensions of its kernel (default flipped=false), whereas unfold does not (default flipped=true). Uses NNlib.im2col! as backend.

    See also fold, the adjoint/transpose operator and a potential inverse of unfold.

    Example

    The below example demonstrates that unfold uses the same sliding windows as conv. In general batched_mul + unfold should not be used to achieve convolution.

    julia
    julia> x = reshape([100 2 3 40 5 6 700], 7, 1, 1);  # 1D data, 1 channel, batch of 1
     
     julia> w = reshape([1 0 -1], 3, 1, 1);  # 1D conv kernel of length 3
     
    @@ -218,7 +218,7 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
       40.0
        5.0
        6.0
     700.0

    In general, an inverse to unfold does not exist if divisor contains zeros.

    source
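The divisor mentioned above can be built by folding an unfolded array of ones; a sketch of the round trip, following the pattern in these docstrings:

```julia
using NNlib

x = rand(Float32, 10, 1, 1)
y = unfold(x, (3, 1, 1))                      # sliding length-3 windows of x

# fold sums overlapping windows, so divide by each position's multiplicity
divisor = fold(unfold(ones(Float32, size(x)), (3, 1, 1)), size(x), (3, 1, 1))
fold(y, size(x), (3, 1, 1)) ./ divisor ≈ x    # true, since no divisor entry is zero here
```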

    `,7))]),s[377]||(s[377]=i("h2",{id:"upsampling",tabindex:"-1"},[a("Upsampling "),i("a",{class:"header-anchor",href:"#upsampling","aria-label":'Permalink to "Upsampling"'},"​")],-1)),i("details",w,[i("summary",null,[s[66]||(s[66]=i("a",{id:"NNlib.upsample_nearest",href:"#NNlib.upsample_nearest"},[i("span",{class:"jlbinding"},"NNlib.upsample_nearest")],-1)),s[67]||(s[67]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[68]||(s[68]=t(`
    julia
    upsample_nearest(x, scale::NTuple{S,Int})
     upsample_nearest(x; size::NTuple{S,Int})

    Upsamples the array x by integer multiples along the first S dimensions. Subsequent dimensions of x are not altered.

    Either the scale factors or the final output size can be specified.

    See also upsample_bilinear, for two dimensions of an N=4 array.

    Example

    julia
    julia> upsample_nearest([1 2 3; 4 5 6], (2, 3))
     4×9 Matrix{Int64}:
      1  1  1  2  2  2  3  3  3
    @@ -237,8 +237,8 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
      4  5  6
     
     julia> ans == upsample_nearest([1 2 3; 4 5 6], size=(4,))
    true

    source
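A minimal sketch of the nearest-neighbour case:

```julia
using NNlib

x = Float32[1 2; 3 4]
upsample_nearest(x, (2, 2))   # 4×4 matrix: each entry repeated over a 2×2 block
```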

    `,7))]),i("details",V,[i("summary",null,[s[69]||(s[69]=i("a",{id:"NNlib.∇upsample_nearest",href:"#NNlib.∇upsample_nearest"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_nearest")],-1)),s[70]||(s[70]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[71]||(s[71]=t('
    julia
    ∇upsample_nearest::AbstractArray{T,3}, scales::NTuple{S, <:Integer}) where T

    Arguments

    • Δ: Incoming gradient array, backpropagated from downstream layers

    • scales: scales by which the image was upsampled in the first place

    Outputs

    • dx: Downsampled version of Δ

    source

    ',6))]),i("details",L,[i("summary",null,[s[72]||(s[72]=i("a",{id:"NNlib.upsample_linear",href:"#NNlib.upsample_linear"},[i("span",{class:"jlbinding"},"NNlib.upsample_linear")],-1)),s[73]||(s[73]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[74]||(s[74]=t(`
    julia
    upsample_linear(x::AbstractArray{T,3}, scale::Real; align_corners::Bool = true)
    upsample_linear(x::AbstractArray{T,3}; size::Integer, align_corners::Bool = true)

    Upsamples the first dimension of the array x by the provided scale, using linear interpolation. As an alternative to using scale, the resulting array size can be directly specified with a keyword argument.

    The size of the output is equal to (scale*S1, S2, S3), where S1, S2, S3 = size(x).

    source

    `,4))]),i("details",H,[i("summary",null,[s[75]||(s[75]=i("a",{id:"NNlib.∇upsample_linear",href:"#NNlib.∇upsample_linear"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_linear")],-1)),s[76]||(s[76]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[77]||(s[77]=t('
    julia
    ∇upsample_linear::AbstractArray{T,3}; size::Integer, align_corners::Bool = true) where T

    Arguments

    • Δ: Incoming gradient array, backpropagated from downstream layers

    • size: Size of the image upsampled in the first place

    Outputs

    • dx: Downsampled version of Δ

    source

    ',6))]),i("details",S,[i("summary",null,[s[78]||(s[78]=i("a",{id:"NNlib.upsample_bilinear",href:"#NNlib.upsample_bilinear"},[i("span",{class:"jlbinding"},"NNlib.upsample_bilinear")],-1)),s[79]||(s[79]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[80]||(s[80]=t(`
    julia
    upsample_bilinear(x::AbstractArray{T,4}, scale::NTuple{2,Real}; align_corners::Bool = true)
     upsample_bilinear(x::AbstractArray{T,4}; size::NTuple{2,Integer}, align_corners::Bool = true)

    Upsamples the first 2 dimensions of the array x by the upsample factors stored in scale, using bilinear interpolation. As an alternative to using scale, the resulting image size can be directly specified with a keyword argument.

    The size of the output is equal to (scale[1]*S1, scale[2]*S2, S3, S4), where S1, S2, S3, S4 = size(x).
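The output-size arithmetic, as a quick sketch:

```julia
using NNlib

x = rand(Float32, 2, 3, 1, 1)          # (W, H, C, N)
size(upsample_bilinear(x, (3, 2)))     # (3*2, 2*3, 1, 1) == (6, 6, 1, 1)
```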

    Examples

    julia
    julia> x = reshape(Float32[1 2 3; 4 5 6], (2,3,1,1))
     2×3×1×1 Array{Float32, 4}:
     [:, :, 1, 1] =
    @@ -263,10 +263,10 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
      1.75  1.97222  2.19444  2.41667  2.63889     3.08333  3.30556  3.52778  3.75
      2.5   2.72222  2.94444  3.16667  3.38889     3.83333  4.05556  4.27778  4.5
      3.25  3.47222  3.69444  3.91667  4.13889     4.58333  4.80556  5.02778  5.25
     4.0   4.22222  4.44444  4.66667  4.88889     5.33333  5.55556  5.77778  6.0

    source

    `,6))]),i("details",M,[i("summary",null,[s[81]||(s[81]=i("a",{id:"NNlib.∇upsample_bilinear",href:"#NNlib.∇upsample_bilinear"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_bilinear")],-1)),s[82]||(s[82]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[83]||(s[83]=t('
    julia
    ∇upsample_bilinear::AbstractArray{T,4}; size::NTuple{2,Integer}, align_corners::Bool = true) where T

    Arguments

    • Δ: Incoming gradient array, backpropagated from downstream layers

    • size: Lateral (W,H) size of the image upsampled in the first place

    Outputs

    • dx: Downsampled version of Δ

    source

    ',6))]),i("details",I,[i("summary",null,[s[84]||(s[84]=i("a",{id:"NNlib.upsample_trilinear",href:"#NNlib.upsample_trilinear"},[i("span",{class:"jlbinding"},"NNlib.upsample_trilinear")],-1)),s[85]||(s[85]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[86]||(s[86]=t(`
    julia
    upsample_trilinear(x::AbstractArray{T,5}, scale::NTuple{3,Real}; align_corners::Bool = true)
     upsample_trilinear(x::AbstractArray{T,5}; size::NTuple{3,Integer}, align_corners::Bool = true)

    Upsamples the first 3 dimensions of the array x by the upsample factors stored in scale, using trilinear interpolation. As an alternative to using scale, the resulting image size can be directly specified with a keyword argument.

    The size of the output is equal to (scale[1]*S1, scale[2]*S2, scale[3]*S3, S4, S5), where S1, S2, S3, S4, S5 = size(x).

    Examples

    julia
    upsample_trilinear(x, (2, 3, 4))
     upsample_trilinear(x; size=(4, 9, 11))  # specify output size instead
    upsample_trilinear(x, (2.5, 3.5, pi))  # non-integer scaling factors are allowed

    source

    `,6))]),i("details",P,[i("summary",null,[s[87]||(s[87]=i("a",{id:"NNlib.∇upsample_trilinear",href:"#NNlib.∇upsample_trilinear"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_trilinear")],-1)),s[88]||(s[88]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[89]||(s[89]=t('
    julia
    ∇upsample_trilinear::AbstractArray{T,5}; size::NTuple{3,Integer}, align_corners::Bool = true) where T

    Arguments

    • Δ: Incoming gradient array, backpropagated from downstream layers

    • size: Lateral size & depth (W,H,D) of the image upsampled in the first place

    Outputs

    • dx: Downsampled version of Δ

    source

    ',6))]),i("details",z,[i("summary",null,[s[90]||(s[90]=i("a",{id:"NNlib.pixel_shuffle",href:"#NNlib.pixel_shuffle"},[i("span",{class:"jlbinding"},"NNlib.pixel_shuffle")],-1)),s[91]||(s[91]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[92]||(s[92]=t(`
    julia
    pixel_shuffle(x, r::Integer)

    Pixel shuffling operation, upscaling by a factor r.

    For 4-arrays representing N images, the operation converts input size(x) == (W, H, r^2*C, N) to output of size (r*W, r*H, C, N). For D-dimensional data, it expects ndims(x) == D+2 with channel and batch dimensions, and divides the number of channels by r^D.

    Used in super-resolution networks to upsample towards high resolution features. Reference: Shi et al., "Real-Time Single Image and Video Super-Resolution ...", CVPR 2016, https://arxiv.org/abs/1609.05158
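A quick shape check, assuming NNlib is loaded:

```julia
using NNlib

x = rand(Float32, 3, 3, 8, 1)       # (W, H, r^2*C, N) with r = 2, C = 2
size(pixel_shuffle(x, 2))           # (6, 6, 2, 1)
```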

    Examples

    julia
    julia> x = [10i + j + channel/10 for i in 1:2, j in 1:3, channel in 1:4, batch in 1:1]
     2×3×4×1 Array{Float64, 4}:
     [:, :, 1, 1] =
      11.1  12.1  13.1
    @@ -307,7 +307,7 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
      2.1  2.3  2.5
      2.2  2.4  2.6
      3.1  3.3  3.5
    - 3.2  3.4  3.6

    source

    `,7))]),s[378]||(s[378]=i("h2",{id:"rotation",tabindex:"-1"},[a("Rotation "),i("a",{class:"header-anchor",href:"#rotation","aria-label":'Permalink to "Rotation"'},"​")],-1)),s[379]||(s[379]=i("p",null,"Rotate images in the first two dimensions of an array.",-1)),i("details",P,[i("summary",null,[s[93]||(s[93]=i("a",{id:"NNlib.imrotate",href:"#NNlib.imrotate"},[i("span",{class:"jlbinding"},"NNlib.imrotate")],-1)),s[94]||(s[94]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[95]||(s[95]=t(`
    julia
    imrotate(arr::AbstractArray{T, 4}, θ; method=:bilinear, rotation_center=size(arr)  2 .+ 1)

    Rotates an array in the first two dimensions around the center pixel rotation_center. The default rotation_center is chosen so that there is an integer center pixel to rotate around for both even and odd sized arrays: for an even sized array of size (4,4) this is (3,3), and for an odd array of size (3,3) it is (2,2). However, rotation_center may also be non-integer if specified.

    The angle θ is interpreted in radians.

    The adjoint is defined with ChainRulesCore.jl. This method also runs with CUDA (and in principle all KernelAbstractions.jl supported backends).

    Keywords

    • method=:bilinear for bilinear interpolation or method=:nearest for nearest neighbour

    • rotation_center=size(arr) .÷ 2 .+ 1 means there is a real center pixel around which the array is rotated.
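
    As a quick check of the default-center rule, here is the same arithmetic in Python (1-based indexing as in Julia; the helper name is illustrative only):

    ```python
    def default_center_1based(n):
        # Julia's size(arr) .÷ 2 .+ 1: an integer pixel center in 1-based indexing
        return n // 2 + 1

    print(default_center_1based(4))  # 3, as stated for a (4,4) array
    print(default_center_1based(3))  # 2, as stated for a (3,3) array
    ```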

    Examples

    julia
    julia> arr = zeros((4,4,1,1)); arr[2,2,1,1] = 1;
    + 3.2  3.4  3.6

    source

    `,7))]),s[378]||(s[378]=i("h2",{id:"rotation",tabindex:"-1"},[a("Rotation "),i("a",{class:"header-anchor",href:"#rotation","aria-label":'Permalink to "Rotation"'},"​")],-1)),s[379]||(s[379]=i("p",null,"Rotate images in the first two dimensions of an array.",-1)),i("details",R,[i("summary",null,[s[93]||(s[93]=i("a",{id:"NNlib.imrotate",href:"#NNlib.imrotate"},[i("span",{class:"jlbinding"},"NNlib.imrotate")],-1)),s[94]||(s[94]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[95]||(s[95]=t(`
    julia
    imrotate(arr::AbstractArray{T, 4}, θ; method=:bilinear, rotation_center=size(arr)  2 .+ 1)

    Rotates an array in the first two dimensions around the center pixel rotation_center. The default rotation_center is chosen so that there is an integer center pixel to rotate around for both even and odd sized arrays: for an even sized array of size (4,4) this is (3,3), and for an odd array of size (3,3) it is (2,2). However, rotation_center may also be non-integer if specified.

    The angle θ is interpreted in radians.

    The adjoint is defined with ChainRulesCore.jl. This method also runs with CUDA (and in principle all KernelAbstractions.jl supported backends).

    Keywords

    • method=:bilinear for bilinear interpolation or method=:nearest for nearest neighbour

    • rotation_center=size(arr) .÷ 2 .+ 1 means there is a real center pixel around which the array is rotated.

    Examples

    julia
    julia> arr = zeros((4,4,1,1)); arr[2,2,1,1] = 1;
     
     julia> arr
     4×4×1×1 Array{Float64, 4}:
    @@ -355,8 +355,8 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
     [:, :, 1, 1] =
      0.0  0.0  1.0
      0.0  0.0  0.0
    - 0.0  0.0  0.0

    source

    `,9))]),i("details",S,[i("summary",null,[s[96]||(s[96]=i("a",{id:"NNlib.∇imrotate",href:"#NNlib.∇imrotate"},[i("span",{class:"jlbinding"},"NNlib.∇imrotate")],-1)),s[97]||(s[97]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[98]||(s[98]=t(`
    julia
    ∇imrotate(dy, arr::AbstractArray{T, 4}, θ; method=:bilinear,
    -                                           rotation_center=size(arr)  2 .+ 1)

    Adjoint for imrotate. Gradient only with respect to arr and not θ.

    Arguments

    • dy: input gradient

    • arr: Input from primal computation

    • θ: rotation angle in radians

    • method=:bilinear or method=:nearest

    • rotation_center=size(arr) .÷ 2 .+ 1 rotates around a real center pixel for even and odd sized arrays

    source

    `,5))]),s[380]||(s[380]=i("h2",{id:"Batched-Operations",tabindex:"-1"},[a("Batched Operations "),i("a",{class:"header-anchor",href:"#Batched-Operations","aria-label":'Permalink to "Batched Operations {#Batched-Operations}"'},"​")],-1)),i("details",q,[i("summary",null,[s[99]||(s[99]=i("a",{id:"NNlib.batched_mul",href:"#NNlib.batched_mul"},[i("span",{class:"jlbinding"},"NNlib.batched_mul")],-1)),s[100]||(s[100]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[101]||(s[101]=t(`
    julia
    batched_mul(A, B) -> C
    + 0.0  0.0  0.0

    source

    `,9))]),i("details",Z,[i("summary",null,[s[96]||(s[96]=i("a",{id:"NNlib.∇imrotate",href:"#NNlib.∇imrotate"},[i("span",{class:"jlbinding"},"NNlib.∇imrotate")],-1)),s[97]||(s[97]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[98]||(s[98]=t(`
    julia
    ∇imrotate(dy, arr::AbstractArray{T, 4}, θ; method=:bilinear,
    +                                           rotation_center=size(arr)  2 .+ 1)

    Adjoint for imrotate. Gradient only with respect to arr and not θ.

    Arguments

    • dy: input gradient

    • arr: Input from primal computation

    • θ: rotation angle in radians

    • method=:bilinear or method=:nearest

    • rotation_center=size(arr) .÷ 2 .+ 1 rotates around a real center pixel for even and odd sized arrays

    source

    `,5))]),s[380]||(s[380]=i("h2",{id:"Batched-Operations",tabindex:"-1"},[a("Batched Operations "),i("a",{class:"header-anchor",href:"#Batched-Operations","aria-label":'Permalink to "Batched Operations {#Batched-Operations}"'},"​")],-1)),i("details",O,[i("summary",null,[s[99]||(s[99]=i("a",{id:"NNlib.batched_mul",href:"#NNlib.batched_mul"},[i("span",{class:"jlbinding"},"NNlib.batched_mul")],-1)),s[100]||(s[100]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[101]||(s[101]=t(`
    julia
    batched_mul(A, B) -> C
     A  B  # \\boxtimes

    Batched matrix multiplication. Result has C[:,:,k...] == A[:,:,k...] * B[:,:,k...] where k... represent any indices in the last dimensions.

    If ndims(A) == ndims(B) == 3 and size(B,3) == 1 then instead C[:,:,k] == A[:,:,k] * B[:,:,1], and similarly for A.

    To transpose each matrix, apply batched_transpose to the array, or batched_adjoint for conjugate-transpose:

    julia
    julia> A, B = randn(2,5,17), randn(5,9,17);
     
     julia> A  B |> size
    @@ -381,7 +381,7 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
     (16, 4, 32)
     
     julia> randn(16,8)  randn(8,4,32) |> size
    -(16, 4, 32)

    See also batched_vec to regard B as a batch of vectors, A[:,:,k] * B[:,k].
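
    The batch-slice semantics above, including the size-1 batch broadcast, can be sketched with NumPy (a batch-last analogue for illustration; np.matmul itself broadcasts over leading batch dimensions, so we move the batch axis first):

    ```python
    import numpy as np

    def batched_mul(A, B):
        """Batch-last sketch of batched matrix multiplication:
        C[:, :, k] == A[:, :, k] @ B[:, :, k], where a size-1 batch
        dimension on either argument is broadcast against the other."""
        # move the trailing batch axis to the front so np.matmul broadcasts over it
        C = np.matmul(np.moveaxis(A, -1, 0), np.moveaxis(B, -1, 0))
        return np.moveaxis(C, 0, -1)

    A = np.random.randn(2, 5, 17)
    B = np.random.randn(5, 9, 17)
    print(batched_mul(A, B).shape)                     # (2, 9, 17)
    print(batched_mul(A, np.random.randn(5, 9, 1)).shape)  # (2, 9, 17)
    ```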

    source

    `,15))]),i("details",R,[i("summary",null,[s[102]||(s[102]=i("a",{id:"NNlib.batched_mul!",href:"#NNlib.batched_mul!"},[i("span",{class:"jlbinding"},"NNlib.batched_mul!")],-1)),s[103]||(s[103]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[104]||(s[104]=t(`
    julia
    batched_mul!(C, A, B) -> C
    +(16, 4, 32)

    See also batched_vec to regard B as a batch of vectors, A[:,:,k] * B[:,k].

    source

    `,15))]),i("details",q,[i("summary",null,[s[102]||(s[102]=i("a",{id:"NNlib.batched_mul!",href:"#NNlib.batched_mul!"},[i("span",{class:"jlbinding"},"NNlib.batched_mul!")],-1)),s[103]||(s[103]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[104]||(s[104]=t(`
    julia
    batched_mul!(C, A, B) -> C
     batched_mul!(C, A, B, α=1, β=0)

    In-place batched matrix multiplication, equivalent to mul!(C[:,:,k], A[:,:,k], B[:,:,k], α, β) for all k. If size(B,3) == 1 then every batch uses B[:,:,1] instead.

    This will call batched_gemm! whenever possible. For real arrays this means that, for X ∈ [A,B,C], either stride(X,1)==1 or stride(X,2)==1 must hold; the latter may be caused by batched_transpose or, for instance, by PermutedDimsArray(::Array, (3,1,2)). Unlike batched_mul this will never make a copy.

    For complex arrays, the wrapper made by batched_adjoint must be outermost to be seen. In this case the strides accepted by BLAS are more restricted: if stride(C,1)==1, then only stride(AorB::BatchedAdjoint,2) == 1 is accepted.

    source

    `,5))]),i("details",W,[i("summary",null,[s[105]||(s[105]=i("a",{id:"NNlib.batched_adjoint",href:"#NNlib.batched_adjoint"},[i("span",{class:"jlbinding"},"NNlib.batched_adjoint")],-1)),s[106]||(s[106]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[107]||(s[107]=t(`
    julia
    batched_transpose(A::AbstractArray{T,3})
     batched_adjoint(A)

    Equivalent to applying transpose or adjoint to each matrix A[:,:,k].

    These exist to control how batched_mul behaves, as it operates on such matrix slices of an array with ndims(A)==3.

    PermutedDimsArray(A, (2,1,3)) is equivalent to batched_transpose(A), and is also understood by batched_mul (and more widely supported elsewhere).

    BatchedTranspose{T, S} <: AbstractBatchedMatrix{T, 3}
     BatchedAdjoint{T, S}

    Lazy wrappers analogous to Transpose and Adjoint, returned by batched_transpose etc.

    source

    `,7))]),i("details",G,[i("summary",null,[s[108]||(s[108]=i("a",{id:"NNlib.batched_transpose",href:"#NNlib.batched_transpose"},[i("span",{class:"jlbinding"},"NNlib.batched_transpose")],-1)),s[109]||(s[109]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[110]||(s[110]=t(`
    julia
    batched_transpose(A::AbstractArray{T,3})
    @@ -522,9 +522,9 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
         pad::Int = 0, n_fft::Int, hop_length::Int, window,
         center::Bool = true, power::Real = 2.0,
         normalized::Bool = false, window_normalized::Bool = false,
    -)

    Create a spectrogram or a batch of spectrograms from a raw audio signal.

    Arguments

    • pad::Int: The amount of padding to apply on both sides.

    • window_normalized::Bool: Whether to normalize the waveform by the window’s L2 energy.

    • power::Real: Exponent for the magnitude spectrogram (must be ≥ 0) e.g., 1 for magnitude, 2 for power, etc. If 0, complex spectrum is returned instead.

    See stft for other arguments.

    Returns

    Spectrogram in the shape (T, F, B), where T is the number of window hops and F = n_fft ÷ 2 + 1.
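
    The frequency-axis size F = n_fft ÷ 2 + 1 comes from the conjugate symmetry of the FFT of a real signal: only the non-redundant half of the spectrum is kept. A quick NumPy check of that bin count (illustrative only, not NNlib's implementation):

    ```python
    import numpy as np

    n_fft = 1024
    frame = np.random.randn(n_fft)
    # rfft of a real n_fft-point frame keeps n_fft // 2 + 1 bins --
    # this is the F in the (T, F, B) spectrogram shape
    print(np.fft.rfft(frame).shape)  # (513,)
    ```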

    source

    `,8))]),i("details",gs,[i("summary",null,[s[168]||(s[168]=i("a",{id:"NNlib.is_strided",href:"#NNlib.is_strided"},[i("span",{class:"jlbinding"},"NNlib.is_strided")],-1)),s[169]||(s[169]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[170]||(s[170]=t('
    julia
    is_strided(A::AbstractArray) -> Bool

    This generalises A isa StridedArray to treat wrappers like A::PermutedDimsArray, for which it returns is_strided(parent(A)).

    It returns true for CuArrays, and PermutedDimsArrays of those.

    Other wrappers (defined outside Base, LinearAlgebra) are assumed not to break strided-ness, and hence also return is_strided(parent(A)). This correctly handles things like NamedDimsArray which don't alter indexing. However, it's a little pessimistic in that e.g. a view of such a container will return false, even in cases where the same view of parent(A) would be a StridedArray.

    source

    ',5))]),i("details",ys,[i("summary",null,[s[171]||(s[171]=i("a",{id:"NNlib.conv_direct!",href:"#NNlib.conv_direct!"},[i("span",{class:"jlbinding"},"NNlib.conv_direct!")],-1)),s[172]||(s[172]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[173]||(s[173]=t('
    julia
    conv_direct!(y, x, w, cdims; alpha=1, beta=0)

    Direct convolution implementation; used for debugging, tests, and mixing/matching of strange datatypes within a single convolution. Uses a naive nested for-loop implementation and does not attempt to optimize performance. Rather, this implementation is intended to be maximally understandable and debuggable, to aid in testing other, more performant implementations. We also explicitly support mixing and matching of strange datatypes, so that if the user really wants to convolve an image of UInt8's with a Float16 kernel, storing the result in a Float32 output, there is at least a function call for that madness.

    The keyword arguments alpha and beta control accumulation behavior; this function calculates y = alpha * x * w + beta * y, therefore by setting beta to a nonzero value, the user is able to accumulate values into a preallocated y buffer, or by setting alpha to a nonunitary value, an arbitrary gain factor can be applied.

    By defaulting beta to false, we make use of the Bradbury promotion trick to override NaN's that may pre-exist within our output buffer, as false*NaN == 0.0, whereas 0.0*NaN == NaN. Only set beta if you are certain that none of the elements within y are NaN.

    The basic implementation performs 3-dimensional convolution; 1-dimensional and 2- dimensional cases are supported by simply reshaping y, x and w, for which wrapper methods are available.
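
    The alpha/beta accumulation contract can be sketched with a naive 1D cross-correlation in Python. Note one deliberate difference from Julia: `false` in Julia is a strong zero (false*NaN == 0.0), which is what lets the default beta overwrite pre-existing NaNs; Python has no strong zero, so the sketch branches explicitly. The function name is illustrative, not part of NNlib.

    ```python
    import numpy as np

    def conv1d_direct(y, x, w, alpha=1.0, beta=False):
        """Naive 1D valid cross-correlation computing y = alpha*(x ⋆ w) + beta*y.
        beta=False mimics Julia's strong-zero default: pre-existing NaNs in y
        are overwritten rather than propagated."""
        out_len = len(x) - len(w) + 1
        for i in range(out_len):
            acc = sum(x[i + j] * w[j] for j in range(len(w)))
            y[i] = alpha * acc + (beta * y[i] if beta else 0.0)
        return y

    x = np.array([1.0, 2.0, 3.0, 4.0])
    w = np.array([1.0, 1.0])
    y = np.full(3, np.nan)          # pre-existing NaNs in the output buffer
    print(conv1d_direct(y, x, w))   # [3. 5. 7.] -- NaNs overwritten
    ```

    Setting beta to a nonzero value accumulates into y instead of overwriting it, matching the y = alpha * x * w + beta * y description above.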

    source

    ',6))]),i("details",Es,[i("summary",null,[s[174]||(s[174]=i("a",{id:"NNlib.gemm!",href:"#NNlib.gemm!"},[i("span",{class:"jlbinding"},"NNlib.gemm!")],-1)),s[175]||(s[175]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[176]||(s[176]=t('
    julia
    gemm!()

    Low-level gemm!() call with pointers, borrowed from Knet.jl

    Calculates C = alpha*op(A)*op(B) + beta*C, where:

    • transA and transB set op(X) to be either identity() or transpose()

    • alpha and beta are scalars

    • op(A) is an (M, K) matrix

    • op(B) is a (K, N) matrix

    • C is an (M, N) matrix.
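
    The GEMM contract described above can be sketched in NumPy (an illustrative analogue of the BLAS call, without the raw-pointer interface):

    ```python
    import numpy as np

    def gemm(C, A, B, alpha=1.0, beta=0.0, transA=False, transB=False):
        """Compute C = alpha * op(A) @ op(B) + beta * C in place, where
        op(X) is X or X.T depending on the trans flags."""
        opA = A.T if transA else A
        opB = B.T if transB else B
        C[:] = alpha * (opA @ opB) + beta * C
        return C

    A = np.random.randn(3, 4)   # op(A) is (M, K) with M=3, K=4
    B = np.random.randn(4, 2)   # op(B) is (K, N) with N=2
    C = np.zeros((3, 2))        # C is (M, N)
    gemm(C, A, B)
    print(np.allclose(C, A @ B))  # True
    ```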

    source

    ',5))]),i("details",Fs,[i("summary",null,[s[177]||(s[177]=i("a",{id:"NNlib.calc_padding_regions",href:"#NNlib.calc_padding_regions"},[i("span",{class:"jlbinding"},"NNlib.calc_padding_regions")],-1)),s[178]||(s[178]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[179]||(s[179]=t('
    julia
    calc_padding_regions(dims)

    Padding is a jerk. A HUGE jerk that tries to sneak a bunch of conditionals and edge cases (quite literally) into our beautiful stencil operations such as convolution, pooling, etc... The way we deal with this is to, first, deal with everything in 3d, and then define a single padding region helper function that returns the seven regions that all 3d operations must deal with, including the central "unpadded" region where we can run at full bore, not paying any attention to padding.

    source

    ',3))]),i("details",cs,[i("summary",null,[s[180]||(s[180]=i("a",{id:"NNlib.∇depthwiseconv_data_im2col!",href:"#NNlib.∇depthwiseconv_data_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_data_im2col!")],-1)),s[181]||(s[181]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[182]||(s[182]=t('
    julia
    ∇depthwiseconv_data_im2col!(dx, w, dy, cdims, col=similar(dx); alpha=1, beta=0)

    Depthwise conv2d backward pass onto the input using im2col and GEMM. See conv_im2col! for explanation of optional parameters.

    source

    ',3))]),i("details",Cs,[i("summary",null,[s[183]||(s[183]=i("a",{id:"NNlib._prepare_imrotate",href:"#NNlib._prepare_imrotate"},[i("span",{class:"jlbinding"},"NNlib._prepare_imrotate")],-1)),s[184]||(s[184]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[185]||(s[185]=t('
    julia
    _prepare_imrotate(arr, θ, rotation_center)

    Prepares sin and cos, creates the output array, and converts the type of rotation_center if required.

    source

    ',3))]),i("details",us,[i("summary",null,[s[186]||(s[186]=i("a",{id:"NNlib.insert_singleton_spatial_dimension",href:"#NNlib.insert_singleton_spatial_dimension"},[i("span",{class:"jlbinding"},"NNlib.insert_singleton_spatial_dimension")],-1)),s[187]||(s[187]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[188]||(s[188]=t('
    julia
    insert_singleton_spatial_dimension(cdims::ConvDims)

    When converting a 1d convolution to a 2d, or a 2d to a 3d, we need to insert a singleton spatial dimension at the end of the spatial dimensions. This does so for a ConvDims.

    source

    ',3))]),i("details",ms,[i("summary",null,[s[189]||(s[189]=i("a",{id:"NNlib._fast_broadcast!",href:"#NNlib._fast_broadcast!"},[i("span",{class:"jlbinding"},"NNlib._fast_broadcast!")],-1)),s[190]||(s[190]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[191]||(s[191]=t('
    julia
    _fast_broadcast!(f, x, y, z...)

    This does x .= f.(x, y, z...), but works around an issue with broadcasting that prevents SIMD in such cases. Can perhaps be removed once https://github.com/JuliaLang/julia/issues/43153 is fixed.

    Has an rrule to avoid mutation within derivatives.

    Warning

    Not intended for general use. Uses @inbounds but does not check sizes! Assumes that f has no derivative!

    source

    ',5))]),i("details",bs,[i("summary",null,[s[192]||(s[192]=i("a",{id:"NNlib.hann_window",href:"#NNlib.hann_window"},[i("span",{class:"jlbinding"},"NNlib.hann_window")],-1)),s[193]||(s[193]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[200]||(s[200]=t(`
    julia
    hann_window(
    +)

    Create a spectrogram or a batch of spectrograms from a raw audio signal.

    Arguments

    • pad::Int: The amount of padding to apply on both sides.

    • window_normalized::Bool: Whether to normalize the waveform by the window’s L2 energy.

    • power::Real: Exponent for the magnitude spectrogram (must be ≥ 0) e.g., 1 for magnitude, 2 for power, etc. If 0, complex spectrum is returned instead.

    See stft for other arguments.

    Returns

    Spectrogram in the shape (T, F, B), where T is the number of window hops and F = n_fft ÷ 2 + 1.

    source

    `,8))]),i("details",gs,[i("summary",null,[s[168]||(s[168]=i("a",{id:"NNlib.is_strided",href:"#NNlib.is_strided"},[i("span",{class:"jlbinding"},"NNlib.is_strided")],-1)),s[169]||(s[169]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[170]||(s[170]=t('
    julia
    is_strided(A::AbstractArray) -> Bool

    This generalises A isa StridedArray to treat wrappers like A::PermutedDimsArray, for which it returns is_strided(parent(A)).

    It returns true for CuArrays, and PermutedDimsArrays of those.

    Other wrappers (defined outside Base, LinearAlgebra) are assumed not to break strided-ness, and hence also return is_strided(parent(A)). This correctly handles things like NamedDimsArray which don't alter indexing. However, it's a little pessimistic in that e.g. a view of such a container will return false, even in cases where the same view of parent(A) would be a StridedArray.

    source

    ',5))]),i("details",Es,[i("summary",null,[s[171]||(s[171]=i("a",{id:"NNlib.conv_direct!",href:"#NNlib.conv_direct!"},[i("span",{class:"jlbinding"},"NNlib.conv_direct!")],-1)),s[172]||(s[172]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[173]||(s[173]=t('
    julia
    conv_direct!(y, x, w, cdims; alpha=1, beta=0)

    Direct convolution implementation; used for debugging, tests, and mixing/matching of strange datatypes within a single convolution. Uses a naive nested for-loop implementation and does not attempt to optimize performance. Rather, this implementation is intended to be maximally understandable and debuggable, to aid in testing other, more performant implementations. We also explicitly support mixing and matching of strange datatypes, so that if the user really wants to convolve an image of UInt8's with a Float16 kernel, storing the result in a Float32 output, there is at least a function call for that madness.

    The keyword arguments alpha and beta control accumulation behavior; this function calculates y = alpha * x * w + beta * y, therefore by setting beta to a nonzero value, the user is able to accumulate values into a preallocated y buffer, or by setting alpha to a nonunitary value, an arbitrary gain factor can be applied.

    By defaulting beta to false, we make use of the Bradbury promotion trick to override NaN's that may pre-exist within our output buffer, as false*NaN == 0.0, whereas 0.0*NaN == NaN. Only set beta if you are certain that none of the elements within y are NaN.

    The basic implementation performs 3-dimensional convolution; 1-dimensional and 2- dimensional cases are supported by simply reshaping y, x and w, for which wrapper methods are available.

    source

    ',6))]),i("details",ys,[i("summary",null,[s[174]||(s[174]=i("a",{id:"NNlib.gemm!",href:"#NNlib.gemm!"},[i("span",{class:"jlbinding"},"NNlib.gemm!")],-1)),s[175]||(s[175]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[176]||(s[176]=t('
    julia
    gemm!()

    Low-level gemm!() call with pointers, borrowed from Knet.jl

    Calculates C = alpha*op(A)*op(B) + beta*C, where:

    • transA and transB set op(X) to be either identity() or transpose()

    • alpha and beta are scalars

    • op(A) is an (M, K) matrix

    • op(B) is a (K, N) matrix

    • C is an (M, N) matrix.

    source

    ',5))]),i("details",Fs,[i("summary",null,[s[177]||(s[177]=i("a",{id:"NNlib.calc_padding_regions",href:"#NNlib.calc_padding_regions"},[i("span",{class:"jlbinding"},"NNlib.calc_padding_regions")],-1)),s[178]||(s[178]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[179]||(s[179]=t('
    julia
    calc_padding_regions(dims)

    Padding is a jerk. A HUGE jerk that tries to sneak a bunch of conditionals and edge cases (quite literally) into our beautiful stencil operations such as convolution, pooling, etc... The way we deal with this is to, first, deal with everything in 3d, and then define a single padding region helper function that returns the seven regions that all 3d operations must deal with, including the central "unpadded" region where we can run at full bore, not paying any attention to padding.

    source

    ',3))]),i("details",cs,[i("summary",null,[s[180]||(s[180]=i("a",{id:"NNlib.∇depthwiseconv_data_im2col!",href:"#NNlib.∇depthwiseconv_data_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_data_im2col!")],-1)),s[181]||(s[181]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[182]||(s[182]=t('
    julia
    ∇depthwiseconv_data_im2col!(dx, w, dy, cdims, col=similar(dx); alpha=1, beta=0)

    Depthwise conv2d backward pass onto the input using im2col and GEMM. See conv_im2col! for explanation of optional parameters.

    source

    ',3))]),i("details",Cs,[i("summary",null,[s[183]||(s[183]=i("a",{id:"NNlib._prepare_imrotate",href:"#NNlib._prepare_imrotate"},[i("span",{class:"jlbinding"},"NNlib._prepare_imrotate")],-1)),s[184]||(s[184]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[185]||(s[185]=t('
    julia
    _prepare_imrotate(arr, θ, rotation_center)

    Prepares sin and cos, creates the output array, and converts the type of rotation_center if required.

    source

    ',3))]),i("details",us,[i("summary",null,[s[186]||(s[186]=i("a",{id:"NNlib.insert_singleton_spatial_dimension",href:"#NNlib.insert_singleton_spatial_dimension"},[i("span",{class:"jlbinding"},"NNlib.insert_singleton_spatial_dimension")],-1)),s[187]||(s[187]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[188]||(s[188]=t('
    julia
    insert_singleton_spatial_dimension(cdims::ConvDims)

    When converting a 1d convolution to a 2d, or a 2d to a 3d, we need to insert a singleton spatial dimension at the end of the spatial dimensions. This does so for a ConvDims.

    source

    ',3))]),i("details",ms,[i("summary",null,[s[189]||(s[189]=i("a",{id:"NNlib._fast_broadcast!",href:"#NNlib._fast_broadcast!"},[i("span",{class:"jlbinding"},"NNlib._fast_broadcast!")],-1)),s[190]||(s[190]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[191]||(s[191]=t('
    julia
    _fast_broadcast!(f, x, y, z...)

    This does x .= f.(x, y, z...), but works around an issue with broadcasting that prevents SIMD in such cases. Can perhaps be removed once https://github.com/JuliaLang/julia/issues/43153 is fixed.

    Has an rrule to avoid mutation within derivatives.

    Warning

    Not intended for general use. Uses @inbounds but does not check sizes! Assumes that f has no derivative!

    source

    ',5))]),i("details",Ts,[i("summary",null,[s[192]||(s[192]=i("a",{id:"NNlib.hann_window",href:"#NNlib.hann_window"},[i("span",{class:"jlbinding"},"NNlib.hann_window")],-1)),s[193]||(s[193]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[200]||(s[200]=t(`
    julia
    hann_window(
         window_length::Int, ::Type{T} = Float32; periodic::Bool = true,
    -) where T <: Real

    Hann window function (ref: Window function § Hann and Hamming windows - Wikipedia).

    `,2)),i("p",null,[i("mjx-container",Qs,[(h(),e("svg",Ts,s[194]||(s[194]=[t('',1)]))),s[195]||(s[195]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"w"),i("mo",{stretchy:"false"},"["),i("mi",null,"n"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("mfrac",null,[i("mn",null,"1"),i("mn",null,"2")]),i("mo",{stretchy:"false"},"["),i("mn",null,"1"),i("mo",null,"−"),i("mi",null,"cos"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"n")]),i("mrow",null,[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mo",{stretchy:"false"},")"),i("mo",{stretchy:"false"},"]")])],-1))])]),i("p",null,[s[198]||(s[198]=a("Where ")),i("mjx-container",Bs,[(h(),e("svg",fs,s[196]||(s[196]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 
637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[197]||(s[197]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[199]||(s[199]=a(" is the window length."))]),s[201]||(s[201]=t(`
    julia
    julia> lineplot(hann_window(100); width=30, height=10)
    +) where T <: Real

    Hann window function (ref: Window function § Hann and Hamming windows - Wikipedia).

    `,2)),i("p",null,[i("mjx-container",bs,[(h(),e("svg",Qs,s[194]||(s[194]=[t('',1)]))),s[195]||(s[195]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"w"),i("mo",{stretchy:"false"},"["),i("mi",null,"n"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("mfrac",null,[i("mn",null,"1"),i("mn",null,"2")]),i("mo",{stretchy:"false"},"["),i("mn",null,"1"),i("mo",null,"−"),i("mi",null,"cos"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"n")]),i("mrow",null,[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mo",{stretchy:"false"},")"),i("mo",{stretchy:"false"},"]")])],-1))])]),i("p",null,[s[198]||(s[198]=a("Where ")),i("mjx-container",Bs,[(h(),e("svg",_s,s[196]||(s[196]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 
637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[197]||(s[197]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[199]||(s[199]=a(" is the window length."))]),s[201]||(s[201]=t(`
    julia
    julia> lineplot(hann_window(100); width=30, height=10)
          ┌──────────────────────────────┐
        1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⠚⠉⠉⠉⠢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡔⠁⠀⠀⠀⠀⠀⠘⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
    @@ -543,18 +543,18 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
     true
     
     julia> hann_window(N)  hamming_window(N; α=0.5f0, β=0.5f0)
    -true

    Returns:

    Vector of length window_length and eltype T.
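
    The formula above is the symmetric form; with periodic=true the denominator becomes N instead of N − 1 (equivalently, a length-N+1 symmetric window with the last sample dropped), which is the form typically used with stft. A NumPy sketch under that assumption:

    ```python
    import numpy as np

    def hann_window(n, periodic=True):
        """Hann window sketch: w[k] = 0.5 * (1 - cos(2*pi*k / denom)),
        with denom = n for the periodic form and n - 1 for the symmetric form
        (the latter matches np.hanning)."""
        denom = n if periodic else n - 1
        k = np.arange(n)
        return 0.5 * (1.0 - np.cos(2.0 * np.pi * k / denom))

    w = hann_window(100)
    print(w[0], w.max())  # starts at 0, peaks at 1
    ```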

    source

    `,9))]),i("details",vs,[i("summary",null,[s[202]||(s[202]=i("a",{id:"NNlib._rng_from_array",href:"#NNlib._rng_from_array"},[i("span",{class:"jlbinding"},"NNlib._rng_from_array")],-1)),s[203]||(s[203]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[204]||(s[204]=t('
    julia
    _rng_from_array(x)

    Return the random number generator most appropriate for x: CUDA.default_rng() for CuArray, else Random.default_rng()

    source

    ',3))]),i("details",Ns,[i("summary",null,[s[205]||(s[205]=i("a",{id:"NNlib.∇depthwiseconv_filter_im2col!",href:"#NNlib.∇depthwiseconv_filter_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_filter_im2col!")],-1)),s[206]||(s[206]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[207]||(s[207]=t(`
    julia
    ∇depthwiseconv_filter_im2col!(dw, w, dy, cdims, col=similar(dw, ∇filter_im2col_dims(cdims));
    -                              alpha=1, beta=0)

    Depthwise conv backward pass onto the weights using im2col and GEMM. See conv_im2col! for explanation of optional parameters.

    source

    `,3))]),i("details",xs,[i("summary",null,[s[208]||(s[208]=i("a",{id:"NNlib.istft",href:"#NNlib.istft"},[i("span",{class:"jlbinding"},"NNlib.istft")],-1)),s[209]||(s[209]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[224]||(s[224]=t(`
    julia
    istft(y;
    +true

    Returns:

    Vector of length window_length and eltype T.

    source

    `,9))]),i("details",fs,[i("summary",null,[s[202]||(s[202]=i("a",{id:"NNlib._rng_from_array",href:"#NNlib._rng_from_array"},[i("span",{class:"jlbinding"},"NNlib._rng_from_array")],-1)),s[203]||(s[203]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[204]||(s[204]=t('
    julia
    _rng_from_array(x)

    Return the random number generator most appropriate for x: CUDA.default_rng() for CuArray, else Random.default_rng()

    source

    ',3))]),i("details",vs,[i("summary",null,[s[205]||(s[205]=i("a",{id:"NNlib.∇depthwiseconv_filter_im2col!",href:"#NNlib.∇depthwiseconv_filter_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_filter_im2col!")],-1)),s[206]||(s[206]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[207]||(s[207]=t(`
    julia
    ∇depthwiseconv_filter_im2col!(dw, w, dy, cdims, col=similar(dw, ∇filter_im2col_dims(cdims));
    +                              alpha=1, beta=0)

    Depthwise conv backward pass onto the weights using im2col and GEMM. See conv_im2col! for explanation of optional parameters.

    source

    `,3))]),i("details",Ns,[i("summary",null,[s[208]||(s[208]=i("a",{id:"NNlib.istft",href:"#NNlib.istft"},[i("span",{class:"jlbinding"},"NNlib.istft")],-1)),s[209]||(s[209]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[224]||(s[224]=t(`
    julia
    istft(y;
         n_fft::Int, hop_length::Int = n_fft ÷ 4, window = nothing,
         center::Bool = true, normalized::Bool = false,
         return_complex::Bool = false,
         original_length::Union{Nothing, Int} = nothing,
    -)

    Inverse Short-time Fourier Transform.

Return the least-squares estimate of the original signal

    Arguments:

    • y: Input complex array in the (n_fft, n_frames, B) shape. Where B is the optional batch dimension.

    Keyword Arguments:

    `,6)),i("ul",null,[s[222]||(s[222]=t("
  • n_fft::Int: Size of Fourier transform.

  • hop_length::Int: Distance between neighboring sliding window frames.

  • window: Window function that was applied to the input of stft. If nothing (default), then no window was applied.

  • ",3)),i("li",null,[i("p",null,[s[214]||(s[214]=i("code",null,"center::Bool",-1)),s[215]||(s[215]=a(": Whether input to ")),s[216]||(s[216]=i("code",null,"stft",-1)),s[217]||(s[217]=a(" was padded on both sides so that ")),i("mjx-container",js,[(h(),e("svg",As,s[210]||(s[210]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D461",d:"M26 385Q19 392 19 395Q19 399 22 411T27 425Q29 430 36 430T87 431H140L159 511Q162 522 166 540T173 566T179 586T187 603T197 615T211 624T229 626Q247 625 254 615T261 596Q261 589 252 549T232 470L222 433Q222 431 272 431H323Q330 424 330 420Q330 398 317 385H210L174 240Q135 80 135 68Q135 26 162 26Q197 26 230 60T283 144Q285 150 288 151T303 153H307Q322 153 322 145Q322 142 319 133Q314 117 301 95T267 48T216 6T155 -11Q125 -11 98 4T59 56Q57 64 57 83V101L92 241Q127 382 128 383Q128 385 77 385H26Z",style:{"stroke-width":"3"}})])])],-1)]))),s[211]||(s[211]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t")])],-1))]),s[218]||(s[218]=a("-th frame is centered at time ")),i("mjx-container",ws,[(h(),e("svg",Ds,s[212]||(s[212]=[t('',1)]))),s[213]||(s[213]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t"),i("mo",null,"×"),i("mtext",null,"hop length")])],-1))]),s[219]||(s[219]=a(". Padding is done with ")),s[220]||(s[220]=i("code",null,"pad_reflect",-1)),s[221]||(s[221]=a(" function."))])]),s[223]||(s[223]=t("
  • normalized::Bool: Whether input to stft was normalized.

• return_complex::Bool: Whether the output should be complex, or whether the input should be assumed to derive from a real signal and window.

• original_length::Union{Nothing, Int}: Optional size of the first dimension of the input to stft. Helps restore the exact stft input size; otherwise, the output may be slightly shorter.

  • ",3))]),s[225]||(s[225]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/audio/stft.jl#L173-L205",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Ls,[i("summary",null,[s[226]||(s[226]=i("a",{id:"NNlib.transpose_swapbatch",href:"#NNlib.transpose_swapbatch"},[i("span",{class:"jlbinding"},"NNlib.transpose_swapbatch")],-1)),s[227]||(s[227]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[228]||(s[228]=t('
    julia
    transpose_swapbatch(x::AbstractArray)

    Given an AbstractArray, swap its batch and channel axes, as we must during transposed convolution. We do this to the operands during convolution, and then again to the output once we're done.

    source

    ',3))]),i("details",Hs,[i("summary",null,[s[229]||(s[229]=i("a",{id:"NNlib.transpose_pad",href:"#NNlib.transpose_pad"},[i("span",{class:"jlbinding"},"NNlib.transpose_pad")],-1)),s[230]||(s[230]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[231]||(s[231]=t('
    julia
    transpose_pad(cdims::ConvDims)

    Transposed convolution can be calculated in terms of typical convolution with some extra padding. This method computes the padding of the convolution that would result in the transposed convolution of two operands, in essence taking care of that "extra padding". Note that this method should almost always be accompanied by a call that predilates one of the operands.

    source

    ',3))]),i("details",Vs,[i("summary",null,[s[232]||(s[232]=i("a",{id:"NNlib.power_to_db",href:"#NNlib.power_to_db"},[i("span",{class:"jlbinding"},"NNlib.power_to_db")],-1)),s[233]||(s[233]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[234]||(s[234]=t('
    julia
    power_to_db(s; ref::Real = 1f0, amin::Real = 1f-10, top_db::Real = 80f0)

    Convert a power spectrogram (amplitude squared) to decibel (dB) units.

    Arguments

    • s: Input power.

    • ref: Scalar w.r.t. which the input is scaled.

    • amin: Minimum threshold for s.

    • top_db: Threshold the output at top_db below the peak: max.(s_db, maximum(s_db) - top_db).

    Returns

    s_db ~= 10 * log10(s) - 10 * log10(ref)

    source

    ',7))]),i("details",_s,[i("summary",null,[s[235]||(s[235]=i("a",{id:"NNlib.col2im!",href:"#NNlib.col2im!"},[i("span",{class:"jlbinding"},"NNlib.col2im!")],-1)),s[236]||(s[236]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[237]||(s[237]=t('
    julia
    col2im!(x, col, cdims, beta=0)

    Does the inverse of im2col!(), converting col back into a 3d image, used for backward passes, transposed convolutions, etc...

    Note that this method has not been optimized in the same way as im2col() has, because it is slightly more complicated due to the more chaotic data access patterns, and I'm not desperate enough yet.

    source

    ',4))]),i("details",Ms,[i("summary",null,[s[238]||(s[238]=i("a",{id:"NNlib.depthwiseconv_im2col!",href:"#NNlib.depthwiseconv_im2col!"},[i("span",{class:"jlbinding"},"NNlib.depthwiseconv_im2col!")],-1)),s[239]||(s[239]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[240]||(s[240]=t('
    julia
    depthwiseconv_im2col!(y, x, w, cdims, col=similar(x); alpha=1, beta=0)

    Perform a depthwise convolution using im2col and GEMM, store the result in y. See conv_im2col! for explanation of optional parameters.

    source

    ',3))]),i("details",zs,[i("summary",null,[s[241]||(s[241]=i("a",{id:"NNlib.storage_type",href:"#NNlib.storage_type"},[i("span",{class:"jlbinding"},"NNlib.storage_type")],-1)),s[242]||(s[242]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[243]||(s[243]=t(`
    julia
    storage_type(A) -> Type

    Removes all wrappers to return the Array or CuArray (or whatever) type within.

    julia> view(reshape(ones(10)',2,5),:, 3:4) |> storage_type
    +)

    Inverse Short-time Fourier Transform.

Return the least-squares estimate of the original signal

    Arguments:

• y: Input complex array of shape (n_fft, n_frames, B), where B is the optional batch dimension.

    Keyword Arguments:

    `,6)),i("ul",null,[s[222]||(s[222]=t("
  • n_fft::Int: Size of Fourier transform.

  • hop_length::Int: Distance between neighboring sliding window frames.

  • window: Window function that was applied to the input of stft. If nothing (default), then no window was applied.

  • ",3)),i("li",null,[i("p",null,[s[214]||(s[214]=i("code",null,"center::Bool",-1)),s[215]||(s[215]=a(": Whether input to ")),s[216]||(s[216]=i("code",null,"stft",-1)),s[217]||(s[217]=a(" was padded on both sides so that ")),i("mjx-container",As,[(h(),e("svg",xs,s[210]||(s[210]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D461",d:"M26 385Q19 392 19 395Q19 399 22 411T27 425Q29 430 36 430T87 431H140L159 511Q162 522 166 540T173 566T179 586T187 603T197 615T211 624T229 626Q247 625 254 615T261 596Q261 589 252 549T232 470L222 433Q222 431 272 431H323Q330 424 330 420Q330 398 317 385H210L174 240Q135 80 135 68Q135 26 162 26Q197 26 230 60T283 144Q285 150 288 151T303 153H307Q322 153 322 145Q322 142 319 133Q314 117 301 95T267 48T216 6T155 -11Q125 -11 98 4T59 56Q57 64 57 83V101L92 241Q127 382 128 383Q128 385 77 385H26Z",style:{"stroke-width":"3"}})])])],-1)]))),s[211]||(s[211]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t")])],-1))]),s[218]||(s[218]=a("-th frame is centered at time ")),i("mjx-container",js,[(h(),e("svg",Ds,s[212]||(s[212]=[t('',1)]))),s[213]||(s[213]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t"),i("mo",null,"×"),i("mtext",null,"hop length")])],-1))]),s[219]||(s[219]=a(". Padding is done with ")),s[220]||(s[220]=i("code",null,"pad_reflect",-1)),s[221]||(s[221]=a(" function."))])]),s[223]||(s[223]=t("
  • normalized::Bool: Whether input to stft was normalized.

• return_complex::Bool: Whether the output should be complex, or whether the input should be assumed to derive from a real signal and window.

• original_length::Union{Nothing, Int}: Optional size of the first dimension of the input to stft. Helps restore the exact stft input size; otherwise, the output may be slightly shorter.

  • ",3))]),s[225]||(s[225]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/audio/stft.jl#L173-L205",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",ws,[i("summary",null,[s[226]||(s[226]=i("a",{id:"NNlib.transpose_swapbatch",href:"#NNlib.transpose_swapbatch"},[i("span",{class:"jlbinding"},"NNlib.transpose_swapbatch")],-1)),s[227]||(s[227]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[228]||(s[228]=t('
    julia
    transpose_swapbatch(x::AbstractArray)

    Given an AbstractArray, swap its batch and channel axes, as we must during transposed convolution. We do this to the operands during convolution, and then again to the output once we're done.

    source

    ',3))]),i("details",Vs,[i("summary",null,[s[229]||(s[229]=i("a",{id:"NNlib.transpose_pad",href:"#NNlib.transpose_pad"},[i("span",{class:"jlbinding"},"NNlib.transpose_pad")],-1)),s[230]||(s[230]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[231]||(s[231]=t('
    julia
    transpose_pad(cdims::ConvDims)

    Transposed convolution can be calculated in terms of typical convolution with some extra padding. This method computes the padding of the convolution that would result in the transposed convolution of two operands, in essence taking care of that "extra padding". Note that this method should almost always be accompanied by a call that predilates one of the operands.

    source

    ',3))]),i("details",Ls,[i("summary",null,[s[232]||(s[232]=i("a",{id:"NNlib.power_to_db",href:"#NNlib.power_to_db"},[i("span",{class:"jlbinding"},"NNlib.power_to_db")],-1)),s[233]||(s[233]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[234]||(s[234]=t('
    julia
    power_to_db(s; ref::Real = 1f0, amin::Real = 1f-10, top_db::Real = 80f0)

    Convert a power spectrogram (amplitude squared) to decibel (dB) units.

    Arguments

    • s: Input power.

    • ref: Scalar w.r.t. which the input is scaled.

    • amin: Minimum threshold for s.

    • top_db: Threshold the output at top_db below the peak: max.(s_db, maximum(s_db) - top_db).

    Returns

    s_db ~= 10 * log10(s) - 10 * log10(ref)

    source

    ',7))]),i("details",Hs,[i("summary",null,[s[235]||(s[235]=i("a",{id:"NNlib.col2im!",href:"#NNlib.col2im!"},[i("span",{class:"jlbinding"},"NNlib.col2im!")],-1)),s[236]||(s[236]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[237]||(s[237]=t('
    julia
    col2im!(x, col, cdims, beta=0)

    Does the inverse of im2col!(), converting col back into a 3d image, used for backward passes, transposed convolutions, etc...

    Note that this method has not been optimized in the same way as im2col() has, because it is slightly more complicated due to the more chaotic data access patterns, and I'm not desperate enough yet.

    source

    ',4))]),i("details",Ss,[i("summary",null,[s[238]||(s[238]=i("a",{id:"NNlib.depthwiseconv_im2col!",href:"#NNlib.depthwiseconv_im2col!"},[i("span",{class:"jlbinding"},"NNlib.depthwiseconv_im2col!")],-1)),s[239]||(s[239]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[240]||(s[240]=t('
    julia
    depthwiseconv_im2col!(y, x, w, cdims, col=similar(x); alpha=1, beta=0)

    Perform a depthwise convolution using im2col and GEMM, store the result in y. See conv_im2col! for explanation of optional parameters.

    source

    ',3))]),i("details",Ms,[i("summary",null,[s[241]||(s[241]=i("a",{id:"NNlib.storage_type",href:"#NNlib.storage_type"},[i("span",{class:"jlbinding"},"NNlib.storage_type")],-1)),s[242]||(s[242]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[243]||(s[243]=t(`
    julia
    storage_type(A) -> Type

    Removes all wrappers to return the Array or CuArray (or whatever) type within.

    julia> view(reshape(ones(10)',2,5),:, 3:4) |> storage_type
     Array{Float64,1}
     
     julia> reshape(sparse(rand(10)), 5,2) |> storage_type
    -SparseVector{Float64,Int64}

    source

    `,4))]),i("details",Zs,[i("summary",null,[s[244]||(s[244]=i("a",{id:"NNlib.im2col_dims",href:"#NNlib.im2col_dims"},[i("span",{class:"jlbinding"},"NNlib.im2col_dims")],-1)),s[245]||(s[245]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[246]||(s[246]=t('
    julia
    im2col_dims(c::ConvDims)

    im2col calculates, for each output pixel, the "convolution" of N kernels where N is the number of output channels, by doing a matrix multiply. The dimensions of that matrix are given by this function.

    Note that because im2col is multithreaded, we need to allocate a separate workspace of memory per-thread; hence the dimensions returned by this will depend on the number of threads Julia is currently running with.

    source

    ',4))]),i("details",Is,[i("summary",null,[s[247]||(s[247]=i("a",{id:"NNlib.∇depthwiseconv_filter_direct!",href:"#NNlib.∇depthwiseconv_filter_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_filter_direct!")],-1)),s[248]||(s[248]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[249]||(s[249]=t('
    julia
    ∇depthwiseconv_filter_direct!(dw, x, dy, cdims; alpha=1, beta=0)

    Calculate the gradient imposed upon w in the depthwise convolution y = x * w.

    source

    ',3))]),i("details",Os,[i("summary",null,[s[250]||(s[250]=i("a",{id:"NNlib.reverse_indices",href:"#NNlib.reverse_indices"},[i("span",{class:"jlbinding"},"NNlib.reverse_indices")],-1)),s[251]||(s[251]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[252]||(s[252]=t('
    julia
    reverse_indices(idx)

Return the reverse indices of idx: the values of idx become the indices of the result, and the indices of idx become its values.

    Arguments

    • idx: The indices to be reversed. Accepts array or cuarray of integer, tuple or CartesianIndex.

    source

    ',5))]),i("details",Ps,[i("summary",null,[s[253]||(s[253]=i("a",{id:"NNlib.∇conv_filter_im2col!",href:"#NNlib.∇conv_filter_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_filter_im2col!")],-1)),s[254]||(s[254]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[255]||(s[255]=t(`
    julia
    ∇conv_filter_im2col!(dw, x, dy, cdims, col=similar(dw, ∇filter_im2col_dims(cdims));
    -                     alpha=1, beta=0)

    Conv backward pass onto the weights using im2col and GEMM; stores the result in dw. See conv_im2col! for explanation of optional parameters.

    source

    `,3))]),i("details",Ss,[i("summary",null,[s[256]||(s[256]=i("a",{id:"NNlib.conv_im2col!",href:"#NNlib.conv_im2col!"},[i("span",{class:"jlbinding"},"NNlib.conv_im2col!")],-1)),s[257]||(s[257]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[258]||(s[258]=t('
    julia
    conv_im2col!(y, x, w, cdims, col=similar(x); alpha=1, beta=0)

Perform a convolution using im2col and GEMM, store the result in y. The kwargs alpha and beta control accumulation behavior; internally this operation is implemented as a matrix multiply of the form y = alpha * x * w + beta * y. Setting beta to a nonzero value accumulates multiple results into y, and setting alpha to a non-unit value applies a gain factor.

    Note for the particularly performance-minded, you can provide a pre-allocated col, which should eliminate any need for large allocations within this method.

    source

    ',4))]),i("details",qs,[i("summary",null,[s[259]||(s[259]=i("a",{id:"NNlib.∇conv_data_direct!",href:"#NNlib.∇conv_data_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_data_direct!")],-1)),s[260]||(s[260]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[261]||(s[261]=t('
    julia
    ∇conv_data_direct!(dx, dy, w, cdims; alpha=1, beta=0)

    Calculate the gradient imposed upon x in the convolution y = x * w.

    source

    ',3))]),i("details",Rs,[i("summary",null,[s[262]||(s[262]=i("a",{id:"NNlib.scatter_dims",href:"#NNlib.scatter_dims"},[i("span",{class:"jlbinding"},"NNlib.scatter_dims")],-1)),s[263]||(s[263]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[264]||(s[264]=i("p",null,"Performs dimensional consistency checks and return the dimensionality of the scattered objects.",-1)),s[265]||(s[265]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/scatter.jl#L16-L19",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Ws,[i("summary",null,[s[266]||(s[266]=i("a",{id:"NNlib.∇conv_data_im2col!",href:"#NNlib.∇conv_data_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_data_im2col!")],-1)),s[267]||(s[267]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[268]||(s[268]=t('
    julia
    ∇conv_data_im2col!(dx, w, dy, cdims, col=similar(dx); alpha=1, beta=0)

    Conv2d backward pass onto the input using im2col and GEMM; stores the result in dx. See conv_im2col! for explanation of optional parameters.

    source

    ',3))]),i("details",Gs,[i("summary",null,[s[269]||(s[269]=i("a",{id:"NNlib.storage_typejoin",href:"#NNlib.storage_typejoin"},[i("span",{class:"jlbinding"},"NNlib.storage_typejoin")],-1)),s[270]||(s[270]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[271]||(s[271]=t(`
    julia
    storage_typejoin(A, B, C, ...) -> Type

Reduces over the arguments with Base.promote_typejoin, so that the result conveys useful information for dispatching to BLAS. It does not tell you what container to allocate:

    julia> storage_typejoin(rand(2), rand(Float32, 2))
    +SparseVector{Float64,Int64}

    source

    `,4))]),i("details",Is,[i("summary",null,[s[244]||(s[244]=i("a",{id:"NNlib.im2col_dims",href:"#NNlib.im2col_dims"},[i("span",{class:"jlbinding"},"NNlib.im2col_dims")],-1)),s[245]||(s[245]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[246]||(s[246]=t('
    julia
    im2col_dims(c::ConvDims)

    im2col calculates, for each output pixel, the "convolution" of N kernels where N is the number of output channels, by doing a matrix multiply. The dimensions of that matrix are given by this function.

    Note that because im2col is multithreaded, we need to allocate a separate workspace of memory per-thread; hence the dimensions returned by this will depend on the number of threads Julia is currently running with.

    source

    ',4))]),i("details",Ps,[i("summary",null,[s[247]||(s[247]=i("a",{id:"NNlib.∇depthwiseconv_filter_direct!",href:"#NNlib.∇depthwiseconv_filter_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_filter_direct!")],-1)),s[248]||(s[248]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[249]||(s[249]=t('
    julia
    ∇depthwiseconv_filter_direct!(dw, x, dy, cdims; alpha=1, beta=0)

    Calculate the gradient imposed upon w in the depthwise convolution y = x * w.

    source

    ',3))]),i("details",zs,[i("summary",null,[s[250]||(s[250]=i("a",{id:"NNlib.reverse_indices",href:"#NNlib.reverse_indices"},[i("span",{class:"jlbinding"},"NNlib.reverse_indices")],-1)),s[251]||(s[251]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[252]||(s[252]=t('
    julia
    reverse_indices(idx)

Return the reverse indices of idx: the values of idx become the indices of the result, and the indices of idx become its values.

    Arguments

    • idx: The indices to be reversed. Accepts array or cuarray of integer, tuple or CartesianIndex.

    source

    ',5))]),i("details",Rs,[i("summary",null,[s[253]||(s[253]=i("a",{id:"NNlib.∇conv_filter_im2col!",href:"#NNlib.∇conv_filter_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_filter_im2col!")],-1)),s[254]||(s[254]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[255]||(s[255]=t(`
    julia
    ∇conv_filter_im2col!(dw, x, dy, cdims, col=similar(dw, ∇filter_im2col_dims(cdims));
    +                     alpha=1, beta=0)

    Conv backward pass onto the weights using im2col and GEMM; stores the result in dw. See conv_im2col! for explanation of optional parameters.

    source

    `,3))]),i("details",Zs,[i("summary",null,[s[256]||(s[256]=i("a",{id:"NNlib.conv_im2col!",href:"#NNlib.conv_im2col!"},[i("span",{class:"jlbinding"},"NNlib.conv_im2col!")],-1)),s[257]||(s[257]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[258]||(s[258]=t('
    julia
    conv_im2col!(y, x, w, cdims, col=similar(x); alpha=1, beta=0)

Perform a convolution using im2col and GEMM, store the result in y. The kwargs alpha and beta control accumulation behavior; internally this operation is implemented as a matrix multiply of the form y = alpha * x * w + beta * y. Setting beta to a nonzero value accumulates multiple results into y, and setting alpha to a non-unit value applies a gain factor.

    Note for the particularly performance-minded, you can provide a pre-allocated col, which should eliminate any need for large allocations within this method.

    source

    ',4))]),i("details",Os,[i("summary",null,[s[259]||(s[259]=i("a",{id:"NNlib.∇conv_data_direct!",href:"#NNlib.∇conv_data_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_data_direct!")],-1)),s[260]||(s[260]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[261]||(s[261]=t('
    julia
    ∇conv_data_direct!(dx, dy, w, cdims; alpha=1, beta=0)

    Calculate the gradient imposed upon x in the convolution y = x * w.

    source

    ',3))]),i("details",qs,[i("summary",null,[s[262]||(s[262]=i("a",{id:"NNlib.scatter_dims",href:"#NNlib.scatter_dims"},[i("span",{class:"jlbinding"},"NNlib.scatter_dims")],-1)),s[263]||(s[263]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[264]||(s[264]=i("p",null,"Performs dimensional consistency checks and return the dimensionality of the scattered objects.",-1)),s[265]||(s[265]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/scatter.jl#L16-L19",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Ws,[i("summary",null,[s[266]||(s[266]=i("a",{id:"NNlib.∇conv_data_im2col!",href:"#NNlib.∇conv_data_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_data_im2col!")],-1)),s[267]||(s[267]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[268]||(s[268]=t('
    julia
    ∇conv_data_im2col!(dx, w, dy, cdims, col=similar(dx); alpha=1, beta=0)

    Conv2d backward pass onto the input using im2col and GEMM; stores the result in dx. See conv_im2col! for explanation of optional parameters.

    source

    ',3))]),i("details",Gs,[i("summary",null,[s[269]||(s[269]=i("a",{id:"NNlib.storage_typejoin",href:"#NNlib.storage_typejoin"},[i("span",{class:"jlbinding"},"NNlib.storage_typejoin")],-1)),s[270]||(s[270]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[271]||(s[271]=t(`
    julia
    storage_typejoin(A, B, C, ...) -> Type

Reduces over the arguments with Base.promote_typejoin, so that the result conveys useful information for dispatching to BLAS. It does not tell you what container to allocate:

    julia> storage_typejoin(rand(2), rand(Float32, 2))
     Array{T,1} where T
     
     julia> eltype(ans) <: LinearAlgebra.BlasFloat
    @@ -565,10 +565,10 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
         freq_points::Vector{Float32}, all_freqs::Vector{Float32})

    Create triangular filter banks.

    Arguments:

    • freq_points::Vector{Float32}: Filter midpoints of size n_filters.

    • all_freqs::Vector{Float32}: Frequency points of size n_freqs.

    Returns:

    Array of size (n_freqs, n_filters).

    source

    `,7))]),i("details",$s,[i("summary",null,[s[286]||(s[286]=i("a",{id:"NNlib.∇depthwiseconv_data_direct!",href:"#NNlib.∇depthwiseconv_data_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_data_direct!")],-1)),s[287]||(s[287]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[288]||(s[288]=t('
    julia
    ∇depthwiseconv_data_direct!(dx, dy, w, cdims; alpha=1, beta=0)

    Calculate the gradient imposed upon x in the depthwise convolution y = x * w. We make use of the fact that a depthwise convolution is equivalent to C_in separate normal convolutions between that channel of x and the C_mult different kernels that get applied to it. The output of such a convolution is the gradient imposed upon that particular channel of x, and so we simply walk through x, calculating the gradient for each batch and channel independently.

    source

    ',3))]),i("details",Ys,[i("summary",null,[s[289]||(s[289]=i("a",{id:"NNlib.db_to_power",href:"#NNlib.db_to_power"},[i("span",{class:"jlbinding"},"NNlib.db_to_power")],-1)),s[290]||(s[290]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[291]||(s[291]=t('
    julia
    db_to_power(s_db; ref::Real = 1f0)

    Inverse of power_to_db.

    source

    ',3))]),i("details",si,[i("summary",null,[s[292]||(s[292]=i("a",{id:"NNlib.predilated_size",href:"#NNlib.predilated_size"},[i("span",{class:"jlbinding"},"NNlib.predilated_size")],-1)),s[293]||(s[293]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[294]||(s[294]=t('
    julia
    predilated_size(x_size::Tuple, dilation::Tuple)

    Calculate the size of a predilated x given a particular dilation factor. This is used within predilate() and transpose_cdims().

    source

    ',3))]),i("details",ii,[i("summary",null,[s[295]||(s[295]=i("a",{id:"NNlib.stft",href:"#NNlib.stft"},[i("span",{class:"jlbinding"},"NNlib.stft")],-1)),s[296]||(s[296]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[328]||(s[328]=t(`
    julia
    stft(x;
         n_fft::Int, hop_length::Int = n_fft ÷ 4, window = nothing,
         center::Bool = true, normalized::Bool = false,
    -)

    Short-time Fourier transform (STFT).

    The STFT computes the Fourier transform of short overlapping windows of the input, giving frequency components of the signal as they change over time.

    `,3)),i("p",null,[i("mjx-container",ai,[(h(),e("svg",ti,s[297]||(s[297]=[t('',1)]))),s[298]||(s[298]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"Y"),i("mo",{stretchy:"false"},"["),i("mi",null,"ω"),i("mo",null,","),i("mi",null,"m"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("munderover",null,[i("mo",{"data-mjx-texclass":"OP"},"∑"),i("mrow",{"data-mjx-texclass":"ORD"},[i("mi",null,"k"),i("mo",null,"="),i("mn",null,"0")]),i("mrow",{"data-mjx-texclass":"ORD"},[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mtext",null,"window"),i("mo",{stretchy:"false"},"["),i("mi",null,"k"),i("mo",{stretchy:"false"},"]"),i("mtext",null,"input"),i("mo",{stretchy:"false"},"["),i("mi",null,"m"),i("mo",null,"×"),i("mtext",null,"hop length"),i("mo",null,"+"),i("mi",null,"k"),i("mo",{stretchy:"false"},"]"),i("mi",null,"exp"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mo",null,"−"),i("mi",null,"j"),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"ω"),i("mi",null,"k")]),i("mtext",null,"n fft")]),i("mo",{stretchy:"false"},")")])],-1))])]),i("p",null,[s[307]||(s[307]=a("where ")),i("mjx-container",ni,[(h(),e("svg",li,s[299]||(s[299]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 
676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[300]||(s[300]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[308]||(s[308]=a(" is the window length, ")),i("mjx-container",ei,[(h(),e("svg",hi,s[301]||(s[301]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D714",d:"M495 384Q495 406 514 424T555 443Q574 443 589 425T604 364Q604 334 592 278T555 155T483 38T377 -11Q297 -11 267 66Q266 68 260 61Q201 -11 125 -11Q15 -11 15 139Q15 230 56 325T123 434Q135 441 147 436Q160 429 160 418Q160 406 140 379T94 306T62 208Q61 202 61 187Q61 124 85 100T143 76Q201 76 245 129L253 137V156Q258 297 317 297Q348 297 348 261Q348 243 338 213T318 158L308 135Q309 133 310 129T318 115T334 97T358 83T393 76Q456 76 501 148T546 274Q546 305 533 325T508 357T495 384Z",style:{"stroke-width":"3"}})])])],-1)]))),s[302]||(s[302]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"ω")])],-1))]),s[309]||(s[309]=a(" is the frequency ")),i("mjx-container",pi,[(h(),e("svg",ki,s[303]||(s[303]=[t('',1)]))),s[304]||(s[304]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mn",null,"0"),i("mo",null,"≤"),i("mi",null,"ω"),i("mo",null,"<"),i("mtext",null,"n fft")])],-1))]),s[310]||(s[310]=a(" and ")),i("mjx-container",di,[(h(),e("svg",ri,s[305]||(s[305]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D45A",d:"M21 287Q22 293 24 303T36 341T56 388T88 425T132 442T175 435T205 417T221 395T229 376L231 369Q231 367 232 367L243 378Q303 442 384 442Q401 442 415 440T441 433T460 423T475 411T485 398T493 385T497 373T500 364T502 357L510 367Q573 442 659 442Q713 442 746 415T780 336Q780 285 742 178T704 50Q705 36 709 31T724 26Q752 26 776 56T815 138Q818 149 821 151T837 153Q857 153 857 145Q857 144 853 130Q845 101 831 73T785 17T716 -10Q669 -10 648 17T627 73Q627 92 663 193T700 345Q700 404 656 404H651Q565 404 506 303L499 291L466 157Q433 26 428 16Q415 -11 385 -11Q372 -11 364 -4T353 8T350 18Q350 29 384 161L420 307Q423 322 423 345Q423 404 379 404H374Q288 404 229 303L222 291L189 157Q156 26 151 16Q138 -11 108 -11Q95 -11 87 -5T76 
7T74 17Q74 30 112 181Q151 335 151 342Q154 357 154 369Q154 405 129 405Q107 405 92 377T69 316T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),s[306]||(s[306]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"m")])],-1))]),s[311]||(s[311]=a(" is the index of the sliding window."))]),s[329]||(s[329]=i("p",null,[i("strong",null,"Arguments:")],-1)),s[330]||(s[330]=i("ul",null,[i("li",null,[i("code",null,"x"),a(": Input, must be either a 1D time sequence ("),i("code",null,"(L,)"),a(" shape) or a 2D batch of time sequence ("),i("code",null,"(L, B)"),a(" shape).")])],-1)),s[331]||(s[331]=i("p",null,[i("strong",null,"Keyword Arguments:")],-1)),i("ul",null,[s[327]||(s[327]=t("
  • n_fft::Int: Size of Fourier transform.

  • hop_length::Int: Distance between neighboring sliding window frames.

  • window: Optional window function to apply. Must be a 1D vector with 0 < length(window) ≤ n_fft. If window is shorter than n_fft, it is padded with zeros on both sides. If nothing (default), no window is applied.

  • ",3)),i("li",null,[i("p",null,[s[316]||(s[316]=i("code",null,"center::Bool",-1)),s[317]||(s[317]=a(": Whether to pad input on both sides so that ")),i("mjx-container",oi,[(h(),e("svg",gi,s[312]||(s[312]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D461",d:"M26 385Q19 392 19 395Q19 399 22 411T27 425Q29 430 36 430T87 431H140L159 511Q162 522 166 540T173 566T179 586T187 603T197 615T211 624T229 626Q247 625 254 615T261 596Q261 589 252 549T232 470L222 433Q222 431 272 431H323Q330 424 330 420Q330 398 317 385H210L174 240Q135 80 135 68Q135 26 162 26Q197 26 230 60T283 144Q285 150 288 151T303 153H307Q322 153 322 145Q322 142 319 133Q314 117 301 95T267 48T216 6T155 -11Q125 -11 98 4T59 56Q57 64 57 83V101L92 241Q127 382 128 383Q128 385 77 385H26Z",style:{"stroke-width":"3"}})])])],-1)]))),s[313]||(s[313]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t")])],-1))]),s[318]||(s[318]=a("-th frame is centered at time ")),i("mjx-container",yi,[(h(),e("svg",Ei,s[314]||(s[314]=[t('',1)]))),s[315]||(s[315]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t"),i("mo",null,"×"),i("mtext",null,"hop length")])],-1))]),s[319]||(s[319]=a(". Padding is done with ")),s[320]||(s[320]=i("code",null,"pad_reflect",-1)),s[321]||(s[321]=a(" function."))])]),i("li",null,[i("p",null,[s[324]||(s[324]=i("code",null,"normalized::Bool",-1)),s[325]||(s[325]=a(": Whether to return normalized STFT, i.e. multiplied with ")),i("mjx-container",Fi,[(h(),e("svg",ci,s[322]||(s[322]=[t('',1)]))),s[323]||(s[323]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("msup",null,[i("mtext",null,"n fft"),i("mrow",{"data-mjx-texclass":"ORD"},[i("mo",null,"−"),i("mn",null,"0.5")])])])],-1))]),s[326]||(s[326]=a("."))])])]),s[332]||(s[332]=i("p",null,[i("strong",null,"Returns:")],-1)),s[333]||(s[333]=i("p",null,[a("Complex array of shape "),i("code",null,"(n_fft, n_frames, B)"),a(", where "),i("code",null,"B"),a(" is the optional batch dimension.")],-1)),s[334]||(s[334]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/audio/stft.jl#L130-L170",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Ci,[i("summary",null,[s[335]||(s[335]=i("a",{id:"NNlib.hamming_window",href:"#NNlib.hamming_window"},[i("span",{class:"jlbinding"},"NNlib.hamming_window")],-1)),s[336]||(s[336]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[343]||(s[343]=t(`
    julia
    hamming_window(
    +)

    Short-time Fourier transform (STFT).

    The STFT computes the Fourier transform of short overlapping windows of the input, giving frequency components of the signal as they change over time.

    `,3)),i("p",null,[i("mjx-container",ai,[(h(),e("svg",ti,s[297]||(s[297]=[t('',1)]))),s[298]||(s[298]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"Y"),i("mo",{stretchy:"false"},"["),i("mi",null,"ω"),i("mo",null,","),i("mi",null,"m"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("munderover",null,[i("mo",{"data-mjx-texclass":"OP"},"∑"),i("mrow",{"data-mjx-texclass":"ORD"},[i("mi",null,"k"),i("mo",null,"="),i("mn",null,"0")]),i("mrow",{"data-mjx-texclass":"ORD"},[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mtext",null,"window"),i("mo",{stretchy:"false"},"["),i("mi",null,"k"),i("mo",{stretchy:"false"},"]"),i("mtext",null,"input"),i("mo",{stretchy:"false"},"["),i("mi",null,"m"),i("mo",null,"×"),i("mtext",null,"hop length"),i("mo",null,"+"),i("mi",null,"k"),i("mo",{stretchy:"false"},"]"),i("mi",null,"exp"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mo",null,"−"),i("mi",null,"j"),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"ω"),i("mi",null,"k")]),i("mtext",null,"n fft")]),i("mo",{stretchy:"false"},")")])],-1))])]),i("p",null,[s[307]||(s[307]=a("where ")),i("mjx-container",ni,[(h(),e("svg",li,s[299]||(s[299]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 
676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[300]||(s[300]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[308]||(s[308]=a(" is the window length, ")),i("mjx-container",ei,[(h(),e("svg",hi,s[301]||(s[301]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D714",d:"M495 384Q495 406 514 424T555 443Q574 443 589 425T604 364Q604 334 592 278T555 155T483 38T377 -11Q297 -11 267 66Q266 68 260 61Q201 -11 125 -11Q15 -11 15 139Q15 230 56 325T123 434Q135 441 147 436Q160 429 160 418Q160 406 140 379T94 306T62 208Q61 202 61 187Q61 124 85 100T143 76Q201 76 245 129L253 137V156Q258 297 317 297Q348 297 348 261Q348 243 338 213T318 158L308 135Q309 133 310 129T318 115T334 97T358 83T393 76Q456 76 501 148T546 274Q546 305 533 325T508 357T495 384Z",style:{"stroke-width":"3"}})])])],-1)]))),s[302]||(s[302]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"ω")])],-1))]),s[309]||(s[309]=a(" is the frequency ")),i("mjx-container",pi,[(h(),e("svg",ki,s[303]||(s[303]=[t('',1)]))),s[304]||(s[304]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mn",null,"0"),i("mo",null,"≤"),i("mi",null,"ω"),i("mo",null,"<"),i("mtext",null,"n fft")])],-1))]),s[310]||(s[310]=a(" and ")),i("mjx-container",di,[(h(),e("svg",ri,s[305]||(s[305]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D45A",d:"M21 287Q22 293 24 303T36 341T56 388T88 425T132 442T175 435T205 417T221 395T229 376L231 369Q231 367 232 367L243 378Q303 442 384 442Q401 442 415 440T441 433T460 423T475 411T485 398T493 385T497 373T500 364T502 357L510 367Q573 442 659 442Q713 442 746 415T780 336Q780 285 742 178T704 50Q705 36 709 31T724 26Q752 26 776 56T815 138Q818 149 821 151T837 153Q857 153 857 145Q857 144 853 130Q845 101 831 73T785 17T716 -10Q669 -10 648 17T627 73Q627 92 663 193T700 345Q700 404 656 404H651Q565 404 506 303L499 291L466 157Q433 26 428 16Q415 -11 385 -11Q372 -11 364 -4T353 8T350 18Q350 29 384 161L420 307Q423 322 423 345Q423 404 379 404H374Q288 404 229 303L222 291L189 157Q156 26 151 16Q138 -11 108 -11Q95 -11 87 -5T76 
7T74 17Q74 30 112 181Q151 335 151 342Q154 357 154 369Q154 405 129 405Q107 405 92 377T69 316T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),s[306]||(s[306]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"m")])],-1))]),s[311]||(s[311]=a(" is the index of the sliding window."))]),s[329]||(s[329]=i("p",null,[i("strong",null,"Arguments:")],-1)),s[330]||(s[330]=i("ul",null,[i("li",null,[i("code",null,"x"),a(": Input, must be either a 1D time sequence ("),i("code",null,"(L,)"),a(" shape) or a 2D batch of time sequence ("),i("code",null,"(L, B)"),a(" shape).")])],-1)),s[331]||(s[331]=i("p",null,[i("strong",null,"Keyword Arguments:")],-1)),i("ul",null,[s[327]||(s[327]=t("
  • n_fft::Int: Size of Fourier transform.

  • hop_length::Int: Distance between neighboring sliding window frames.

  • window: Optional window function to apply. Must be a 1D vector with 0 < length(window) ≤ n_fft. If window is shorter than n_fft, it is padded with zeros on both sides. If nothing (default), no window is applied.

  • ",3)),i("li",null,[i("p",null,[s[316]||(s[316]=i("code",null,"center::Bool",-1)),s[317]||(s[317]=a(": Whether to pad input on both sides so that ")),i("mjx-container",oi,[(h(),e("svg",gi,s[312]||(s[312]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D461",d:"M26 385Q19 392 19 395Q19 399 22 411T27 425Q29 430 36 430T87 431H140L159 511Q162 522 166 540T173 566T179 586T187 603T197 615T211 624T229 626Q247 625 254 615T261 596Q261 589 252 549T232 470L222 433Q222 431 272 431H323Q330 424 330 420Q330 398 317 385H210L174 240Q135 80 135 68Q135 26 162 26Q197 26 230 60T283 144Q285 150 288 151T303 153H307Q322 153 322 145Q322 142 319 133Q314 117 301 95T267 48T216 6T155 -11Q125 -11 98 4T59 56Q57 64 57 83V101L92 241Q127 382 128 383Q128 385 77 385H26Z",style:{"stroke-width":"3"}})])])],-1)]))),s[313]||(s[313]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t")])],-1))]),s[318]||(s[318]=a("-th frame is centered at time ")),i("mjx-container",Ei,[(h(),e("svg",yi,s[314]||(s[314]=[t('',1)]))),s[315]||(s[315]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t"),i("mo",null,"×"),i("mtext",null,"hop length")])],-1))]),s[319]||(s[319]=a(". Padding is done with ")),s[320]||(s[320]=i("code",null,"pad_reflect",-1)),s[321]||(s[321]=a(" function."))])]),i("li",null,[i("p",null,[s[324]||(s[324]=i("code",null,"normalized::Bool",-1)),s[325]||(s[325]=a(": Whether to return normalized STFT, i.e. multiplied with ")),i("mjx-container",Fi,[(h(),e("svg",ci,s[322]||(s[322]=[t('',1)]))),s[323]||(s[323]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("msup",null,[i("mtext",null,"n fft"),i("mrow",{"data-mjx-texclass":"ORD"},[i("mo",null,"−"),i("mn",null,"0.5")])])])],-1))]),s[326]||(s[326]=a("."))])])]),s[332]||(s[332]=i("p",null,[i("strong",null,"Returns:")],-1)),s[333]||(s[333]=i("p",null,[a("Complex array of shape "),i("code",null,"(n_fft, n_frames, B)"),a(", where "),i("code",null,"B"),a(" is the optional batch dimension.")],-1)),s[334]||(s[334]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/audio/stft.jl#L130-L170",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Ci,[i("summary",null,[s[335]||(s[335]=i("a",{id:"NNlib.hamming_window",href:"#NNlib.hamming_window"},[i("span",{class:"jlbinding"},"NNlib.hamming_window")],-1)),s[336]||(s[336]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[343]||(s[343]=t(`
    julia
    hamming_window(
         window_length::Int, ::Type{T} = Float32; periodic::Bool = true,
         α::T = T(0.54), β::T = T(0.46),
    -) where T <: Real

    Hamming window function (ref: Window function § Hann and Hamming windows - Wikipedia). Generalized version of hann_window.

    `,2)),i("p",null,[i("mjx-container",ui,[(h(),e("svg",mi,s[337]||(s[337]=[t('',1)]))),s[338]||(s[338]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"w"),i("mo",{stretchy:"false"},"["),i("mi",null,"n"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("mi",null,"α"),i("mo",null,"−"),i("mi",null,"β"),i("mi",null,"cos"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"n")]),i("mrow",null,[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mo",{stretchy:"false"},")")])],-1))])]),i("p",null,[s[341]||(s[341]=a("Where ")),i("mjx-container",bi,[(h(),e("svg",Qi,s[339]||(s[339]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 
637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[340]||(s[340]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[342]||(s[342]=a(" is the window length."))]),s[344]||(s[344]=t(`
    julia
    julia> lineplot(hamming_window(100); width=30, height=10)
    +) where T <: Real

    Hamming window function (ref: Window function § Hann and Hamming windows - Wikipedia). Generalized version of hann_window.

    `,2)),i("p",null,[i("mjx-container",ui,[(h(),e("svg",mi,s[337]||(s[337]=[t('',1)]))),s[338]||(s[338]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"w"),i("mo",{stretchy:"false"},"["),i("mi",null,"n"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("mi",null,"α"),i("mo",null,"−"),i("mi",null,"β"),i("mi",null,"cos"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"n")]),i("mrow",null,[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mo",{stretchy:"false"},")")])],-1))])]),i("p",null,[s[341]||(s[341]=a("Where ")),i("mjx-container",Ti,[(h(),e("svg",bi,s[339]||(s[339]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 
637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[340]||(s[340]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[342]||(s[342]=a(" is the window length."))]),s[344]||(s[344]=t(`
    julia
    julia> lineplot(hamming_window(100); width=30, height=10)
          ┌──────────────────────────────┐
        1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠚⠉⠉⠉⠢⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
          │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠎⠁⠀⠀⠀⠀⠀⠈⢢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
    @@ -584,9 +584,9 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
          ⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀100⠀

    Arguments:

    • window_length::Int: Size of the window.

    • ::Type{T}: Element type of the window.

    Keyword Arguments:

    • periodic::Bool: If true (default), returns a window to be used as a periodic function. If false, returns a symmetric window. The following always holds:
    julia
    julia> N = 256;
     
     julia> hamming_window(N; periodic=true) ≈ hamming_window(N + 1; periodic=false)[1:end - 1]
    -true
    • α::Real: Coefficient α in the equation above.

    • β::Real: Coefficient β in the equation above.

    Returns:

    Vector of length window_length and eltype T.

    source

    `,10))]),i("details",Ti,[i("summary",null,[s[345]||(s[345]=i("a",{id:"NNlib.maximum_dims",href:"#NNlib.maximum_dims"},[i("span",{class:"jlbinding"},"NNlib.maximum_dims")],-1)),s[346]||(s[346]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[347]||(s[347]=t('
    julia
    maximum_dims(dims)

    Given an array of CartesianIndex{N} or NTuple{N,Int}, returns a tuple containing the maximum of all the 1st entries, all the 2nd entries, and so on up to N.

    Given an array of integers, returns (maximum(dims),).

    (These arguments are what scatter understands.)

    source

    ',5))]),i("details",Bi,[i("summary",null,[s[348]||(s[348]=i("a",{id:"NNlib.BatchedTranspose",href:"#NNlib.BatchedTranspose"},[i("span",{class:"jlbinding"},"NNlib.BatchedTranspose")],-1)),s[349]||(s[349]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[350]||(s[350]=t(`
    julia
    batched_transpose(A::AbstractArray{T,3})
    +true
    • α::Real: Coefficient α in the equation above.

    • β::Real: Coefficient β in the equation above.

    Returns:

    Vector of length window_length and eltype T.

    source

    `,10))]),i("details",Qi,[i("summary",null,[s[345]||(s[345]=i("a",{id:"NNlib.maximum_dims",href:"#NNlib.maximum_dims"},[i("span",{class:"jlbinding"},"NNlib.maximum_dims")],-1)),s[346]||(s[346]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[347]||(s[347]=t('
    julia
    maximum_dims(dims)

    Given an array of CartesianIndex{N} or NTuple{N,Int}, returns a tuple containing the maximum of all the 1st entries, all the 2nd entries, and so on up to N.

    Given an array of integers, returns (maximum(dims),).

    (These arguments are what scatter understands.)

    source

    ',5))]),i("details",Bi,[i("summary",null,[s[348]||(s[348]=i("a",{id:"NNlib.BatchedTranspose",href:"#NNlib.BatchedTranspose"},[i("span",{class:"jlbinding"},"NNlib.BatchedTranspose")],-1)),s[349]||(s[349]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[350]||(s[350]=t(`
    julia
    batched_transpose(A::AbstractArray{T,3})
     batched_adjoint(A)

    Equivalent to applying transpose or adjoint to each matrix A[:,:,k].

    These exist to control how batched_mul behaves, as it operates on such matrix slices of an array with ndims(A)==3.

    PermutedDimsArray(A, (2,1,3)) is equivalent to batched_transpose(A), and is also understood by batched_mul (and more widely supported elsewhere).

    BatchedTranspose{T, S} <: AbstractBatchedMatrix{T, 3}
    -BatchedAdjoint{T, S}

    Lazy wrappers analogous to Transpose and Adjoint, returned by batched_transpose etc.

    source

    `,7))]),i("details",fi,[i("summary",null,[s[351]||(s[351]=i("a",{id:"NNlib._rotate_coordinates",href:"#NNlib._rotate_coordinates"},[i("span",{class:"jlbinding"},"NNlib._rotate_coordinates")],-1)),s[352]||(s[352]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[353]||(s[353]=t('
    julia
    _rotate_coordinates(sinθ, cosθ, i, j, rotation_center, round_or_floor)

    This rotates the coordinates and applies either round (nearest neighbour) or floor (for :bilinear interpolation).

    source

    ',3))]),i("details",vi,[i("summary",null,[s[354]||(s[354]=i("a",{id:"NNlib.melscale_filterbanks",href:"#NNlib.melscale_filterbanks"},[i("span",{class:"jlbinding"},"NNlib.melscale_filterbanks")],-1)),s[355]||(s[355]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[356]||(s[356]=t(`
    julia
    melscale_filterbanks(;
    +BatchedAdjoint{T, S}

    Lazy wrappers analogous to Transpose and Adjoint, returned by batched_transpose etc.

    source

    `,7))]),i("details",_i,[i("summary",null,[s[351]||(s[351]=i("a",{id:"NNlib._rotate_coordinates",href:"#NNlib._rotate_coordinates"},[i("span",{class:"jlbinding"},"NNlib._rotate_coordinates")],-1)),s[352]||(s[352]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[353]||(s[353]=t('
    julia
    _rotate_coordinates(sinθ, cosθ, i, j, rotation_center, round_or_floor)

    This rotates the coordinates and applies either round (nearest neighbour) or floor (for :bilinear interpolation).

    source

    ',3))]),i("details",fi,[i("summary",null,[s[354]||(s[354]=i("a",{id:"NNlib.melscale_filterbanks",href:"#NNlib.melscale_filterbanks"},[i("span",{class:"jlbinding"},"NNlib.melscale_filterbanks")],-1)),s[355]||(s[355]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[356]||(s[356]=t(`
    julia
    melscale_filterbanks(;
         n_freqs::Int, n_mels::Int, sample_rate::Int,
         fmin::Float32 = 0f0, fmax::Float32 = Float32(sample_rate ÷ 2))

    Create triangular Mel scale filter banks (ref: Mel scale - Wikipedia). Each column is a filterbank that highlights its own frequency.

    Arguments:

    • n_freqs::Int: Number of frequencies to highlight.

    • n_mels::Int: Number of mel filterbanks.

    • sample_rate::Int: Sample rate of the audio waveform.

    • fmin::Float32: Minimum frequency in Hz.

    • fmax::Float32: Maximum frequency in Hz.

    Returns:

    Filterbank matrix of shape (n_freqs, n_mels) where each column is a filterbank.

    julia
    julia> n_mels = 8;
     
    @@ -616,4 +616,4 @@ import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framewo
          │⡇⡇⢸⠇⢸⡇⠀⣿⠀⠀⢣⡇⠀⠀⠸⣄⠇⠀⠀⠀⠸⡀⡇⠀⠀⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡄│
        0 │⣇⣇⣸⣀⣸⣀⣀⣟⣀⣀⣸⣃⣀⣀⣀⣿⣀⣀⣀⣀⣀⣿⣀⣀⣀⣀⣀⣀⣈⣇⣀⣀⣀⣀⣀⣀⣀⣀⣀⣱│
          └────────────────────────────────────────┘
    -     ⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀200⠀

    source

    `,8))]),i("details",Ni,[i("summary",null,[s[357]||(s[357]=i("a",{id:"NNlib.logaddexp",href:"#NNlib.logaddexp"},[i("span",{class:"jlbinding"},"NNlib.logaddexp")],-1)),s[358]||(s[358]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[359]||(s[359]=t('
    julia
    logaddexp(a, b)

    Adds log-space a and b such that the result equals log(exp(a)+exp(b))

    source

    ',3))]),i("details",xi,[i("summary",null,[s[360]||(s[360]=i("a",{id:"NNlib.depthwiseconv_direct!",href:"#NNlib.depthwiseconv_direct!"},[i("span",{class:"jlbinding"},"NNlib.depthwiseconv_direct!")],-1)),s[361]||(s[361]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[362]||(s[362]=t('
    julia
    depthwiseconv_direct!(y, x, w, cdims; alpha=1, beta=0)

Direct depthwise convolution implementation; used for debugging, tests, and mixing/matching of strange datatypes within a single convolution. Uses a naive nested for-loop implementation and does not attempt to optimize performance. Rather, this implementation is intended to be maximally understandable and debuggable, to aid in testing other, more performant implementations. We also explicitly support mixing and matching of strange datatypes, so that if the user really wants to convolve an image of UInt8's with a Float16 kernel, storing the result in a Float32 output, there is at least a function call for that madness.

One subtlety about depthwise convolutions: the shape of a depthwise convolutional kernel is (spatial_dims..., C_mult, C_in), so the axis that must match the number of channels in x is the last, not the second-to-last, as in a normal dense convolution.

    See the docstring for conv_direct!() for more on the optional parameters.

    source

    ',5))]),i("details",ji,[i("summary",null,[s[363]||(s[363]=i("a",{id:"NNlib.im2col!",href:"#NNlib.im2col!"},[i("span",{class:"jlbinding"},"NNlib.im2col!")],-1)),s[364]||(s[364]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[365]||(s[365]=t('
    julia
    im2col!(col, x, cdims)

Converts a 3d image x into a matrix col for use with GEMM-based convolution. Patches of x of size (kernel_w, kernel_h, kernel_d, C_in) will be extracted and laid out along the rows of col, one for each output pixel. This routine is used by all im2col-based convolutions, just with extra singleton dimensions added in the case of 2d or 1d images.

    source

    ',3))]),i("details",Ai,[i("summary",null,[s[366]||(s[366]=i("a",{id:"NNlib.predilate",href:"#NNlib.predilate"},[i("span",{class:"jlbinding"},"NNlib.predilate")],-1)),s[367]||(s[367]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[368]||(s[368]=t('
    julia
    predilate(x, dilation::Tuple)

    Places elements of x within a lattice of zeros, used in expressing a transposed convolution in terms of normal convolution. Note that while we call this "predilation" for aesthetic reasons, you are typically passing a "stride" value into here. Yes, transposed convolution is confusing.

    source

    ',3))]),i("details",wi,[i("summary",null,[s[369]||(s[369]=i("a",{id:"NNlib.safe_div",href:"#NNlib.safe_div"},[i("span",{class:"jlbinding"},"NNlib.safe_div")],-1)),s[370]||(s[370]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[371]||(s[371]=t('
    julia
    safe_div(x, y)

    Returns x/y unless y==0, in which case it just returns x. (Used internally by scatter.)

    source

',3))])])}const Ii=p(d,[["render",Di]]);export{Zi as __pageData,Ii as default};
+     ⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀200⠀

    source

    `,8))]),i("details",vi,[i("summary",null,[s[357]||(s[357]=i("a",{id:"NNlib.logaddexp",href:"#NNlib.logaddexp"},[i("span",{class:"jlbinding"},"NNlib.logaddexp")],-1)),s[358]||(s[358]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[359]||(s[359]=t('
    julia
    logaddexp(a, b)

    Adds log-space a and b such that the result equals log(exp(a)+exp(b))
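
As an illustrative sketch (not NNlib's actual implementation), the numerically stable form of this identity can be checked against the naive formula:

```julia
# Sketch: a numerically stable logaddexp, equivalent to log(exp(a) + exp(b)).
logaddexp_sketch(a, b) = max(a, b) + log1p(exp(-abs(a - b)))

a, b = log(2.0), log(3.0)
@assert logaddexp_sketch(a, b) ≈ log(5.0)   # log(2 + 3)
# Unlike the naive formula, this does not overflow for large inputs:
@assert logaddexp_sketch(1000.0, 1000.0) ≈ 1000.0 + log(2.0)
```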

    source

    ',3))]),i("details",Ni,[i("summary",null,[s[360]||(s[360]=i("a",{id:"NNlib.depthwiseconv_direct!",href:"#NNlib.depthwiseconv_direct!"},[i("span",{class:"jlbinding"},"NNlib.depthwiseconv_direct!")],-1)),s[361]||(s[361]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[362]||(s[362]=t('
    julia
    depthwiseconv_direct!(y, x, w, cdims; alpha=1, beta=0)

Direct depthwise convolution implementation; used for debugging, tests, and mixing/matching of strange datatypes within a single convolution. Uses a naive nested for-loop implementation and does not attempt to optimize performance. Rather, this implementation is intended to be maximally understandable and debuggable, to aid in testing other, more performant implementations. We also explicitly support mixing and matching of strange datatypes, so that if the user really wants to convolve an image of UInt8's with a Float16 kernel, storing the result in a Float32 output, there is at least a function call for that madness.

One subtlety about depthwise convolutions: the shape of a depthwise convolutional kernel is (spatial_dims..., C_mult, C_in), so the axis that must match the number of channels in x is the last, not the second-to-last, as in a normal dense convolution.

    See the docstring for conv_direct!() for more on the optional parameters.

    source

    ',5))]),i("details",Ai,[i("summary",null,[s[363]||(s[363]=i("a",{id:"NNlib.im2col!",href:"#NNlib.im2col!"},[i("span",{class:"jlbinding"},"NNlib.im2col!")],-1)),s[364]||(s[364]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[365]||(s[365]=t('
    julia
    im2col!(col, x, cdims)

Converts a 3d image x into a matrix col for use with GEMM-based convolution. Patches of x of size (kernel_w, kernel_h, kernel_d, C_in) will be extracted and laid out along the rows of col, one for each output pixel. This routine is used by all im2col-based convolutions, just with extra singleton dimensions added in the case of 2d or 1d images.
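
To illustrate the layout, here is a hypothetical pure-Julia sketch (single channel, 2D, stride 1, no padding; im2col_sketch is not part of NNlib):

```julia
# Each row of `col` holds one flattened patch, one row per output pixel.
function im2col_sketch(x::AbstractMatrix, kh::Int, kw::Int)
    oh, ow = size(x, 1) - kh + 1, size(x, 2) - kw + 1
    col = zeros(eltype(x), oh * ow, kh * kw)
    for j in 1:ow, i in 1:oh
        col[(j - 1) * oh + i, :] = vec(x[i:i+kh-1, j:j+kw-1])
    end
    return col
end

x = [1 2 3; 4 5 6; 7 8 9]
@assert size(im2col_sketch(x, 2, 2)) == (4, 4)
@assert im2col_sketch(x, 2, 2)[1, :] == [1, 4, 2, 5]  # first 2×2 patch, column-major
```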

    source

    ',3))]),i("details",xi,[i("summary",null,[s[366]||(s[366]=i("a",{id:"NNlib.predilate",href:"#NNlib.predilate"},[i("span",{class:"jlbinding"},"NNlib.predilate")],-1)),s[367]||(s[367]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[368]||(s[368]=t('
    julia
    predilate(x, dilation::Tuple)

    Places elements of x within a lattice of zeros, used in expressing a transposed convolution in terms of normal convolution. Note that while we call this "predilation" for aesthetic reasons, you are typically passing a "stride" value into here. Yes, transposed convolution is confusing.
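
As a rough sketch of the idea (a hypothetical helper, not NNlib's predilate), for a 2D array:

```julia
# Place x's elements on a zero lattice, spaced by `dilation` along each axis.
function predilate_sketch(x::AbstractMatrix, dilation::NTuple{2,Int})
    h, w = size(x)
    y = zeros(eltype(x), (h - 1) * dilation[1] + 1, (w - 1) * dilation[2] + 1)
    y[1:dilation[1]:end, 1:dilation[2]:end] .= x
    return y
end

@assert predilate_sketch([1 2; 3 4], (2, 2)) == [1 0 2; 0 0 0; 3 0 4]
```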

    source

    ',3))]),i("details",ji,[i("summary",null,[s[369]||(s[369]=i("a",{id:"NNlib.safe_div",href:"#NNlib.safe_div"},[i("span",{class:"jlbinding"},"NNlib.safe_div")],-1)),s[370]||(s[370]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[371]||(s[371]=t('
    julia
    safe_div(x, y)

    Returns x/y unless y==0, in which case it just returns x. (Used internally by scatter.)
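
The described behavior amounts to the following (a sketch, not the internal definition):

```julia
# x/y, except that a zero divisor passes x through unchanged.
safe_div_sketch(x, y) = iszero(y) ? x : x / y

@assert safe_div_sketch(6.0, 2.0) == 3.0
@assert safe_div_sketch(6.0, 0.0) == 6.0   # divisor is zero, return x
```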

    source

',3))])])}const Pi=p(d,[["render",Di]]);export{Ii as __pageData,Pi as default};
diff --git a/dev/assets/api_NN_Primitives_NNlib.md.-JRexgX5.lean.js b/dev/assets/api_NN_Primitives_NNlib.md.-JRexgX5.lean.js
new file mode 100644
index 0000000000..6340e9f5ac
--- /dev/null
+++ b/dev/assets/api_NN_Primitives_NNlib.md.-JRexgX5.lean.js
@@ -0,0 +1 @@
+import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framework.BetCMmtc.js";const Ii=JSON.parse('{"title":"NNlib","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/NNlib.md","filePath":"api/NN_Primitives/NNlib.md","lastUpdated":null}'),d={name:"api/NN_Primitives/NNlib.md"},r={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},T={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},Q={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},N={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},V={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},H={class:"jldocstring custom-block"},S={class:"jldocstring custom-block"},M={class:"jldocstring custom-block"},I={class:"jldocstring custom-block"},P={class:"jldocstring custom-block"},z={class:"jldocstring custom-block"},R={class:"jldocstring custom-block"},Z={class:"jldocstring custom-block"},O={class:"jldocstring custom-block"},q={class:"jldocstring custom-block"},W={class:"jldocstring 
custom-block"},G={class:"jldocstring custom-block"},U={class:"jldocstring custom-block"},J={class:"jldocstring custom-block"},K={class:"jldocstring custom-block"},X={class:"jldocstring custom-block"},$={class:"jldocstring custom-block"},Y={class:"jldocstring custom-block"},ss={class:"jldocstring custom-block"},is={class:"jldocstring custom-block"},as={class:"jldocstring custom-block"},ts={class:"jldocstring custom-block"},ns={class:"jldocstring custom-block"},ls={class:"jldocstring custom-block"},es={class:"jldocstring custom-block"},hs={class:"jldocstring custom-block"},ps={class:"jldocstring custom-block"},ks={class:"jldocstring custom-block"},ds={class:"jldocstring custom-block"},rs={class:"jldocstring custom-block"},os={class:"jldocstring custom-block"},gs={class:"jldocstring custom-block"},Es={class:"jldocstring custom-block"},ys={class:"jldocstring custom-block"},Fs={class:"jldocstring custom-block"},cs={class:"jldocstring custom-block"},Cs={class:"jldocstring custom-block"},us={class:"jldocstring custom-block"},ms={class:"jldocstring custom-block"},Ts={class:"jldocstring custom-block"},bs={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Qs={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.912ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.451ex",height:"2.869ex",role:"img",focusable:"false",viewBox:"0 -864.9 10365.1 1267.9","aria-hidden":"true"},Bs={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},_s={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 683","aria-hidden":"true"},fs={class:"jldocstring custom-block"},vs={class:"jldocstring custom-block"},Ns={class:"jldocstring 
custom-block"},As={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},xs={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.817ex",height:"1.441ex",role:"img",focusable:"false",viewBox:"0 -626 361 637","aria-hidden":"true"},js={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Ds={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.466ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.956ex",height:"2.036ex",role:"img",focusable:"false",viewBox:"0 -694 6168.4 900","aria-hidden":"true"},ws={class:"jldocstring custom-block"},Vs={class:"jldocstring custom-block"},Ls={class:"jldocstring custom-block"},Hs={class:"jldocstring custom-block"},Ss={class:"jldocstring custom-block"},Ms={class:"jldocstring custom-block"},Is={class:"jldocstring custom-block"},Ps={class:"jldocstring custom-block"},zs={class:"jldocstring custom-block"},Rs={class:"jldocstring custom-block"},Zs={class:"jldocstring custom-block"},Os={class:"jldocstring custom-block"},qs={class:"jldocstring custom-block"},Ws={class:"jldocstring custom-block"},Gs={class:"jldocstring custom-block"},Us={class:"jldocstring custom-block"},Js={class:"jldocstring custom-block"},Ks={class:"jldocstring custom-block"},Xs={class:"jldocstring custom-block"},$s={class:"jldocstring custom-block"},Ys={class:"jldocstring custom-block"},si={class:"jldocstring custom-block"},ii={class:"jldocstring custom-block"},ai={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ti={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.797ex"},xmlns:"http://www.w3.org/2000/svg",width:"65.233ex",height:"2.969ex",role:"img",focusable:"false",viewBox:"0 -960 28832.9 
1312.1","aria-hidden":"true"},ni={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},li={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 683","aria-hidden":"true"},ei={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},hi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.407ex",height:"1.027ex",role:"img",focusable:"false",viewBox:"0 -443 622 454","aria-hidden":"true"},pi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ki={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.312ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.661ex",height:"1.907ex",role:"img",focusable:"false",viewBox:"0 -705 5596.1 843","aria-hidden":"true"},di={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ri={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.986ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 878 453","aria-hidden":"true"},oi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},gi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.817ex",height:"1.441ex",role:"img",focusable:"false",viewBox:"0 -626 361 637","aria-hidden":"true"},Ei={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},yi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.466ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.956ex",height:"2.036ex",role:"img",focusable:"false",viewBox:"0 -694 6168.4 
900","aria-hidden":"true"},Fi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ci={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.023ex"},xmlns:"http://www.w3.org/2000/svg",width:"7.565ex",height:"2.066ex",role:"img",focusable:"false",viewBox:"0 -903 3343.8 913","aria-hidden":"true"},Ci={class:"jldocstring custom-block"},ui={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},mi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.912ex"},xmlns:"http://www.w3.org/2000/svg",width:"22.372ex",height:"2.869ex",role:"img",focusable:"false",viewBox:"0 -864.9 9888.3 1267.9","aria-hidden":"true"},Ti={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},bi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 683","aria-hidden":"true"},Qi={class:"jldocstring custom-block"},Bi={class:"jldocstring custom-block"},_i={class:"jldocstring custom-block"},fi={class:"jldocstring custom-block"},vi={class:"jldocstring custom-block"},Ni={class:"jldocstring custom-block"},Ai={class:"jldocstring custom-block"},xi={class:"jldocstring custom-block"},ji={class:"jldocstring custom-block"};function Di(wi,s,Vi,Li,Hi,Si){const n=k("Badge");return h(),e("div",null,[s[372]||(s[372]=t("",4)),i("details",r,[i("summary",null,[s[0]||(s[0]=i("a",{id:"NNlib.dot_product_attention",href:"#NNlib.dot_product_attention"},[i("span",{class:"jlbinding"},"NNlib.dot_product_attention")],-1)),s[1]||(s[1]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[2]||(s[2]=t("",10))]),i("details",o,[i("summary",null,[s[3]||(s[3]=i("a",{id:"NNlib.dot_product_attention_scores",href:"#NNlib.dot_product_attention_scores"},[i("span",{class:"jlbinding"},"NNlib.dot_product_attention_scores")],-1)),s[4]||(s[4]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=t("",4))]),i("details",g,[i("summary",null,[s[6]||(s[6]=i("a",{id:"NNlib.make_causal_mask",href:"#NNlib.make_causal_mask"},[i("span",{class:"jlbinding"},"NNlib.make_causal_mask")],-1)),s[7]||(s[7]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=t("",4))]),s[373]||(s[373]=i("h2",{id:"softmax",tabindex:"-1"},[a("Softmax "),i("a",{class:"header-anchor",href:"#softmax","aria-label":'Permalink to "Softmax"'},"​")],-1)),i("details",E,[i("summary",null,[s[9]||(s[9]=i("a",{id:"NNlib.softmax",href:"#NNlib.softmax"},[i("span",{class:"jlbinding"},"NNlib.softmax")],-1)),s[10]||(s[10]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=t("",11))]),i("details",y,[i("summary",null,[s[12]||(s[12]=i("a",{id:"NNlib.logsoftmax",href:"#NNlib.logsoftmax"},[i("span",{class:"jlbinding"},"NNlib.logsoftmax")],-1)),s[13]||(s[13]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[14]||(s[14]=t("",6))]),s[374]||(s[374]=i("h2",{id:"pooling",tabindex:"-1"},[a("Pooling "),i("a",{class:"header-anchor",href:"#pooling","aria-label":'Permalink to "Pooling"'},"​")],-1)),i("details",F,[i("summary",null,[s[15]||(s[15]=i("a",{id:"NNlib.PoolDims",href:"#NNlib.PoolDims"},[i("span",{class:"jlbinding"},"NNlib.PoolDims")],-1)),s[16]||(s[16]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=t("",3))]),i("details",c,[i("summary",null,[s[18]||(s[18]=i("a",{id:"NNlib.maxpool",href:"#NNlib.maxpool"},[i("span",{class:"jlbinding"},"NNlib.maxpool")],-1)),s[19]||(s[19]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[20]||(s[20]=t("",5))]),i("details",C,[i("summary",null,[s[21]||(s[21]=i("a",{id:"NNlib.meanpool",href:"#NNlib.meanpool"},[i("span",{class:"jlbinding"},"NNlib.meanpool")],-1)),s[22]||(s[22]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[23]||(s[23]=t("",5))]),i("details",u,[i("summary",null,[s[24]||(s[24]=i("a",{id:"NNlib.lpnormpool",href:"#NNlib.lpnormpool"},[i("span",{class:"jlbinding"},"NNlib.lpnormpool")],-1)),s[25]||(s[25]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[26]||(s[26]=t("",7))]),s[375]||(s[375]=i("h2",{id:"padding",tabindex:"-1"},[a("Padding "),i("a",{class:"header-anchor",href:"#padding","aria-label":'Permalink to "Padding"'},"​")],-1)),i("details",m,[i("summary",null,[s[27]||(s[27]=i("a",{id:"NNlib.pad_reflect",href:"#NNlib.pad_reflect"},[i("span",{class:"jlbinding"},"NNlib.pad_reflect")],-1)),s[28]||(s[28]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[29]||(s[29]=t("",7))]),i("details",T,[i("summary",null,[s[30]||(s[30]=i("a",{id:"NNlib.pad_symmetric",href:"#NNlib.pad_symmetric"},[i("span",{class:"jlbinding"},"NNlib.pad_symmetric")],-1)),s[31]||(s[31]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[32]||(s[32]=t("",7))]),i("details",b,[i("summary",null,[s[33]||(s[33]=i("a",{id:"NNlib.pad_circular",href:"#NNlib.pad_circular"},[i("span",{class:"jlbinding"},"NNlib.pad_circular")],-1)),s[34]||(s[34]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[35]||(s[35]=t("",8))]),i("details",Q,[i("summary",null,[s[36]||(s[36]=i("a",{id:"NNlib.pad_repeat",href:"#NNlib.pad_repeat"},[i("span",{class:"jlbinding"},"NNlib.pad_repeat")],-1)),s[37]||(s[37]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[38]||(s[38]=t("",7))]),i("details",B,[i("summary",null,[s[39]||(s[39]=i("a",{id:"NNlib.pad_constant",href:"#NNlib.pad_constant"},[i("span",{class:"jlbinding"},"NNlib.pad_constant")],-1)),s[40]||(s[40]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[41]||(s[41]=t("",7))]),i("details",_,[i("summary",null,[s[42]||(s[42]=i("a",{id:"NNlib.pad_zeros",href:"#NNlib.pad_zeros"},[i("span",{class:"jlbinding"},"NNlib.pad_zeros")],-1)),s[43]||(s[43]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[44]||(s[44]=t("",3))]),s[376]||(s[376]=i("h2",{id:"convolution",tabindex:"-1"},[a("Convolution "),i("a",{class:"header-anchor",href:"#convolution","aria-label":'Permalink to "Convolution"'},"​")],-1)),i("details",f,[i("summary",null,[s[45]||(s[45]=i("a",{id:"NNlib.conv",href:"#NNlib.conv"},[i("span",{class:"jlbinding"},"NNlib.conv")],-1)),s[46]||(s[46]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[47]||(s[47]=t("",3))]),i("details",v,[i("summary",null,[s[48]||(s[48]=i("a",{id:"NNlib.ConvDims",href:"#NNlib.ConvDims"},[i("span",{class:"jlbinding"},"NNlib.ConvDims")],-1)),s[49]||(s[49]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[50]||(s[50]=t("",3))]),i("details",N,[i("summary",null,[s[51]||(s[51]=i("a",{id:"NNlib.depthwiseconv",href:"#NNlib.depthwiseconv"},[i("span",{class:"jlbinding"},"NNlib.depthwiseconv")],-1)),s[52]||(s[52]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[53]||(s[53]=t("",3))]),i("details",A,[i("summary",null,[s[54]||(s[54]=i("a",{id:"NNlib.DepthwiseConvDims",href:"#NNlib.DepthwiseConvDims"},[i("span",{class:"jlbinding"},"NNlib.DepthwiseConvDims")],-1)),s[55]||(s[55]=a()),l(n,{type:"info",class:"jlObjectType 
jlType",text:"Type"})]),s[56]||(s[56]=t("",3))]),i("details",x,[i("summary",null,[s[57]||(s[57]=i("a",{id:"NNlib.DenseConvDims",href:"#NNlib.DenseConvDims"},[i("span",{class:"jlbinding"},"NNlib.DenseConvDims")],-1)),s[58]||(s[58]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[59]||(s[59]=t("",3))]),i("details",j,[i("summary",null,[s[60]||(s[60]=i("a",{id:"NNlib.unfold",href:"#NNlib.unfold"},[i("span",{class:"jlbinding"},"NNlib.unfold")],-1)),s[61]||(s[61]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[62]||(s[62]=t("",7))]),i("details",D,[i("summary",null,[s[63]||(s[63]=i("a",{id:"NNlib.fold",href:"#NNlib.fold"},[i("span",{class:"jlbinding"},"NNlib.fold")],-1)),s[64]||(s[64]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[65]||(s[65]=t("",7))]),s[377]||(s[377]=i("h2",{id:"upsampling",tabindex:"-1"},[a("Upsampling "),i("a",{class:"header-anchor",href:"#upsampling","aria-label":'Permalink to "Upsampling"'},"​")],-1)),i("details",w,[i("summary",null,[s[66]||(s[66]=i("a",{id:"NNlib.upsample_nearest",href:"#NNlib.upsample_nearest"},[i("span",{class:"jlbinding"},"NNlib.upsample_nearest")],-1)),s[67]||(s[67]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[68]||(s[68]=t("",7))]),i("details",V,[i("summary",null,[s[69]||(s[69]=i("a",{id:"NNlib.∇upsample_nearest",href:"#NNlib.∇upsample_nearest"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_nearest")],-1)),s[70]||(s[70]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[71]||(s[71]=t("",6))]),i("details",L,[i("summary",null,[s[72]||(s[72]=i("a",{id:"NNlib.upsample_linear",href:"#NNlib.upsample_linear"},[i("span",{class:"jlbinding"},"NNlib.upsample_linear")],-1)),s[73]||(s[73]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[74]||(s[74]=t("",4))]),i("details",H,[i("summary",null,[s[75]||(s[75]=i("a",{id:"NNlib.∇upsample_linear",href:"#NNlib.∇upsample_linear"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_linear")],-1)),s[76]||(s[76]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[77]||(s[77]=t("",6))]),i("details",S,[i("summary",null,[s[78]||(s[78]=i("a",{id:"NNlib.upsample_bilinear",href:"#NNlib.upsample_bilinear"},[i("span",{class:"jlbinding"},"NNlib.upsample_bilinear")],-1)),s[79]||(s[79]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[80]||(s[80]=t("",6))]),i("details",M,[i("summary",null,[s[81]||(s[81]=i("a",{id:"NNlib.∇upsample_bilinear",href:"#NNlib.∇upsample_bilinear"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_bilinear")],-1)),s[82]||(s[82]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[83]||(s[83]=t("",6))]),i("details",I,[i("summary",null,[s[84]||(s[84]=i("a",{id:"NNlib.upsample_trilinear",href:"#NNlib.upsample_trilinear"},[i("span",{class:"jlbinding"},"NNlib.upsample_trilinear")],-1)),s[85]||(s[85]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[86]||(s[86]=t("",6))]),i("details",P,[i("summary",null,[s[87]||(s[87]=i("a",{id:"NNlib.∇upsample_trilinear",href:"#NNlib.∇upsample_trilinear"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_trilinear")],-1)),s[88]||(s[88]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[89]||(s[89]=t("",6))]),i("details",z,[i("summary",null,[s[90]||(s[90]=i("a",{id:"NNlib.pixel_shuffle",href:"#NNlib.pixel_shuffle"},[i("span",{class:"jlbinding"},"NNlib.pixel_shuffle")],-1)),s[91]||(s[91]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[92]||(s[92]=t("",7))]),s[378]||(s[378]=i("h2",{id:"rotation",tabindex:"-1"},[a("Rotation "),i("a",{class:"header-anchor",href:"#rotation","aria-label":'Permalink to 
"Rotation"'},"​")],-1)),s[379]||(s[379]=i("p",null,"Rotate images in the first two dimensions of an array.",-1)),i("details",R,[i("summary",null,[s[93]||(s[93]=i("a",{id:"NNlib.imrotate",href:"#NNlib.imrotate"},[i("span",{class:"jlbinding"},"NNlib.imrotate")],-1)),s[94]||(s[94]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[95]||(s[95]=t("",9))]),i("details",Z,[i("summary",null,[s[96]||(s[96]=i("a",{id:"NNlib.∇imrotate",href:"#NNlib.∇imrotate"},[i("span",{class:"jlbinding"},"NNlib.∇imrotate")],-1)),s[97]||(s[97]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[98]||(s[98]=t("",5))]),s[380]||(s[380]=i("h2",{id:"Batched-Operations",tabindex:"-1"},[a("Batched Operations "),i("a",{class:"header-anchor",href:"#Batched-Operations","aria-label":'Permalink to "Batched Operations {#Batched-Operations}"'},"​")],-1)),i("details",O,[i("summary",null,[s[99]||(s[99]=i("a",{id:"NNlib.batched_mul",href:"#NNlib.batched_mul"},[i("span",{class:"jlbinding"},"NNlib.batched_mul")],-1)),s[100]||(s[100]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[101]||(s[101]=t("",15))]),i("details",q,[i("summary",null,[s[102]||(s[102]=i("a",{id:"NNlib.batched_mul!",href:"#NNlib.batched_mul!"},[i("span",{class:"jlbinding"},"NNlib.batched_mul!")],-1)),s[103]||(s[103]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[104]||(s[104]=t("",5))]),i("details",W,[i("summary",null,[s[105]||(s[105]=i("a",{id:"NNlib.batched_adjoint",href:"#NNlib.batched_adjoint"},[i("span",{class:"jlbinding"},"NNlib.batched_adjoint")],-1)),s[106]||(s[106]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[107]||(s[107]=t("",7))]),i("details",G,[i("summary",null,[s[108]||(s[108]=i("a",{id:"NNlib.batched_transpose",href:"#NNlib.batched_transpose"},[i("span",{class:"jlbinding"},"NNlib.batched_transpose")],-1)),s[109]||(s[109]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[110]||(s[110]=t("",7))]),i("details",U,[i("summary",null,[s[111]||(s[111]=i("a",{id:"NNlib.batched_vec",href:"#NNlib.batched_vec"},[i("span",{class:"jlbinding"},"NNlib.batched_vec")],-1)),s[112]||(s[112]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[113]||(s[113]=t("",5))]),s[381]||(s[381]=i("h2",{id:"Gather-and-Scatter",tabindex:"-1"},[a("Gather and Scatter "),i("a",{class:"header-anchor",href:"#Gather-and-Scatter","aria-label":'Permalink to "Gather and Scatter {#Gather-and-Scatter}"'},"​")],-1)),i("details",J,[i("summary",null,[s[114]||(s[114]=i("a",{id:"NNlib.gather",href:"#NNlib.gather"},[i("span",{class:"jlbinding"},"NNlib.gather")],-1)),s[115]||(s[115]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[116]||(s[116]=t("",16))]),i("details",K,[i("summary",null,[s[117]||(s[117]=i("a",{id:"NNlib.gather!",href:"#NNlib.gather!"},[i("span",{class:"jlbinding"},"NNlib.gather!")],-1)),s[118]||(s[118]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[119]||(s[119]=t("",9))]),i("details",X,[i("summary",null,[s[120]||(s[120]=i("a",{id:"NNlib.scatter",href:"#NNlib.scatter"},[i("span",{class:"jlbinding"},"NNlib.scatter")],-1)),s[121]||(s[121]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[122]||(s[122]=t("",7))]),i("details",$,[i("summary",null,[s[123]||(s[123]=i("a",{id:"NNlib.scatter!",href:"#NNlib.scatter!"},[i("span",{class:"jlbinding"},"NNlib.scatter!")],-1)),s[124]||(s[124]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[125]||(s[125]=t("",9))]),s[382]||(s[382]=i("h2",{id:"sampling",tabindex:"-1"},[a("Sampling "),i("a",{class:"header-anchor",href:"#sampling","aria-label":'Permalink to 
"Sampling"'},"​")],-1)),i("details",Y,[i("summary",null,[s[126]||(s[126]=i("a",{id:"NNlib.grid_sample",href:"#NNlib.grid_sample"},[i("span",{class:"jlbinding"},"NNlib.grid_sample")],-1)),s[127]||(s[127]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[128]||(s[128]=t("",11))]),i("details",ss,[i("summary",null,[s[129]||(s[129]=i("a",{id:"NNlib.∇grid_sample",href:"#NNlib.∇grid_sample"},[i("span",{class:"jlbinding"},"NNlib.∇grid_sample")],-1)),s[130]||(s[130]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[131]||(s[131]=t("",6))]),s[383]||(s[383]=i("h2",{id:"losses",tabindex:"-1"},[a("Losses "),i("a",{class:"header-anchor",href:"#losses","aria-label":'Permalink to "Losses"'},"​")],-1)),i("details",is,[i("summary",null,[s[132]||(s[132]=i("a",{id:"NNlib.ctc_loss",href:"#NNlib.ctc_loss"},[i("span",{class:"jlbinding"},"NNlib.ctc_loss")],-1)),s[133]||(s[133]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[134]||(s[134]=t("",3))]),s[384]||(s[384]=i("h2",{id:"miscellaneous",tabindex:"-1"},[a("Miscellaneous "),i("a",{class:"header-anchor",href:"#miscellaneous","aria-label":'Permalink to "Miscellaneous"'},"​")],-1)),i("details",as,[i("summary",null,[s[135]||(s[135]=i("a",{id:"NNlib.logsumexp",href:"#NNlib.logsumexp"},[i("span",{class:"jlbinding"},"NNlib.logsumexp")],-1)),s[136]||(s[136]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[137]||(s[137]=t("",4))]),i("details",ts,[i("summary",null,[s[138]||(s[138]=i("a",{id:"NNlib.glu",href:"#NNlib.glu"},[i("span",{class:"jlbinding"},"NNlib.glu")],-1)),s[139]||(s[139]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[140]||(s[140]=t("",4))]),s[385]||(s[385]=i("div",{class:"tip custom-block"},[i("p",{class:"custom-block-title"},"Tip"),i("p",null,[i("code",null,"within_gradient"),a(" function currently doesn't work for Enzyme. 
Prefer to use "),i("code",null,"LuxLib.Utils.within_autodiff"),a(" if needed. Though pay heed that this function is not part of the public API.")])],-1)),i("details",ns,[i("summary",null,[s[141]||(s[141]=i("a",{id:"NNlib.within_gradient",href:"#NNlib.within_gradient"},[i("span",{class:"jlbinding"},"NNlib.within_gradient")],-1)),s[142]||(s[142]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[143]||(s[143]=t("",7))]),s[386]||(s[386]=i("div",{class:"tip custom-block"},[i("p",{class:"custom-block-title"},"Tip"),i("p",null,[a("Use "),i("code",null,"LuxLib.API.bias_activation!!"),a(" or "),i("code",null,"LuxLib.API.bias_activation"),a(" instead of "),i("code",null,"NNlib.bias_act!"),a(".")])],-1)),i("details",ls,[i("summary",null,[s[144]||(s[144]=i("a",{id:"NNlib.bias_act!",href:"#NNlib.bias_act!"},[i("span",{class:"jlbinding"},"NNlib.bias_act!")],-1)),s[145]||(s[145]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[146]||(s[146]=t("",5))]),s[387]||(s[387]=i("h2",{id:"dropout",tabindex:"-1"},[a("Dropout "),i("a",{class:"header-anchor",href:"#dropout","aria-label":'Permalink to "Dropout"'},"​")],-1)),s[388]||(s[388]=i("div",{class:"tip custom-block"},[i("p",{class:"custom-block-title"},"Tip"),i("p",null,[a("Use "),i("code",null,"LuxLib.API.dropout"),a(" instead of "),i("code",null,"NNlib.dropout"),a(".")])],-1)),i("details",es,[i("summary",null,[s[147]||(s[147]=i("a",{id:"NNlib.dropout",href:"#NNlib.dropout"},[i("span",{class:"jlbinding"},"NNlib.dropout")],-1)),s[148]||(s[148]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[149]||(s[149]=t("",7))]),i("details",hs,[i("summary",null,[s[150]||(s[150]=i("a",{id:"NNlib.dropout!",href:"#NNlib.dropout!"},[i("span",{class:"jlbinding"},"NNlib.dropout!")],-1)),s[151]||(s[151]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[152]||(s[152]=t("",3))]),s[389]||(s[389]=i("h2",{id:"Internal-NNlib-Functions",tabindex:"-1"},[a("Internal NNlib Functions "),i("a",{class:"header-anchor",href:"#Internal-NNlib-Functions","aria-label":'Permalink to "Internal NNlib Functions {#Internal-NNlib-Functions}"'},"​")],-1)),s[390]||(s[390]=i("p",null,"These functions are not part of the public API and are subject to change without notice.",-1)),i("details",ps,[i("summary",null,[s[153]||(s[153]=i("a",{id:"NNlib.BatchedAdjoint",href:"#NNlib.BatchedAdjoint"},[i("span",{class:"jlbinding"},"NNlib.BatchedAdjoint")],-1)),s[154]||(s[154]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[155]||(s[155]=t("",7))]),i("details",ks,[i("summary",null,[s[156]||(s[156]=i("a",{id:"NNlib.∇conv_filter_direct!",href:"#NNlib.∇conv_filter_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_filter_direct!")],-1)),s[157]||(s[157]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[158]||(s[158]=t("",3))]),i("details",ds,[i("summary",null,[s[159]||(s[159]=i("a",{id:"NNlib._check_trivial_rotations!",href:"#NNlib._check_trivial_rotations!"},[i("span",{class:"jlbinding"},"NNlib._check_trivial_rotations!")],-1)),s[160]||(s[160]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[161]||(s[161]=t("",4))]),i("details",rs,[i("summary",null,[s[162]||(s[162]=i("a",{id:"NNlib.fast_act",href:"#NNlib.fast_act"},[i("span",{class:"jlbinding"},"NNlib.fast_act")],-1)),s[163]||(s[163]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[164]||(s[164]=t("",4))]),i("details",os,[i("summary",null,[s[165]||(s[165]=i("a",{id:"NNlib.spectrogram",href:"#NNlib.spectrogram"},[i("span",{class:"jlbinding"},"NNlib.spectrogram")],-1)),s[166]||(s[166]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[167]||(s[167]=t("",8))]),i("details",gs,[i("summary",null,[s[168]||(s[168]=i("a",{id:"NNlib.is_strided",href:"#NNlib.is_strided"},[i("span",{class:"jlbinding"},"NNlib.is_strided")],-1)),s[169]||(s[169]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[170]||(s[170]=t("",5))]),i("details",Es,[i("summary",null,[s[171]||(s[171]=i("a",{id:"NNlib.conv_direct!",href:"#NNlib.conv_direct!"},[i("span",{class:"jlbinding"},"NNlib.conv_direct!")],-1)),s[172]||(s[172]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[173]||(s[173]=t("",6))]),i("details",ys,[i("summary",null,[s[174]||(s[174]=i("a",{id:"NNlib.gemm!",href:"#NNlib.gemm!"},[i("span",{class:"jlbinding"},"NNlib.gemm!")],-1)),s[175]||(s[175]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[176]||(s[176]=t("",5))]),i("details",Fs,[i("summary",null,[s[177]||(s[177]=i("a",{id:"NNlib.calc_padding_regions",href:"#NNlib.calc_padding_regions"},[i("span",{class:"jlbinding"},"NNlib.calc_padding_regions")],-1)),s[178]||(s[178]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[179]||(s[179]=t("",3))]),i("details",cs,[i("summary",null,[s[180]||(s[180]=i("a",{id:"NNlib.∇depthwiseconv_data_im2col!",href:"#NNlib.∇depthwiseconv_data_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_data_im2col!")],-1)),s[181]||(s[181]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[182]||(s[182]=t("",3))]),i("details",Cs,[i("summary",null,[s[183]||(s[183]=i("a",{id:"NNlib._prepare_imrotate",href:"#NNlib._prepare_imrotate"},[i("span",{class:"jlbinding"},"NNlib._prepare_imrotate")],-1)),s[184]||(s[184]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[185]||(s[185]=t("",3))]),i("details",us,[i("summary",null,[s[186]||(s[186]=i("a",{id:"NNlib.insert_singleton_spatial_dimension",href:"#NNlib.insert_singleton_spatial_dimension"},[i("span",{class:"jlbinding"},"NNlib.insert_singleton_spatial_dimension")],-1)),s[187]||(s[187]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[188]||(s[188]=t("",3))]),i("details",ms,[i("summary",null,[s[189]||(s[189]=i("a",{id:"NNlib._fast_broadcast!",href:"#NNlib._fast_broadcast!"},[i("span",{class:"jlbinding"},"NNlib._fast_broadcast!")],-1)),s[190]||(s[190]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[191]||(s[191]=t("",5))]),i("details",Ts,[i("summary",null,[s[192]||(s[192]=i("a",{id:"NNlib.hann_window",href:"#NNlib.hann_window"},[i("span",{class:"jlbinding"},"NNlib.hann_window")],-1)),s[193]||(s[193]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[200]||(s[200]=t("",2)),i("p",null,[i("mjx-container",bs,[(h(),e("svg",Qs,s[194]||(s[194]=[t("",1)]))),s[195]||(s[195]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"w"),i("mo",{stretchy:"false"},"["),i("mi",null,"n"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("mfrac",null,[i("mn",null,"1"),i("mn",null,"2")]),i("mo",{stretchy:"false"},"["),i("mn",null,"1"),i("mo",null,"−"),i("mi",null,"cos"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"n")]),i("mrow",null,[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mo",{stretchy:"false"},")"),i("mo",{stretchy:"false"},"]")])],-1))])]),i("p",null,[s[198]||(s[198]=a("Where ")),i("mjx-container",Bs,[(h(),e("svg",_s,s[196]||(s[196]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[197]||(s[197]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[199]||(s[199]=a(" is the window length."))]),s[201]||(s[201]=t("",9))]),i("details",fs,[i("summary",null,[s[202]||(s[202]=i("a",{id:"NNlib._rng_from_array",href:"#NNlib._rng_from_array"},[i("span",{class:"jlbinding"},"NNlib._rng_from_array")],-1)),s[203]||(s[203]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[204]||(s[204]=t("",3))]),i("details",vs,[i("summary",null,[s[205]||(s[205]=i("a",{id:"NNlib.∇depthwiseconv_filter_im2col!",href:"#NNlib.∇depthwiseconv_filter_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_filter_im2col!")],-1)),s[206]||(s[206]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[207]||(s[207]=t("",3))]),i("details",Ns,[i("summary",null,[s[208]||(s[208]=i("a",{id:"NNlib.istft",href:"#NNlib.istft"},[i("span",{class:"jlbinding"},"NNlib.istft")],-1)),s[209]||(s[209]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[224]||(s[224]=t("",6)),i("ul",null,[s[222]||(s[222]=t("",3)),i("li",null,[i("p",null,[s[214]||(s[214]=i("code",null,"center::Bool",-1)),s[215]||(s[215]=a(": Whether input to ")),s[216]||(s[216]=i("code",null,"stft",-1)),s[217]||(s[217]=a(" was padded on both sides so that ")),i("mjx-container",As,[(h(),e("svg",xs,s[210]||(s[210]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D461",d:"M26 385Q19 392 19 395Q19 399 22 411T27 425Q29 430 36 430T87 431H140L159 511Q162 522 166 540T173 566T179 586T187 603T197 615T211 624T229 626Q247 625 254 615T261 596Q261 589 252 549T232 470L222 433Q222 431 272 431H323Q330 424 330 420Q330 398 317 385H210L174 240Q135 80 135 68Q135 26 162 26Q197 26 230 60T283 144Q285 150 288 151T303 153H307Q322 153 322 145Q322 142 319 133Q314 117 301 95T267 
48T216 6T155 -11Q125 -11 98 4T59 56Q57 64 57 83V101L92 241Q127 382 128 383Q128 385 77 385H26Z",style:{"stroke-width":"3"}})])])],-1)]))),s[211]||(s[211]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t")])],-1))]),s[218]||(s[218]=a("-th frame is centered at time ")),i("mjx-container",js,[(h(),e("svg",Ds,s[212]||(s[212]=[t("",1)]))),s[213]||(s[213]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t"),i("mo",null,"×"),i("mtext",null,"hop length")])],-1))]),s[219]||(s[219]=a(". 
Padding is done with ")),s[220]||(s[220]=i("code",null,"pad_reflect",-1)),s[221]||(s[221]=a(" function."))])]),s[223]||(s[223]=t("",3))]),s[225]||(s[225]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/audio/stft.jl#L173-L205",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",ws,[i("summary",null,[s[226]||(s[226]=i("a",{id:"NNlib.transpose_swapbatch",href:"#NNlib.transpose_swapbatch"},[i("span",{class:"jlbinding"},"NNlib.transpose_swapbatch")],-1)),s[227]||(s[227]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[228]||(s[228]=t("",3))]),i("details",Vs,[i("summary",null,[s[229]||(s[229]=i("a",{id:"NNlib.transpose_pad",href:"#NNlib.transpose_pad"},[i("span",{class:"jlbinding"},"NNlib.transpose_pad")],-1)),s[230]||(s[230]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[231]||(s[231]=t("",3))]),i("details",Ls,[i("summary",null,[s[232]||(s[232]=i("a",{id:"NNlib.power_to_db",href:"#NNlib.power_to_db"},[i("span",{class:"jlbinding"},"NNlib.power_to_db")],-1)),s[233]||(s[233]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[234]||(s[234]=t("",7))]),i("details",Hs,[i("summary",null,[s[235]||(s[235]=i("a",{id:"NNlib.col2im!",href:"#NNlib.col2im!"},[i("span",{class:"jlbinding"},"NNlib.col2im!")],-1)),s[236]||(s[236]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[237]||(s[237]=t("",4))]),i("details",Ss,[i("summary",null,[s[238]||(s[238]=i("a",{id:"NNlib.depthwiseconv_im2col!",href:"#NNlib.depthwiseconv_im2col!"},[i("span",{class:"jlbinding"},"NNlib.depthwiseconv_im2col!")],-1)),s[239]||(s[239]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[240]||(s[240]=t("",3))]),i("details",Ms,[i("summary",null,[s[241]||(s[241]=i("a",{id:"NNlib.storage_type",href:"#NNlib.storage_type"},[i("span",{class:"jlbinding"},"NNlib.storage_type")],-1)),s[242]||(s[242]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[243]||(s[243]=t("",4))]),i("details",Is,[i("summary",null,[s[244]||(s[244]=i("a",{id:"NNlib.im2col_dims",href:"#NNlib.im2col_dims"},[i("span",{class:"jlbinding"},"NNlib.im2col_dims")],-1)),s[245]||(s[245]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[246]||(s[246]=t("",4))]),i("details",Ps,[i("summary",null,[s[247]||(s[247]=i("a",{id:"NNlib.∇depthwiseconv_filter_direct!",href:"#NNlib.∇depthwiseconv_filter_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_filter_direct!")],-1)),s[248]||(s[248]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[249]||(s[249]=t("",3))]),i("details",zs,[i("summary",null,[s[250]||(s[250]=i("a",{id:"NNlib.reverse_indices",href:"#NNlib.reverse_indices"},[i("span",{class:"jlbinding"},"NNlib.reverse_indices")],-1)),s[251]||(s[251]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[252]||(s[252]=t("",5))]),i("details",Rs,[i("summary",null,[s[253]||(s[253]=i("a",{id:"NNlib.∇conv_filter_im2col!",href:"#NNlib.∇conv_filter_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_filter_im2col!")],-1)),s[254]||(s[254]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[255]||(s[255]=t("",3))]),i("details",Zs,[i("summary",null,[s[256]||(s[256]=i("a",{id:"NNlib.conv_im2col!",href:"#NNlib.conv_im2col!"},[i("span",{class:"jlbinding"},"NNlib.conv_im2col!")],-1)),s[257]||(s[257]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[258]||(s[258]=t("",4))]),i("details",Os,[i("summary",null,[s[259]||(s[259]=i("a",{id:"NNlib.∇conv_data_direct!",href:"#NNlib.∇conv_data_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_data_direct!")],-1)),s[260]||(s[260]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[261]||(s[261]=t("",3))]),i("details",qs,[i("summary",null,[s[262]||(s[262]=i("a",{id:"NNlib.scatter_dims",href:"#NNlib.scatter_dims"},[i("span",{class:"jlbinding"},"NNlib.scatter_dims")],-1)),s[263]||(s[263]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[264]||(s[264]=i("p",null,"Performs dimensional consistency checks and return the dimensionality of the scattered objects.",-1)),s[265]||(s[265]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/scatter.jl#L16-L19",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Ws,[i("summary",null,[s[266]||(s[266]=i("a",{id:"NNlib.∇conv_data_im2col!",href:"#NNlib.∇conv_data_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_data_im2col!")],-1)),s[267]||(s[267]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[268]||(s[268]=t("",3))]),i("details",Gs,[i("summary",null,[s[269]||(s[269]=i("a",{id:"NNlib.storage_typejoin",href:"#NNlib.storage_typejoin"},[i("span",{class:"jlbinding"},"NNlib.storage_typejoin")],-1)),s[270]||(s[270]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[271]||(s[271]=t("",4))]),i("details",Us,[i("summary",null,[s[272]||(s[272]=i("a",{id:"NNlib.add_blanks",href:"#NNlib.add_blanks"},[i("span",{class:"jlbinding"},"NNlib.add_blanks")],-1)),s[273]||(s[273]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[274]||(s[274]=t("",3))]),i("details",Js,[i("summary",null,[s[275]||(s[275]=i("a",{id:"NNlib.∇filter_im2col_dims",href:"#NNlib.∇filter_im2col_dims"},[i("span",{class:"jlbinding"},"NNlib.∇filter_im2col_dims")],-1)),s[276]||(s[276]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[277]||(s[277]=t("",4))]),i("details",Ks,[i("summary",null,[s[278]||(s[278]=i("a",{id:"NNlib._bilinear_helper",href:"#NNlib._bilinear_helper"},[i("span",{class:"jlbinding"},"NNlib._bilinear_helper")],-1)),s[279]||(s[279]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[280]||(s[280]=i("p",null,"_bilinear_helper(yrot, xrot, yrot_f, xrot_f, yrot_int, xrot_int)",-1)),s[281]||(s[281]=i("p",null,"Some helper variables",-1)),s[282]||(s[282]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/rotation.jl#L20-L24",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Xs,[i("summary",null,[s[283]||(s[283]=i("a",{id:"NNlib._triangular_filterbanks",href:"#NNlib._triangular_filterbanks"},[i("span",{class:"jlbinding"},"NNlib._triangular_filterbanks")],-1)),s[284]||(s[284]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[285]||(s[285]=t("",7))]),i("details",$s,[i("summary",null,[s[286]||(s[286]=i("a",{id:"NNlib.∇depthwiseconv_data_direct!",href:"#NNlib.∇depthwiseconv_data_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_data_direct!")],-1)),s[287]||(s[287]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[288]||(s[288]=t("",3))]),i("details",Ys,[i("summary",null,[s[289]||(s[289]=i("a",{id:"NNlib.db_to_power",href:"#NNlib.db_to_power"},[i("span",{class:"jlbinding"},"NNlib.db_to_power")],-1)),s[290]||(s[290]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[291]||(s[291]=t("",3))]),i("details",si,[i("summary",null,[s[292]||(s[292]=i("a",{id:"NNlib.predilated_size",href:"#NNlib.predilated_size"},[i("span",{class:"jlbinding"},"NNlib.predilated_size")],-1)),s[293]||(s[293]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[294]||(s[294]=t("",3))]),i("details",ii,[i("summary",null,[s[295]||(s[295]=i("a",{id:"NNlib.stft",href:"#NNlib.stft"},[i("span",{class:"jlbinding"},"NNlib.stft")],-1)),s[296]||(s[296]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[328]||(s[328]=t("",3)),i("p",null,[i("mjx-container",ai,[(h(),e("svg",ti,s[297]||(s[297]=[t("",1)]))),s[298]||(s[298]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"Y"),i("mo",{stretchy:"false"},"["),i("mi",null,"ω"),i("mo",null,","),i("mi",null,"m"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("munderover",null,[i("mo",{"data-mjx-texclass":"OP"},"∑"),i("mrow",{"data-mjx-texclass":"ORD"},[i("mi",null,"k"),i("mo",null,"="),i("mn",null,"0")]),i("mrow",{"data-mjx-texclass":"ORD"},[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mtext",null,"window"),i("mo",{stretchy:"false"},"["),i("mi",null,"k"),i("mo",{stretchy:"false"},"]"),i("mtext",null,"input"),i("mo",{stretchy:"false"},"["),i("mi",null,"m"),i("mo",null,"×"),i("mtext",null,"hop length"),i("mo",null,"+"),i("mi",null,"k"),i("mo",{stretchy:"false"},"]"),i("mi",null,"exp"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mo",null,"−"),i("mi",null,"j"),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"ω"),i("mi",null,"k")]),i("mtext",null,"n fft")]),i("mo",{stretchy:"false"},")")])],-1))])]),i("p",null,[s[307]||(s[307]=a("where 
")),i("mjx-container",ni,[(h(),e("svg",li,s[299]||(s[299]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[300]||(s[300]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[308]||(s[308]=a(" is the window length, ")),i("mjx-container",ei,[(h(),e("svg",hi,s[301]||(s[301]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D714",d:"M495 384Q495 406 514 424T555 443Q574 443 589 425T604 364Q604 334 592 278T555 155T483 38T377 -11Q297 -11 267 66Q266 68 260 61Q201 -11 125 -11Q15 -11 15 139Q15 230 56 325T123 434Q135 441 147 436Q160 429 160 418Q160 406 140 379T94 306T62 208Q61 202 61 187Q61 124 85 100T143 76Q201 76 245 129L253 137V156Q258 297 317 297Q348 297 348 261Q348 243 338 213T318 
158L308 135Q309 133 310 129T318 115T334 97T358 83T393 76Q456 76 501 148T546 274Q546 305 533 325T508 357T495 384Z",style:{"stroke-width":"3"}})])])],-1)]))),s[302]||(s[302]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"ω")])],-1))]),s[309]||(s[309]=a(" is the frequency ")),i("mjx-container",pi,[(h(),e("svg",ki,s[303]||(s[303]=[t("",1)]))),s[304]||(s[304]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mn",null,"0"),i("mo",null,"≤"),i("mi",null,"ω"),i("mo",null,"<"),i("mtext",null,"n fft")])],-1))]),s[310]||(s[310]=a(" and ")),i("mjx-container",di,[(h(),e("svg",ri,s[305]||(s[305]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D45A",d:"M21 287Q22 293 24 303T36 341T56 388T88 425T132 442T175 435T205 417T221 395T229 376L231 369Q231 367 232 367L243 378Q303 442 384 442Q401 442 415 440T441 433T460 423T475 411T485 398T493 385T497 373T500 364T502 357L510 367Q573 442 659 442Q713 442 746 415T780 336Q780 285 742 178T704 50Q705 36 709 31T724 26Q752 26 776 56T815 138Q818 149 821 151T837 153Q857 153 857 145Q857 144 853 130Q845 101 831 73T785 17T716 -10Q669 -10 
648 17T627 73Q627 92 663 193T700 345Q700 404 656 404H651Q565 404 506 303L499 291L466 157Q433 26 428 16Q415 -11 385 -11Q372 -11 364 -4T353 8T350 18Q350 29 384 161L420 307Q423 322 423 345Q423 404 379 404H374Q288 404 229 303L222 291L189 157Q156 26 151 16Q138 -11 108 -11Q95 -11 87 -5T76 7T74 17Q74 30 112 181Q151 335 151 342Q154 357 154 369Q154 405 129 405Q107 405 92 377T69 316T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),s[306]||(s[306]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"m")])],-1))]),s[311]||(s[311]=a(" is the index of the sliding window."))]),s[329]||(s[329]=i("p",null,[i("strong",null,"Arguments:")],-1)),s[330]||(s[330]=i("ul",null,[i("li",null,[i("code",null,"x"),a(": Input, must be either a 1D time sequence ("),i("code",null,"(L,)"),a(" shape) or a 2D batch of time sequence ("),i("code",null,"(L, B)"),a(" shape).")])],-1)),s[331]||(s[331]=i("p",null,[i("strong",null,"Keyword Arguments:")],-1)),i("ul",null,[s[327]||(s[327]=t("",3)),i("li",null,[i("p",null,[s[316]||(s[316]=i("code",null,"center::Bool",-1)),s[317]||(s[317]=a(": Whether to pad input on both sides so that ")),i("mjx-container",oi,[(h(),e("svg",gi,s[312]||(s[312]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D461",d:"M26 385Q19 392 19 395Q19 399 22 411T27 425Q29 430 36 430T87 431H140L159 511Q162 522 166 540T173 566T179 586T187 603T197 615T211 624T229 626Q247 625 254 615T261 596Q261 589 252 549T232 470L222 433Q222 431 272 
431H323Q330 424 330 420Q330 398 317 385H210L174 240Q135 80 135 68Q135 26 162 26Q197 26 230 60T283 144Q285 150 288 151T303 153H307Q322 153 322 145Q322 142 319 133Q314 117 301 95T267 48T216 6T155 -11Q125 -11 98 4T59 56Q57 64 57 83V101L92 241Q127 382 128 383Q128 385 77 385H26Z",style:{"stroke-width":"3"}})])])],-1)]))),s[313]||(s[313]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t")])],-1))]),s[318]||(s[318]=a("-th frame is centered at time ")),i("mjx-container",Ei,[(h(),e("svg",yi,s[314]||(s[314]=[t("",1)]))),s[315]||(s[315]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t"),i("mo",null,"×"),i("mtext",null,"hop length")])],-1))]),s[319]||(s[319]=a(". Padding is done with ")),s[320]||(s[320]=i("code",null,"pad_reflect",-1)),s[321]||(s[321]=a(" function."))])]),i("li",null,[i("p",null,[s[324]||(s[324]=i("code",null,"normalized::Bool",-1)),s[325]||(s[325]=a(": Whether to return normalized STFT, i.e. 
multiplied with ")),i("mjx-container",Fi,[(h(),e("svg",ci,s[322]||(s[322]=[t("",1)]))),s[323]||(s[323]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("msup",null,[i("mtext",null,"n fft"),i("mrow",{"data-mjx-texclass":"ORD"},[i("mo",null,"−"),i("mn",null,"0.5")])])])],-1))]),s[326]||(s[326]=a("."))])])]),s[332]||(s[332]=i("p",null,[i("strong",null,"Returns:")],-1)),s[333]||(s[333]=i("p",null,[a("Complex array of shape "),i("code",null,"(n_fft, n_frames, B)"),a(", where "),i("code",null,"B"),a(" is the optional batch dimension.")],-1)),s[334]||(s[334]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/audio/stft.jl#L130-L170",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Ci,[i("summary",null,[s[335]||(s[335]=i("a",{id:"NNlib.hamming_window",href:"#NNlib.hamming_window"},[i("span",{class:"jlbinding"},"NNlib.hamming_window")],-1)),s[336]||(s[336]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[343]||(s[343]=t("",2)),i("p",null,[i("mjx-container",ui,[(h(),e("svg",mi,s[337]||(s[337]=[t("",1)]))),s[338]||(s[338]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"w"),i("mo",{stretchy:"false"},"["),i("mi",null,"n"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("mi",null,"α"),i("mo",null,"−"),i("mi",null,"β"),i("mi",null,"cos"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"n")]),i("mrow",null,[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mo",{stretchy:"false"},")")])],-1))])]),i("p",null,[s[341]||(s[341]=a("Where ")),i("mjx-container",Ti,[(h(),e("svg",bi,s[339]||(s[339]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[340]||(s[340]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[342]||(s[342]=a(" is the 
window length."))]),s[344]||(s[344]=t("",10))]),i("details",Qi,[i("summary",null,[s[345]||(s[345]=i("a",{id:"NNlib.maximum_dims",href:"#NNlib.maximum_dims"},[i("span",{class:"jlbinding"},"NNlib.maximum_dims")],-1)),s[346]||(s[346]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[347]||(s[347]=t("",5))]),i("details",Bi,[i("summary",null,[s[348]||(s[348]=i("a",{id:"NNlib.BatchedTranspose",href:"#NNlib.BatchedTranspose"},[i("span",{class:"jlbinding"},"NNlib.BatchedTranspose")],-1)),s[349]||(s[349]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[350]||(s[350]=t("",7))]),i("details",_i,[i("summary",null,[s[351]||(s[351]=i("a",{id:"NNlib._rotate_coordinates",href:"#NNlib._rotate_coordinates"},[i("span",{class:"jlbinding"},"NNlib._rotate_coordinates")],-1)),s[352]||(s[352]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[353]||(s[353]=t("",3))]),i("details",fi,[i("summary",null,[s[354]||(s[354]=i("a",{id:"NNlib.melscale_filterbanks",href:"#NNlib.melscale_filterbanks"},[i("span",{class:"jlbinding"},"NNlib.melscale_filterbanks")],-1)),s[355]||(s[355]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[356]||(s[356]=t("",8))]),i("details",vi,[i("summary",null,[s[357]||(s[357]=i("a",{id:"NNlib.logaddexp",href:"#NNlib.logaddexp"},[i("span",{class:"jlbinding"},"NNlib.logaddexp")],-1)),s[358]||(s[358]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[359]||(s[359]=t("",3))]),i("details",Ni,[i("summary",null,[s[360]||(s[360]=i("a",{id:"NNlib.depthwiseconv_direct!",href:"#NNlib.depthwiseconv_direct!"},[i("span",{class:"jlbinding"},"NNlib.depthwiseconv_direct!")],-1)),s[361]||(s[361]=a()),l(n,{type:"info",class:"jlObjectType 
jlFunction",text:"Function"})]),s[362]||(s[362]=t("",5))]),i("details",Ai,[i("summary",null,[s[363]||(s[363]=i("a",{id:"NNlib.im2col!",href:"#NNlib.im2col!"},[i("span",{class:"jlbinding"},"NNlib.im2col!")],-1)),s[364]||(s[364]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[365]||(s[365]=t("",3))]),i("details",xi,[i("summary",null,[s[366]||(s[366]=i("a",{id:"NNlib.predilate",href:"#NNlib.predilate"},[i("span",{class:"jlbinding"},"NNlib.predilate")],-1)),s[367]||(s[367]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[368]||(s[368]=t("",3))]),i("details",ji,[i("summary",null,[s[369]||(s[369]=i("a",{id:"NNlib.safe_div",href:"#NNlib.safe_div"},[i("span",{class:"jlbinding"},"NNlib.safe_div")],-1)),s[370]||(s[370]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[371]||(s[371]=t("",3))])])}const Pi=p(d,[["render",Di]]);export{Ii as __pageData,Pi as default}; diff --git a/dev/assets/api_NN_Primitives_NNlib.md.DHUiCckb.lean.js b/dev/assets/api_NN_Primitives_NNlib.md.DHUiCckb.lean.js deleted file mode 100644 index d7146387c6..0000000000 --- a/dev/assets/api_NN_Primitives_NNlib.md.DHUiCckb.lean.js +++ /dev/null @@ -1,619 +0,0 @@ -import{_ as p,c as e,a2 as t,j as i,a,G as l,B as k,o as h}from"./chunks/framework.I-x9Gl6h.js";const Zi=JSON.parse('{"title":"NNlib","description":"","frontmatter":{},"headers":[],"relativePath":"api/NN_Primitives/NNlib.md","filePath":"api/NN_Primitives/NNlib.md","lastUpdated":null}'),d={name:"api/NN_Primitives/NNlib.md"},r={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},y={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"},F={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},C={class:"jldocstring custom-block"},u={class:"jldocstring custom-block"},m={class:"jldocstring custom-block"},b={class:"jldocstring custom-block"},Q={class:"jldocstring 
custom-block"},T={class:"jldocstring custom-block"},B={class:"jldocstring custom-block"},f={class:"jldocstring custom-block"},v={class:"jldocstring custom-block"},N={class:"jldocstring custom-block"},x={class:"jldocstring custom-block"},j={class:"jldocstring custom-block"},A={class:"jldocstring custom-block"},w={class:"jldocstring custom-block"},D={class:"jldocstring custom-block"},L={class:"jldocstring custom-block"},H={class:"jldocstring custom-block"},V={class:"jldocstring custom-block"},_={class:"jldocstring custom-block"},M={class:"jldocstring custom-block"},z={class:"jldocstring custom-block"},Z={class:"jldocstring custom-block"},I={class:"jldocstring custom-block"},O={class:"jldocstring custom-block"},P={class:"jldocstring custom-block"},S={class:"jldocstring custom-block"},q={class:"jldocstring custom-block"},R={class:"jldocstring custom-block"},W={class:"jldocstring custom-block"},G={class:"jldocstring custom-block"},U={class:"jldocstring custom-block"},J={class:"jldocstring custom-block"},K={class:"jldocstring custom-block"},X={class:"jldocstring custom-block"},$={class:"jldocstring custom-block"},Y={class:"jldocstring custom-block"},ss={class:"jldocstring custom-block"},is={class:"jldocstring custom-block"},as={class:"jldocstring custom-block"},ts={class:"jldocstring custom-block"},ns={class:"jldocstring custom-block"},ls={class:"jldocstring custom-block"},es={class:"jldocstring custom-block"},hs={class:"jldocstring custom-block"},ps={class:"jldocstring custom-block"},ks={class:"jldocstring custom-block"},ds={class:"jldocstring custom-block"},rs={class:"jldocstring custom-block"},os={class:"jldocstring custom-block"},gs={class:"jldocstring custom-block"},ys={class:"jldocstring custom-block"},Es={class:"jldocstring custom-block"},Fs={class:"jldocstring custom-block"},cs={class:"jldocstring custom-block"},Cs={class:"jldocstring custom-block"},us={class:"jldocstring custom-block"},ms={class:"jldocstring custom-block"},bs={class:"jldocstring 
custom-block"},Qs={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Ts={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.912ex"},xmlns:"http://www.w3.org/2000/svg",width:"23.451ex",height:"2.869ex",role:"img",focusable:"false",viewBox:"0 -864.9 10365.1 1267.9","aria-hidden":"true"},Bs={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},fs={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 683","aria-hidden":"true"},vs={class:"jldocstring custom-block"},Ns={class:"jldocstring custom-block"},xs={class:"jldocstring custom-block"},js={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},As={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.817ex",height:"1.441ex",role:"img",focusable:"false",viewBox:"0 -626 361 637","aria-hidden":"true"},ws={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Ds={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.466ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.956ex",height:"2.036ex",role:"img",focusable:"false",viewBox:"0 -694 6168.4 900","aria-hidden":"true"},Ls={class:"jldocstring custom-block"},Hs={class:"jldocstring custom-block"},Vs={class:"jldocstring custom-block"},_s={class:"jldocstring custom-block"},Ms={class:"jldocstring custom-block"},zs={class:"jldocstring custom-block"},Zs={class:"jldocstring custom-block"},Is={class:"jldocstring custom-block"},Os={class:"jldocstring custom-block"},Ps={class:"jldocstring custom-block"},Ss={class:"jldocstring custom-block"},qs={class:"jldocstring custom-block"},Rs={class:"jldocstring custom-block"},Ws={class:"jldocstring custom-block"},Gs={class:"jldocstring custom-block"},Us={class:"jldocstring 
custom-block"},Js={class:"jldocstring custom-block"},Ks={class:"jldocstring custom-block"},Xs={class:"jldocstring custom-block"},$s={class:"jldocstring custom-block"},Ys={class:"jldocstring custom-block"},si={class:"jldocstring custom-block"},ii={class:"jldocstring custom-block"},ai={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ti={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.797ex"},xmlns:"http://www.w3.org/2000/svg",width:"65.233ex",height:"2.969ex",role:"img",focusable:"false",viewBox:"0 -960 28832.9 1312.1","aria-hidden":"true"},ni={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},li={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 683","aria-hidden":"true"},ei={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},hi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.407ex",height:"1.027ex",role:"img",focusable:"false",viewBox:"0 -443 622 454","aria-hidden":"true"},pi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ki={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.312ex"},xmlns:"http://www.w3.org/2000/svg",width:"12.661ex",height:"1.907ex",role:"img",focusable:"false",viewBox:"0 -705 5596.1 843","aria-hidden":"true"},di={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ri={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.986ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 878 
453","aria-hidden":"true"},oi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},gi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.817ex",height:"1.441ex",role:"img",focusable:"false",viewBox:"0 -626 361 637","aria-hidden":"true"},yi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Ei={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.466ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.956ex",height:"2.036ex",role:"img",focusable:"false",viewBox:"0 -694 6168.4 900","aria-hidden":"true"},Fi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},ci={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.023ex"},xmlns:"http://www.w3.org/2000/svg",width:"7.565ex",height:"2.066ex",role:"img",focusable:"false",viewBox:"0 -903 3343.8 913","aria-hidden":"true"},Ci={class:"jldocstring custom-block"},ui={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},mi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.912ex"},xmlns:"http://www.w3.org/2000/svg",width:"22.372ex",height:"2.869ex",role:"img",focusable:"false",viewBox:"0 -864.9 9888.3 1267.9","aria-hidden":"true"},bi={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Qi={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.009ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 888 683","aria-hidden":"true"},Ti={class:"jldocstring custom-block"},Bi={class:"jldocstring custom-block"},fi={class:"jldocstring custom-block"},vi={class:"jldocstring custom-block"},Ni={class:"jldocstring custom-block"},xi={class:"jldocstring custom-block"},ji={class:"jldocstring custom-block"},Ai={class:"jldocstring custom-block"},wi={class:"jldocstring custom-block"};function 
Di(Li,s,Hi,Vi,_i,Mi){const n=k("Badge");return h(),e("div",null,[s[372]||(s[372]=t('

    NNlib

    Neural Network Primitives with custom bindings for different accelerator backends in Julia.

    Reexport of NNlib

    Lux doesn't re-export all of NNlib for now. Load NNlib directly to access these functions.

    Attention

    ',4)),i("details",r,[i("summary",null,[s[0]||(s[0]=i("a",{id:"NNlib.dot_product_attention",href:"#NNlib.dot_product_attention"},[i("span",{class:"jlbinding"},"NNlib.dot_product_attention")],-1)),s[1]||(s[1]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[2]||(s[2]=t(`
    julia
    dot_product_attention(query, key, value, [bias]; [fdrop, mask, nheads])

    Multihead dot product attention used in transformer architectures.

    The input arrays must have the first two dimensions given by the number of features and the sequence length, then an arbitrary number of batch dimensions or none.

    Returns the attention output array of size (v_dim, q_len, batch_size...) and the attention scores of size (kv_len, q_len, nheads, batch_size...).

    See also dot_product_attention_scores if you only need the attention scores.

    Arguments

    Examples

    julia
    q, k, v = rand(10, 20, 2), rand(10, 30, 2), rand(20, 30, 2)
    -y, α = dot_product_attention(q, k, v)

    source

    `,10))]),i("details",o,[i("summary",null,[s[3]||(s[3]=i("a",{id:"NNlib.dot_product_attention_scores",href:"#NNlib.dot_product_attention_scores"},[i("span",{class:"jlbinding"},"NNlib.dot_product_attention_scores")],-1)),s[4]||(s[4]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=t('
    julia
    dot_product_attention_scores(query, key, [bias]; [fdrop, mask])

    Return the attention scores for the dot_product_attention. Input arrays must have dimensions (num_features ÷ nheads, nheads, sequence_length, batch_size).

    See dot_product_attention for more details.

    source

    ',4))]),i("details",g,[i("summary",null,[s[6]||(s[6]=i("a",{id:"NNlib.make_causal_mask",href:"#NNlib.make_causal_mask"},[i("span",{class:"jlbinding"},"NNlib.make_causal_mask")],-1)),s[7]||(s[7]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=t('
    julia
    make_causal_mask(x, dims=2)

    Return a boolean square matrix m of the same type as x and of side size(x, dims). Its elements are set such that m[i, j] == i ≤ j.

    Can be used to mask the attention scores in dot_product_attention.

    source
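    As a minimal sketch (assuming NNlib is loaded; the input array only supplies the element type and side length):

```julia
using NNlib

x = rand(3, 3)
m = make_causal_mask(x)  # square boolean mask with m[i, j] == (i <= j)
# upper-triangular pattern, true on and above the diagonal:
#  1  1  1
#  0  1  1
#  0  0  1
```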

    ',4))]),s[373]||(s[373]=i("h2",{id:"softmax",tabindex:"-1"},[a("Softmax "),i("a",{class:"header-anchor",href:"#softmax","aria-label":'Permalink to "Softmax"'},"​")],-1)),i("details",y,[i("summary",null,[s[9]||(s[9]=i("a",{id:"NNlib.softmax",href:"#NNlib.softmax"},[i("span",{class:"jlbinding"},"NNlib.softmax")],-1)),s[10]||(s[10]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[11]||(s[11]=t(`
    julia
    softmax(x; dims = 1)

    Softmax turns input array x into probability distributions that sum to 1 along the dimensions specified by dims. It is semantically equivalent to the following:

    softmax(x; dims = 1) = exp.(x) ./ sum(exp.(x), dims = dims)

    with additional manipulations enhancing numerical stability.

    For a matrix input x it will by default (dims = 1) treat it as a batch of vectors, with each column independent. Keyword dims = 2 will instead treat rows independently, and so on.

    See also logsoftmax.

    Examples

    julia
    julia> softmax([1, 2, 3])
    -3-element Vector{Float64}:
    - 0.09003057317038046
    - 0.24472847105479764
    - 0.6652409557748218
    -
    -julia> softmax([1 2 3; 2 2 2])  # dims=1
    -2×3 Matrix{Float64}:
    - 0.268941  0.5  0.731059
    - 0.731059  0.5  0.268941
    -
    -julia> softmax([1 2 3; 2 2 2]; dims=2)
    -2×3 Matrix{Float64}:
    - 0.0900306  0.244728  0.665241
    - 0.333333   0.333333  0.333333

    Note that, when used with Flux.jl, softmax must not be passed to layers like Dense which accept an activation function. The activation is broadcasted over the result, thus applies to individual numbers. But softmax always needs to see the whole column.

    julia
    julia> using Flux
    -
    -julia> x = randn(Float32, 4, 4, 3, 13);
    -
    -julia> model = Chain(Conv((4, 4), 3 => 8, tanh), Flux.flatten, Dense(8 => 7), softmax);
    -
    -julia> model(x) |> size
    -(7, 13)
    -
    -julia> Dense(4 => 7, softmax)(x)
    -ERROR: \`softmax(x)\` called with a number, but it expects an array.

    source

    `,11))]),i("details",E,[i("summary",null,[s[12]||(s[12]=i("a",{id:"NNlib.logsoftmax",href:"#NNlib.logsoftmax"},[i("span",{class:"jlbinding"},"NNlib.logsoftmax")],-1)),s[13]||(s[13]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[14]||(s[14]=t('
    julia
    logsoftmax(x; dims = 1)

    Computes the log of softmax in a more numerically stable way than directly taking log.(softmax(xs)). Commonly used in computing cross entropy loss.

    It is semantically equivalent to the following:

    logsoftmax(x; dims = 1) = x .- log.(sum(exp.(x), dims = dims))

    See also softmax.

    source
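    A quick numerical check of the identity above (a sketch assuming NNlib is loaded):

```julia
using NNlib

x = randn(Float32, 5, 3)
# Same values as the naive composition, computed more stably:
@assert logsoftmax(x) ≈ log.(softmax(x))
# Exponentiating recovers probabilities that sum to 1 along dims = 1:
@assert all(sum(exp.(logsoftmax(x)); dims=1) .≈ 1)
```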

    ',6))]),s[374]||(s[374]=i("h2",{id:"pooling",tabindex:"-1"},[a("Pooling "),i("a",{class:"header-anchor",href:"#pooling","aria-label":'Permalink to "Pooling"'},"​")],-1)),i("details",F,[i("summary",null,[s[15]||(s[15]=i("a",{id:"NNlib.PoolDims",href:"#NNlib.PoolDims"},[i("span",{class:"jlbinding"},"NNlib.PoolDims")],-1)),s[16]||(s[16]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[17]||(s[17]=t(`
    julia
    PoolDims(x_size::NTuple{M}, k::Union{NTuple{L, Int}, Int};
    -         stride=k, padding=0, dilation=1)  where {M, L}

    Dimensions for a "pooling" operation that can have an arbitrary input size, kernel size, stride, dilation, and channel count. Used to dispatch onto efficient implementations at compile-time.

    source

    `,3))]),i("details",c,[i("summary",null,[s[18]||(s[18]=i("a",{id:"NNlib.maxpool",href:"#NNlib.maxpool"},[i("span",{class:"jlbinding"},"NNlib.maxpool")],-1)),s[19]||(s[19]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[20]||(s[20]=t('
    julia
    maxpool(x, k::NTuple{N, Integer}; pad=0, stride=k)

    Perform max pool operation with window size k on input tensor x.

    Arguments:

    source
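    A minimal 2-d sketch (assuming NNlib is loaded; x uses NNlib's (spatial..., channels, batch) layout, and the values are chosen for illustration):

```julia
using NNlib

x = reshape(Float32.(1:16), 4, 4, 1, 1)  # 4×4 image, 1 channel, batch of 1
y = maxpool(x, (2, 2))                   # stride defaults to the window size
y[:, :, 1, 1]
# 2×2 Matrix{Float32}:
#  6.0  14.0
#  8.0  16.0
```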

    ',5))]),i("details",C,[i("summary",null,[s[21]||(s[21]=i("a",{id:"NNlib.meanpool",href:"#NNlib.meanpool"},[i("span",{class:"jlbinding"},"NNlib.meanpool")],-1)),s[22]||(s[22]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[23]||(s[23]=t('
    julia
    meanpool(x, k::NTuple{N, Integer}; pad=0, stride=k)

    Perform mean pool operation with window size k on input tensor x.

    Arguments:

    source

    ',5))]),i("details",u,[i("summary",null,[s[24]||(s[24]=i("a",{id:"NNlib.lpnormpool",href:"#NNlib.lpnormpool"},[i("span",{class:"jlbinding"},"NNlib.lpnormpool")],-1)),s[25]||(s[25]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[26]||(s[26]=t('
    julia
    lpnormpool(x, p::Real, k::NTuple{N, Integer}; pad=0, stride=k)

    Perform Lp pool operation with the Lp-norm exponent p and window size k on input tensor x, also known as LPPool in PyTorch. This pooling operator is from Learned-Norm Pooling for Deep Feedforward and Recurrent Neural Networks.

    Arguments:

    For all elements x in a size k window, lpnormpool computes (∑ᵢ xᵢ^p)^(1 / p) as an element of the output.

    Thus lpnormpool(x, 1, k) ./ prod(k) ≈ meanpool(x, k) and lpnormpool(x, 2, k).^2 ./ prod(k) ≈ meanpool(x.^2, k).

    source
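    The two identities above can be checked directly (a sketch assuming NNlib is loaded; the input is nonnegative, so |x| == x in the p = 1 case):

```julia
using NNlib

x = rand(Float32, 8, 2, 1)  # 1-d data: (width, channels, batch)
k = (2,)
@assert lpnormpool(x, 1, k) ./ prod(k)      ≈ meanpool(x, k)
@assert lpnormpool(x, 2, k) .^ 2 ./ prod(k) ≈ meanpool(x .^ 2, k)
```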

    ',7))]),s[375]||(s[375]=i("h2",{id:"padding",tabindex:"-1"},[a("Padding "),i("a",{class:"header-anchor",href:"#padding","aria-label":'Permalink to "Padding"'},"​")],-1)),i("details",m,[i("summary",null,[s[27]||(s[27]=i("a",{id:"NNlib.pad_reflect",href:"#NNlib.pad_reflect"},[i("span",{class:"jlbinding"},"NNlib.pad_reflect")],-1)),s[28]||(s[28]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[29]||(s[29]=t(`
    julia
    pad_reflect(x, pad::Tuple; [dims])
    -pad_reflect(x, pad::Int; [dims])

    Pad the array x reflecting its values across the border.

    pad can be a tuple of integers (l1, r1, ..., ln, rn) of some length 2n that specifies the left and right padding size for each of the dimensions in dims. If dims is not given, it defaults to the first n dimensions.

    If pad is an integer, it is applied on both sides on every dimension in dims. In this case, dims defaults to the first ndims(x)-2 dimensions (i.e. excludes the channel and batch dimension).

    See also pad_repeat, pad_symmetric, pad_circular, and pad_constant.

    julia
    julia> r = reshape(1:9, 3, 3)
    -3×3 reshape(::UnitRange{Int64}, 3, 3) with eltype Int64:
    - 1  4  7
    - 2  5  8
    - 3  6  9
    -
    -julia> pad_reflect(r, (1,2,1,2))
    -6×6 Matrix{Int64}:
    - 5  2  5  8  5  2
    - 4  1  4  7  4  1
    - 5  2  5  8  5  2
    - 6  3  6  9  6  3
    - 5  2  5  8  5  2
    - 4  1  4  7  4  1

    source

    `,7))]),i("details",b,[i("summary",null,[s[30]||(s[30]=i("a",{id:"NNlib.pad_symmetric",href:"#NNlib.pad_symmetric"},[i("span",{class:"jlbinding"},"NNlib.pad_symmetric")],-1)),s[31]||(s[31]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[32]||(s[32]=t(`
    julia
    pad_symmetric(x, pad::Tuple; [dims])
    -pad_symmetric(x, pad::Int; [dims])

    Pad the array x reflecting its values symmetrically across the border, i.e. the border values of x are present in the padding values, in contrast to pad_reflect.

    pad can be a tuple of integers (l1, r1, ..., ln, rn) of some length 2n that specifies the left and right padding size for each of the dimensions in dims. If dims is not given, it defaults to the first n dimensions.

    If pad is an integer, it is applied on both sides on every dimension in dims. In this case, dims defaults to the first ndims(x)-2 dimensions (i.e. excludes the channel and batch dimension).

    See also pad_repeat, pad_reflect, pad_circular, and pad_constant.

    julia
    julia> r = reshape(1:9, 3, 3)
    -3×3 reshape(::UnitRange{Int64}, 3, 3) with eltype Int64:
    - 1  4  7
    - 2  5  8
    - 3  6  9
    -
    -julia> pad_symmetric(r, (1,2,1,2))
    -6×6 Matrix{Int64}:
    - 1  1  4  7  7  4
    - 1  1  4  7  7  4
    - 2  2  5  8  8  5
    - 3  3  6  9  9  6
    - 3  3  6  9  9  6
    - 2  2  5  8  8  5

    source

    `,7))]),i("details",Q,[i("summary",null,[s[33]||(s[33]=i("a",{id:"NNlib.pad_circular",href:"#NNlib.pad_circular"},[i("span",{class:"jlbinding"},"NNlib.pad_circular")],-1)),s[34]||(s[34]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[35]||(s[35]=t(`
    julia
    pad_circular(x, pad::Tuple; [dims])
    -pad_circular(x, pad::Int; [dims])

    Pad the array x "circularly" across the border by wrapping around values from the opposite side of x.

    pad can be a tuple of integers (l1, r1, ..., ln, rn) of some length 2n that specifies the left and right padding size for each of the dimensions in dims. If dims is not given, it defaults to the first n dimensions.

    If pad is an integer, it is applied on both sides on every dimension in dims. In this case, dims defaults to the first ndims(x)-2 dimensions (i.e. excludes the channel and batch dimension).

    The pad length on either side in any dimension must not exceed the size of x in that dimension, i.e. pad_circular is not able to create arbitrarily sized tilings of x.

    See also pad_repeat, pad_reflect, pad_symmetric, and pad_constant.

    julia
    julia> r = reshape(1:9, 3, 3)
    -3×3 reshape(::UnitRange{Int64}, 3, 3) with eltype Int64:
    - 1  4  7
    - 2  5  8
    - 3  6  9
    -
    -julia> pad_circular(r, (1,2,1,2))
    -6×6 Matrix{Int64}:
    - 9  3  6  9  3  6
    - 7  1  4  7  1  4
    - 8  2  5  8  2  5
    - 9  3  6  9  3  6
    - 7  1  4  7  1  4
    - 8  2  5  8  2  5

    source

    `,8))]),i("details",T,[i("summary",null,[s[36]||(s[36]=i("a",{id:"NNlib.pad_repeat",href:"#NNlib.pad_repeat"},[i("span",{class:"jlbinding"},"NNlib.pad_repeat")],-1)),s[37]||(s[37]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[38]||(s[38]=t(`
    julia
    pad_repeat(x, pad::Tuple; [dims])
    -pad_repeat(x, pad::Int; [dims])

    Pad the array x repeating the values on the border.

    pad can be a tuple of integers (l1, r1, ..., ln, rn) of some length 2n that specifies the left and right padding size for each of the dimensions in dims. If dims is not given, it defaults to the first n dimensions.

    If pad is an integer, it is applied on both sides on every dimension in dims. In this case, dims defaults to the first ndims(x)-2 dimensions (i.e. excludes the channel and batch dimension).

    See also pad_reflect, pad_symmetric, pad_circular, and pad_constant.

    julia
    julia> r = reshape(1:9, 3, 3)
    -3×3 reshape(::UnitRange{Int64}, 3, 3) with eltype Int64:
    - 1  4  7
    - 2  5  8
    - 3  6  9
    -
    -julia> pad_repeat(r, (1,2,3,4))
    -6×10 Matrix{Int64}:
    - 1  1  1  1  4  7  7  7  7  7
    - 1  1  1  1  4  7  7  7  7  7
    - 2  2  2  2  5  8  8  8  8  8
    - 3  3  3  3  6  9  9  9  9  9
    - 3  3  3  3  6  9  9  9  9  9
    - 3  3  3  3  6  9  9  9  9  9

    source

    `,7))]),i("details",B,[i("summary",null,[s[39]||(s[39]=i("a",{id:"NNlib.pad_constant",href:"#NNlib.pad_constant"},[i("span",{class:"jlbinding"},"NNlib.pad_constant")],-1)),s[40]||(s[40]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[41]||(s[41]=t(`
    julia
    pad_constant(x, pad::Tuple, val = 0; [dims = :])
    -pad_constant(x, pad::Int, val = 0; [dims = :])

    Pad the array x with the constant value val.

    pad can be a tuple of integers. If its length is 2 * length(dims), it specifies the left and right padding sizes for each of the dimensions in dims as (l1, r1, ..., ln, rn). If it has length length(dims) instead, symmetric padding is applied. If dims is not given, it defaults to all dimensions.

    For integer pad input, it is applied on both sides on every dimension in dims.

    See also pad_zeros, pad_repeat, pad_reflect, pad_symmetric, and pad_circular.

    julia
    julia> r = reshape(1:4, 2, 2)
    -2×2 reshape(::UnitRange{Int64}, 2, 2) with eltype Int64:
    - 1  3
    - 2  4
    -
    -julia> pad_constant(r, (1, 2, 3, 4), 8)
    -5×9 Matrix{Int64}:
    - 8  8  8  8  8  8  8  8  8
    - 8  8  8  1  3  8  8  8  8
    - 8  8  8  2  4  8  8  8  8
    - 8  8  8  8  8  8  8  8  8
    - 8  8  8  8  8  8  8  8  8
    -
    -julia> pad_constant(r, 1, 8)
    -4×4 Matrix{Int64}:
    - 8  8  8  8
    - 8  1  3  8
    - 8  2  4  8
    - 8  8  8  8
    -
    -julia> r = reshape(1:27, 3, 3, 3)
    -3×3×3 reshape(::UnitRange{Int64}, 3, 3, 3) with eltype Int64:
    -[:, :, 1] =
    - 1  4  7
    - 2  5  8
    - 3  6  9
    -
    -[:, :, 2] =
    - 10  13  16
    - 11  14  17
    - 12  15  18
    -
    -[:, :, 3] =
    - 19  22  25
    - 20  23  26
    - 21  24  27
    -
    -julia> pad_constant(r, (2,1), dims = 1) # asymmetric padding
    -6×3×3 Array{Int64, 3}:
    -[:, :, 1] =
    - 0  0  0
    - 0  0  0
    - 1  4  7
    - 2  5  8
    - 3  6  9
    - 0  0  0
    -
    -[:, :, 2] =
    -  0   0   0
    -  0   0   0
    - 10  13  16
    - 11  14  17
    - 12  15  18
    -  0   0   0
    -
    -[:, :, 3] =
    -  0   0   0
    -  0   0   0
    - 19  22  25
    - 20  23  26
    - 21  24  27
    -  0   0   0
    -
    -julia> pad_constant(r, (2,1, 3), dims = (1,2)) # padding must always be either the same length as dims, or double it
    -ERROR: ArgumentError: Could not parse padding (2, 1, 3) and dims (1, 2)
    -Stacktrace:
    -[...]

    source

    `,7))]),i("details",f,[i("summary",null,[s[42]||(s[42]=i("a",{id:"NNlib.pad_zeros",href:"#NNlib.pad_zeros"},[i("span",{class:"jlbinding"},"NNlib.pad_zeros")],-1)),s[43]||(s[43]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[44]||(s[44]=t(`
    julia
    pad_zeros(x, pad::Tuple; [dims])
    -pad_zeros(x, pad::Int; [dims])

    Pad the array x with zeros. Equivalent to pad_constant with the constant equal to 0.

    source
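    A small sketch (assuming NNlib is loaded; dims is passed explicitly here to pad both dimensions of a plain matrix):

```julia
using NNlib

r = reshape(1:4, 2, 2)
pad_zeros(r, 1; dims=(1, 2))  # one ring of zeros around the 2×2 block
# 4×4 Matrix{Int64}:
#  0  0  0  0
#  0  1  3  0
#  0  2  4  0
#  0  0  0  0
```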

    `,3))]),s[376]||(s[376]=i("h2",{id:"convolution",tabindex:"-1"},[a("Convolution "),i("a",{class:"header-anchor",href:"#convolution","aria-label":'Permalink to "Convolution"'},"​")],-1)),i("details",v,[i("summary",null,[s[45]||(s[45]=i("a",{id:"NNlib.conv",href:"#NNlib.conv"},[i("span",{class:"jlbinding"},"NNlib.conv")],-1)),s[46]||(s[46]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[47]||(s[47]=t('
    julia
    conv(x, w; stride = 1, pad = 0, dilation = 1, flipped = false, groups = 1)

    Apply convolution filter w to input x. x and w are 3d/4d/5d tensors in 1d/2d/3d convolutions respectively. x and w may have real or complex element types.

    source
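    For example, a 1-d sketch (assuming NNlib is loaded; a symmetric box filter is used so that kernel flipping does not change the result):

```julia
using NNlib

x = reshape(Float32.(1:5), 5, 1, 1)     # (width, channels, batch)
w = reshape(ones(Float32, 3), 3, 1, 1)  # length-3 box filter
y = conv(x, w)                          # "valid" convolution: output width 5 - 3 + 1 = 3
vec(y)  # Float32[6.0, 9.0, 12.0], each entry sums one length-3 window
```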

    ',3))]),i("details",N,[i("summary",null,[s[48]||(s[48]=i("a",{id:"NNlib.ConvDims",href:"#NNlib.ConvDims"},[i("span",{class:"jlbinding"},"NNlib.ConvDims")],-1)),s[49]||(s[49]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[50]||(s[50]=t('
    julia
    ConvDims

    Type system-level information about convolution dimensions. Critical for things like im2col!() to generate efficient code, and helpful to reduce the number of kwargs getting passed around.

    source

    ',3))]),i("details",x,[i("summary",null,[s[51]||(s[51]=i("a",{id:"NNlib.depthwiseconv",href:"#NNlib.depthwiseconv"},[i("span",{class:"jlbinding"},"NNlib.depthwiseconv")],-1)),s[52]||(s[52]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[53]||(s[53]=t('
    julia
    depthwiseconv(x, w; stride=1, pad=0, dilation=1, flipped=false)

    Depthwise convolution operation with filter w on input x. x and w are 3d/4d/5d tensors in 1d/2d/3d convolutions respectively.

    source

    ',3))]),i("details",j,[i("summary",null,[s[54]||(s[54]=i("a",{id:"NNlib.DepthwiseConvDims",href:"#NNlib.DepthwiseConvDims"},[i("span",{class:"jlbinding"},"NNlib.DepthwiseConvDims")],-1)),s[55]||(s[55]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[56]||(s[56]=t('
    julia
    DepthwiseConvDims

    Concrete subclass of ConvDims for a depthwise convolution. It differs primarily in being characterized by C_in and C_mult rather than C_in and C_out. Kept separate from DenseConvDims mainly because its channel calculations differ.

    source

    ',3))]),i("details",A,[i("summary",null,[s[57]||(s[57]=i("a",{id:"NNlib.DenseConvDims",href:"#NNlib.DenseConvDims"},[i("span",{class:"jlbinding"},"NNlib.DenseConvDims")],-1)),s[58]||(s[58]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[59]||(s[59]=t('
    julia
    DenseConvDims

    Concrete subclass of ConvDims for a normal, dense, conv2d/conv3d.

    source

    ',3))]),i("details",w,[i("summary",null,[s[60]||(s[60]=i("a",{id:"NNlib.unfold",href:"#NNlib.unfold"},[i("span",{class:"jlbinding"},"NNlib.unfold")],-1)),s[61]||(s[61]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[62]||(s[62]=t(`
    julia
    unfold(x, kernel_size; stride = 1, pad = 0, dilation = 0, flipped = true)

    Places sliding windows of x into a container tensor of size (num_windows, window_size, batchsize). The window size is prod(spatial dims of kernel) * input_channels. The number of sliding windows matches that of convolution (conv) with the same kernel_size and arguments. Note that by default conv flips the spatial dimensions of its kernel (default flipped=false), whereas unfold does not (default flipped=true). Uses NNlib.im2col! as backend.

    See also fold, the adjoint/transpose operator and a potential inverse of unfold.

    Example

    The example below demonstrates that unfold uses the same sliding windows as conv. In general, batched_mul + unfold should not be used to achieve convolution.

    julia
    julia> x = reshape([100 2 3 40 5 6 700], 7, 1, 1);  # 1D data, 1 channel, batch of 1
    -
    -julia> w = reshape([1 0 -1], 3, 1, 1);  # 1D conv kernel of length 3
    -
    -julia> kws = (pad=1, stride=2, flipped=true);  # use same args for conv and unfold
    -
    -julia> z = NNlib.unfold(x, size(w); kws...)
    -4×3×1 Array{Int64, 3}:
    -[:, :, 1] =
    -  0  100   2
    -  2    3  40
    - 40    5   6
    -  6  700   0
    -
    -julia> y1 = conv(x, w; kws...)
    -4×1×1 Array{Int64, 3}:
    -[:, :, 1] =
    -  -2
    - -38
    -  34
    -   6
    -
    -julia> y2 = z  w  # ⊠ (\\boxtimes) is NNlib.batched_mul
    -4×1×1 Array{Int64, 3}:
    -[:, :, 1] =
    -  -2
    - -38
    -  34
    -   6

    source

    `,7))]),i("details",D,[i("summary",null,[s[63]||(s[63]=i("a",{id:"NNlib.fold",href:"#NNlib.fold"},[i("span",{class:"jlbinding"},"NNlib.fold")],-1)),s[64]||(s[64]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[65]||(s[65]=t(`
    julia
    fold(y, output_size, kernel_size; stride = 1, pad = 0, dilation = 0, flipped = true)

    The adjoint/transpose operator of unfold. It accumulates sliding windows from the output of unfold into a container tensor of size output_size. An inverse to unfold may be obtained (in some cases) by using fold and accounting for scaling issues with a divisor (see example). Uses NNlib.col2im! as backend.

    See also unfold.

    Example

    julia
    julia> x = reshape([100 2 3 40 5 6 700], 7, 1, 1);  # 1D data, 1 channel, batch of 1
    -
    -julia> y = NNlib.unfold(x, (3,1,1))  # sliding window of size 3
    -5×3×1 Array{Int64, 3}:
    -[:, :, 1] =
    - 100   2    3
    -   2   3   40
    -   3  40    5
    -  40   5    6
    -   5   6  700
    -
    -julia> z = NNlib.fold(y, size(x), (3,1,1))  # sum of contributions in y. 100 appears once, 40 three times
    -7×1×1 Array{Int64, 3}:
    -[:, :, 1] =
    - 100
    -   4
    -   9
    - 120
    -  15
    -  12
    - 700
    -
    -julia> divisor = NNlib.fold(NNlib.unfold(ones(size(x)...), (3,1,1)), size(x), (3,1,1))
    -7×1×1 Array{Float64, 3}:
    -[:, :, 1] =
    - 1.0
    - 2.0
    - 3.0
    - 3.0
    - 3.0
    - 2.0
    - 1.0
    -
    -julia> z ./ divisor
    -7×1×1 Array{Float64, 3}:
    -[:, :, 1] =
    - 100.0
    -   2.0
    -   3.0
    -  40.0
    -   5.0
    -   6.0
    - 700.0

    In general, an inverse to unfold does not exist if divisor contains zeros.

    source

    `,7))]),s[377]||(s[377]=i("h2",{id:"upsampling",tabindex:"-1"},[a("Upsampling "),i("a",{class:"header-anchor",href:"#upsampling","aria-label":'Permalink to "Upsampling"'},"​")],-1)),i("details",L,[i("summary",null,[s[66]||(s[66]=i("a",{id:"NNlib.upsample_nearest",href:"#NNlib.upsample_nearest"},[i("span",{class:"jlbinding"},"NNlib.upsample_nearest")],-1)),s[67]||(s[67]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[68]||(s[68]=t(`
    julia
    upsample_nearest(x, scale::NTuple{S,Int})
    -upsample_nearest(x; size::NTuple{S,Int})

    Upsamples the array x by integer multiples along the first S dimensions. Subsequent dimensions of x are not altered.

    Either the scale factors or the final output size can be specified.

    See also upsample_bilinear, for two dimensions of an N=4 array.

    Example

    julia
    julia> upsample_nearest([1 2 3; 4 5 6], (2, 3))
    -4×9 Matrix{Int64}:
    - 1  1  1  2  2  2  3  3  3
    - 1  1  1  2  2  2  3  3  3
    - 4  4  4  5  5  5  6  6  6
    - 4  4  4  5  5  5  6  6  6
    -
    -julia> ans == upsample_nearest([1 2 3; 4 5 6]; size=(4, 9))  # equivalent
    -true
    -
    -julia> upsample_nearest([1 2 3; 4 5 6], (2,))
    -4×3 Matrix{Int64}:
    - 1  2  3
    - 1  2  3
    - 4  5  6
    - 4  5  6
    -
    -julia> ans == upsample_nearest([1 2 3; 4 5 6], size=(4,))
    -true

    source

    `,7))]),i("details",H,[i("summary",null,[s[69]||(s[69]=i("a",{id:"NNlib.∇upsample_nearest",href:"#NNlib.∇upsample_nearest"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_nearest")],-1)),s[70]||(s[70]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[71]||(s[71]=t('
    julia
    ∇upsample_nearest::AbstractArray{T,3}, scales::NTuple{S, <:Integer}) where T

    Arguments

    Outputs

    source

    ',6))]),i("details",V,[i("summary",null,[s[72]||(s[72]=i("a",{id:"NNlib.upsample_linear",href:"#NNlib.upsample_linear"},[i("span",{class:"jlbinding"},"NNlib.upsample_linear")],-1)),s[73]||(s[73]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[74]||(s[74]=t(`
    julia
    upsample_linear(x::AbstractArray{T,3}, scale::Real; align_corners::Bool = true)
    -upsample_linear(x::AbstractArray{T,3}; size::Integer, align_corners::Bool = true)

    Upsamples the first dimension of the array x by the provided scale, using linear interpolation. As an alternative to using scale, the resulting array size can be directly specified with a keyword argument.

    The size of the output is equal to (scale*S1, S2, S3), where S1, S2, S3 = size(x).
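    A minimal 1D sketch (a hypothetical input; output values are hand-computed linear interpolation, assuming the default align_corners = true):

    ```julia
    julia> using NNlib

    julia> x = reshape(Float32[1, 3, 5], (3, 1, 1));  # length-3 signal, 1 channel, batch of 1

    julia> upsample_linear(x, 2)  # doubles the first dimension: ≈ [1.0, 1.8, 2.6, 3.4, 4.2, 5.0]
    ```

    With align_corners = true the input endpoints map to the output endpoints, so the result interpolates evenly between 1, 3, and 5.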

    source

    `,4))]),i("details",_,[i("summary",null,[s[75]||(s[75]=i("a",{id:"NNlib.∇upsample_linear",href:"#NNlib.∇upsample_linear"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_linear")],-1)),s[76]||(s[76]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[77]||(s[77]=t('
    julia
    ∇upsample_linear::AbstractArray{T,3}; size::Integer, align_corners::Bool = true) where T

    Arguments

    Outputs

    source

    ',6))]),i("details",M,[i("summary",null,[s[78]||(s[78]=i("a",{id:"NNlib.upsample_bilinear",href:"#NNlib.upsample_bilinear"},[i("span",{class:"jlbinding"},"NNlib.upsample_bilinear")],-1)),s[79]||(s[79]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[80]||(s[80]=t(`
    julia
    upsample_bilinear(x::AbstractArray{T,4}, scale::NTuple{2,Real}; align_corners::Bool = true)
    -upsample_bilinear(x::AbstractArray{T,4}; size::NTuple{2,Integer}, align_corners::Bool = true)

    Upsamples the first 2 dimensions of the array x by the upsample factors stored in scale, using bilinear interpolation. As an alternative to using scale, the resulting image size can be directly specified with a keyword argument.

    The size of the output is equal to (scale[1]*S1, scale[2]*S2, S3, S4), where S1, S2, S3, S4 = size(x).

    Examples

    julia
    julia> x = reshape(Float32[1 2 3; 4 5 6], (2,3,1,1))
    -2×3×1×1 Array{Float32, 4}:
    -[:, :, 1, 1] =
    - 1.0  2.0  3.0
    - 4.0  5.0  6.0
    -
    -julia> upsample_bilinear(x, (2, 3))
    -4×9×1×1 Array{Float32, 4}:
    -[:, :, 1, 1] =
    - 1.0  1.25  1.5  1.75  2.0  2.25  2.5  2.75  3.0
    - 2.0  2.25  2.5  2.75  3.0  3.25  3.5  3.75  4.0
    - 3.0  3.25  3.5  3.75  4.0  4.25  4.5  4.75  5.0
    - 4.0  4.25  4.5  4.75  5.0  5.25  5.5  5.75  6.0
    -
    -julia> ans == upsample_bilinear(x; size=(4, 9))  # specify output size instead
    -true
    -
    -julia> upsample_bilinear(x, (2.5, 3.5))  # non-integer scaling factors are allowed
    -5×10×1×1 Array{Float32, 4}:
    -[:, :, 1, 1] =
    - 1.0   1.22222  1.44444  1.66667  1.88889  2.33333  2.55556  2.77778  3.0
    - 1.75  1.97222  2.19444  2.41667  2.63889     3.08333  3.30556  3.52778  3.75
    - 2.5   2.72222  2.94444  3.16667  3.38889     3.83333  4.05556  4.27778  4.5
    - 3.25  3.47222  3.69444  3.91667  4.13889     4.58333  4.80556  5.02778  5.25
    - 4.0   4.22222  4.44444  4.66667  4.88889     5.33333  5.55556  5.77778  6.0

    source

    `,6))]),i("details",z,[i("summary",null,[s[81]||(s[81]=i("a",{id:"NNlib.∇upsample_bilinear",href:"#NNlib.∇upsample_bilinear"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_bilinear")],-1)),s[82]||(s[82]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[83]||(s[83]=t('
    julia
    ∇upsample_bilinear::AbstractArray{T,4}; size::NTuple{2,Integer}, align_corners::Bool = true) where T

    Arguments

    Outputs

    source

    ',6))]),i("details",Z,[i("summary",null,[s[84]||(s[84]=i("a",{id:"NNlib.upsample_trilinear",href:"#NNlib.upsample_trilinear"},[i("span",{class:"jlbinding"},"NNlib.upsample_trilinear")],-1)),s[85]||(s[85]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[86]||(s[86]=t(`
    julia
    upsample_trilinear(x::AbstractArray{T,5}, scale::NTuple{3,Real}; align_corners::Bool = true)
    -upsample_trilinear(x::AbstractArray{T,5}; size::NTuple{3,Integer}, align_corners::Bool = true)

    Upsamples the first 3 dimensions of the array x by the upsample factors stored in scale, using trilinear interpolation. As an alternative to using scale, the resulting image size can be directly specified with a keyword argument.

    The size of the output is equal to (scale[1]*S1, scale[2]*S2, scale[3]*S3, S4, S5), where S1, S2, S3, S4, S5 = size(x).

    Examples

    julia
    upsample_trilinear(x, (2, 3, 4))
    -upsample_trilinear(x; size=(4, 9, 11))  # specify output size instead
    -upsample_trilinear(x, (2.5, 3.5, pi))  # non-integer scaling factors are allowed

    source

    `,6))]),i("details",I,[i("summary",null,[s[87]||(s[87]=i("a",{id:"NNlib.∇upsample_trilinear",href:"#NNlib.∇upsample_trilinear"},[i("span",{class:"jlbinding"},"NNlib.∇upsample_trilinear")],-1)),s[88]||(s[88]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[89]||(s[89]=t('
    julia
    ∇upsample_trilinear::AbstractArray{T,5}; size::NTuple{3,Integer}, align_corners::Bool = true) where T

    Arguments

    Outputs

    source

    ',6))]),i("details",O,[i("summary",null,[s[90]||(s[90]=i("a",{id:"NNlib.pixel_shuffle",href:"#NNlib.pixel_shuffle"},[i("span",{class:"jlbinding"},"NNlib.pixel_shuffle")],-1)),s[91]||(s[91]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[92]||(s[92]=t(`
    julia
    pixel_shuffle(x, r::Integer)

    Pixel shuffling operation, upscaling by a factor r.

    For 4-arrays representing N images, the operation converts input size(x) == (W, H, r^2*C, N) to output of size (r*W, r*H, C, N). For D-dimensional data, it expects ndims(x) == D+2 with channel and batch dimensions, and divides the number of channels by r^D.

    Used in super-resolution networks to upsample towards high resolution features. Reference: Shi et al., "Real-Time Single Image and Video Super-Resolution ...", CVPR 2016, https://arxiv.org/abs/1609.05158

    Examples

    julia
    julia> x = [10i + j + channel/10 for i in 1:2, j in 1:3, channel in 1:4, batch in 1:1]
    -2×3×4×1 Array{Float64, 4}:
    -[:, :, 1, 1] =
    - 11.1  12.1  13.1
    - 21.1  22.1  23.1
    -
    -[:, :, 2, 1] =
    - 11.2  12.2  13.2
    - 21.2  22.2  23.2
    -
    -[:, :, 3, 1] =
    - 11.3  12.3  13.3
    - 21.3  22.3  23.3
    -
    -[:, :, 4, 1] =
    - 11.4  12.4  13.4
    - 21.4  22.4  23.4
    -
    -julia> pixel_shuffle(x, 2)  # 4 channels used up as 2x upscaling of image dimensions
    -4×6×1×1 Array{Float64, 4}:
    -[:, :, 1, 1] =
    - 11.1  11.3  12.1  12.3  13.1  13.3
    - 11.2  11.4  12.2  12.4  13.2  13.4
    - 21.1  21.3  22.1  22.3  23.1  23.3
    - 21.2  21.4  22.2  22.4  23.2  23.4
    -
    -julia> y = [i + channel/10 for i in 1:3, channel in 1:6, batch in 1:1]
    -3×6×1 Array{Float64, 3}:
    -[:, :, 1] =
    - 1.1  1.2  1.3  1.4  1.5  1.6
    - 2.1  2.2  2.3  2.4  2.5  2.6
    - 3.1  3.2  3.3  3.4  3.5  3.6
    -
    -julia> pixel_shuffle(y, 2)  # 1D image, with 6 channels reduced to 3
    -6×3×1 Array{Float64, 3}:
    -[:, :, 1] =
    - 1.1  1.3  1.5
    - 1.2  1.4  1.6
    - 2.1  2.3  2.5
    - 2.2  2.4  2.6
    - 3.1  3.3  3.5
    - 3.2  3.4  3.6

    source

    `,7))]),s[378]||(s[378]=i("h2",{id:"rotation",tabindex:"-1"},[a("Rotation "),i("a",{class:"header-anchor",href:"#rotation","aria-label":'Permalink to "Rotation"'},"​")],-1)),s[379]||(s[379]=i("p",null,"Rotate images in the first two dimensions of an array.",-1)),i("details",P,[i("summary",null,[s[93]||(s[93]=i("a",{id:"NNlib.imrotate",href:"#NNlib.imrotate"},[i("span",{class:"jlbinding"},"NNlib.imrotate")],-1)),s[94]||(s[94]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[95]||(s[95]=t(`
    julia
    imrotate(arr::AbstractArray{T, 4}, θ; method=:bilinear, rotation_center=size(arr)  2 .+ 1)

    Rotates an array in the first two dimensions around the center pixel rotation_center. The default value of rotation_center is chosen so that there is an integer center pixel to rotate around for both even and odd sized arrays: for an even sized array of size (4,4) this is (3,3); for an odd array of size (3,3) it is (2,2). However, rotation_center can also be non-integer if specified.

    The angle θ is interpreted in radians.

    The adjoint is defined with ChainRulesCore.jl. This method also runs with CUDA (and in principle all KernelAbstractions.jl supported backends).

    Keywords

    Examples

    julia
    julia> arr = zeros((4,4,1,1)); arr[2,2,1,1] = 1;
    -
    -julia> arr
    -4×4×1×1 Array{Float64, 4}:
    -[:, :, 1, 1] =
    - 0.0  0.0  0.0  0.0
    - 0.0  1.0  0.0  0.0
    - 0.0  0.0  0.0  0.0
    - 0.0  0.0  0.0  0.0
    -
    -julia> NNlib.imrotate(arr, deg2rad(90)) # rotation around (3,3)
    -4×4×1×1 Array{Float64, 4}:
    -[:, :, 1, 1] =
    - 0.0  0.0  0.0  0.0
    - 0.0  0.0  0.0  1.0
    - 0.0  0.0  0.0  0.0
    - 0.0  0.0  0.0  0.0
    -
    -julia> NNlib.imrotate(arr, deg2rad(90), rotation_center=(2,2))
    -4×4×1×1 Array{Float64, 4}:
    -[:, :, 1, 1] =
    - 0.0  0.0  0.0  0.0
    - 0.0  1.0  0.0  0.0
    - 0.0  0.0  0.0  0.0
    - 0.0  0.0  0.0  0.0
    -
    -julia> arr = zeros((3,3,1,1)); arr[1,2,1,1] = 1
    -1
    -
    -julia> arr
    -3×3×1×1 Array{Float64, 4}:
    -[:, :, 1, 1] =
    - 0.0  1.0  0.0
    - 0.0  0.0  0.0
    - 0.0  0.0  0.0
    -
    -julia> NNlib.imrotate(arr, deg2rad(45))
    -3×3×1×1 Array{Float64, 4}:
    -[:, :, 1, 1] =
    - 0.0  0.207107  0.0
    - 0.0  0.0       0.207107
    - 0.0  0.0       0.0
    -
    -julia> NNlib.imrotate(arr, deg2rad(45), method=:nearest)
    -3×3×1×1 Array{Float64, 4}:
    -[:, :, 1, 1] =
    - 0.0  0.0  1.0
    - 0.0  0.0  0.0
    - 0.0  0.0  0.0

    source

    `,9))]),i("details",S,[i("summary",null,[s[96]||(s[96]=i("a",{id:"NNlib.∇imrotate",href:"#NNlib.∇imrotate"},[i("span",{class:"jlbinding"},"NNlib.∇imrotate")],-1)),s[97]||(s[97]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[98]||(s[98]=t(`
    julia
    ∇imrotate(dy, arr::AbstractArray{T, 4}, θ; method=:bilinear,
    -                                           rotation_center=size(arr)  2 .+ 1)

    Adjoint for imrotate. Gradient only with respect to arr and not θ.

    Arguments

    source

    `,5))]),s[380]||(s[380]=i("h2",{id:"Batched-Operations",tabindex:"-1"},[a("Batched Operations "),i("a",{class:"header-anchor",href:"#Batched-Operations","aria-label":'Permalink to "Batched Operations {#Batched-Operations}"'},"​")],-1)),i("details",q,[i("summary",null,[s[99]||(s[99]=i("a",{id:"NNlib.batched_mul",href:"#NNlib.batched_mul"},[i("span",{class:"jlbinding"},"NNlib.batched_mul")],-1)),s[100]||(s[100]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[101]||(s[101]=t(`
    julia
    batched_mul(A, B) -> C
    -A  B  # \\boxtimes

    Batched matrix multiplication. Result has C[:,:,k...] == A[:,:,k...] * B[:,:,k...] where k... represent any indices in the last dimensions.

    If ndims(A) == ndims(B) == 3 and size(B,3) == 1 then instead C[:,:,k] == A[:,:,k] * B[:,:,1], and similarly for A.

    To transpose each matrix, apply batched_transpose to the array, or batched_adjoint for conjugate-transpose:

    julia
    julia> A, B = randn(2,5,17), randn(5,9,17);
    -
    -julia> A  B |> size
    -(2, 9, 17)
    -
    -julia> batched_adjoint(A) |> size
    -(5, 2, 17)
    -
    -julia> batched_mul(A, batched_adjoint(randn(9,5,17))) |> size
    -(2, 9, 17)
    -
    -julia> A  randn(5,9,1) |> size
    -(2, 9, 17)
    -
    -julia> batched_transpose(A) == PermutedDimsArray(A, (2,1,3))
    -true

    The equivalent PermutedDimsArray may be used in place of batched_transpose. Other permutations are also handled by BLAS, provided that the batch index k is not the first dimension of the underlying array. Thus PermutedDimsArray(::Array, (1,3,2)) and PermutedDimsArray(::Array, (3,1,2)) are fine.

    However, A = PermutedDimsArray(::Array, (3,2,1)) is not acceptable to BLAS, since the batch dimension is the contiguous one: stride(A,3) == 1. This will be copied, as doing so is faster than batched_mul_generic!.

    Both this copy and batched_mul_generic! produce @debug messages, and setting for instance ENV["JULIA_DEBUG"] = NNlib will display them.

    source

    julia
    batched_mul(A::Array{T,3}, B::Matrix)
    -batched_mul(A::Matrix, B::Array{T,3})
    -A  B

    This is always matrix-matrix multiplication, but either A or B may lack a batch index.

    julia
    julia> randn(16,8,32)  randn(8,4) |> size
    -(16, 4, 32)
    -
    -julia> randn(16,8,32)  randn(8,4,1) |> size  # equivalent
    -(16, 4, 32)
    -
    -julia> randn(16,8)  randn(8,4,32) |> size
    -(16, 4, 32)

    See also batched_vec to regard B as a batch of vectors, A[:,:,k] * B[:,k].

    source

    `,15))]),i("details",R,[i("summary",null,[s[102]||(s[102]=i("a",{id:"NNlib.batched_mul!",href:"#NNlib.batched_mul!"},[i("span",{class:"jlbinding"},"NNlib.batched_mul!")],-1)),s[103]||(s[103]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[104]||(s[104]=t(`
    julia
    batched_mul!(C, A, B) -> C
    -batched_mul!(C, A, B, α=1, β=0)

    In-place batched matrix multiplication, equivalent to mul!(C[:,:,k], A[:,:,k], B[:,:,k], α, β) for all k. If size(B,3) == 1 then every batch uses B[:,:,1] instead.

    This will call batched_gemm! whenever possible. For real arrays this means that, for X ∈ [A,B,C], either stride(X,1)==1 or stride(X,2)==1; the latter may be caused by batched_transpose or by, for instance, PermutedDimsArray(::Array, (3,1,2)). Unlike batched_mul, this will never make a copy.

    For complex arrays, the wrapper made by batched_adjoint must be outermost to be seen. In this case the strides accepted by BLAS are more restricted: if stride(C,1)==1, then only stride(AorB::BatchedAdjoint,2) == 1 is accepted.

    source

    `,5))]),i("details",W,[i("summary",null,[s[105]||(s[105]=i("a",{id:"NNlib.batched_adjoint",href:"#NNlib.batched_adjoint"},[i("span",{class:"jlbinding"},"NNlib.batched_adjoint")],-1)),s[106]||(s[106]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[107]||(s[107]=t(`
    julia
    batched_transpose(A::AbstractArray{T,3})
    -batched_adjoint(A)

    Equivalent to applying transpose or adjoint to each matrix A[:,:,k].

    These exist to control how batched_mul behaves, as it operates on such matrix slices of an array with ndims(A)==3.

    PermutedDimsArray(A, (2,1,3)) is equivalent to batched_transpose(A), and is also understood by batched_mul (and more widely supported elsewhere).

    BatchedTranspose{T, S} <: AbstractBatchedMatrix{T, 3}
    -BatchedAdjoint{T, S}

    Lazy wrappers analogous to Transpose and Adjoint, returned by batched_transpose etc.

    source

    `,7))]),i("details",G,[i("summary",null,[s[108]||(s[108]=i("a",{id:"NNlib.batched_transpose",href:"#NNlib.batched_transpose"},[i("span",{class:"jlbinding"},"NNlib.batched_transpose")],-1)),s[109]||(s[109]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[110]||(s[110]=t(`
    julia
    batched_transpose(A::AbstractArray{T,3})
    -batched_adjoint(A)

    Equivalent to applying transpose or adjoint to each matrix A[:,:,k].

    These exist to control how batched_mul behaves, as it operates on such matrix slices of an array with ndims(A)==3.

    PermutedDimsArray(A, (2,1,3)) is equivalent to batched_transpose(A), and is also understood by batched_mul (and more widely supported elsewhere).

    BatchedTranspose{T, S} <: AbstractBatchedMatrix{T, 3}
    -BatchedAdjoint{T, S}

    Lazy wrappers analogous to Transpose and Adjoint, returned by batched_transpose etc.

    source

    `,7))]),i("details",U,[i("summary",null,[s[111]||(s[111]=i("a",{id:"NNlib.batched_vec",href:"#NNlib.batched_vec"},[i("span",{class:"jlbinding"},"NNlib.batched_vec")],-1)),s[112]||(s[112]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[113]||(s[113]=t(`
    julia
    batched_vec(A::Array{T,3}, B::Matrix)
    -batched_vec(A::Array{T,3}, b::Vector)

    Batched matrix-vector multiplication: the result has C[:,:,k] == A[:,:,k] * B[:,k] for all k, or else C[:,:,k] == A[:,:,k] * b for b::Vector.

    With the same argument types, batched_mul(A, B) would regard B as a fixed matrix, not a batch of vectors. Both methods reshape and then call batched_mul(::Array{T,3}, ::Array{T,3}).

    julia
    julia> A, B, b = randn(16,8,32), randn(8,32), randn(8);
    -
    -julia> batched_vec(A,B) |> size
    -(16, 32)
    -
    -julia> batched_vec(A,b) |> size
    -(16, 32)

    source

    `,5))]),s[381]||(s[381]=i("h2",{id:"Gather-and-Scatter",tabindex:"-1"},[a("Gather and Scatter "),i("a",{class:"header-anchor",href:"#Gather-and-Scatter","aria-label":'Permalink to "Gather and Scatter {#Gather-and-Scatter}"'},"​")],-1)),i("details",J,[i("summary",null,[s[114]||(s[114]=i("a",{id:"NNlib.gather",href:"#NNlib.gather"},[i("span",{class:"jlbinding"},"NNlib.gather")],-1)),s[115]||(s[115]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[116]||(s[116]=t(`
    julia
    NNlib.gather(src, idx) -> dst

    Reverse operation of scatter. Gathers data from source src and writes it in a destination dst according to the index array idx. For each k in CartesianIndices(idx), assign values to dst according to

    dst[:, ... , k] .= src[:, ... , idx[k]...]

    Notice that if idx is a vector containing integers and src is a matrix, the previous expression simplifies to

    dst[:, k] .= src[:, idx[k]]

    and k will run over 1:length(idx).

    The elements of idx can be integers or integer tuples and may be repeated. A single src column can end up being copied into zero, one, or multiple dst columns.

    See gather! for an in-place version.

    Examples

    julia
    julia> NNlib.gather([1,20,300,4000], [2,4,2])
    -3-element Vector{Int64}:
    -   20
    - 4000
    -   20
    -
    -julia> NNlib.gather([1 2 3; 4 5 6], [1,3,1,3,1])
    -2×5 Matrix{Int64}:
    - 1  3  1  3  1
    - 4  6  4  6  4

    source

    julia
    gather(src, IJK...)

    Convert the tuple of integer vectors IJK to a tuple of CartesianIndex and call gather on it: gather(src, CartesianIndex.(IJK...)).

    Examples

    julia
    julia> src = reshape([1:15;], 3, 5)
    -3×5 Matrix{Int64}:
    - 1  4  7  10  13
    - 2  5  8  11  14
    - 3  6  9  12  15
    -
    -julia> NNlib.gather(src, [1, 2], [2, 4])
    -2-element Vector{Int64}:
    -  4
    - 11

    source

    `,16))]),i("details",K,[i("summary",null,[s[117]||(s[117]=i("a",{id:"NNlib.gather!",href:"#NNlib.gather!"},[i("span",{class:"jlbinding"},"NNlib.gather!")],-1)),s[118]||(s[118]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[119]||(s[119]=t('
    julia
    NNlib.gather!(dst, src, idx)

    Reverse operation of scatter!. Gathers data from source src and writes it in destination dst according to the index array idx. For each k in CartesianIndices(idx), assign values to dst according to

    dst[:, ... , k] .= src[:, ... , idx[k]...]

    Notice that if idx is a vector containing integers, and both dst and src are matrices, the previous expression simplifies to

    dst[:, k] .= src[:, idx[k]]

    and k will run over 1:length(idx).

    The elements of idx can be integers or integer tuples and may be repeated. A single src column can end up being copied into zero, one, or multiple dst columns.

    See gather for an allocating version.

    source

    ',9))]),i("details",X,[i("summary",null,[s[120]||(s[120]=i("a",{id:"NNlib.scatter",href:"#NNlib.scatter"},[i("span",{class:"jlbinding"},"NNlib.scatter")],-1)),s[121]||(s[121]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[122]||(s[122]=t(`
    julia
    NNlib.scatter(op, src, idx; [init, dstsize])

    Scatter operation allocating a destination array dst and calling scatter!(op, dst, src, idx) on it.

    See scatter! for full details on how idx works.

    Examples

    julia
    julia> NNlib.scatter(+, [10,100,1000], [3,1,2])
    -3-element Vector{Int64}:
    -  100
    - 1000
    -   10
    -
    -julia> NNlib.scatter(+, [1 2 3 4; 5 6 7 8], [2,1,1,5])
    -2×5 Matrix{Int64}:
    -  5  1  0  0  4
    - 13  5  0  0  8
    -
    -julia> NNlib.scatter(*, [10,200,3000], [1,4,2]; init = 10, dstsize = 6)
    -6-element Vector{Int64}:
    -   100
    - 30000
    -    10
    -  2000
    -    10
    -    10

    source

    `,7))]),i("details",$,[i("summary",null,[s[123]||(s[123]=i("a",{id:"NNlib.scatter!",href:"#NNlib.scatter!"},[i("span",{class:"jlbinding"},"NNlib.scatter!")],-1)),s[124]||(s[124]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[125]||(s[125]=t(`
    julia
    NNlib.scatter!(op, dst, src, idx)

    Scatter operation, which writes data in src into dst at locations idx. A binary reduction operator op is applied during the scatter. For each index k in idx, values are accumulated in dst according to

    dst[:, ..., idx[k]...] = (op).(dst[:, ..., idx[k]...], src[:, ..., k...])

    See also scatter, gather.

    Arguments

    Examples

    julia
    julia> NNlib.scatter!(+, ones(3), [10,100], [1,3])
    -3-element Vector{Float64}:
    -  11.0
    -   1.0
    - 101.0
    -
    -julia> NNlib.scatter!(*, fill(0.5, 2, 4), [1 10; 100 1000], [3,2])
    -2×4 Matrix{Float64}:
    - 0.5    5.0   0.5  0.5
    - 0.5  500.0  50.0  0.5

    source

    `,9))]),s[382]||(s[382]=i("h2",{id:"sampling",tabindex:"-1"},[a("Sampling "),i("a",{class:"header-anchor",href:"#sampling","aria-label":'Permalink to "Sampling"'},"​")],-1)),i("details",Y,[i("summary",null,[s[126]||(s[126]=i("a",{id:"NNlib.grid_sample",href:"#NNlib.grid_sample"},[i("span",{class:"jlbinding"},"NNlib.grid_sample")],-1)),s[127]||(s[127]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[128]||(s[128]=t(`
    julia
    grid_sample(input::AbstractArray{T, 4}, grid::AbstractArray{T, 4}; padding_mode = :zeros)

    Given input, compute output by sampling input values at pixel locations from grid. Uses bilinear interpolation to calculate output values.

    This implementation assumes the extrema (-1 and 1) are considered as referring to the center points of the input’s corner pixels (i.e. align corners is true).

    Arguments

    Returns

    (W_out, H_out, C, N) sampled grid from input.

    Examples

    In the example below, grid contains two out-of-bound sampling locations, which are handled differently, depending on the padding_mode.

    julia
    julia> x = reshape(collect(1.0:4.0), (2, 2, 1, 1))
    -2×2×1×1 Array{Float64, 4}:
    -[:, :, 1, 1] =
    - 1.0  3.0
    - 2.0  4.0
    -
    -julia> grid = Array{Float64}(undef, 2, 3, 2, 1);
    -
    -julia> grid[:, 1, 1, 1] .= (-3, -1);
    -
    -julia> grid[:, 2, 1, 1] .= (0, -1);
    -
    -julia> grid[:, 3, 1, 1] .= (1, -1);
    -
    -julia> grid[:, 1, 2, 1] .= (-1, 1);
    -
    -julia> grid[:, 2, 2, 1] .= (0, 1);
    -
    -julia> grid[:, 3, 2, 1] .= (3, 1);
    -
    -julia> grid_sample(x, grid; padding_mode=:zeros)
    -3×2×1×1 Array{Float64, 4}:
    -[:, :, 1, 1] =
    - 0.0  3.0
    - 1.5  3.5
    - 2.0  0.0
    -
    -julia> grid_sample(x, grid; padding_mode=:border)
    -3×2×1×1 Array{Float64, 4}:
    -[:, :, 1, 1] =
    - 1.0  3.0
    - 1.5  3.5
    - 2.0  4.0

    source

    `,11))]),i("details",ss,[i("summary",null,[s[129]||(s[129]=i("a",{id:"NNlib.∇grid_sample",href:"#NNlib.∇grid_sample"},[i("span",{class:"jlbinding"},"NNlib.∇grid_sample")],-1)),s[130]||(s[130]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[131]||(s[131]=t('
    julia
    ∇grid_sample::AbstractArray{T, 4}, input::AbstractArray{T, 4}, grid::AbstractArray{T, 4}; padding_mode = :zeros) where T

    Arguments

    Returns

    dinput (same shape as input) and dgrid (same shape as grid) gradients.

    source

    ',6))]),s[383]||(s[383]=i("h2",{id:"losses",tabindex:"-1"},[a("Losses "),i("a",{class:"header-anchor",href:"#losses","aria-label":'Permalink to "Losses"'},"​")],-1)),i("details",is,[i("summary",null,[s[132]||(s[132]=i("a",{id:"NNlib.ctc_loss",href:"#NNlib.ctc_loss"},[i("span",{class:"jlbinding"},"NNlib.ctc_loss")],-1)),s[133]||(s[133]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[134]||(s[134]=t('
    julia
    ctc_loss(ŷ, y)

    Computes the connectionist temporal classification loss between ŷ and y. ŷ must be a classes-by-time matrix, i.e., each row represents a class and each column represents a time step. Additionally, the logsoftmax function will be applied to ŷ, so ŷ must be the raw activation values from the neural network and not, for example, the activations after being passed through a softmax activation function. y must be a 1D array of the labels associated with ŷ. The blank label is assumed to be the last label category in ŷ, so it is equivalent to size(ŷ, 1). Used for sequence-to-sequence classification problems such as speech recognition and handwriting recognition where the exact time-alignment of the output (e.g., letters) is not needed to solve the problem. See Graves et al. (2006) or Graves (2012) for mathematical details.

    source

    ',3))]),s[384]||(s[384]=i("h2",{id:"miscellaneous",tabindex:"-1"},[a("Miscellaneous "),i("a",{class:"header-anchor",href:"#miscellaneous","aria-label":'Permalink to "Miscellaneous"'},"​")],-1)),i("details",as,[i("summary",null,[s[135]||(s[135]=i("a",{id:"NNlib.logsumexp",href:"#NNlib.logsumexp"},[i("span",{class:"jlbinding"},"NNlib.logsumexp")],-1)),s[136]||(s[136]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[137]||(s[137]=t('
    julia
    logsumexp(x; dims = :)

    Computes log.(sum(exp.(x); dims)) in a numerically stable way. Without dims keyword this returns a scalar.

    See also logsoftmax.
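    A minimal sketch (hand-checked against the definition: exp(1) + exp(2) + exp(3) ≈ 30.19, whose log is ≈ 3.408):

    ```julia
    julia> using NNlib

    julia> logsumexp([1.0, 2.0, 3.0])  # ≈ log(exp(1) + exp(2) + exp(3)) ≈ 3.4076

    julia> logsumexp([1.0 2.0; 3.0 4.0]; dims=1)  # column-wise reduction, keeps the reduced dimension
    ```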

    source

    ',4))]),i("details",ts,[i("summary",null,[s[138]||(s[138]=i("a",{id:"NNlib.glu",href:"#NNlib.glu"},[i("span",{class:"jlbinding"},"NNlib.glu")],-1)),s[139]||(s[139]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[140]||(s[140]=t('
    julia
    glu(x, dim = 1)

    The gated linear unit from the "Language Modeling with Gated Convolutional Networks" paper.

    Calculates a .* sigmoid(b), where x is split in half along given dimension dim to form a and b.
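    A minimal sketch of the split (hypothetical values chosen so that sigmoid(0) = 0.5 makes the result easy to check by hand):

    ```julia
    julia> using NNlib

    julia> x = [1.0, 2.0, 0.0, 0.0];  # splits along dim 1 into a = [1, 2] and b = [0, 0]

    julia> glu(x, 1)  # a .* sigmoid.(b) = [1, 2] .* 0.5, i.e. ≈ [0.5, 1.0]
    ```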

    source

    ',4))]),s[385]||(s[385]=i("div",{class:"tip custom-block"},[i("p",{class:"custom-block-title"},"Tip"),i("p",null,[i("code",null,"within_gradient"),a(" function currently doesn't work for Enzyme. Prefer to use "),i("code",null,"LuxLib.Utils.within_autodiff"),a(" if needed. Though pay heed that this function is not part of the public API.")])],-1)),i("details",ns,[i("summary",null,[s[141]||(s[141]=i("a",{id:"NNlib.within_gradient",href:"#NNlib.within_gradient"},[i("span",{class:"jlbinding"},"NNlib.within_gradient")],-1)),s[142]||(s[142]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[143]||(s[143]=t(`
    julia
    within_gradient(x) --> Bool

    Returns false except when used inside a gradient call, when it returns true. Useful for Flux regularisation layers which behave differently during training and inference.

    This should work with any ChainRules-based differentiation package, in which case x is ignored. But Tracker.jl overloads within_gradient(x::TrackedArray), thus for widest use you should pass it an array whose gradient is of interest. There is also an overload for ForwardDiff.jl's Dual types (and arrays of them).

    Examples

    julia
    julia> using ForwardDiff, Zygote, NNlib
    -
    -julia> f_good(x) = if NNlib.within_gradient(x)
    -                     @show 10x
    -                   else
    -                     x
    -                   end;
    -
    -julia> Zygote.withgradient(f_good, 1.0)
    -10x = 10.0
    -(val = 10.0, grad = (10.0,))
    -
    -julia> ForwardDiff.derivative(f_good, 1.0)
    -10x = Dual{ForwardDiff.Tag{typeof(f_good), Float64}}(10.0,10.0)
    -10.0
    -
    -julia> f_bad(x, y) = if any(NNlib.within_gradient, (x, y))
    -                       @show x * y
    -                     else
    -                       x / y
    -                     end;
    -
    -julia> Zygote.withgradient(f_bad, 2.0, 3.0)
    -(val = 0.6666666666666666, grad = (0.3333333333333333, -0.2222222222222222))
    -
    -julia> ForwardDiff.derivative(x -> f_bad(x, 3.0), 2.0)
    -x * y = Dual{ForwardDiff.Tag{var"#9#10", Float64}}(6.0,3.0)
    -3.0

    What goes wrong in f_bad is that Zygote knows any to be non-differentiable, and thus completely ignores its contents. This is not a perfect mechanism, and the only style recommended is precisely that of f_good above.

    source

    `,7))]),s[386]||(s[386]=i("div",{class:"tip custom-block"},[i("p",{class:"custom-block-title"},"Tip"),i("p",null,[a("Use "),i("code",null,"LuxLib.API.bias_activation!!"),a(" or "),i("code",null,"LuxLib.API.bias_activation"),a(" instead of "),i("code",null,"NNlib.bias_act!"),a(".")])],-1)),i("details",ls,[i("summary",null,[s[144]||(s[144]=i("a",{id:"NNlib.bias_act!",href:"#NNlib.bias_act!"},[i("span",{class:"jlbinding"},"NNlib.bias_act!")],-1)),s[145]||(s[145]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[146]||(s[146]=t('
    julia
    bias_act!(σ, x, b)

    This is equivalent to x .= σ.(x .+ b), also replacing sigmoid & tanh with sigmoid_fast & tanh_fast. It will only overwrite x when x isa StridedArray{<:AbstractFloat}.

    When used within a gradient, it will overwrite only when σ has a method of derivatives_given_output which does not need the input at all. Such methods are defined by e.g. @scalar_rule relu(x) Ω > 0 where the derivative contains only Ω (the output) not x.

    Warning

    This is not safe to use if x is still needed for the gradient of some other function. Incorrect use will give silently wrong answers. It is intended mainly for Flux layers, in which the previous operation is known to be safe, e.g. bias_act!(σ, weight * input, bias) for a Dense layer.

    source

    ',5))]),s[387]||(s[387]=i("h2",{id:"dropout",tabindex:"-1"},[a("Dropout "),i("a",{class:"header-anchor",href:"#dropout","aria-label":'Permalink to "Dropout"'},"​")],-1)),s[388]||(s[388]=i("div",{class:"tip custom-block"},[i("p",{class:"custom-block-title"},"Tip"),i("p",null,[a("Use "),i("code",null,"LuxLib.API.dropout"),a(" instead of "),i("code",null,"NNlib.dropout"),a(".")])],-1)),i("details",es,[i("summary",null,[s[147]||(s[147]=i("a",{id:"NNlib.dropout",href:"#NNlib.dropout"},[i("span",{class:"jlbinding"},"NNlib.dropout")],-1)),s[148]||(s[148]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[149]||(s[149]=t(`
    julia
    dropout([rng], A, p; [dims])

    Returns an array in which each element of A is either replaced with zero, with probability p, or else multiplied by 1/(1-p).

    By default every element is treated independently. With keyword dims=1, a choice is made for every value of the 1st index i.e. each row of a matrix is either zero or not.

    Optional first argument is the random number generator used.

    Examples

    julia
    julia> dropout(ones(2, 10), 0.2)
    -2×10 Matrix{Float64}:
    - 1.25  1.25  0.0   1.25  1.25  1.25  1.25  1.25  1.25  1.25
    - 1.25  1.25  1.25  0.0   1.25  1.25  0.0   1.25  1.25  1.25
    -
    -julia> mean(dropout(ones(10^4, 5), 0.2), dims=1)
    -1×5 Matrix{Float64}:
    - 0.998  1.00075  0.99125  0.99575  1.00075
    -
    -julia> dropout(ones(5, 5), 0.7, dims=1)  # whole row the same
    -5×5 Matrix{Float64}:
    - 3.33333  3.33333  3.33333  3.33333  3.33333
    - 0.0      0.0      0.0      0.0      0.0
    - 0.0      0.0      0.0      0.0      0.0
    - 3.33333  3.33333  3.33333  3.33333  3.33333
    - 0.0      0.0      0.0      0.0      0.0
    -
    -julia> mean(dropout(ones(10^4, 5), 0.3, dims=1), dims=1)
    -1×5 Matrix{Float64}:
    - 1.00571  1.00571  1.00571  1.00571  1.00571

    source

    `,7))]),i("details",hs,[i("summary",null,[s[150]||(s[150]=i("a",{id:"NNlib.dropout!",href:"#NNlib.dropout!"},[i("span",{class:"jlbinding"},"NNlib.dropout!")],-1)),s[151]||(s[151]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[152]||(s[152]=t('
    julia
    dropout!(B, A, p; [dims])

    This does exactly B .= dropout(A, p; dims), or rather, it's the implementation of out-of-place dropout.

    source

    ',3))]),s[389]||(s[389]=i("h2",{id:"Internal-NNlib-Functions",tabindex:"-1"},[a("Internal NNlib Functions "),i("a",{class:"header-anchor",href:"#Internal-NNlib-Functions","aria-label":'Permalink to "Internal NNlib Functions {#Internal-NNlib-Functions}"'},"​")],-1)),s[390]||(s[390]=i("p",null,"These functions are not part of the public API and are subject to change without notice.",-1)),i("details",ps,[i("summary",null,[s[153]||(s[153]=i("a",{id:"NNlib.BatchedAdjoint",href:"#NNlib.BatchedAdjoint"},[i("span",{class:"jlbinding"},"NNlib.BatchedAdjoint")],-1)),s[154]||(s[154]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[155]||(s[155]=t(`
    julia
    batched_transpose(A::AbstractArray{T,3})
    -batched_adjoint(A)

    Equivalent to applying transpose or adjoint to each matrix A[:,:,k].

    These exist to control how batched_mul behaves, as it operates on such matrix slices of an array with ndims(A)==3.

    PermutedDimsArray(A, (2,1,3)) is equivalent to batched_transpose(A), and is also understood by batched_mul (and more widely supported elsewhere).

    BatchedTranspose{T, S} <: AbstractBatchedMatrix{T, 3}
    -BatchedAdjoint{T, S}

    Lazy wrappers analogous to Transpose and Adjoint, returned by batched_transpose etc.
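For illustration, a minimal sketch showing that the lazy wrapper agrees elementwise with the eager permutation mentioned above:

```julia
using NNlib

A = rand(2, 3, 4)
B = NNlib.batched_transpose(A)

# Each matrix slice B[:,:,k] is A[:,:,k] transposed.
@assert size(B) == (3, 2, 4)
@assert B == PermutedDimsArray(A, (2, 1, 3))
```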

    source

    `,7))]),i("details",ks,[i("summary",null,[s[156]||(s[156]=i("a",{id:"NNlib.∇conv_filter_direct!",href:"#NNlib.∇conv_filter_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_filter_direct!")],-1)),s[157]||(s[157]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[158]||(s[158]=t('
    julia
    ∇conv_filter_direct!(dw, x, dy, cdims; alpha=1, beta=0)

    Calculate the gradient imposed upon w in the convolution y = x * w.

    source

    ',3))]),i("details",ds,[i("summary",null,[s[159]||(s[159]=i("a",{id:"NNlib._check_trivial_rotations!",href:"#NNlib._check_trivial_rotations!"},[i("span",{class:"jlbinding"},"NNlib._check_trivial_rotations!")],-1)),s[160]||(s[160]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[161]||(s[161]=t('
    julia
    _check_trivial_rotations!(out, arr, θ, rotation_center)

    When θ = 0 || π/2 || π || 3π/2 and if rotation_center is in the middle of the array. For an even array of size 4, the rotation_center would need to be 2.5. For an odd array of size 5, the rotation_center would need to be 3.

    In those cases, rotations are trivial just by reversing or swapping some axes.

    source

    ',4))]),i("details",rs,[i("summary",null,[s[162]||(s[162]=i("a",{id:"NNlib.fast_act",href:"#NNlib.fast_act"},[i("span",{class:"jlbinding"},"NNlib.fast_act")],-1)),s[163]||(s[163]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[164]||(s[164]=t('
    julia
    NNlib.fast_act(f, [x::AbstractArray])

    Replaces f == tanh with tanh_fast, etc.

    Takes an optional 2nd argument, so that you can disable this replacement for some array or element types.

    source

    ',4))]),i("details",os,[i("summary",null,[s[165]||(s[165]=i("a",{id:"NNlib.spectrogram",href:"#NNlib.spectrogram"},[i("span",{class:"jlbinding"},"NNlib.spectrogram")],-1)),s[166]||(s[166]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[167]||(s[167]=t(`
    julia
    spectrogram(waveform;
    -    pad::Int = 0, n_fft::Int, hop_length::Int, window,
    -    center::Bool = true, power::Real = 2.0,
    -    normalized::Bool = false, window_normalized::Bool = false,
    -)

    Create a spectrogram or a batch of spectrograms from a raw audio signal.

    Arguments

    See stft for other arguments.

    Returns

    Spectrogram in the shape (T, F, B), where T is the number of window hops and F = n_fft ÷ 2 + 1.

    source

    `,8))]),i("details",gs,[i("summary",null,[s[168]||(s[168]=i("a",{id:"NNlib.is_strided",href:"#NNlib.is_strided"},[i("span",{class:"jlbinding"},"NNlib.is_strided")],-1)),s[169]||(s[169]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[170]||(s[170]=t('
    julia
    is_strided(A::AbstractArray) -> Bool

    This generalises A isa StridedArray to treat wrappers like A::PermutedDimsArray, for which it returns is_strided(parent(A)).

    It returns true for CuArrays, and PermutedDimsArrays of those.

    Other wrappers (defined outside Base, LinearAlgebra) are assumed not to break strided-ness, and hence also return is_strided(parent(A)). This correctly handles things like NamedDimsArray which don't alter indexing. However, it's a little pessimistic in that e.g. a view of such a container will return false, even in cases where the same view of parent(A) would be a StridedArray.
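A quick sketch of these rules on CPU arrays (this is an internal function, so behaviour may change):

```julia
using NNlib

A = rand(4, 5)
@assert NNlib.is_strided(A)                             # plain Array
@assert NNlib.is_strided(PermutedDimsArray(A, (2, 1)))  # wrapper of a strided parent
```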

    source

    ',5))]),i("details",ys,[i("summary",null,[s[171]||(s[171]=i("a",{id:"NNlib.conv_direct!",href:"#NNlib.conv_direct!"},[i("span",{class:"jlbinding"},"NNlib.conv_direct!")],-1)),s[172]||(s[172]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[173]||(s[173]=t('
    julia
    conv_direct!(y, x, w, cdims; alpha=1, beta=0)

    Direct convolution implementation; used for debugging, tests, and mixing/matching of strange datatypes within a single convolution. Uses naive nested for loop implementation and does not attempt to optimize performance. Rather, this implementation is intended to be maximally understandable and debuggable, to aid in testing other, more performant implementations. We also explicitly support mixing and matching of strange datatypes, so that if the user really wants to convolve an image of UInt8's with a Float16 kernel, storing the result in a Float32 output, there is at least a function call for that madness.

    The keyword arguments alpha and beta control accumulation behavior; this function calculates y = alpha * x * w + beta * y, therefore by setting beta to a nonzero value, the user is able to accumulate values into a preallocated y buffer, or by setting alpha to a nonunitary value, an arbitrary gain factor can be applied.

    By defaulting beta to false, we make use of the Bradbury promotion trick to override NaN's that may pre-exist within our output buffer, as false*NaN == 0.0, whereas 0.0*NaN == NaN. Only set beta if you are certain that none of the elements within y are NaN.
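The promotion behaviour relied on here is plain Julia semantics, easy to check directly:

```julia
# `false` is a "strong zero" under multiplication, unlike `0.0`:
@assert false * NaN == 0.0
@assert isnan(0.0 * NaN)

# so `alpha*x*w + beta*y` with `beta = false` ignores any NaNs left in `y`:
y = NaN
@assert 1.0 * 2.0 + false * y == 2.0
```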

    The basic implementation performs 3-dimensional convolution; 1-dimensional and 2-dimensional cases are supported by simply reshaping y, x and w, for which wrapper methods are available.

    source

    ',6))]),i("details",Es,[i("summary",null,[s[174]||(s[174]=i("a",{id:"NNlib.gemm!",href:"#NNlib.gemm!"},[i("span",{class:"jlbinding"},"NNlib.gemm!")],-1)),s[175]||(s[175]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[176]||(s[176]=t('
    julia
    gemm!()

    Low-level gemm!() call with pointers, borrowed from Knet.jl

    Calculates C = alpha*op(A)*op(B) + beta*C, where:

    source

    ',5))]),i("details",Fs,[i("summary",null,[s[177]||(s[177]=i("a",{id:"NNlib.calc_padding_regions",href:"#NNlib.calc_padding_regions"},[i("span",{class:"jlbinding"},"NNlib.calc_padding_regions")],-1)),s[178]||(s[178]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[179]||(s[179]=t('
    julia
    calc_padding_regions(dims)

    Padding is a jerk. A HUGE jerk that tries to sneak a bunch of conditionals and edge cases (quite literally) into our beautiful stencil operations such as convolution, pooling, etc... The way we deal with this is to, first, deal with everything in 3d, and then define a single padding region helper function that returns the seven regions that all 3d operations must deal with, including the central "unpadded" region where we can run at full bore, not paying any attention to padding.

    source

    ',3))]),i("details",cs,[i("summary",null,[s[180]||(s[180]=i("a",{id:"NNlib.∇depthwiseconv_data_im2col!",href:"#NNlib.∇depthwiseconv_data_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_data_im2col!")],-1)),s[181]||(s[181]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[182]||(s[182]=t('
    julia
    ∇depthwiseconv_data_im2col!(dx, w, dy, cdims, col=similar(dx); alpha=1, beta=0)

    Depthwise conv2d backward pass onto the input using im2col and GEMM. See conv_im2col! for explanation of optional parameters.

    source

    ',3))]),i("details",Cs,[i("summary",null,[s[183]||(s[183]=i("a",{id:"NNlib._prepare_imrotate",href:"#NNlib._prepare_imrotate"},[i("span",{class:"jlbinding"},"NNlib._prepare_imrotate")],-1)),s[184]||(s[184]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[185]||(s[185]=t('
    julia
    _prepare_imrotate(arr, θ, rotation_center)

    Prepares sin and cos, creates the output array, and converts the type of rotation_center if required.

    source

    ',3))]),i("details",us,[i("summary",null,[s[186]||(s[186]=i("a",{id:"NNlib.insert_singleton_spatial_dimension",href:"#NNlib.insert_singleton_spatial_dimension"},[i("span",{class:"jlbinding"},"NNlib.insert_singleton_spatial_dimension")],-1)),s[187]||(s[187]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[188]||(s[188]=t('
    julia
    insert_singleton_spatial_dimension(cdims::ConvDims)

    When converting a 1d convolution to a 2d, or a 2d to a 3d, we need to insert a singleton spatial dimension at the end of the spatial dimensions. This does so for a ConvDims.

    source

    ',3))]),i("details",ms,[i("summary",null,[s[189]||(s[189]=i("a",{id:"NNlib._fast_broadcast!",href:"#NNlib._fast_broadcast!"},[i("span",{class:"jlbinding"},"NNlib._fast_broadcast!")],-1)),s[190]||(s[190]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[191]||(s[191]=t('
    julia
    _fast_broadcast!(f, x, y, z...)

    This does x .= f.(x, y, z...), but works around an issue with broadcasting that prevents SIMD in such cases. Can perhaps be removed once https://github.com/JuliaLang/julia/issues/43153 is fixed.

    Has an rrule to avoid mutation within derivatives.

    Warning

    Not intended for general use. Uses @inbounds but does not check sizes! Assumes that f has no derivative!
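A minimal sketch of the in-place contract, for a function whose derivative is irrelevant (this is an internal helper, so its signature may change):

```julia
using NNlib

x = [1.0, 2.0, 3.0]
y = [10.0, 20.0, 30.0]

NNlib._fast_broadcast!(+, x, y)   # does x .= x .+ y, with the SIMD workaround
@assert x == [11.0, 22.0, 33.0]
```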

    source

    ',5))]),i("details",bs,[i("summary",null,[s[192]||(s[192]=i("a",{id:"NNlib.hann_window",href:"#NNlib.hann_window"},[i("span",{class:"jlbinding"},"NNlib.hann_window")],-1)),s[193]||(s[193]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[200]||(s[200]=t(`
    julia
    hann_window(
    -    window_length::Int, ::Type{T} = Float32; periodic::Bool = true,
    -) where T <: Real

    Hann window function (ref: Window function § Hann and Hamming windows - Wikipedia).

    `,2)),i("p",null,[i("mjx-container",Qs,[(h(),e("svg",Ts,s[194]||(s[194]=[t('',1)]))),s[195]||(s[195]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"w"),i("mo",{stretchy:"false"},"["),i("mi",null,"n"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("mfrac",null,[i("mn",null,"1"),i("mn",null,"2")]),i("mo",{stretchy:"false"},"["),i("mn",null,"1"),i("mo",null,"−"),i("mi",null,"cos"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"n")]),i("mrow",null,[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mo",{stretchy:"false"},")"),i("mo",{stretchy:"false"},"]")])],-1))])]),i("p",null,[s[198]||(s[198]=a("Where ")),i("mjx-container",Bs,[(h(),e("svg",fs,s[196]||(s[196]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 
637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[197]||(s[197]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[199]||(s[199]=a(" is the window length."))]),s[201]||(s[201]=t(`
    julia
    julia> lineplot(hann_window(100); width=30, height=10)
    -     ┌──────────────────────────────┐
    -   1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⠚⠉⠉⠉⠢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡔⠁⠀⠀⠀⠀⠀⠘⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠞⠀⠀⠀⠀⠀⠀⠀⠀⠈⢆⠀⠀⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⠀⠀⠀⢀⡎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢣⠀⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⠀⠀⠀⡎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢦⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⠀⢀⠞⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢆⠀⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⢀⡜⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢇⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⢀⠎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢦⠀⠀⠀⠀│
    -     │⠀⠀⠀⢠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠣⡀⠀⠀│
    -   0 │⣀⣀⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢤⣀│
    -     └──────────────────────────────┘
    -     ⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀100⠀

    Arguments:

    Keyword Arguments:

    julia
    julia> N = 256;
    -
    -julia> hann_window(N; periodic=true)  hann_window(N + 1; periodic=false)[1:end - 1]
    -true
    -
    -julia> hann_window(N)  hamming_window(N; α=0.5f0, β=0.5f0)
    -true

    Returns:

    Vector of length window_length and eltype T.

    source

    `,9))]),i("details",vs,[i("summary",null,[s[202]||(s[202]=i("a",{id:"NNlib._rng_from_array",href:"#NNlib._rng_from_array"},[i("span",{class:"jlbinding"},"NNlib._rng_from_array")],-1)),s[203]||(s[203]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[204]||(s[204]=t('
    julia
    _rng_from_array(x)

    Return the random number generator most appropriate for x: CUDA.default_rng() for CuArray, else Random.default_rng()
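On CPU, this simply falls through to the task-local default RNG, as a quick sketch shows (internal function, behaviour may change):

```julia
using NNlib, Random

# A plain Array is not device-specific, so the default RNG is returned.
@assert NNlib._rng_from_array(rand(3)) === Random.default_rng()
```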

    source

    ',3))]),i("details",Ns,[i("summary",null,[s[205]||(s[205]=i("a",{id:"NNlib.∇depthwiseconv_filter_im2col!",href:"#NNlib.∇depthwiseconv_filter_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_filter_im2col!")],-1)),s[206]||(s[206]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[207]||(s[207]=t(`
    julia
    ∇depthwiseconv_filter_im2col!(dw, w, dy, cdims, col=similar(dw, ∇filter_im2col_dims(cdims));
    -                              alpha=1, beta=0)

    Depthwise conv backward pass onto the weights using im2col and GEMM. See conv_im2col! for explanation of optional parameters.

    source

    `,3))]),i("details",xs,[i("summary",null,[s[208]||(s[208]=i("a",{id:"NNlib.istft",href:"#NNlib.istft"},[i("span",{class:"jlbinding"},"NNlib.istft")],-1)),s[209]||(s[209]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[224]||(s[224]=t(`
    julia
    istft(y;
    -    n_fft::Int, hop_length::Int = n_fft ÷ 4, window = nothing,
    -    center::Bool = true, normalized::Bool = false,
    -    return_complex::Bool = false,
    -    original_length::Union{Nothing, Int} = nothing,
    -)

    Inverse Short-time Fourier Transform.

    Return the least-squares estimate of the original signal.

    Arguments:

    Keyword Arguments:

    `,6)),i("ul",null,[s[222]||(s[222]=t("
  • n_fft::Int: Size of Fourier transform.

  • hop_length::Int: Distance between neighboring sliding window frames.

  • window: Window function that was applied to the input of stft. If nothing (default), then no window was applied.

  • ",3)),i("li",null,[i("p",null,[s[214]||(s[214]=i("code",null,"center::Bool",-1)),s[215]||(s[215]=a(": Whether input to ")),s[216]||(s[216]=i("code",null,"stft",-1)),s[217]||(s[217]=a(" was padded on both sides so that ")),i("mjx-container",js,[(h(),e("svg",As,s[210]||(s[210]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D461",d:"M26 385Q19 392 19 395Q19 399 22 411T27 425Q29 430 36 430T87 431H140L159 511Q162 522 166 540T173 566T179 586T187 603T197 615T211 624T229 626Q247 625 254 615T261 596Q261 589 252 549T232 470L222 433Q222 431 272 431H323Q330 424 330 420Q330 398 317 385H210L174 240Q135 80 135 68Q135 26 162 26Q197 26 230 60T283 144Q285 150 288 151T303 153H307Q322 153 322 145Q322 142 319 133Q314 117 301 95T267 48T216 6T155 -11Q125 -11 98 4T59 56Q57 64 57 83V101L92 241Q127 382 128 383Q128 385 77 385H26Z",style:{"stroke-width":"3"}})])])],-1)]))),s[211]||(s[211]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t")])],-1))]),s[218]||(s[218]=a("-th frame is centered at time ")),i("mjx-container",ws,[(h(),e("svg",Ds,s[212]||(s[212]=[t('',1)]))),s[213]||(s[213]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t"),i("mo",null,"×"),i("mtext",null,"hop length")])],-1))]),s[219]||(s[219]=a(". Padding is done with ")),s[220]||(s[220]=i("code",null,"pad_reflect",-1)),s[221]||(s[221]=a(" function."))])]),s[223]||(s[223]=t("
  • normalized::Bool: Whether input to stft was normalized.

  • return_complex::Bool: Whether the output should be complex, or if the input should be assumed to derive from a real signal and window.

  • original_length::Union{Nothing, Int}: Optional size of the first dimension of the input to stft. Helps restoring the exact stft input size. Otherwise, the array might be a bit shorter.

  • ",3))]),s[225]||(s[225]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/audio/stft.jl#L173-L205",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Ls,[i("summary",null,[s[226]||(s[226]=i("a",{id:"NNlib.transpose_swapbatch",href:"#NNlib.transpose_swapbatch"},[i("span",{class:"jlbinding"},"NNlib.transpose_swapbatch")],-1)),s[227]||(s[227]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[228]||(s[228]=t('
    julia
    transpose_swapbatch(x::AbstractArray)

    Given an AbstractArray, swap its batch and channel axes, as we must during transposed convolution. We do this to the operands during convolution, and then again to the output once we're done.

    source

    ',3))]),i("details",Hs,[i("summary",null,[s[229]||(s[229]=i("a",{id:"NNlib.transpose_pad",href:"#NNlib.transpose_pad"},[i("span",{class:"jlbinding"},"NNlib.transpose_pad")],-1)),s[230]||(s[230]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[231]||(s[231]=t('
    julia
    transpose_pad(cdims::ConvDims)

    Transposed convolution can be calculated in terms of typical convolution with some extra padding. This method computes the padding of the convolution that would result in the transposed convolution of two operands, in essence taking care of that "extra padding". Note that this method should almost always be accompanied by a call that predilates one of the operands.

    source

    ',3))]),i("details",Vs,[i("summary",null,[s[232]||(s[232]=i("a",{id:"NNlib.power_to_db",href:"#NNlib.power_to_db"},[i("span",{class:"jlbinding"},"NNlib.power_to_db")],-1)),s[233]||(s[233]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[234]||(s[234]=t('
    julia
    power_to_db(s; ref::Real = 1f0, amin::Real = 1f-10, top_db::Real = 80f0)

    Convert a power spectrogram (amplitude squared) to decibel (dB) units.

    Arguments

    Returns

    s_db ~= 10 * log10(s) - 10 * log10(ref)
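For pure power ratios the formula above reduces to the familiar 10·log10, and db_to_power round-trips it; a minimal sketch, assuming the default ref = 1:

```julia
using NNlib

s = [1.0f0, 10.0f0, 100.0f0]
s_db = NNlib.power_to_db(s)

# 10*log10.([1, 10, 100]) with ref = 1, well inside the top_db clamp:
@assert s_db ≈ [0.0f0, 10.0f0, 20.0f0]
@assert NNlib.db_to_power(s_db) ≈ s
```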

    source

    ',7))]),i("details",_s,[i("summary",null,[s[235]||(s[235]=i("a",{id:"NNlib.col2im!",href:"#NNlib.col2im!"},[i("span",{class:"jlbinding"},"NNlib.col2im!")],-1)),s[236]||(s[236]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[237]||(s[237]=t('
    julia
    col2im!(x, col, cdims, beta=0)

    Does the inverse of im2col!(), converting col back into a 3d image, used for backward passes, transposed convolutions, etc...

    Note that this method has not been optimized in the same way as im2col() has, because it is slightly more complicated due to the more chaotic data access patterns, and I'm not desperate enough yet.

    source

    ',4))]),i("details",Ms,[i("summary",null,[s[238]||(s[238]=i("a",{id:"NNlib.depthwiseconv_im2col!",href:"#NNlib.depthwiseconv_im2col!"},[i("span",{class:"jlbinding"},"NNlib.depthwiseconv_im2col!")],-1)),s[239]||(s[239]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[240]||(s[240]=t('
    julia
    depthwiseconv_im2col!(y, x, w, cdims, col=similar(x); alpha=1, beta=0)

    Perform a depthwise convolution using im2col and GEMM, store the result in y. See conv_im2col! for explanation of optional parameters.

    source

    ',3))]),i("details",zs,[i("summary",null,[s[241]||(s[241]=i("a",{id:"NNlib.storage_type",href:"#NNlib.storage_type"},[i("span",{class:"jlbinding"},"NNlib.storage_type")],-1)),s[242]||(s[242]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[243]||(s[243]=t(`
    julia
    storage_type(A) -> Type

    Removes all wrappers to return the Array or CuArray (or whatever) type within.

    julia> view(reshape(ones(10)',2,5),:, 3:4) |> storage_type
    -Array{Float64,1}
    -
    -julia> reshape(sparse(rand(10)), 5,2) |> storage_type
    -SparseVector{Float64,Int64}

    source

    `,4))]),i("details",Zs,[i("summary",null,[s[244]||(s[244]=i("a",{id:"NNlib.im2col_dims",href:"#NNlib.im2col_dims"},[i("span",{class:"jlbinding"},"NNlib.im2col_dims")],-1)),s[245]||(s[245]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[246]||(s[246]=t('
    julia
    im2col_dims(c::ConvDims)

    im2col calculates, for each output pixel, the "convolution" of N kernels where N is the number of output channels, by doing a matrix multiply. The dimensions of that matrix are given by this function.

    Note that because im2col is multithreaded, we need to allocate a separate workspace of memory per-thread; hence the dimensions returned by this will depend on the number of threads Julia is currently running with.

    source

    ',4))]),i("details",Is,[i("summary",null,[s[247]||(s[247]=i("a",{id:"NNlib.∇depthwiseconv_filter_direct!",href:"#NNlib.∇depthwiseconv_filter_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_filter_direct!")],-1)),s[248]||(s[248]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[249]||(s[249]=t('
    julia
    ∇depthwiseconv_filter_direct!(dw, x, dy, cdims; alpha=1, beta=0)

    Calculate the gradient imposed upon w in the depthwise convolution y = x * w.

    source

    ',3))]),i("details",Os,[i("summary",null,[s[250]||(s[250]=i("a",{id:"NNlib.reverse_indices",href:"#NNlib.reverse_indices"},[i("span",{class:"jlbinding"},"NNlib.reverse_indices")],-1)),s[251]||(s[251]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[252]||(s[252]=t('
    julia
    reverse_indices(idx)

    Return the reverse indices of idx: the indices of idx become the values, and the values of idx become the indices.

    Arguments

    source

    ',5))]),i("details",Ps,[i("summary",null,[s[253]||(s[253]=i("a",{id:"NNlib.∇conv_filter_im2col!",href:"#NNlib.∇conv_filter_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_filter_im2col!")],-1)),s[254]||(s[254]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[255]||(s[255]=t(`
    julia
    ∇conv_filter_im2col!(dw, x, dy, cdims, col=similar(dw, ∇filter_im2col_dims(cdims));
    -                     alpha=1, beta=0)

    Conv backward pass onto the weights using im2col and GEMM; stores the result in dw. See conv_im2col! for explanation of optional parameters.

    source

    `,3))]),i("details",Ss,[i("summary",null,[s[256]||(s[256]=i("a",{id:"NNlib.conv_im2col!",href:"#NNlib.conv_im2col!"},[i("span",{class:"jlbinding"},"NNlib.conv_im2col!")],-1)),s[257]||(s[257]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[258]||(s[258]=t('
    julia
    conv_im2col!(y, x, w, cdims, col=similar(x); alpha=1, beta=0)

    Perform a convolution using im2col and GEMM, store the result in y. The kwargs alpha and beta control accumulation behavior; internally this operation is implemented as a matrix multiply that boils down to y = alpha * x * w + beta * y, thus by setting beta to a nonzero value, multiple results can be accumulated into y, or by setting alpha to a nonunitary value, various gain factors can be applied.

    Note for the particularly performance-minded, you can provide a pre-allocated col, which should eliminate any need for large allocations within this method.

    source

    ',4))]),i("details",qs,[i("summary",null,[s[259]||(s[259]=i("a",{id:"NNlib.∇conv_data_direct!",href:"#NNlib.∇conv_data_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_data_direct!")],-1)),s[260]||(s[260]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[261]||(s[261]=t('
    julia
    ∇conv_data_direct!(dx, dy, w, cdims; alpha=1, beta=0)

    Calculate the gradient imposed upon x in the convolution y = x * w.

    source

    ',3))]),i("details",Rs,[i("summary",null,[s[262]||(s[262]=i("a",{id:"NNlib.scatter_dims",href:"#NNlib.scatter_dims"},[i("span",{class:"jlbinding"},"NNlib.scatter_dims")],-1)),s[263]||(s[263]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[264]||(s[264]=i("p",null,"Performs dimensional consistency checks and return the dimensionality of the scattered objects.",-1)),s[265]||(s[265]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/scatter.jl#L16-L19",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Ws,[i("summary",null,[s[266]||(s[266]=i("a",{id:"NNlib.∇conv_data_im2col!",href:"#NNlib.∇conv_data_im2col!"},[i("span",{class:"jlbinding"},"NNlib.∇conv_data_im2col!")],-1)),s[267]||(s[267]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[268]||(s[268]=t('
    julia
    ∇conv_data_im2col!(dx, w, dy, cdims, col=similar(dx); alpha=1, beta=0)

    Conv2d backward pass onto the input using im2col and GEMM; stores the result in dx. See conv_im2col! for explanation of optional parameters.

    source

    ',3))]),i("details",Gs,[i("summary",null,[s[269]||(s[269]=i("a",{id:"NNlib.storage_typejoin",href:"#NNlib.storage_typejoin"},[i("span",{class:"jlbinding"},"NNlib.storage_typejoin")],-1)),s[270]||(s[270]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[271]||(s[271]=t(`
    julia
    storage_typejoin(A, B, C, ...) -> Type

    Reduces with Base.promote_typejoin, so that the result conveys useful information for dispatching to BLAS. It does not tell you what container to allocate:

    julia> storage_typejoin(rand(2), rand(Float32, 2))
    -Array{T,1} where T
    -
    -julia> eltype(ans) <: LinearAlgebra.BlasFloat
    -false
    -
    -julia> storage_typejoin(rand(2), rand(2,3), rand(2,3,4))
    -Array{Float64,N} where N

    source

    `,4))]),i("details",Us,[i("summary",null,[s[272]||(s[272]=i("a",{id:"NNlib.add_blanks",href:"#NNlib.add_blanks"},[i("span",{class:"jlbinding"},"NNlib.add_blanks")],-1)),s[273]||(s[273]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[274]||(s[274]=t('
    julia
    add_blanks(z)

    Adds blanks to the start and end of z, and between items in z

    source

    ',3))]),i("details",Js,[i("summary",null,[s[275]||(s[275]=i("a",{id:"NNlib.∇filter_im2col_dims",href:"#NNlib.∇filter_im2col_dims"},[i("span",{class:"jlbinding"},"NNlib.∇filter_im2col_dims")],-1)),s[276]||(s[276]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[277]||(s[277]=t('
    julia
    ∇filter_im2col_dims(c::ConvDims)

    Like im2col_dims, but saves some memory because multiple (Julia) threads are not required for the filter gradient calculation.

    Note: in the future, this may return Dims{2} instead of Dims{3}.

    source

    ',4))]),i("details",Ks,[i("summary",null,[s[278]||(s[278]=i("a",{id:"NNlib._bilinear_helper",href:"#NNlib._bilinear_helper"},[i("span",{class:"jlbinding"},"NNlib._bilinear_helper")],-1)),s[279]||(s[279]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[280]||(s[280]=i("p",null,"_bilinear_helper(yrot, xrot, yrot_f, xrot_f, yrot_int, xrot_int)",-1)),s[281]||(s[281]=i("p",null,"Some helper variables",-1)),s[282]||(s[282]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/rotation.jl#L20-L24",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Xs,[i("summary",null,[s[283]||(s[283]=i("a",{id:"NNlib._triangular_filterbanks",href:"#NNlib._triangular_filterbanks"},[i("span",{class:"jlbinding"},"NNlib._triangular_filterbanks")],-1)),s[284]||(s[284]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[285]||(s[285]=t(`
    julia
    _triangular_filterbanks(
    -    freq_points::Vector{Float32}, all_freqs::Vector{Float32})

    Create triangular filter banks.

    Arguments:

    Returns:

    Array of size (n_freqs, n_filters).

    source

    `,7))]),i("details",$s,[i("summary",null,[s[286]||(s[286]=i("a",{id:"NNlib.∇depthwiseconv_data_direct!",href:"#NNlib.∇depthwiseconv_data_direct!"},[i("span",{class:"jlbinding"},"NNlib.∇depthwiseconv_data_direct!")],-1)),s[287]||(s[287]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[288]||(s[288]=t('
    julia
    ∇depthwiseconv_data_direct!(dx, dy, w, cdims; alpha=1, beta=0)

    Calculate the gradient imposed upon x in the depthwise convolution y = x * w. We make use of the fact that a depthwise convolution is equivalent to C_in separate normal convolutions between that channel of x and the C_mult different kernels that get applied to it. The output of such a convolution is the gradient imposed upon that particular channel of x, and so we simply walk through x, calculating the gradient for each batch and channel independently.

    source

    ',3))]),i("details",Ys,[i("summary",null,[s[289]||(s[289]=i("a",{id:"NNlib.db_to_power",href:"#NNlib.db_to_power"},[i("span",{class:"jlbinding"},"NNlib.db_to_power")],-1)),s[290]||(s[290]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[291]||(s[291]=t('
    julia
    db_to_power(s_db; ref::Real = 1f0)

    Inverse of power_to_db.

    source
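The power/decibel round trip can be sketched like this. This is an illustrative Python version under the assumption that power_to_db computes `10 * log10(s / ref)`; NNlib's actual implementation additionally clamps small values, which is omitted here.

```python
import math

def power_to_db(s, ref=1.0):
    # 10 * log10(s / ref); real implementations clamp tiny s to avoid -inf.
    return 10.0 * math.log10(s / ref)

def db_to_power(s_db, ref=1.0):
    # Inverse of power_to_db: ref * 10^(s_db / 10).
    return ref * 10.0 ** (s_db / 10.0)
```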

    ',3))]),i("details",si,[i("summary",null,[s[292]||(s[292]=i("a",{id:"NNlib.predilated_size",href:"#NNlib.predilated_size"},[i("span",{class:"jlbinding"},"NNlib.predilated_size")],-1)),s[293]||(s[293]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[294]||(s[294]=t('
    julia
    predilated_size(x_size::Tuple, dilation::Tuple)

    Calculate the size of a predilated x given a particular dilation factor. This is used within predilate() and transpose_cdims().

    source
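The size arithmetic is simple: dilating by a factor d inserts d − 1 zeros between neighbouring elements, so a length-n axis grows to (n − 1) · d + 1. A one-line Python sketch of that rule (illustrative, not NNlib's code):

```python
def predilated_size(x_size, dilation):
    # Each axis of length n becomes (n - 1) * d + 1 after inserting
    # (d - 1) zeros between consecutive elements.
    return tuple((n - 1) * d + 1 for n, d in zip(x_size, dilation))

print(predilated_size((5, 5), (2, 2)))  # (9, 9)
```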

    ',3))]),i("details",ii,[i("summary",null,[s[295]||(s[295]=i("a",{id:"NNlib.stft",href:"#NNlib.stft"},[i("span",{class:"jlbinding"},"NNlib.stft")],-1)),s[296]||(s[296]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[328]||(s[328]=t(`
    julia
    stft(x;
    -    n_fft::Int, hop_length::Int = n_fft ÷ 4, window = nothing,
    -    center::Bool = true, normalized::Bool = false,
    -)

    Short-time Fourier transform (STFT).

    The STFT computes the Fourier transform of short overlapping windows of the input, giving frequency components of the signal as they change over time.

    `,3)),i("p",null,[i("mjx-container",ai,[(h(),e("svg",ti,s[297]||(s[297]=[t('',1)]))),s[298]||(s[298]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"Y"),i("mo",{stretchy:"false"},"["),i("mi",null,"ω"),i("mo",null,","),i("mi",null,"m"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("munderover",null,[i("mo",{"data-mjx-texclass":"OP"},"∑"),i("mrow",{"data-mjx-texclass":"ORD"},[i("mi",null,"k"),i("mo",null,"="),i("mn",null,"0")]),i("mrow",{"data-mjx-texclass":"ORD"},[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mtext",null,"window"),i("mo",{stretchy:"false"},"["),i("mi",null,"k"),i("mo",{stretchy:"false"},"]"),i("mtext",null,"input"),i("mo",{stretchy:"false"},"["),i("mi",null,"m"),i("mo",null,"×"),i("mtext",null,"hop length"),i("mo",null,"+"),i("mi",null,"k"),i("mo",{stretchy:"false"},"]"),i("mi",null,"exp"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mo",null,"−"),i("mi",null,"j"),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"ω"),i("mi",null,"k")]),i("mtext",null,"n fft")]),i("mo",{stretchy:"false"},")")])],-1))])]),i("p",null,[s[307]||(s[307]=a("where ")),i("mjx-container",ni,[(h(),e("svg",li,s[299]||(s[299]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 
676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[300]||(s[300]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[308]||(s[308]=a(" is the window length, ")),i("mjx-container",ei,[(h(),e("svg",hi,s[301]||(s[301]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D714",d:"M495 384Q495 406 514 424T555 443Q574 443 589 425T604 364Q604 334 592 278T555 155T483 38T377 -11Q297 -11 267 66Q266 68 260 61Q201 -11 125 -11Q15 -11 15 139Q15 230 56 325T123 434Q135 441 147 436Q160 429 160 418Q160 406 140 379T94 306T62 208Q61 202 61 187Q61 124 85 100T143 76Q201 76 245 129L253 137V156Q258 297 317 297Q348 297 348 261Q348 243 338 213T318 158L308 135Q309 133 310 129T318 115T334 97T358 83T393 76Q456 76 501 148T546 274Q546 305 533 325T508 357T495 384Z",style:{"stroke-width":"3"}})])])],-1)]))),s[302]||(s[302]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"ω")])],-1))]),s[309]||(s[309]=a(" is the frequency ")),i("mjx-container",pi,[(h(),e("svg",ki,s[303]||(s[303]=[t('',1)]))),s[304]||(s[304]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mn",null,"0"),i("mo",null,"≤"),i("mi",null,"ω"),i("mo",null,"<"),i("mtext",null,"n fft")])],-1))]),s[310]||(s[310]=a(" and ")),i("mjx-container",di,[(h(),e("svg",ri,s[305]||(s[305]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D45A",d:"M21 287Q22 293 24 303T36 341T56 388T88 425T132 442T175 435T205 417T221 395T229 376L231 369Q231 367 232 367L243 378Q303 442 384 442Q401 442 415 440T441 433T460 423T475 411T485 398T493 385T497 373T500 364T502 357L510 367Q573 442 659 442Q713 442 746 415T780 336Q780 285 742 178T704 50Q705 36 709 31T724 26Q752 26 776 56T815 138Q818 149 821 151T837 153Q857 153 857 145Q857 144 853 130Q845 101 831 73T785 17T716 -10Q669 -10 648 17T627 73Q627 92 663 193T700 345Q700 404 656 404H651Q565 404 506 303L499 291L466 157Q433 26 428 16Q415 -11 385 -11Q372 -11 364 -4T353 8T350 18Q350 29 384 161L420 307Q423 322 423 345Q423 404 379 404H374Q288 404 229 303L222 291L189 157Q156 26 151 16Q138 -11 108 -11Q95 -11 87 -5T76 
7T74 17Q74 30 112 181Q151 335 151 342Q154 357 154 369Q154 405 129 405Q107 405 92 377T69 316T57 280Q55 278 41 278H27Q21 284 21 287Z",style:{"stroke-width":"3"}})])])],-1)]))),s[306]||(s[306]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"m")])],-1))]),s[311]||(s[311]=a(" is the index of the sliding window."))]),s[329]||(s[329]=i("p",null,[i("strong",null,"Arguments:")],-1)),s[330]||(s[330]=i("ul",null,[i("li",null,[i("code",null,"x"),a(": Input, must be either a 1D time sequence ("),i("code",null,"(L,)"),a(" shape) or a 2D batch of time sequence ("),i("code",null,"(L, B)"),a(" shape).")])],-1)),s[331]||(s[331]=i("p",null,[i("strong",null,"Keyword Arguments:")],-1)),i("ul",null,[s[327]||(s[327]=t("
  • n_fft::Int: Size of Fourier transform.

  • hop_length::Int: Distance between neighboring sliding window frames.

  • window: Optional window function to apply. Must be a 1D vector with 0 < length(window) ≤ n_fft. If window is shorter than n_fft, it is padded with zeros on both sides. If nothing (default), then no window is applied.

  • ",3)),i("li",null,[i("p",null,[s[316]||(s[316]=i("code",null,"center::Bool",-1)),s[317]||(s[317]=a(": Whether to pad input on both sides so that ")),i("mjx-container",oi,[(h(),e("svg",gi,s[312]||(s[312]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D461",d:"M26 385Q19 392 19 395Q19 399 22 411T27 425Q29 430 36 430T87 431H140L159 511Q162 522 166 540T173 566T179 586T187 603T197 615T211 624T229 626Q247 625 254 615T261 596Q261 589 252 549T232 470L222 433Q222 431 272 431H323Q330 424 330 420Q330 398 317 385H210L174 240Q135 80 135 68Q135 26 162 26Q197 26 230 60T283 144Q285 150 288 151T303 153H307Q322 153 322 145Q322 142 319 133Q314 117 301 95T267 48T216 6T155 -11Q125 -11 98 4T59 56Q57 64 57 83V101L92 241Q127 382 128 383Q128 385 77 385H26Z",style:{"stroke-width":"3"}})])])],-1)]))),s[313]||(s[313]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t")])],-1))]),s[318]||(s[318]=a("-th frame is centered at time ")),i("mjx-container",yi,[(h(),e("svg",Ei,s[314]||(s[314]=[t('',1)]))),s[315]||(s[315]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"t"),i("mo",null,"×"),i("mtext",null,"hop length")])],-1))]),s[319]||(s[319]=a(". Padding is done with ")),s[320]||(s[320]=i("code",null,"pad_reflect",-1)),s[321]||(s[321]=a(" function."))])]),i("li",null,[i("p",null,[s[324]||(s[324]=i("code",null,"normalized::Bool",-1)),s[325]||(s[325]=a(": Whether to return normalized STFT, i.e. multiplied with ")),i("mjx-container",Fi,[(h(),e("svg",ci,s[322]||(s[322]=[t('',1)]))),s[323]||(s[323]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("msup",null,[i("mtext",null,"n fft"),i("mrow",{"data-mjx-texclass":"ORD"},[i("mo",null,"−"),i("mn",null,"0.5")])])])],-1))]),s[326]||(s[326]=a("."))])])]),s[332]||(s[332]=i("p",null,[i("strong",null,"Returns:")],-1)),s[333]||(s[333]=i("p",null,[a("Complex array of shape "),i("code",null,"(n_fft, n_frames, B)"),a(", where "),i("code",null,"B"),a(" is the optional batch dimension.")],-1)),s[334]||(s[334]=i("p",null,[i("a",{href:"https://github.com/FluxML/NNlib.jl/blob/v0.9.27/src/audio/stft.jl#L130-L170",target:"_blank",rel:"noreferrer"},"source")],-1))]),i("details",Ci,[i("summary",null,[s[335]||(s[335]=i("a",{id:"NNlib.hamming_window",href:"#NNlib.hamming_window"},[i("span",{class:"jlbinding"},"NNlib.hamming_window")],-1)),s[336]||(s[336]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[343]||(s[343]=t(`
    julia
    hamming_window(
    -    window_length::Int, ::Type{T} = Float32; periodic::Bool = true,
    -    α::T = T(0.54), β::T = T(0.46),
    -) where T <: Real

    Hamming window function (ref: Window function § Hann and Hamming windows - Wikipedia). Generalized version of hann_window.

    `,2)),i("p",null,[i("mjx-container",ui,[(h(),e("svg",mi,s[337]||(s[337]=[t('',1)]))),s[338]||(s[338]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"w"),i("mo",{stretchy:"false"},"["),i("mi",null,"n"),i("mo",{stretchy:"false"},"]"),i("mo",null,"="),i("mi",null,"α"),i("mo",null,"−"),i("mi",null,"β"),i("mi",null,"cos"),i("mo",{"data-mjx-texclass":"NONE"},"⁡"),i("mo",{stretchy:"false"},"("),i("mfrac",null,[i("mrow",null,[i("mn",null,"2"),i("mi",null,"π"),i("mi",null,"n")]),i("mrow",null,[i("mi",null,"N"),i("mo",null,"−"),i("mn",null,"1")])]),i("mo",{stretchy:"false"},")")])],-1))])]),i("p",null,[s[341]||(s[341]=a("Where ")),i("mjx-container",bi,[(h(),e("svg",Qi,s[339]||(s[339]=[i("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[i("g",{"data-mml-node":"math"},[i("g",{"data-mml-node":"mi"},[i("path",{"data-c":"1D441",d:"M234 637Q231 637 226 637Q201 637 196 638T191 649Q191 676 202 682Q204 683 299 683Q376 683 387 683T401 677Q612 181 616 168L670 381Q723 592 723 606Q723 633 659 637Q635 637 635 648Q635 650 637 660Q641 676 643 679T653 683Q656 683 684 682T767 680Q817 680 843 681T873 682Q888 682 888 672Q888 650 880 642Q878 637 858 637Q787 633 769 597L620 7Q618 0 599 0Q585 0 582 2Q579 5 453 305L326 604L261 344Q196 88 196 79Q201 46 268 46H278Q284 41 284 38T282 19Q278 6 272 0H259Q228 2 151 2Q123 2 100 2T63 2T46 1Q31 1 31 10Q31 14 34 26T39 40Q41 46 62 46Q130 49 150 85Q154 91 221 362L289 634Q287 635 234 
637Z",style:{"stroke-width":"3"}})])])],-1)]))),s[340]||(s[340]=i("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[i("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[i("mi",null,"N")])],-1))]),s[342]||(s[342]=a(" is the window length."))]),s[344]||(s[344]=t(`
    julia
    julia> lineplot(hamming_window(100); width=30, height=10)
    -     ┌──────────────────────────────┐
    -   1 │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠚⠉⠉⠉⠢⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠎⠁⠀⠀⠀⠀⠀⠈⢢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⡀⠀⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⠀⠀⠀⢰⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠱⡀⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⠀⠀⣠⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⡀⠀⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⠀⢰⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠱⡄⠀⠀⠀⠀⠀│
    -     │⠀⠀⠀⠀⠀⡰⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠱⡀⠀⠀⠀⠀│
    -     │⠀⠀⠀⢀⠴⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⢄⠀⠀⠀│
    -     │⠀⢀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣀⠀│
    -   0 │⠉⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉│
    -     └──────────────────────────────┘
    -     ⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀100⠀

    Arguments:

    Keyword Arguments:

    julia
    julia> N = 256;
    -
    -julia> hamming_window(N; periodic=true)  hamming_window(N + 1; periodic=false)[1:end - 1]
    -true

    Returns:

    Vector of length window_length and eltype T.

    source
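The window formula and the periodic/symmetric identity from the example above can be sketched in a few lines of Python. This is an illustrative transcription of the docstring formula, not NNlib's implementation.

```python
import math

def hamming_window(window_length, alpha=0.54, beta=0.46, periodic=True):
    # w[n] = alpha - beta * cos(2*pi*n / (N - 1)).
    # periodic=True evaluates a symmetric window of length N + 1 and
    # drops the last sample, giving the identity tested below.
    n = window_length + 1 if periodic else window_length
    w = [alpha - beta * math.cos(2 * math.pi * k / (n - 1)) for k in range(n)]
    return w[:window_length]
```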

    `,10))]),i("details",Ti,[i("summary",null,[s[345]||(s[345]=i("a",{id:"NNlib.maximum_dims",href:"#NNlib.maximum_dims"},[i("span",{class:"jlbinding"},"NNlib.maximum_dims")],-1)),s[346]||(s[346]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[347]||(s[347]=t('
    julia
    maximum_dims(dims)

    Given an array of CartesianIndex{N} or NTuple{N,Int}, returns a tuple containing the maximum of all the 1st entries, all the 2nd entries, and so on up to N.

    Given an array of integers, returns (maximum(dims),).

    (These arguments are what scatter understands.)

    source
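The two behaviours described above (per-position maxima for tuple-like indices, a 1-tuple for plain integers) can be sketched in Python; the dispatch on element type is illustrative, standing in for Julia's method dispatch.

```python
def maximum_dims(dims):
    # Tuple entries: elementwise maximum over each position 1..N.
    if isinstance(dims[0], tuple):
        return tuple(max(d[i] for d in dims) for i in range(len(dims[0])))
    # Plain integers: a 1-tuple of the overall maximum.
    return (max(dims),)

print(maximum_dims([(1, 5), (3, 2), (2, 4)]))  # (3, 5)
```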

    ',5))]),i("details",Bi,[i("summary",null,[s[348]||(s[348]=i("a",{id:"NNlib.BatchedTranspose",href:"#NNlib.BatchedTranspose"},[i("span",{class:"jlbinding"},"NNlib.BatchedTranspose")],-1)),s[349]||(s[349]=a()),l(n,{type:"info",class:"jlObjectType jlType",text:"Type"})]),s[350]||(s[350]=t(`
    julia
    batched_transpose(A::AbstractArray{T,3})
    -batched_adjoint(A)

    Equivalent to applying transpose or adjoint to each matrix A[:,:,k].

    These exist to control how batched_mul behaves, as it operates on such matrix slices of an array with ndims(A)==3.

    PermutedDimsArray(A, (2,1,3)) is equivalent to batched_transpose(A), and is also understood by batched_mul (and more widely supported elsewhere).

    BatchedTranspose{T, S} <: AbstractBatchedMatrix{T, 3}
    -BatchedAdjoint{T, S}

    Lazy wrappers analogous to Transpose and Adjoint, returned by batched_transpose etc.

    source
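The PermutedDimsArray equivalence mentioned above — swap the two matrix axes, leave the batch axis alone — corresponds to a single axis permutation. A NumPy sketch of the eager (non-lazy) version, for illustration only:

```python
import numpy as np

def batched_transpose(a):
    # a has shape (M, N, B); swap the matrix dims and keep the batch
    # (last) axis, i.e. the analogue of PermutedDimsArray(A, (2,1,3)).
    return np.transpose(a, (1, 0, 2))
```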

    `,7))]),i("details",fi,[i("summary",null,[s[351]||(s[351]=i("a",{id:"NNlib._rotate_coordinates",href:"#NNlib._rotate_coordinates"},[i("span",{class:"jlbinding"},"NNlib._rotate_coordinates")],-1)),s[352]||(s[352]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[353]||(s[353]=t('
    julia
    _rotate_coordinates(sinθ, cosθ, i, j, rotation_center, round_or_floor)

    This rotates the coordinates and applies either round (nearest-neighbour interpolation) or floor (for :bilinear interpolation).

    source

    ',3))]),i("details",vi,[i("summary",null,[s[354]||(s[354]=i("a",{id:"NNlib.melscale_filterbanks",href:"#NNlib.melscale_filterbanks"},[i("span",{class:"jlbinding"},"NNlib.melscale_filterbanks")],-1)),s[355]||(s[355]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[356]||(s[356]=t(`
    julia
    melscale_filterbanks(;
    -    n_freqs::Int, n_mels::Int, sample_rate::Int,
    -    fmin::Float32 = 0f0, fmax::Float32 = Float32(sample_rate ÷ 2))

    Create triangular Mel scale filter banks (ref: Mel scale - Wikipedia). Each column is a filterbank that highlights its own frequency.

    Arguments:

    Returns:

    Filterbank matrix of shape (n_freqs, n_mels) where each column is a filterbank.

    julia
    julia> n_mels = 8;
    -
    -julia> fb = melscale_filterbanks(; n_freqs=200, n_mels, sample_rate=16000);
    -
    -julia> plot = lineplot(fb[:, 1]);
    -
    -julia> for i in 2:n_mels
    -           lineplot!(plot, fb[:, i])
    -       end
    -
    -julia> plot
    -     ┌────────────────────────────────────────┐
    -   1 │⠀⡀⢸⠀⢸⠀⠀⣧⠀⠀⢸⡄⠀⠀⠀⣷⠀⠀⠀⠀⠀⣷⠀⠀⠀⠀⠀⠀⢀⣿⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⡇⢸⡆⢸⡇⠀⣿⠀⠀⡜⡇⠀⠀⢰⠋⡆⠀⠀⠀⢰⠁⡇⠀⠀⠀⠀⠀⡸⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⣿⢸⡇⡇⡇⢰⠹⡄⠀⡇⢱⠀⠀⢸⠀⢣⠀⠀⠀⡜⠀⢸⡀⠀⠀⠀⢀⠇⠀⠈⡇⠀⠀⠀⠀⠀⠀⠀⠀│
    -     │⠀⣿⡇⡇⡇⡇⢸⠀⡇⢀⠇⠸⡀⠀⡇⠀⠸⡀⠀⢀⠇⠀⠀⢇⠀⠀⠀⡸⠀⠀⠀⠸⡄⠀⠀⠀⠀⠀⠀⠀│
    -     │⢠⢻⡇⡇⡇⢱⢸⠀⢇⢸⠀⠀⡇⢀⠇⠀⠀⡇⠀⢸⠀⠀⠀⠸⡀⠀⢠⠇⠀⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀│
    -     │⢸⢸⡇⢱⡇⢸⡇⠀⢸⢸⠀⠀⢣⢸⠀⠀⠀⢸⠀⡇⠀⠀⠀⠀⢇⠀⡜⠀⠀⠀⠀⠀⠈⢇⠀⠀⠀⠀⠀⠀│
    -     │⢸⢸⡇⢸⠀⢸⡇⠀⢸⡇⠀⠀⢸⡎⠀⠀⠀⠈⣶⠁⠀⠀⠀⠀⠸⣤⠃⠀⠀⠀⠀⠀⠀⠘⡆⠀⠀⠀⠀⠀│
    -     │⢸⠀⡇⢸⠀⠀⡇⠀⠀⡇⠀⠀⠀⡇⠀⠀⠀⠀⣿⠀⠀⠀⠀⠀⠀⣿⠀⠀⠀⠀⠀⠀⠀⠀⢱⡀⠀⠀⠀⠀│
    -     │⢸⢸⡇⢸⠀⢸⡇⠀⢸⡇⠀⠀⢸⢇⠀⠀⠀⢀⠿⡀⠀⠀⠀⠀⢰⠛⡄⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀│
    -     │⢸⢸⡇⡸⡇⢸⡇⠀⢸⢸⠀⠀⡜⢸⠀⠀⠀⢸⠀⡇⠀⠀⠀⠀⡎⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠘⡆⠀⠀⠀│
    -     │⢸⢸⡇⡇⡇⡸⢸⠀⡎⢸⠀⠀⡇⠈⡆⠀⠀⡇⠀⢸⠀⠀⠀⢰⠁⠀⠘⡆⠀⠀⠀⠀⠀⠀⠀⠀⠸⡄⠀⠀│
    -     │⡇⢸⡇⡇⡇⡇⢸⠀⡇⠈⡆⢰⠁⠀⡇⠀⢰⠁⠀⠈⡆⠀⠀⡎⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀│
    -     │⡇⢸⢸⡇⡇⡇⠸⣰⠃⠀⡇⡸⠀⠀⢸⠀⡜⠀⠀⠀⢣⠀⢸⠁⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠈⢇⠀│
    -     │⡇⡇⢸⠇⢸⡇⠀⣿⠀⠀⢣⡇⠀⠀⠸⣄⠇⠀⠀⠀⠸⡀⡇⠀⠀⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡄│
    -   0 │⣇⣇⣸⣀⣸⣀⣀⣟⣀⣀⣸⣃⣀⣀⣀⣿⣀⣀⣀⣀⣀⣿⣀⣀⣀⣀⣀⣀⣈⣇⣀⣀⣀⣀⣀⣀⣀⣀⣀⣱│
    -     └────────────────────────────────────────┘
    -     ⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀200⠀

    source
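The construction of the triangular filterbanks can be sketched as follows. This uses the HTK mel formula (2595 · log10(1 + f/700)) as an assumption for illustration; NNlib's exact mel conversion and edge handling may differ.

```python
import numpy as np

def melscale_filterbanks(n_freqs, n_mels, sample_rate, fmin=0.0, fmax=None):
    # Column j is a triangle that rises from mel point j to j+1 and
    # falls back to zero at point j+2, evaluated on a linear freq grid.
    fmax = sample_rate / 2 if fmax is None else fmax
    hz2mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)  # HTK formula
    mel2hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mel_pts = mel2hz(np.linspace(hz2mel(fmin), hz2mel(fmax), n_mels + 2))
    all_freqs = np.linspace(0.0, sample_rate / 2, n_freqs)
    fb = np.zeros((n_freqs, n_mels))
    for j in range(n_mels):
        lo, ctr, hi = mel_pts[j], mel_pts[j + 1], mel_pts[j + 2]
        rising = (all_freqs - lo) / (ctr - lo)
        falling = (hi - all_freqs) / (hi - ctr)
        fb[:, j] = np.clip(np.minimum(rising, falling), 0.0, 1.0)
    return fb
```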

    `,8))]),i("details",Ni,[i("summary",null,[s[357]||(s[357]=i("a",{id:"NNlib.logaddexp",href:"#NNlib.logaddexp"},[i("span",{class:"jlbinding"},"NNlib.logaddexp")],-1)),s[358]||(s[358]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[359]||(s[359]=t('
    julia
    logaddexp(a, b)

    Adds log-space numbers a and b such that the result equals log(exp(a) + exp(b)).

    source
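Computing log(exp(a) + exp(b)) naively overflows for large arguments; the standard trick is to factor out the larger one so the remaining exponent is non-positive. An illustrative Python sketch (not NNlib's code):

```python
import math

def logaddexp(a, b):
    # log(exp(a) + exp(b)) = hi + log1p(exp(lo - hi)), where hi >= lo.
    # exp(lo - hi) <= 1, so this never overflows.
    hi, lo = (a, b) if a >= b else (b, a)
    return hi + math.log1p(math.exp(lo - hi))
```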

    ',3))]),i("details",xi,[i("summary",null,[s[360]||(s[360]=i("a",{id:"NNlib.depthwiseconv_direct!",href:"#NNlib.depthwiseconv_direct!"},[i("span",{class:"jlbinding"},"NNlib.depthwiseconv_direct!")],-1)),s[361]||(s[361]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[362]||(s[362]=t('
    julia
    depthwiseconv_direct!(y, x, w, cdims; alpha=1, beta=0)

    Direct depthwise convolution implementation; used for debugging, tests, and mixing/matching of strange datatypes within a single convolution. Uses a naive nested-for-loop implementation and does not attempt to optimize performance. Rather, this implementation is intended to be maximally understandable and debuggable, to aid in testing other, more performant implementations. We also explicitly support mixing and matching of strange datatypes, so that if the user really wants to convolve an image of UInt8s with a Float16 kernel, storing the result in a Float32 output, there is at least a function call for that madness.

    One subtlety about depthwise convolutions: the shape of a depthwise convolutional kernel is (spatial_dims..., C_mult, C_in), so the axis that must match the number of channels in x is the last, not the second-to-last as in a normal dense convolution.

    See the docstring for conv_direct!() for more on the optional parameters.

    source

    ',5))]),i("details",ji,[i("summary",null,[s[363]||(s[363]=i("a",{id:"NNlib.im2col!",href:"#NNlib.im2col!"},[i("span",{class:"jlbinding"},"NNlib.im2col!")],-1)),s[364]||(s[364]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[365]||(s[365]=t('
    julia
    im2col!(col, x, cdims)

    Converts a 3d image x into a matrix col for usage with GEMM-calculated convolution. Patches of x of size (kernel_w, kernel_h, kernel_d, C_in) will be extracted and laid out along the rows of col, one for each output pixel. This routine is used by all im2col-based convolutions, just with extra singleton dimensions added in the case of 2d or 1d images.

    source
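The patch-extraction layout can be sketched for the 2D case. This is an illustrative stride-1, no-padding Python version of the idea — one flattened patch per output pixel laid out along the rows — not NNlib's optimized routine.

```python
import numpy as np

def im2col_2d(x, kh, kw):
    # x: (H, W, C). Row r of `col` is the flattened (kh, kw, C) patch
    # for output pixel r; the conv then becomes one GEMM with the
    # reshaped kernel.
    h, w, c = x.shape
    oh, ow = h - kh + 1, w - kw + 1
    col = np.empty((oh * ow, kh * kw * c))
    for i in range(oh):
        for j in range(ow):
            col[i * ow + j] = x[i:i + kh, j:j + kw, :].ravel()
    return col
```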

    ',3))]),i("details",Ai,[i("summary",null,[s[366]||(s[366]=i("a",{id:"NNlib.predilate",href:"#NNlib.predilate"},[i("span",{class:"jlbinding"},"NNlib.predilate")],-1)),s[367]||(s[367]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[368]||(s[368]=t('
    julia
    predilate(x, dilation::Tuple)

    Places elements of x within a lattice of zeros, used in expressing a transposed convolution in terms of normal convolution. Note that while we call this "predilation" for aesthetic reasons, you are typically passing a "stride" value into here. Yes, transposed convolution is confusing.

    source
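The lattice-of-zeros placement is easiest to see in 1D: write each element at a stride of `dilation` into a zero array of the predilated size. A NumPy sketch for illustration (NNlib's version handles all spatial dimensions):

```python
import numpy as np

def predilate_1d(x, dilation):
    # Insert (dilation - 1) zeros between consecutive elements of x,
    # yielding length (n - 1) * dilation + 1.
    x = np.asarray(x)
    out = np.zeros((len(x) - 1) * dilation + 1, dtype=x.dtype)
    out[::dilation] = x
    return out

print(predilate_1d(np.array([1, 2, 3]), 2))  # [1 0 2 0 3]
```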

    ',3))]),i("details",wi,[i("summary",null,[s[369]||(s[369]=i("a",{id:"NNlib.safe_div",href:"#NNlib.safe_div"},[i("span",{class:"jlbinding"},"NNlib.safe_div")],-1)),s[370]||(s[370]=a()),l(n,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[371]||(s[371]=t('
    julia
    safe_div(x, y)

    Returns x/y unless y==0, in which case it just returns x. (Used internally by scatter.)

    source

    ',3))])])}const Ii=p(d,[["render",Di]]);export{Zi as __pageData,Ii as default}; diff --git a/dev/assets/api_Testing_Functionality_LuxTestUtils.md.DlO2GhUE.lean.js b/dev/assets/api_Testing_Functionality_LuxTestUtils.md.DlO2GhUE.lean.js deleted file mode 100644 index 50b9301be0..0000000000 --- a/dev/assets/api_Testing_Functionality_LuxTestUtils.md.DlO2GhUE.lean.js +++ /dev/null @@ -1,12 +0,0 @@ -import{_ as n,c as d,a2 as e,j as t,a as i,G as l,B as h,o as p}from"./chunks/framework.I-x9Gl6h.js";const C=JSON.parse('{"title":"LuxTestUtils","description":"","frontmatter":{},"headers":[],"relativePath":"api/Testing_Functionality/LuxTestUtils.md","filePath":"api/Testing_Functionality/LuxTestUtils.md","lastUpdated":null}'),k={name:"api/Testing_Functionality/LuxTestUtils.md"},r={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"};function u(y,s,f,b,x,F){const a=h("Badge");return p(),d("div",null,[s[15]||(s[15]=e('

    LuxTestUtils

    Warning

    This is a testing package. Hence, we don't use features like weak dependencies to reduce load times. It is recommended that you exclusively use this package for testing and not add a dependency to it in your main package Project.toml.

    Implements utilities for testing gradient correctness and dynamic dispatch of Lux.jl models.

    Testing using JET.jl

    ',4)),t("details",r,[t("summary",null,[s[0]||(s[0]=t("a",{id:"LuxTestUtils.@jet",href:"#LuxTestUtils.@jet"},[t("span",{class:"jlbinding"},"LuxTestUtils.@jet")],-1)),s[1]||(s[1]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[2]||(s[2]=e(`
    julia
    @jet f(args...) call_broken=false opt_broken=false

    Run JET tests on the function f with the arguments args.... If JET.jl fails to compile, then the macro will be a no-op.

    Keyword Arguments

    All additional arguments will be forwarded to JET.@test_call and JET.@test_opt.

    Tip

    Instead of specifying target_modules with every call, you can set global target modules using jet_target_modules!.

    julia
    using LuxTestUtils
    -
    -jet_target_modules!(["Lux", "LuxLib"]) # Expects Lux and LuxLib to be present in the module calling \`@jet\`

    Example

    julia
    julia> @jet sum([1, 2, 3]) target_modules=(Base, Core)
    -Test Passed
    -
    -julia> @jet sum(1, 1) target_modules=(Base, Core) opt_broken=true call_broken=true
    -Test Broken
    -  Expression: #= REPL[21]:1 =# JET.@test_opt target_modules = (Base, Core) sum(1, 1)

    source

    `,9))]),t("details",o,[t("summary",null,[s[3]||(s[3]=t("a",{id:"LuxTestUtils.jet_target_modules!",href:"#LuxTestUtils.jet_target_modules!"},[t("span",{class:"jlbinding"},"LuxTestUtils.jet_target_modules!")],-1)),s[4]||(s[4]=i()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=e('
    julia
    jet_target_modules!(list::Vector{String}; force::Bool=false)

    This sets target_modules for all JET tests when using @jet.

    source

    ',3))]),s[16]||(s[16]=t("h2",{id:"Gradient-Correctness",tabindex:"-1"},[i("Gradient Correctness "),t("a",{class:"header-anchor",href:"#Gradient-Correctness","aria-label":'Permalink to "Gradient Correctness {#Gradient-Correctness}"'},"​")],-1)),t("details",g,[t("summary",null,[s[6]||(s[6]=t("a",{id:"LuxTestUtils.test_gradients",href:"#LuxTestUtils.test_gradients"},[t("span",{class:"jlbinding"},"LuxTestUtils.test_gradients")],-1)),s[7]||(s[7]=i()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=e(`
    julia
    test_gradients(f, args...; skip_backends=[], broken_backends=[], kwargs...)

    Test the gradients of f with respect to args using the specified backends.

    BackendADTypeCPUGPUNotes
    Zygote.jlAutoZygote()
    Tracker.jlAutoTracker()
    ReverseDiff.jlAutoReverseDiff()
    ForwardDiff.jlAutoForwardDiff()len ≤ 100
    FiniteDiff.jlAutoFiniteDiff()len ≤ 100
    Enzyme.jlAutoEnzyme()Only Reverse Mode

    Arguments

    Keyword Arguments

    Example

    julia
    julia> f(x, y, z) = x .+ sum(abs2, y.t) + sum(y.x.z)
    -
    -julia> x = (; t=rand(10), x=(z=[2.0],))
    -
    -julia> test_gradients(f, 1.0, x, nothing)

    source

    `,10))]),t("details",c,[t("summary",null,[s[9]||(s[9]=t("a",{id:"LuxTestUtils.@test_gradients",href:"#LuxTestUtils.@test_gradients"},[t("span",{class:"jlbinding"},"LuxTestUtils.@test_gradients")],-1)),s[10]||(s[10]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[11]||(s[11]=e('
    julia
    @test_gradients(f, args...; kwargs...)

    See the documentation of test_gradients for more details. This macro provides correct line information for the failing tests.

    source

    ',3))]),s[17]||(s[17]=t("h2",{id:"Extensions-to-@test",tabindex:"-1"},[i("Extensions to "),t("code",null,"@test"),i(),t("a",{class:"header-anchor",href:"#Extensions-to-@test","aria-label":'Permalink to "Extensions to `@test` {#Extensions-to-@test}"'},"​")],-1)),t("details",E,[t("summary",null,[s[12]||(s[12]=t("a",{id:"LuxTestUtils.@test_softfail",href:"#LuxTestUtils.@test_softfail"},[t("span",{class:"jlbinding"},"LuxTestUtils.@test_softfail")],-1)),s[13]||(s[13]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[14]||(s[14]=e('
    julia
    @test_softfail expr

    Evaluate expr and record a test result. If expr throws an exception, the test result will be recorded as an error. If expr returns a value, and it is not a boolean, the test result will be recorded as an error.

    If the test result is false then the test will be recorded as a broken test, else it will be recorded as a pass.

    source

    ',4))])])}const j=n(k,[["render",u]]);export{C as __pageData,j as default}; diff --git a/dev/assets/api_Testing_Functionality_LuxTestUtils.md.DlO2GhUE.js b/dev/assets/api_Testing_Functionality_LuxTestUtils.md.Dut_312M.js similarity index 97% rename from dev/assets/api_Testing_Functionality_LuxTestUtils.md.DlO2GhUE.js rename to dev/assets/api_Testing_Functionality_LuxTestUtils.md.Dut_312M.js index 50b9301be0..251fc9a8a8 100644 --- a/dev/assets/api_Testing_Functionality_LuxTestUtils.md.DlO2GhUE.js +++ b/dev/assets/api_Testing_Functionality_LuxTestUtils.md.Dut_312M.js @@ -1,12 +1,12 @@ -import{_ as n,c as d,a2 as e,j as t,a as i,G as l,B as h,o as p}from"./chunks/framework.I-x9Gl6h.js";const C=JSON.parse('{"title":"LuxTestUtils","description":"","frontmatter":{},"headers":[],"relativePath":"api/Testing_Functionality/LuxTestUtils.md","filePath":"api/Testing_Functionality/LuxTestUtils.md","lastUpdated":null}'),k={name:"api/Testing_Functionality/LuxTestUtils.md"},r={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"};function u(y,s,f,b,x,F){const a=h("Badge");return p(),d("div",null,[s[15]||(s[15]=e('

    LuxTestUtils

    Warning

    This is a testing package. Hence, we don't use features like weak dependencies to reduce load times. It is recommended that you exclusively use this package for testing and not add a dependency to it in your main package Project.toml.

    Implements utilities for testing gradient correctness and dynamic dispatch of Lux.jl models.

    Testing using JET.jl

    ',4)),t("details",r,[t("summary",null,[s[0]||(s[0]=t("a",{id:"LuxTestUtils.@jet",href:"#LuxTestUtils.@jet"},[t("span",{class:"jlbinding"},"LuxTestUtils.@jet")],-1)),s[1]||(s[1]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[2]||(s[2]=e(`
    julia
    @jet f(args...) call_broken=false opt_broken=false

    Run JET tests on the function f with the arguments args.... If JET.jl fails to compile, then the macro will be a no-op.

    Keyword Arguments

    All additional arguments will be forwarded to JET.@test_call and JET.@test_opt.

    Tip

    Instead of specifying target_modules with every call, you can set global target modules using jet_target_modules!.

    julia
    using LuxTestUtils
    +import{_ as n,c as d,a2 as e,j as t,a as i,G as l,B as h,o as p}from"./chunks/framework.BetCMmtc.js";const T=JSON.parse('{"title":"LuxTestUtils","description":"","frontmatter":{},"headers":[],"relativePath":"api/Testing_Functionality/LuxTestUtils.md","filePath":"api/Testing_Functionality/LuxTestUtils.md","lastUpdated":null}'),k={name:"api/Testing_Functionality/LuxTestUtils.md"},r={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"};function u(y,s,f,b,_,x){const a=h("Badge");return p(),d("div",null,[s[15]||(s[15]=e('

    LuxTestUtils

    Warning

    This is a testing package. Hence, we don't use features like weak dependencies to reduce load times. It is recommended that you exclusively use this package for testing and not add a dependency to it in your main package Project.toml.

    Implements utilities for testing gradient correctness and dynamic dispatch of Lux.jl models.

    Testing using JET.jl

    ',4)),t("details",r,[t("summary",null,[s[0]||(s[0]=t("a",{id:"LuxTestUtils.@jet",href:"#LuxTestUtils.@jet"},[t("span",{class:"jlbinding"},"LuxTestUtils.@jet")],-1)),s[1]||(s[1]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[2]||(s[2]=e(`
    julia
    @jet f(args...) call_broken=false opt_broken=false

    Run JET tests on the function f with the arguments args.... If JET.jl fails to compile, then the macro will be a no-op.

    Keyword Arguments

    • call_broken: Marks the test_call as broken.

    • opt_broken: Marks the test_opt as broken.

    All additional arguments will be forwarded to JET.@test_call and JET.@test_opt.

    Tip

    Instead of specifying target_modules with every call, you can set global target modules using jet_target_modules!.

    julia
    using LuxTestUtils
     
     jet_target_modules!(["Lux", "LuxLib"]) # Expects Lux and LuxLib to be present in the module calling \`@jet\`

    Example

    julia
    julia> @jet sum([1, 2, 3]) target_modules=(Base, Core)
     Test Passed
     
     julia> @jet sum(1, 1) target_modules=(Base, Core) opt_broken=true call_broken=true
     Test Broken
    -  Expression: #= REPL[21]:1 =# JET.@test_opt target_modules = (Base, Core) sum(1, 1)

    source

    `,9))]),t("details",o,[t("summary",null,[s[3]||(s[3]=t("a",{id:"LuxTestUtils.jet_target_modules!",href:"#LuxTestUtils.jet_target_modules!"},[t("span",{class:"jlbinding"},"LuxTestUtils.jet_target_modules!")],-1)),s[4]||(s[4]=i()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=e('
    julia
    jet_target_modules!(list::Vector{String}; force::Bool=false)

    This sets target_modules for all JET tests when using @jet.

    source

    ',3))]),s[16]||(s[16]=t("h2",{id:"Gradient-Correctness",tabindex:"-1"},[i("Gradient Correctness "),t("a",{class:"header-anchor",href:"#Gradient-Correctness","aria-label":'Permalink to "Gradient Correctness {#Gradient-Correctness}"'},"​")],-1)),t("details",g,[t("summary",null,[s[6]||(s[6]=t("a",{id:"LuxTestUtils.test_gradients",href:"#LuxTestUtils.test_gradients"},[t("span",{class:"jlbinding"},"LuxTestUtils.test_gradients")],-1)),s[7]||(s[7]=i()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=e(`
    julia
    test_gradients(f, args...; skip_backends=[], broken_backends=[], kwargs...)

    Test the gradients of f with respect to args using the specified backends.

    BackendADTypeCPUGPUNotes
    Zygote.jlAutoZygote()
    Tracker.jlAutoTracker()
    ReverseDiff.jlAutoReverseDiff()
    ForwardDiff.jlAutoForwardDiff()len ≤ 100
    FiniteDiff.jlAutoFiniteDiff()len ≤ 100
    Enzyme.jlAutoEnzyme()Only Reverse Mode

    Arguments

    • f: The function to test the gradients of.

    • args: The arguments to test the gradients of. Only AbstractArrays are considered for gradient computation. Gradients wrt all other arguments are assumed to be NoTangent().

    Keyword Arguments

    • skip_backends: A list of backends to skip.

    • broken_backends: A list of backends to treat as broken.

    • soft_fail: If true, then the test will be recorded as a soft_fail test. This overrides any broken kwargs. Alternatively, a list of backends can be passed to soft_fail to allow soft_fail tests for only those backends.

    • enzyme_set_runtime_activity: If true, then activate runtime activity for Enzyme.

    • enable_enzyme_reverse_mode: If true, then enable reverse mode for Enzyme.

    • kwargs: Additional keyword arguments to pass to check_approx.

    Example

    julia
    julia> f(x, y, z) = x .+ sum(abs2, y.t) + sum(y.x.z)
    +  Expression: #= REPL[21]:1 =# JET.@test_opt target_modules = (Base, Core) sum(1, 1)

    source

    `,9))]),t("details",o,[t("summary",null,[s[3]||(s[3]=t("a",{id:"LuxTestUtils.jet_target_modules!",href:"#LuxTestUtils.jet_target_modules!"},[t("span",{class:"jlbinding"},"LuxTestUtils.jet_target_modules!")],-1)),s[4]||(s[4]=i()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=e('
    julia
    jet_target_modules!(list::Vector{String}; force::Bool=false)

    This sets target_modules for all JET tests when using @jet.

    source

    ',3))]),s[16]||(s[16]=t("h2",{id:"Gradient-Correctness",tabindex:"-1"},[i("Gradient Correctness "),t("a",{class:"header-anchor",href:"#Gradient-Correctness","aria-label":'Permalink to "Gradient Correctness {#Gradient-Correctness}"'},"​")],-1)),t("details",g,[t("summary",null,[s[6]||(s[6]=t("a",{id:"LuxTestUtils.test_gradients",href:"#LuxTestUtils.test_gradients"},[t("span",{class:"jlbinding"},"LuxTestUtils.test_gradients")],-1)),s[7]||(s[7]=i()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=e(`
    julia
    test_gradients(f, args...; skip_backends=[], broken_backends=[], kwargs...)

    Test the gradients of f with respect to args using the specified backends.

    BackendADTypeCPUGPUNotes
    Zygote.jlAutoZygote()
    Tracker.jlAutoTracker()
    ReverseDiff.jlAutoReverseDiff()
    ForwardDiff.jlAutoForwardDiff()len ≤ 100
    FiniteDiff.jlAutoFiniteDiff()len ≤ 100
    Enzyme.jlAutoEnzyme()Only Reverse Mode

    Arguments

    • f: The function to test the gradients of.

    • args: The arguments to test the gradients of. Only AbstractArrays are considered for gradient computation. Gradients wrt all other arguments are assumed to be NoTangent().

    Keyword Arguments

    • skip_backends: A list of backends to skip.

    • broken_backends: A list of backends to treat as broken.

    • soft_fail: If true, then the test will be recorded as a soft_fail test. This overrides any broken kwargs. Alternatively, a list of backends can be passed to soft_fail to allow soft_fail tests for only those backends.

    • enzyme_set_runtime_activity: If true, then activate runtime activity for Enzyme.

    • enable_enzyme_reverse_mode: If true, then enable reverse mode for Enzyme.

    • kwargs: Additional keyword arguments to pass to check_approx.

    Example

    julia
    julia> f(x, y, z) = x .+ sum(abs2, y.t) + sum(y.x.z)
     
     julia> x = (; t=rand(10), x=(z=[2.0],))
     
    -julia> test_gradients(f, 1.0, x, nothing)

    source

    `,10))]),t("details",c,[t("summary",null,[s[9]||(s[9]=t("a",{id:"LuxTestUtils.@test_gradients",href:"#LuxTestUtils.@test_gradients"},[t("span",{class:"jlbinding"},"LuxTestUtils.@test_gradients")],-1)),s[10]||(s[10]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[11]||(s[11]=e('
    julia
    @test_gradients(f, args...; kwargs...)

    See the documentation of test_gradients for more details. This macro provides correct line information for the failing tests.

    source

    ',3))]),s[17]||(s[17]=t("h2",{id:"Extensions-to-@test",tabindex:"-1"},[i("Extensions to "),t("code",null,"@test"),i(),t("a",{class:"header-anchor",href:"#Extensions-to-@test","aria-label":'Permalink to "Extensions to `@test` {#Extensions-to-@test}"'},"​")],-1)),t("details",E,[t("summary",null,[s[12]||(s[12]=t("a",{id:"LuxTestUtils.@test_softfail",href:"#LuxTestUtils.@test_softfail"},[t("span",{class:"jlbinding"},"LuxTestUtils.@test_softfail")],-1)),s[13]||(s[13]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[14]||(s[14]=e('
    julia
    @test_softfail expr

    Evaluate expr and record a test result. If expr throws an exception, the test result will be recorded as an error. If expr returns a value, and it is not a boolean, the test result will be recorded as an error.

    If the test result is false then the test will be recorded as a broken test, else it will be recorded as a pass.

    source

    ',4))])])}const j=n(k,[["render",u]]);export{C as __pageData,j as default}; +julia> test_gradients(f, 1.0, x, nothing)

    source

    `,10))]),t("details",c,[t("summary",null,[s[9]||(s[9]=t("a",{id:"LuxTestUtils.@test_gradients",href:"#LuxTestUtils.@test_gradients"},[t("span",{class:"jlbinding"},"LuxTestUtils.@test_gradients")],-1)),s[10]||(s[10]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[11]||(s[11]=e('
    julia
    @test_gradients(f, args...; kwargs...)

    See the documentation of test_gradients for more details. This macro provides correct line information for the failing tests.

    source

    ',3))]),s[17]||(s[17]=t("h2",{id:"Extensions-to-@test",tabindex:"-1"},[i("Extensions to "),t("code",null,"@test"),i(),t("a",{class:"header-anchor",href:"#Extensions-to-@test","aria-label":'Permalink to "Extensions to `@test` {#Extensions-to-@test}"'},"​")],-1)),t("details",E,[t("summary",null,[s[12]||(s[12]=t("a",{id:"LuxTestUtils.@test_softfail",href:"#LuxTestUtils.@test_softfail"},[t("span",{class:"jlbinding"},"LuxTestUtils.@test_softfail")],-1)),s[13]||(s[13]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[14]||(s[14]=e('
    julia
    @test_softfail expr

    Evaluate expr and record a test result. If expr throws an exception, the test result will be recorded as an error. If expr returns a value, and it is not a boolean, the test result will be recorded as an error.

    If the test result is false then the test will be recorded as a broken test, else it will be recorded as a pass.

    source

    ',4))])])}const m=n(k,[["render",u]]);export{T as __pageData,m as default}; diff --git a/dev/assets/api_Testing_Functionality_LuxTestUtils.md.Dut_312M.lean.js b/dev/assets/api_Testing_Functionality_LuxTestUtils.md.Dut_312M.lean.js new file mode 100644 index 0000000000..1a9cc5fa21 --- /dev/null +++ b/dev/assets/api_Testing_Functionality_LuxTestUtils.md.Dut_312M.lean.js @@ -0,0 +1 @@ +import{_ as n,c as d,a2 as e,j as t,a as i,G as l,B as h,o as p}from"./chunks/framework.BetCMmtc.js";const T=JSON.parse('{"title":"LuxTestUtils","description":"","frontmatter":{},"headers":[],"relativePath":"api/Testing_Functionality/LuxTestUtils.md","filePath":"api/Testing_Functionality/LuxTestUtils.md","lastUpdated":null}'),k={name:"api/Testing_Functionality/LuxTestUtils.md"},r={class:"jldocstring custom-block"},o={class:"jldocstring custom-block"},g={class:"jldocstring custom-block"},c={class:"jldocstring custom-block"},E={class:"jldocstring custom-block"};function u(y,s,f,b,_,x){const a=h("Badge");return p(),d("div",null,[s[15]||(s[15]=e("",4)),t("details",r,[t("summary",null,[s[0]||(s[0]=t("a",{id:"LuxTestUtils.@jet",href:"#LuxTestUtils.@jet"},[t("span",{class:"jlbinding"},"LuxTestUtils.@jet")],-1)),s[1]||(s[1]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[2]||(s[2]=e("",9))]),t("details",o,[t("summary",null,[s[3]||(s[3]=t("a",{id:"LuxTestUtils.jet_target_modules!",href:"#LuxTestUtils.jet_target_modules!"},[t("span",{class:"jlbinding"},"LuxTestUtils.jet_target_modules!")],-1)),s[4]||(s[4]=i()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[5]||(s[5]=e("",3))]),s[16]||(s[16]=t("h2",{id:"Gradient-Correctness",tabindex:"-1"},[i("Gradient Correctness "),t("a",{class:"header-anchor",href:"#Gradient-Correctness","aria-label":'Permalink to "Gradient Correctness 
{#Gradient-Correctness}"'},"​")],-1)),t("details",g,[t("summary",null,[s[6]||(s[6]=t("a",{id:"LuxTestUtils.test_gradients",href:"#LuxTestUtils.test_gradients"},[t("span",{class:"jlbinding"},"LuxTestUtils.test_gradients")],-1)),s[7]||(s[7]=i()),l(a,{type:"info",class:"jlObjectType jlFunction",text:"Function"})]),s[8]||(s[8]=e("",10))]),t("details",c,[t("summary",null,[s[9]||(s[9]=t("a",{id:"LuxTestUtils.@test_gradients",href:"#LuxTestUtils.@test_gradients"},[t("span",{class:"jlbinding"},"LuxTestUtils.@test_gradients")],-1)),s[10]||(s[10]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[11]||(s[11]=e("",3))]),s[17]||(s[17]=t("h2",{id:"Extensions-to-@test",tabindex:"-1"},[i("Extensions to "),t("code",null,"@test"),i(),t("a",{class:"header-anchor",href:"#Extensions-to-@test","aria-label":'Permalink to "Extensions to `@test` {#Extensions-to-@test}"'},"​")],-1)),t("details",E,[t("summary",null,[s[12]||(s[12]=t("a",{id:"LuxTestUtils.@test_softfail",href:"#LuxTestUtils.@test_softfail"},[t("span",{class:"jlbinding"},"LuxTestUtils.@test_softfail")],-1)),s[13]||(s[13]=i()),l(a,{type:"info",class:"jlObjectType jlMacro",text:"Macro"})]),s[14]||(s[14]=e("",4))])])}const m=n(k,[["render",u]]);export{T as __pageData,m as default}; diff --git a/dev/assets/app.BEnVnAT7.js b/dev/assets/app.BEnVnAT7.js new file mode 100644 index 0000000000..8b6307410e --- /dev/null +++ b/dev/assets/app.BEnVnAT7.js @@ -0,0 +1 @@ +import{R as p}from"./chunks/theme.Cah_qJpF.js";import{R as s,a6 as i,a7 as u,a8 as c,a9 as l,aa as f,ab as d,ac as m,ad as h,ae as g,af as A,d as v,u as R,v as w,s as y,ag as C,ah as P,ai as b,a5 as E}from"./chunks/framework.BetCMmtc.js";function r(e){if(e.extends){const a=r(e.extends);return{...a,...e,async enhanceApp(t){a.enhanceApp&&await a.enhanceApp(t),e.enhanceApp&&await e.enhanceApp(t)}}}return e}const n=r(p),S=v({name:"VitePressApp",setup(){const{site:e,lang:a,dir:t}=R();return 
w(()=>{y(()=>{document.documentElement.lang=a.value,document.documentElement.dir=t.value})}),e.value.router.prefetchLinks&&C(),P(),b(),n.setup&&n.setup(),()=>E(n.Layout)}});async function T(){globalThis.__VITEPRESS__=!0;const e=_(),a=D();a.provide(u,e);const t=c(e.route);return a.provide(l,t),a.component("Content",f),a.component("ClientOnly",d),Object.defineProperties(a.config.globalProperties,{$frontmatter:{get(){return t.frontmatter.value}},$params:{get(){return t.page.value.params}}}),n.enhanceApp&&await n.enhanceApp({app:a,router:e,siteData:m}),{app:a,router:e,data:t}}function D(){return h(S)}function _(){let e=s;return g(a=>{let t=A(a),o=null;return t&&(e&&(t=t.replace(/\.js$/,".lean.js")),o=import(t)),s&&(e=!1),o},n.NotFound)}s&&T().then(({app:e,router:a,data:t})=>{a.go().then(()=>{i(a.route,t.site),e.mount("#app")})});export{T as createApp}; diff --git a/dev/assets/app.Dx6MRkUl.js b/dev/assets/app.Dx6MRkUl.js deleted file mode 100644 index 700d3ad47b..0000000000 --- a/dev/assets/app.Dx6MRkUl.js +++ /dev/null @@ -1 +0,0 @@ -import{R as p}from"./chunks/theme.Dw8Jqbck.js";import{R as o,a6 as u,a7 as c,a8 as l,a9 as f,aa as d,ab as m,ac as h,ad as g,ae as A,af as v,d as P,u as R,v as w,s as y,ag as C,ah as b,ai as E,a5 as S}from"./chunks/framework.I-x9Gl6h.js";function i(e){if(e.extends){const a=i(e.extends);return{...a,...e,async enhanceApp(t){a.enhanceApp&&await a.enhanceApp(t),e.enhanceApp&&await e.enhanceApp(t)}}}return e}const s=i(p),T=P({name:"VitePressApp",setup(){const{site:e,lang:a,dir:t}=R();return w(()=>{y(()=>{document.documentElement.lang=a.value,document.documentElement.dir=t.value})}),e.value.router.prefetchLinks&&C(),b(),E(),s.setup&&s.setup(),()=>S(s.Layout)}});async function D(){globalThis.__VITEPRESS__=!0;const e=j(),a=_();a.provide(c,e);const t=l(e.route);return a.provide(f,t),a.component("Content",d),a.component("ClientOnly",m),Object.defineProperties(a.config.globalProperties,{$frontmatter:{get(){return 
t.frontmatter.value}},$params:{get(){return t.page.value.params}}}),s.enhanceApp&&await s.enhanceApp({app:a,router:e,siteData:h}),{app:a,router:e,data:t}}function _(){return g(T)}function j(){let e=o,a;return A(t=>{let n=v(t),r=null;return n&&(e&&(a=n),(e||a===n)&&(n=n.replace(/\.js$/,".lean.js")),r=import(n)),o&&(e=!1),r},s.NotFound)}o&&D().then(({app:e,router:a,data:t})=>{a.go().then(()=>{u(a.route,t.site),e.mount("#app")})});export{D as createApp}; diff --git a/dev/assets/chunks/@localSearchIndexroot.Cbmt1wEA.js b/dev/assets/chunks/@localSearchIndexroot.Cbmt1wEA.js new file mode 100644 index 0000000000..57a08c5b51 --- /dev/null +++ b/dev/assets/chunks/@localSearchIndexroot.Cbmt1wEA.js @@ -0,0 +1 @@ +const e='{"documentCount":298,"nextId":298,"documentIds":{"0":"/dev/api/Accelerator_Support/MLDataDevices#MLDataDevices-API","1":"/dev/api/Accelerator_Support/MLDataDevices#preferences","2":"/dev/api/Accelerator_Support/MLDataDevices#Data-Transfer","3":"/dev/api/Accelerator_Support/MLDataDevices#miscellaneous","4":"/dev/api/Accelerator_Support/MLDataDevices#Multi-GPU-Support","5":"/dev/api/Accelerator_Support/MLDataDevices#iteration","6":"/dev/api/Building_Blocks/LuxCore#luxcore","7":"/dev/api/Building_Blocks/LuxCore#Abstract-Types","8":"/dev/api/Building_Blocks/LuxCore#general","9":"/dev/api/Building_Blocks/LuxCore#parameters","10":"/dev/api/Building_Blocks/LuxCore#states","11":"/dev/api/Building_Blocks/LuxCore#Layer-size","12":"/dev/api/Building_Blocks/WeightInitializers#WeightInitializers-API","13":"/dev/api/Building_Blocks/WeightInitializers#Supported-RNG-Types-WeightInit","14":"/dev/api/Building_Blocks/WeightInitializers#API-Reference","15":"/dev/api/Building_Blocks/WeightInitializers#Main-Functions","16":"/dev/api/Building_Blocks/WeightInitializers#Other-Convenience-Functions","17":"/dev/api/Lux/autodiff#autodiff-lux-helpers","18":"/dev/api/Lux/autodiff#JVP-and-VJP-Wrappers","19":"/dev/api/Lux/autodiff#Batched-AD","20":"/dev/api/Lux/autodiff#Nested-2nd-Order-AD"
,"21":"/dev/api/Lux/contrib#Experimental-Features","22":"/dev/api/Lux/contrib#Parameter-Freezing","23":"/dev/api/Lux/contrib#Map-over-Layer","24":"/dev/api/Lux/contrib#Debugging-Functionality","25":"/dev/api/Lux/contrib#Tied-Parameters","26":"/dev/api/Lux/distributed_utils#Distributed-Utils","27":"/dev/api/Lux/distributed_utils#communication-backends","28":"/dev/api/Lux/distributed_utils#initialization","29":"/dev/api/Lux/distributed_utils#Helper-Functions","30":"/dev/api/Lux/distributed_utils#Communication-Primitives","31":"/dev/api/Lux/distributed_utils#Optimizers.jl-Integration","32":"/dev/api/Lux/distributed_utils#MLUtils.jl-Integration","33":"/dev/api/Lux/layers#Built-In-Layers","34":"/dev/api/Lux/layers#containers","35":"/dev/api/Lux/layers#Convolutional-Layers","36":"/dev/api/Lux/layers#Dropout-Layers","37":"/dev/api/Lux/layers#Pooling-Layers","38":"/dev/api/Lux/layers#Recurrent-Layers","39":"/dev/api/Lux/layers#Linear-Layers","40":"/dev/api/Lux/layers#Misc.-Helper-Layers","41":"/dev/api/Lux/layers#Normalization-Layers","42":"/dev/api/Lux/layers#upsampling","43":"/dev/api/Lux/interop#Interoperability-between-Lux-and-other-packages","44":"/dev/api/Lux/interop#Switching-from-older-frameworks","45":"/dev/api/Lux/interop#flux-to-lux-migrate-api","46":"/dev/api/Lux/interop#Using-a-different-backend-for-Lux","47":"/dev/api/Lux/interop#Lux-Models-to-Simple-Chains","48":"/dev/api/Lux/utilities#utilities","49":"/dev/api/Lux/utilities#Training-API","50":"/dev/api/Lux/utilities#Loss-Functions","51":"/dev/api/Lux/utilities#LuxOps-Module","52":"/dev/api/Lux/utilities#Recursive-Operations","53":"/dev/api/Lux/utilities#Updating-Floating-Point-Precision","54":"/dev/api/Lux/utilities#Element-Type-Matching","55":"/dev/api/Lux/utilities#Stateful-Layer","56":"/dev/api/Lux/utilities#Compact-Layer","57":"/dev/api/Lux/utilities#miscellaneous","58":"/dev/api/NN_Primitives/ActivationFunctions#NNlib-ActivationFunctions-API","59":"/dev/api/NN_Primitives/LuxLib#LuxLib-API","60":"/dev/ap
i/NN_Primitives/LuxLib#Apply-Activation","61":"/dev/api/NN_Primitives/LuxLib#Batched-Operations","62":"/dev/api/NN_Primitives/LuxLib#Bias-Activation","63":"/dev/api/NN_Primitives/LuxLib#Convolutional-Layers","64":"/dev/api/NN_Primitives/LuxLib#dropout","65":"/dev/api/NN_Primitives/LuxLib#Fully-Connected-Layers","66":"/dev/api/NN_Primitives/LuxLib#normalization","67":"/dev/api/NN_Primitives/LuxLib#Helper-Functions","68":"/dev/#How-to-Install-Lux.jl?","69":"/dev/#Want-GPU-Support?","70":"/dev/#Want-Reactant-(XLA)-Support?","71":"/dev/introduction/citation#citation","72":"/dev/api/NN_Primitives/NNlib#NNlib-API","73":"/dev/api/NN_Primitives/NNlib#attention","74":"/dev/api/NN_Primitives/NNlib#softmax","75":"/dev/api/NN_Primitives/NNlib#pooling","76":"/dev/api/NN_Primitives/NNlib#padding","77":"/dev/api/NN_Primitives/NNlib#convolution","78":"/dev/api/NN_Primitives/NNlib#upsampling","79":"/dev/api/NN_Primitives/NNlib#rotation","80":"/dev/api/NN_Primitives/NNlib#Batched-Operations","81":"/dev/api/NN_Primitives/NNlib#Gather-and-Scatter","82":"/dev/api/NN_Primitives/NNlib#sampling","83":"/dev/api/NN_Primitives/NNlib#losses","84":"/dev/api/NN_Primitives/NNlib#miscellaneous","85":"/dev/api/NN_Primitives/NNlib#dropout","86":"/dev/api/NN_Primitives/NNlib#Internal-NNlib-Functions","87":"/dev/introduction/#getting-started","88":"/dev/introduction/#installation","89":"/dev/introduction/#quickstart","90":"/dev/introduction/#Defining-Custom-Layers","91":"/dev/introduction/#Additional-Packages","92":"/dev/introduction/#XLA-(CPU/GPU/TPU)-Support","93":"/dev/introduction/#GPU-Support","94":"/dev/api/Testing_Functionality/LuxTestUtils#luxtestutils","95":"/dev/api/Testing_Functionality/LuxTestUtils#Testing-using-JET.jl","96":"/dev/api/Testing_Functionality/LuxTestUtils#Gradient-Correctness","97":"/dev/api/Testing_Functionality/LuxTestUtils#Extensions-to-@test","98":"/dev/introduction/overview#Why-we-wrote-Lux?","99":"/dev/introduction/overview#Design-Principles","100":"/dev/introduction/ov
erview#Why-use-Lux-over-Flux?","101":"/dev/introduction/resources#Resources-to-Get-Started","102":"/dev/introduction/updating_to_v1#updating-to-v1","103":"/dev/introduction/updating_to_v1#LuxLib.jl","104":"/dev/introduction/updating_to_v1#Breaking-Changes","105":"/dev/introduction/updating_to_v1#New-Major-Features","106":"/dev/introduction/updating_to_v1#LuxCore.jl","107":"/dev/introduction/updating_to_v1#Breaking-Changes-2","108":"/dev/introduction/updating_to_v1#New-Major-Features-2","109":"/dev/introduction/updating_to_v1#WeightInitializers.jl","110":"/dev/introduction/updating_to_v1#MLDataDevices.jl","111":"/dev/introduction/updating_to_v1#Breaking-Changes-3","112":"/dev/introduction/updating_to_v1#New-Major-Features-3","113":"/dev/introduction/updating_to_v1#Lux.jl","114":"/dev/introduction/updating_to_v1#Breaking-Changes-(Removed-Functionality)","115":"/dev/introduction/updating_to_v1#Breaking-Changes-(Moved-Functionality)","116":"/dev/introduction/updating_to_v1#Breaking-Changes-(Changes-in-Defaults)","117":"/dev/introduction/updating_to_v1#New-Features","118":"/dev/manual/compiling_lux_models#reactant-compilation","119":"/dev/manual/compiling_lux_models#compile_lux_model_trainstate","120":"/dev/manual/autodiff#autodiff-lux","121":"/dev/manual/autodiff#overview","122":"/dev/manual/autodiff#autodiff-recommendations","123":"/dev/manual/autodiff#Support-Class","124":"/dev/manual/autodiff#footnotes","125":"/dev/manual/debugging#debug-lux-layers","126":"/dev/manual/debugging#Incorrect-Model-Specification:-Dimension-Mismatch-Problems","127":"/dev/manual/debugging#Tracking-down-NaNs","128":"/dev/manual/debugging#conclusion","129":"/dev/manual/dispatch_custom_input#Dispatching-on-Custom-Input-Types","130":"/dev/manual/dispatch_custom_input#Which-function-should-participate-in-dispatch?","131":"/dev/manual/dispatch_custom_input#Concrete-Example","132":"/dev/manual/dispatch_custom_input#Time-Dependent-Chain-Implementation","133":"/dev/manual/dispatch_custom_input#Runni
ng-the-TDChain","134":"/dev/manual/dispatch_custom_input#Writing-the-Correct-Dispatch-Rules","135":"/dev/manual/dispatch_custom_input#Using-the-Same-Input-for-Non-TD-Models","136":"/dev/manual/distributed_utils#Distributed-Data-Parallel-Training","137":"/dev/manual/distributed_utils#Guide-to-Integrating-DistributedUtils-into-your-code","138":"/dev/manual/distributed_utils#Migration-Guide-from-FluxMPI.jl","139":"/dev/manual/distributed_utils#Removed-Functionality","140":"/dev/manual/distributed_utils#Key-Differences","141":"/dev/manual/distributed_utils#Known-Shortcomings","142":"/dev/manual/freezing_model_parameters#freezing-model-parameters","143":"/dev/manual/freezing_model_parameters#Freezing-Layers-of-a-Particular-Kind","144":"/dev/manual/freezing_model_parameters#Freezing-by-Layer-Name","145":"/dev/manual/freezing_model_parameters#Freezing-Part-of-the-Parameters","146":"/dev/manual/freezing_model_parameters#Freezing-Part-of-a-Chain","147":"/dev/manual/exporting_to_jax#Exporting-Lux-Models-to-Jax-(via-EnzymeJAX-and-Reactant)","148":"/dev/manual/gpu_management#GPU-Management","149":"/dev/manual/gpu_management#Automatic-Backend-Management-(Recommended-Approach)","150":"/dev/manual/gpu_management#Manual-Backend-Management","151":"/dev/manual/interface#lux-interface","152":"/dev/manual/interface#Layer-Interface","153":"/dev/manual/interface#Singular-Layer","154":"/dev/manual/interface#Container-Layer","155":"/dev/manual/interface#Parameter-Interface","156":"/dev/manual/interface#State-Interface","157":"/dev/manual/performance_pitfalls#Performance-Pitfalls-and-How-to-Catch-Them","158":"/dev/manual/performance_pitfalls#Spurious-Type-Promotion","159":"/dev/manual/performance_pitfalls#Scalar-Indexing-on-GPU-Arrays","160":"/dev/manual/performance_pitfalls#Type-Instabilities","161":"/dev/manual/performance_pitfalls#Faster-Primitives","162":"/dev/manual/performance_pitfalls#Optional-Dependencies-for-Performance","163":"/dev/manual/performance_pitfalls#Data-Loading-and-Devi
ce-Transfer","164":"/dev/manual/nested_autodiff#nested_autodiff","165":"/dev/manual/nested_autodiff#Loss-Function-containing-Jacobian-Computation","166":"/dev/manual/nested_autodiff#Using-Batched-Jacobian-for-Multiple-Inputs","167":"/dev/manual/nested_autodiff#Loss-Function-contains-Gradient-Computation","168":"/dev/manual/nested_autodiff#Loss-Function-computing-the-Jacobian-of-the-Parameters","169":"/dev/manual/nested_autodiff#Hutchinson-Trace-Estimation","170":"/dev/manual/nested_autodiff#Computing-using-the-Vector-Jacobian-Product","171":"/dev/manual/nested_autodiff#Computing-using-the-Jacobian-Vector-Product","172":"/dev/manual/nested_autodiff#Computing-using-the-Full-Jacobian","173":"/dev/manual/migrate_from_flux#migrate-from-flux","174":"/dev/manual/migrate_from_flux#Implementing-Custom-Layers","175":"/dev/manual/migrate_from_flux#Certain-Important-Implementation-Details","176":"/dev/manual/migrate_from_flux#Training/Inference-Mode","177":"/dev/manual/migrate_from_flux#Can-we-still-use-Flux-Layers?","178":"/dev/manual/preferences#Preferences-for-Lux.jl","179":"/dev/manual/preferences#Nested-Automatic-Differentiation","180":"/dev/manual/preferences#gpu-aware-mpi-preferences","181":"/dev/manual/preferences#GPU-Backend-Selection","182":"/dev/manual/preferences#automatic-eltypes-preference","183":"/dev/manual/preferences#dispatch-doctor-preference","184":"/dev/manual/preferences#disable_loop_vectorization","185":"/dev/manual/weight_initializers#Initializing-Weights","186":"/dev/manual/weight_initializers#Quick-examples","187":"/dev/tutorials/beginner/1_Basics#Julia-and-Lux-for-the-Uninitiated","188":"/dev/tutorials/beginner/1_Basics#arrays","189":"/dev/tutorials/beginner/1_Basics#CUDA-Arrays","190":"/dev/tutorials/beginner/1_Basics#im-mutability","191":"/dev/tutorials/beginner/1_Basics#Managing-Randomness","192":"/dev/tutorials/beginner/1_Basics#Automatic-Differentiation","193":"/dev/tutorials/beginner/1_Basics#gradients","194":"/dev/tutorials/beginner/1_Basics#Ja
cobian-Vector-Product","195":"/dev/tutorials/beginner/1_Basics#Vector-Jacobian-Product","196":"/dev/tutorials/beginner/1_Basics#Linear-Regression","197":"/dev/tutorials/beginner/1_Basics#appendix","198":"/dev/tutorials/beginner/3_SimpleRNN#Training-a-Simple-LSTM","199":"/dev/tutorials/beginner/3_SimpleRNN#Package-Imports","200":"/dev/tutorials/beginner/3_SimpleRNN#dataset","201":"/dev/tutorials/beginner/3_SimpleRNN#Creating-a-Classifier","202":"/dev/tutorials/beginner/3_SimpleRNN#Using-the-@compact-API","203":"/dev/tutorials/beginner/3_SimpleRNN#Defining-Accuracy,-Loss-and-Optimiser","204":"/dev/tutorials/beginner/3_SimpleRNN#Training-the-Model","205":"/dev/tutorials/beginner/3_SimpleRNN#Saving-the-Model","206":"/dev/tutorials/beginner/3_SimpleRNN#appendix","207":"/dev/tutorials/beginner/4_SimpleChains#MNIST-Classification-with-SimpleChains","208":"/dev/tutorials/beginner/4_SimpleChains#Package-Imports","209":"/dev/tutorials/beginner/4_SimpleChains#Loading-MNIST","210":"/dev/tutorials/beginner/4_SimpleChains#Define-the-Model","211":"/dev/tutorials/beginner/4_SimpleChains#Helper-Functions","212":"/dev/tutorials/beginner/4_SimpleChains#Define-the-Training-Loop","213":"/dev/tutorials/beginner/4_SimpleChains#Finally-Training-the-Model","214":"/dev/tutorials/beginner/4_SimpleChains#appendix","215":"/dev/tutorials/#tutorials","216":"/dev/tutorials/#beginner-tutorials","217":"/dev/tutorials/#intermediate-tutorials","218":"/dev/tutorials/#advanced-tutorials","219":"/dev/tutorials/#larger-models","220":"/dev/tutorials/#selected-3rd-party-tutorials","221":"/dev/tutorials/beginner/5_OptimizationIntegration#Optimization-Lux-Tutorial","222":"/dev/tutorials/beginner/5_OptimizationIntegration#Imports-packages","223":"/dev/tutorials/beginner/5_OptimizationIntegration#Generate-some-training-data","224":"/dev/tutorials/beginner/5_OptimizationIntegration#Define-the-DataLoader","225":"/dev/tutorials/beginner/5_OptimizationIntegration#Training-the-model","226":"/dev/tutorials/beginner/5
_OptimizationIntegration#Plotting-the-results","227":"/dev/tutorials/beginner/5_OptimizationIntegration#appendix","228":"/dev/tutorials/intermediate/1_NeuralODE#MNIST-Classification-using-Neural-ODEs","229":"/dev/tutorials/intermediate/1_NeuralODE#Package-Imports","230":"/dev/tutorials/intermediate/1_NeuralODE#Loading-MNIST","231":"/dev/tutorials/intermediate/1_NeuralODE#Define-the-Neural-ODE-Layer","232":"/dev/tutorials/intermediate/1_NeuralODE#Create-and-Initialize-the-Neural-ODE-Layer","233":"/dev/tutorials/intermediate/1_NeuralODE#Define-Utility-Functions","234":"/dev/tutorials/intermediate/1_NeuralODE#training","235":"/dev/tutorials/intermediate/1_NeuralODE#Alternate-Implementation-using-Stateful-Layer","236":"/dev/tutorials/intermediate/1_NeuralODE#Train-the-new-Stateful-Neural-ODE","237":"/dev/tutorials/intermediate/1_NeuralODE#Type-Stability","238":"/dev/tutorials/intermediate/1_NeuralODE#appendix","239":"/dev/tutorials/intermediate/3_HyperNet#Training-a-HyperNetwork-on-MNIST-and-FashionMNIST","240":"/dev/tutorials/intermediate/3_HyperNet#Package-Imports","241":"/dev/tutorials/intermediate/3_HyperNet#Loading-Datasets","242":"/dev/tutorials/intermediate/3_HyperNet#Implement-a-HyperNet-Layer","243":"/dev/tutorials/intermediate/3_HyperNet#Create-and-Initialize-the-HyperNet","244":"/dev/tutorials/intermediate/3_HyperNet#Define-Utility-Functions","245":"/dev/tutorials/intermediate/3_HyperNet#training","246":"/dev/tutorials/intermediate/3_HyperNet#appendix","247":"/dev/tutorials/intermediate/4_PINN2DPDE#Training-a-PINN-on-2D-PDE","248":"/dev/tutorials/intermediate/4_PINN2DPDE#Package-Imports","249":"/dev/tutorials/intermediate/4_PINN2DPDE#Problem-Definition","250":"/dev/tutorials/intermediate/4_PINN2DPDE#Define-the-Neural-Networks","251":"/dev/tutorials/intermediate/4_PINN2DPDE#Define-the-Loss-Functions","252":"/dev/tutorials/intermediate/4_PINN2DPDE#Generate-the-Data","253":"/dev/tutorials/intermediate/4_PINN2DPDE#training","254":"/dev/tutorials/intermediate/4_PI
NN2DPDE#Visualizing-the-Results","255":"/dev/tutorials/intermediate/4_PINN2DPDE#appendix","256":"/dev/tutorials/intermediate/6_GCN_Cora#GCN-Tutorial-Cora","257":"/dev/tutorials/intermediate/6_GCN_Cora#Loading-Cora-Dataset","258":"/dev/tutorials/intermediate/6_GCN_Cora#Model-Definition","259":"/dev/tutorials/intermediate/6_GCN_Cora#Helper-Functions","260":"/dev/tutorials/intermediate/6_GCN_Cora#Training-the-Model","261":"/dev/tutorials/intermediate/6_GCN_Cora#appendix","262":"/dev/tutorials/beginner/2_PolynomialFitting#Fitting-a-Polynomial-using-MLP","263":"/dev/tutorials/beginner/2_PolynomialFitting#Package-Imports","264":"/dev/tutorials/beginner/2_PolynomialFitting#dataset","265":"/dev/tutorials/beginner/2_PolynomialFitting#Neural-Network","266":"/dev/tutorials/beginner/2_PolynomialFitting#optimizer","267":"/dev/tutorials/beginner/2_PolynomialFitting#Loss-Function","268":"/dev/tutorials/beginner/2_PolynomialFitting#training","269":"/dev/tutorials/beginner/2_PolynomialFitting#appendix","270":"/dev/tutorials/intermediate/5_ConvolutionalVAE#Convolutional-VAE-Tutorial","271":"/dev/tutorials/intermediate/5_ConvolutionalVAE#Model-Definition","272":"/dev/tutorials/intermediate/5_ConvolutionalVAE#Loading-MNIST","273":"/dev/tutorials/intermediate/5_ConvolutionalVAE#Helper-Functions","274":"/dev/tutorials/intermediate/5_ConvolutionalVAE#Training-the-Model","275":"/dev/tutorials/intermediate/5_ConvolutionalVAE#appendix","276":"/dev/tutorials/intermediate/2_BayesianNN#Bayesian-Neural-Network","277":"/dev/tutorials/intermediate/2_BayesianNN#Generating-data","278":"/dev/tutorials/intermediate/2_BayesianNN#Building-the-Neural-Network","279":"/dev/tutorials/intermediate/2_BayesianNN#Prediction-Visualization","280":"/dev/tutorials/intermediate/2_BayesianNN#appendix","281":"/dev/tutorials/advanced/1_GravitationalWaveForm#Training-a-Neural-ODE-to-Model-Gravitational-Waveforms","282":"/dev/tutorials/advanced/1_GravitationalWaveForm#Package-Imports","283":"/dev/tutorials/advanced/1_Gra
vitationalWaveForm#Define-some-Utility-Functions","284":"/dev/tutorials/advanced/1_GravitationalWaveForm#Simulating-the-True-Model","285":"/dev/tutorials/advanced/1_GravitationalWaveForm#Defiing-a-Neural-Network-Model","286":"/dev/tutorials/advanced/1_GravitationalWaveForm#Setting-Up-for-Training-the-Neural-Network","287":"/dev/tutorials/advanced/1_GravitationalWaveForm#Training-the-Neural-Network","288":"/dev/tutorials/advanced/1_GravitationalWaveForm#Visualizing-the-Results","289":"/dev/tutorials/advanced/1_GravitationalWaveForm#appendix","290":"/dev/tutorials/intermediate/7_RealNVP#RealNVP-Tutorial","291":"/dev/tutorials/intermediate/7_RealNVP#Define-and-Load-the-Moons-Dataset","292":"/dev/tutorials/intermediate/7_RealNVP#Bijectors-Implementation","293":"/dev/tutorials/intermediate/7_RealNVP#Model-Definition","294":"/dev/tutorials/intermediate/7_RealNVP#Helper-Functions","295":"/dev/tutorials/intermediate/7_RealNVP#Training-the-Model","296":"/dev/tutorials/intermediate/7_RealNVP#Visualizing-the-Results","297":"/dev/tutorials/intermediate/7_RealNVP#appendix"},"fieldIds":{"title":0,"titles":1,"text":2},"fieldLength":{"0":[1,1,14],"1":[1,1,46],"2":[2,1,163],"3":[1,1,193],"4":[3,1,93],"5":[1,1,114],"6":[1,1,38],"7":[2,1,160],"8":[1,1,214],"9":[1,1,19],"10":[1,1,62],"11":[2,1,77],"12":[1,1,16],"13":[3,1,28],"14":[2,1,1],"15":[2,3,342],"16":[3,3,92],"17":[3,1,1],"18":[4,3,72],"19":[2,3,86],"20":[4,3,13],"21":[2,1,54],"22":[2,2,131],"23":[3,2,124],"24":[2,2,140],"25":[2,2,118],"26":[2,1,11],"27":[1,2,29],"28":[1,2,81],"29":[2,2,18],"30":[2,2,58],"31":[3,2,31],"32":[3,2,35],"33":[3,1,1],"34":[1,3,292],"35":[2,3,257],"36":[2,3,115],"37":[2,3,168],"38":[2,3,361],"39":[2,3,191],"40":[3,3,243],"41":[2,3,294],"42":[1,3,165],"43":[6,1,1],"44":[4,6,1],"45":[4,10,172],"46":[6,6,1],"47":[5,11,188],"48":[1,1,1],"49":[2,1,241],"50":[2,1,449],"51":[2,1,134],"52":[2,1,186],"53":[4,1,75],"54":[3,1,108],"55":[2,1,112],"56":[2,1,365],"57":[1,1,54],"58":[2,1,536],"59":[1,1,5],"60":[2,1,1
00],"61":[2,1,44],"62":[2,1,68],"63":[2,1,128],"64":[1,1,160],"65":[3,1,132],"66":[1,1,232],"67":[2,1,92],"68":[6,1,47],"69":[4,1,34],"70":[5,1,33],"71":[1,1,67],"72":[1,1,33],"73":[1,1,139],"74":[1,1,152],"75":[1,1,126],"76":[1,1,166],"77":[1,1,229],"78":[1,1,247],"79":[1,1,119],"80":[2,1,220],"81":[3,1,181],"82":[1,1,127],"83":[1,1,96],"84":[1,1,243],"85":[1,1,99],"86":[3,1,959],"87":[2,1,1],"88":[1,2,51],"89":[1,2,399],"90":[3,2,229],"91":[2,2,36],"92":[5,2,13],"93":[2,2,17],"94":[1,1,48],"95":[4,1,91],"96":[2,1,130],"97":[3,1,33],"98":[5,1,63],"99":[2,5,58],"100":[6,5,218],"101":[4,1,48],"102":[4,1,41],"103":[2,4,1],"104":[2,6,27],"105":[3,6,22],"106":[2,4,1],"107":[2,6,119],"108":[3,6,19],"109":[2,4,30],"110":[2,4,48],"111":[2,6,22],"112":[3,6,30],"113":[2,4,1],"114":[5,5,117],"115":[5,5,54],"116":[5,5,68],"117":[2,5,63],"118":[6,1,569],"119":[4,6,191],"120":[2,1,35],"121":[1,2,37],"122":[1,2,71],"123":[2,2,65],"124":[1,2,86],"125":[3,1,76],"126":[6,3,156],"127":[3,3,194],"128":[1,3,51],"129":[5,1,1],"130":[7,5,36],"131":[2,5,40],"132":[4,7,72],"133":[3,7,83],"134":[5,7,48],"135":[8,7,46],"136":[4,1,27],"137":[7,4,122],"138":[5,4,42],"139":[2,9,28],"140":[2,9,79],"141":[2,4,52],"142":[3,1,27],"143":[6,3,123],"144":[4,3,53],"145":[5,3,43],"146":[5,3,49],"147":[10,1,1544],"148":[2,1,58],"149":[6,2,76],"150":[3,2,44],"151":[2,1,80],"152":[2,2,1],"153":[2,3,242],"154":[2,3,158],"155":[2,2,151],"156":[2,2,47],"157":[7,1,18],"158":[3,7,149],"159":[5,7,32],"160":[2,7,39],"161":[2,7,43],"162":[4,7,39],"163":[5,7,75],"164":[3,1,169],"165":[5,3,262],"166":[6,8,154],"167":[5,3,220],"168":[7,3,216],"169":[3,3,88],"170":[6,6,46],"171":[6,6,31],"172":[5,6,111],"173":[5,1,67],"174":[3,5,164],"175":[4,5,1],"176":[3,9,49],"177":[7,5,34],"178":[4,1,49],"179":[3,4,21],"180":[4,4,40],"181":[3,4,33],"182":[3,4,23],"183":[2,4,61],"184":[4,4,58],"185":[2,1,133],"186":[2,2,64],"187":[6,1,284],"188":[1,6,286],"189":[2,7,65],"190":[3,6,110],"191":[2,6,130],"192":[2,6,243],"193":[1,8,72]
,"194":[3,8,106],"195":[3,8,30],"196":[2,6,253],"197":[1,6,101],"198":[4,1,38],"199":[2,4,49],"200":[1,4,96],"201":[3,4,160],"202":[4,4,74],"203":[5,4,56],"204":[3,4,260],"205":[3,4,48],"206":[1,4,109],"207":[4,1,33],"208":[2,4,24],"209":[2,4,71],"210":[3,4,64],"211":[2,4,38],"212":[4,4,81],"213":[4,4,156],"214":[1,4,109],"215":[1,1,1],"216":[2,1,1],"217":[2,1,1],"218":[2,1,1],"219":[2,1,41],"220":[4,1,65],"221":[6,1,78],"222":[2,6,27],"223":[4,6,58],"224":[3,6,75],"225":[3,6,242],"226":[3,6,47],"227":[1,6,101],"228":[5,1,24],"229":[2,5,433],"230":[2,5,74],"231":[5,5,139],"232":[7,5,60],"233":[3,5,37],"234":[1,5,235],"235":[5,5,60],"236":[6,5,70],"237":[2,5,181],"238":[1,5,153],"239":[7,1,1],"240":[2,7,94],"241":[2,7,57],"242":[4,7,76],"243":[5,7,42],"244":[3,7,39],"245":[1,7,268],"246":[1,7,153],"247":[6,1,45],"248":[2,6,27],"249":[2,6,26],"250":[4,6,55],"251":[4,6,71],"252":[3,6,86],"253":[1,6,775],"254":[3,6,73],"255":[1,6,109],"256":[5,1,47],"257":[3,5,54],"258":[2,5,42],"259":[2,5,28],"260":[3,5,2076],"261":[1,5,109],"262":[5,1,16],"263":[2,5,97],"264":[1,5,274],"265":[2,5,37],"266":[1,5,18],"267":[2,5,115],"268":[1,5,158],"269":[1,5,109],"270":[6,1,45],"271":[2,6,129],"272":[2,6,70],"273":[2,6,118],"274":[3,6,636],"275":[1,6,109],"276":[3,1,979],"277":[2,3,105],"278":[4,3,548],"279":[2,3,174],"280":[1,3,100],"281":[8,1,21],"282":[2,8,1034],"283":[4,8,202],"284":[4,8,124],"285":[5,8,1354],"286":[7,8,73],"287":[4,8,1249],"288":[3,8,83],"289":[1,8,100],"290":[5,1,43],"291":[6,5,93],"292":[2,5,55],"293":[2,5,107],"294":[2,5,25],"295":[3,5,263],"296":[3,5,43],"297":[1,5,109]},"averageFieldLength":[2.8859060402684555,3.533557046979864,133.12416107382558],"storedFields":{"0":{"title":"MLDataDevices","titles":[]},"1":{"title":"Preferences","titles":["MLDataDevices"]},"2":{"title":"Data Transfer","titles":["MLDataDevices"]},"3":{"title":"Miscellaneous","titles":["MLDataDevices"]},"4":{"title":"Multi-GPU 
Support","titles":["MLDataDevices"]},"5":{"title":"Iteration","titles":["MLDataDevices"]},"6":{"title":"LuxCore","titles":[]},"7":{"title":"Abstract Types","titles":["LuxCore"]},"8":{"title":"General","titles":["LuxCore"]},"9":{"title":"Parameters","titles":["LuxCore"]},"10":{"title":"States","titles":["LuxCore"]},"11":{"title":"Layer size","titles":["LuxCore"]},"12":{"title":"WeightInitializers","titles":[]},"13":{"title":"Supported RNG Types","titles":["WeightInitializers"]},"14":{"title":"API Reference","titles":["WeightInitializers"]},"15":{"title":"Main Functions","titles":["WeightInitializers","API Reference"]},"16":{"title":"Other Convenience Functions","titles":["WeightInitializers","API Reference"]},"17":{"title":"Automatic Differentiation Helpers","titles":[]},"18":{"title":"JVP & VJP Wrappers","titles":["Automatic Differentiation Helpers"]},"19":{"title":"Batched AD","titles":["Automatic Differentiation Helpers"]},"20":{"title":"Nested 2nd Order AD","titles":["Automatic Differentiation Helpers"]},"21":{"title":"Experimental Features","titles":[]},"22":{"title":"Parameter Freezing","titles":["Experimental Features"]},"23":{"title":"Map over Layer","titles":["Experimental Features"]},"24":{"title":"Debugging Functionality","titles":["Experimental Features"]},"25":{"title":"Tied Parameters","titles":["Experimental Features"]},"26":{"title":"Distributed Utils","titles":[]},"27":{"title":"Backends","titles":["Distributed Utils"]},"28":{"title":"Initialization","titles":["Distributed Utils"]},"29":{"title":"Helper Functions","titles":["Distributed Utils"]},"30":{"title":"Communication Primitives","titles":["Distributed Utils"]},"31":{"title":"Optimizers.jl Integration","titles":["Distributed Utils"]},"32":{"title":"MLUtils.jl Integration","titles":["Distributed Utils"]},"33":{"title":"Built-In Layers","titles":[]},"34":{"title":"Containers","titles":["Built-In Layers"]},"35":{"title":"Convolutional Layers","titles":["Built-In Layers"]},"36":{"title":"Dropout 
Layers","titles":["Built-In Layers"]},"37":{"title":"Pooling Layers","titles":["Built-In Layers"]},"38":{"title":"Recurrent Layers","titles":["Built-In Layers"]},"39":{"title":"Linear Layers","titles":["Built-In Layers"]},"40":{"title":"Misc. Helper Layers","titles":["Built-In Layers"]},"41":{"title":"Normalization Layers","titles":["Built-In Layers"]},"42":{"title":"Upsampling","titles":["Built-In Layers"]},"43":{"title":"Interoperability between Lux and other packages","titles":[]},"44":{"title":"Switching from older frameworks","titles":["Interoperability between Lux and other packages"]},"45":{"title":"Flux Models to Lux Models","titles":["Interoperability between Lux and other packages","Switching from older frameworks"]},"46":{"title":"Using a different backend for Lux","titles":["Interoperability between Lux and other packages"]},"47":{"title":"Lux Models to Simple Chains","titles":["Interoperability between Lux and other packages","Using a different backend for Lux"]},"48":{"title":"Utilities","titles":[]},"49":{"title":"Training API","titles":["Utilities"]},"50":{"title":"Loss Functions","titles":["Utilities"]},"51":{"title":"LuxOps Module","titles":["Utilities"]},"52":{"title":"Recursive Operations","titles":["Utilities"]},"53":{"title":"Updating Floating Point Precision","titles":["Utilities"]},"54":{"title":"Element Type Matching","titles":["Utilities"]},"55":{"title":"Stateful Layer","titles":["Utilities"]},"56":{"title":"Compact Layer","titles":["Utilities"]},"57":{"title":"Miscellaneous","titles":["Utilities"]},"58":{"title":"Activation Functions","titles":[]},"59":{"title":"LuxLib","titles":[]},"60":{"title":"Apply Activation","titles":["LuxLib"]},"61":{"title":"Batched Operations","titles":["LuxLib"]},"62":{"title":"Bias Activation","titles":["LuxLib"]},"63":{"title":"Convolutional Layers","titles":["LuxLib"]},"64":{"title":"Dropout","titles":["LuxLib"]},"65":{"title":"Fully Connected 
Layers","titles":["LuxLib"]},"66":{"title":"Normalization","titles":["LuxLib"]},"67":{"title":"Helper Functions","titles":["LuxLib"]},"68":{"title":"How to Install Lux.jl?","titles":[]},"69":{"title":"Want GPU Support?","titles":[]},"70":{"title":"Want Reactant (XLA) Support?","titles":[]},"71":{"title":"Citation","titles":[]},"72":{"title":"NNlib","titles":[]},"73":{"title":"Attention","titles":["NNlib"]},"74":{"title":"Softmax","titles":["NNlib"]},"75":{"title":"Pooling","titles":["NNlib"]},"76":{"title":"Padding","titles":["NNlib"]},"77":{"title":"Convolution","titles":["NNlib"]},"78":{"title":"Upsampling","titles":["NNlib"]},"79":{"title":"Rotation","titles":["NNlib"]},"80":{"title":"Batched Operations","titles":["NNlib"]},"81":{"title":"Gather and Scatter","titles":["NNlib"]},"82":{"title":"Sampling","titles":["NNlib"]},"83":{"title":"Losses","titles":["NNlib"]},"84":{"title":"Miscellaneous","titles":["NNlib"]},"85":{"title":"Dropout","titles":["NNlib"]},"86":{"title":"Internal NNlib Functions","titles":["NNlib"]},"87":{"title":"Getting Started","titles":[]},"88":{"title":"Installation","titles":["Getting Started"]},"89":{"title":"Quickstart","titles":["Getting Started"]},"90":{"title":"Defining Custom Layers","titles":["Getting Started"]},"91":{"title":"Additional Packages","titles":["Getting Started"]},"92":{"title":"XLA (CPU/GPU/TPU) Support","titles":["Getting Started"]},"93":{"title":"GPU Support","titles":["Getting Started"]},"94":{"title":"LuxTestUtils","titles":[]},"95":{"title":"Testing using JET.jl","titles":["LuxTestUtils"]},"96":{"title":"Gradient Correctness","titles":["LuxTestUtils"]},"97":{"title":"Extensions to @test","titles":["LuxTestUtils"]},"98":{"title":"Why we wrote Lux?","titles":[]},"99":{"title":"Design Principles","titles":["Why we wrote Lux?"]},"100":{"title":"Why use Lux over Flux?","titles":["Why we wrote Lux?"]},"101":{"title":"Resources to Get Started","titles":[]},"102":{"title":"Updating to Lux 
v1","titles":[]},"103":{"title":"LuxLib.jl","titles":["Updating to Lux v1"]},"104":{"title":"Breaking Changes","titles":["Updating to Lux v1","LuxLib.jl"]},"105":{"title":"New Major Features","titles":["Updating to Lux v1","LuxLib.jl"]},"106":{"title":"LuxCore.jl","titles":["Updating to Lux v1"]},"107":{"title":"Breaking Changes","titles":["Updating to Lux v1","LuxCore.jl"]},"108":{"title":"New Major Features","titles":["Updating to Lux v1","LuxCore.jl"]},"109":{"title":"WeightInitializers.jl","titles":["Updating to Lux v1"]},"110":{"title":"MLDataDevices.jl","titles":["Updating to Lux v1"]},"111":{"title":"Breaking Changes","titles":["Updating to Lux v1","MLDataDevices.jl"]},"112":{"title":"New Major Features","titles":["Updating to Lux v1","MLDataDevices.jl"]},"113":{"title":"Lux.jl","titles":["Updating to Lux v1"]},"114":{"title":"Breaking Changes (Removed Functionality)","titles":["Updating to Lux v1","Lux.jl"]},"115":{"title":"Breaking Changes (Moved Functionality)","titles":["Updating to Lux v1","Lux.jl"]},"116":{"title":"Breaking Changes (Changes in Defaults)","titles":["Updating to Lux v1","Lux.jl"]},"117":{"title":"New Features","titles":["Updating to Lux v1","Lux.jl"]},"118":{"title":"Compiling Lux Models using Reactant.jl","titles":[]},"119":{"title":"Using the TrainState API","titles":["Compiling Lux Models using Reactant.jl"]},"120":{"title":"Automatic Differentiation","titles":[]},"121":{"title":"Overview","titles":["Automatic Differentiation"]},"122":{"title":"Recommendations","titles":["Automatic Differentiation"]},"123":{"title":"Support Class","titles":["Automatic Differentiation"]},"124":{"title":"Footnotes","titles":["Automatic Differentiation"]},"125":{"title":"Debugging Lux Models","titles":[]},"126":{"title":"Incorrect Model Specification: Dimension Mismatch Problems","titles":["Debugging Lux Models"]},"127":{"title":"Tracking down NaNs","titles":["Debugging Lux Models"]},"128":{"title":"Conclusion","titles":["Debugging Lux 
Models"]},"129":{"title":"Dispatching on Custom Input Types","titles":[]},"130":{"title":"Which function should participate in dispatch?","titles":["Dispatching on Custom Input Types"]},"131":{"title":"Concrete Example","titles":["Dispatching on Custom Input Types"]},"132":{"title":"Time-Dependent Chain Implementation","titles":["Dispatching on Custom Input Types","Concrete Example"]},"133":{"title":"Running the TDChain","titles":["Dispatching on Custom Input Types","Concrete Example"]},"134":{"title":"Writing the Correct Dispatch Rules","titles":["Dispatching on Custom Input Types","Concrete Example"]},"135":{"title":"Using the Same Input for Non-TD Models","titles":["Dispatching on Custom Input Types","Concrete Example"]},"136":{"title":"Distributed Data Parallel Training","titles":[]},"137":{"title":"Guide to Integrating DistributedUtils into your code","titles":["Distributed Data Parallel Training"]},"138":{"title":"Migration Guide from FluxMPI.jl","titles":["Distributed Data Parallel Training"]},"139":{"title":"Removed Functionality","titles":["Distributed Data Parallel Training","Migration Guide from FluxMPI.jl"]},"140":{"title":"Key Differences","titles":["Distributed Data Parallel Training","Migration Guide from FluxMPI.jl"]},"141":{"title":"Known Shortcomings","titles":["Distributed Data Parallel Training"]},"142":{"title":"Freezing Model Parameters","titles":[]},"143":{"title":"Freezing Layers of a Particular Kind","titles":["Freezing Model Parameters"]},"144":{"title":"Freezing by Layer Name","titles":["Freezing Model Parameters"]},"145":{"title":"Freezing Part of the Parameters","titles":["Freezing Model Parameters"]},"146":{"title":"Freezing Part of a Chain","titles":["Freezing Model Parameters"]},"147":{"title":"Exporting Lux Models to Jax (via EnzymeJAX & Reactant)","titles":[]},"148":{"title":"GPU Management","titles":[]},"149":{"title":"Automatic Backend Management (Recommended Approach)","titles":["GPU Management"]},"150":{"title":"Manual Backend 
Management","titles":["GPU Management"]},"151":{"title":"Lux Interface","titles":[]},"152":{"title":"Layer Interface","titles":["Lux Interface"]},"153":{"title":"Singular Layer","titles":["Lux Interface","Layer Interface"]},"154":{"title":"Container Layer","titles":["Lux Interface","Layer Interface"]},"155":{"title":"Parameter Interface","titles":["Lux Interface"]},"156":{"title":"State Interface","titles":["Lux Interface"]},"157":{"title":"Performance Pitfalls & How to Catch Them","titles":[]},"158":{"title":"Spurious Type-Promotion","titles":["Performance Pitfalls & How to Catch Them"]},"159":{"title":"Scalar Indexing on GPU Arrays","titles":["Performance Pitfalls & How to Catch Them"]},"160":{"title":"Type Instabilities","titles":["Performance Pitfalls & How to Catch Them"]},"161":{"title":"Faster Primitives","titles":["Performance Pitfalls & How to Catch Them"]},"162":{"title":"Optional Dependencies for Performance","titles":["Performance Pitfalls & How to Catch Them"]},"163":{"title":"Data Loading and Device Transfer","titles":["Performance Pitfalls & How to Catch Them"]},"164":{"title":"Nested Automatic Differentiation","titles":[]},"165":{"title":"Loss Function containing Jacobian Computation","titles":["Nested Automatic Differentiation"]},"166":{"title":"Using Batched Jacobian for Multiple Inputs","titles":["Nested Automatic Differentiation","Loss Function containing Jacobian Computation"]},"167":{"title":"Loss Function contains Gradient Computation","titles":["Nested Automatic Differentiation"]},"168":{"title":"Loss Function computing the Jacobian of the Parameters","titles":["Nested Automatic Differentiation"]},"169":{"title":"Hutchinson Trace Estimation","titles":["Nested Automatic Differentiation"]},"170":{"title":"Computing using the Vector-Jacobian Product","titles":["Nested Automatic Differentiation","Hutchinson Trace Estimation"]},"171":{"title":"Computing using the Jacobian-Vector Product","titles":["Nested Automatic Differentiation","Hutchinson 
Trace Estimation"]},"172":{"title":"Computing using the Full Jacobian","titles":["Nested Automatic Differentiation","Hutchinson Trace Estimation"]},"173":{"title":"Migrating from Flux to Lux","titles":[]},"174":{"title":"Implementing Custom Layers","titles":["Migrating from Flux to Lux"]},"175":{"title":"Certain Important Implementation Details","titles":["Migrating from Flux to Lux"]},"176":{"title":"Training/Inference Mode","titles":["Migrating from Flux to Lux","Certain Important Implementation Details"]},"177":{"title":"Can we still use Flux Layers?","titles":["Migrating from Flux to Lux"]},"178":{"title":"Preferences for Lux.jl","titles":[]},"179":{"title":"Nested Automatic Differentiation","titles":["Preferences for Lux.jl"]},"180":{"title":"GPU-Aware MPI Support","titles":["Preferences for Lux.jl"]},"181":{"title":"GPU Backend Selection","titles":["Preferences for Lux.jl"]},"182":{"title":"Automatic Eltype Conversion","titles":["Preferences for Lux.jl"]},"183":{"title":"Dispatch Doctor","titles":["Preferences for Lux.jl"]},"184":{"title":"Disabling Loop Vectorization / Octavian","titles":["Preferences for Lux.jl"]},"185":{"title":"Initializing Weights","titles":[]},"186":{"title":"Quick examples","titles":["Initializing Weights"]},"187":{"title":"Julia & Lux for the Uninitiated","titles":[]},"188":{"title":"Arrays","titles":["Julia & Lux for the Uninitiated"]},"189":{"title":"CUDA Arrays","titles":["Julia & Lux for the Uninitiated","Arrays"]},"190":{"title":"(Im)mutability","titles":["Julia & Lux for the Uninitiated"]},"191":{"title":"Managing Randomness","titles":["Julia & Lux for the Uninitiated"]},"192":{"title":"Automatic Differentiation","titles":["Julia & Lux for the Uninitiated"]},"193":{"title":"Gradients","titles":["Julia & Lux for the Uninitiated","Automatic Differentiation"]},"194":{"title":"Jacobian-Vector Product","titles":["Julia & Lux for the Uninitiated","Automatic Differentiation"]},"195":{"title":"Vector-Jacobian Product","titles":["Julia & 
Lux for the Uninitiated","Automatic Differentiation"]},"196":{"title":"Linear Regression","titles":["Julia & Lux for the Uninitiated"]},"197":{"title":"Appendix","titles":["Julia & Lux for the Uninitiated"]},"198":{"title":"Training a Simple LSTM","titles":[]},"199":{"title":"Package Imports","titles":["Training a Simple LSTM"]},"200":{"title":"Dataset","titles":["Training a Simple LSTM"]},"201":{"title":"Creating a Classifier","titles":["Training a Simple LSTM"]},"202":{"title":"Using the @compact API","titles":["Training a Simple LSTM"]},"203":{"title":"Defining Accuracy, Loss and Optimiser","titles":["Training a Simple LSTM"]},"204":{"title":"Training the Model","titles":["Training a Simple LSTM"]},"205":{"title":"Saving the Model","titles":["Training a Simple LSTM"]},"206":{"title":"Appendix","titles":["Training a Simple LSTM"]},"207":{"title":"MNIST Classification with SimpleChains","titles":[]},"208":{"title":"Package Imports","titles":["MNIST Classification with SimpleChains"]},"209":{"title":"Loading MNIST","titles":["MNIST Classification with SimpleChains"]},"210":{"title":"Define the Model","titles":["MNIST Classification with SimpleChains"]},"211":{"title":"Helper Functions","titles":["MNIST Classification with SimpleChains"]},"212":{"title":"Define the Training Loop","titles":["MNIST Classification with SimpleChains"]},"213":{"title":"Finally Training the Model","titles":["MNIST Classification with SimpleChains"]},"214":{"title":"Appendix","titles":["MNIST Classification with SimpleChains"]},"215":{"title":"Tutorials","titles":[]},"216":{"title":"Beginner Tutorials","titles":["Tutorials"]},"217":{"title":"Intermediate Tutorials","titles":["Tutorials"]},"218":{"title":"Advanced Tutorials","titles":["Tutorials"]},"219":{"title":"Larger Models","titles":["Tutorials"]},"220":{"title":"Selected 3rd Party Tutorials","titles":["Tutorials"]},"221":{"title":"Training Lux Models using Optimization.jl","titles":[]},"222":{"title":"Imports 
packages","titles":["Training Lux Models using Optimization.jl"]},"223":{"title":"Generate some training data","titles":["Training Lux Models using Optimization.jl"]},"224":{"title":"Define the DataLoader","titles":["Training Lux Models using Optimization.jl"]},"225":{"title":"Training the model","titles":["Training Lux Models using Optimization.jl"]},"226":{"title":"Plotting the results","titles":["Training Lux Models using Optimization.jl"]},"227":{"title":"Appendix","titles":["Training Lux Models using Optimization.jl"]},"228":{"title":"MNIST Classification using Neural ODEs","titles":[]},"229":{"title":"Package Imports","titles":["MNIST Classification using Neural ODEs"]},"230":{"title":"Loading MNIST","titles":["MNIST Classification using Neural ODEs"]},"231":{"title":"Define the Neural ODE Layer","titles":["MNIST Classification using Neural ODEs"]},"232":{"title":"Create and Initialize the Neural ODE Layer","titles":["MNIST Classification using Neural ODEs"]},"233":{"title":"Define Utility Functions","titles":["MNIST Classification using Neural ODEs"]},"234":{"title":"Training","titles":["MNIST Classification using Neural ODEs"]},"235":{"title":"Alternate Implementation using Stateful Layer","titles":["MNIST Classification using Neural ODEs"]},"236":{"title":"Train the new Stateful Neural ODE","titles":["MNIST Classification using Neural ODEs"]},"237":{"title":"Type Stability","titles":["MNIST Classification using Neural ODEs"]},"238":{"title":"Appendix","titles":["MNIST Classification using Neural ODEs"]},"239":{"title":"Training a HyperNetwork on MNIST and FashionMNIST","titles":[]},"240":{"title":"Package Imports","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"241":{"title":"Loading Datasets","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"242":{"title":"Implement a HyperNet Layer","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"243":{"title":"Create and Initialize the HyperNet","titles":["Training a 
HyperNetwork on MNIST and FashionMNIST"]},"244":{"title":"Define Utility Functions","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"245":{"title":"Training","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"246":{"title":"Appendix","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"247":{"title":"Training a PINN on 2D PDE","titles":[]},"248":{"title":"Package Imports","titles":["Training a PINN on 2D PDE"]},"249":{"title":"Problem Definition","titles":["Training a PINN on 2D PDE"]},"250":{"title":"Define the Neural Networks","titles":["Training a PINN on 2D PDE"]},"251":{"title":"Define the Loss Functions","titles":["Training a PINN on 2D PDE"]},"252":{"title":"Generate the Data","titles":["Training a PINN on 2D PDE"]},"253":{"title":"Training","titles":["Training a PINN on 2D PDE"]},"254":{"title":"Visualizing the Results","titles":["Training a PINN on 2D PDE"]},"255":{"title":"Appendix","titles":["Training a PINN on 2D PDE"]},"256":{"title":"Graph Convolutional Networks on Cora","titles":[]},"257":{"title":"Loading Cora Dataset","titles":["Graph Convolutional Networks on Cora"]},"258":{"title":"Model Definition","titles":["Graph Convolutional Networks on Cora"]},"259":{"title":"Helper Functions","titles":["Graph Convolutional Networks on Cora"]},"260":{"title":"Training the Model","titles":["Graph Convolutional Networks on Cora"]},"261":{"title":"Appendix","titles":["Graph Convolutional Networks on Cora"]},"262":{"title":"Fitting a Polynomial using MLP","titles":[]},"263":{"title":"Package Imports","titles":["Fitting a Polynomial using MLP"]},"264":{"title":"Dataset","titles":["Fitting a Polynomial using MLP"]},"265":{"title":"Neural Network","titles":["Fitting a Polynomial using MLP"]},"266":{"title":"Optimizer","titles":["Fitting a Polynomial using MLP"]},"267":{"title":"Loss Function","titles":["Fitting a Polynomial using MLP"]},"268":{"title":"Training","titles":["Fitting a Polynomial using 
MLP"]},"269":{"title":"Appendix","titles":["Fitting a Polynomial using MLP"]},"270":{"title":"Convolutional VAE for MNIST using Reactant","titles":[]},"271":{"title":"Model Definition","titles":["Convolutional VAE for MNIST using Reactant"]},"272":{"title":"Loading MNIST","titles":["Convolutional VAE for MNIST using Reactant"]},"273":{"title":"Helper Functions","titles":["Convolutional VAE for MNIST using Reactant"]},"274":{"title":"Training the Model","titles":["Convolutional VAE for MNIST using Reactant"]},"275":{"title":"Appendix","titles":["Convolutional VAE for MNIST using Reactant"]},"276":{"title":"Bayesian Neural Network","titles":[]},"277":{"title":"Generating data","titles":["Bayesian Neural Network"]},"278":{"title":"Building the Neural Network","titles":["Bayesian Neural Network"]},"279":{"title":"Prediction Visualization","titles":["Bayesian Neural Network"]},"280":{"title":"Appendix","titles":["Bayesian Neural Network"]},"281":{"title":"Training a Neural ODE to Model Gravitational Waveforms","titles":[]},"282":{"title":"Package Imports","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"283":{"title":"Define some Utility Functions","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"284":{"title":"Simulating the True Model","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"285":{"title":"Defiing a Neural Network Model","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"286":{"title":"Setting Up for Training the Neural Network","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"287":{"title":"Training the Neural Network","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"288":{"title":"Visualizing the Results","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"289":{"title":"Appendix","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"290":{"title":"Normalizing Flows for Density 
Estimation","titles":[]},"291":{"title":"Define & Load the Moons Dataset","titles":["Normalizing Flows for Density Estimation"]},"292":{"title":"Bijectors Implementation","titles":["Normalizing Flows for Density Estimation"]},"293":{"title":"Model Definition","titles":["Normalizing Flows for Density Estimation"]},"294":{"title":"Helper Functions","titles":["Normalizing Flows for Density Estimation"]},"295":{"title":"Training the Model","titles":["Normalizing Flows for Density Estimation"]},"296":{"title":"Visualizing the Results","titles":["Normalizing Flows for Density Estimation"]},"297":{"title":"Appendix","titles":["Normalizing Flows for Density Estimation"]}},"dirtCount":0,"index":[["↦",{"2":{"283":1}}],["ϕ̇",{"2":{"284":2,"285":2}}],["ϕ",{"2":{"283":5,"284":1,"285":1}}],["χ̇",{"2":{"284":2,"285":2}}],["χ",{"2":{"283":4,"284":4,"285":2}}],["~",{"2":{"278":2}}],["~=",{"2":{"86":1}}],["\\u001b",{"2":{"276":1,"282":1}}],["μ",{"2":{"271":5,"273":3}}],["₋₋₋kwargs₋₋₋",{"2":{"237":4}}],["₋₋₋no",{"2":{"237":3}}],["─",{"2":{"237":3}}],["∥22we",{"2":{"196":1}}],["⟶∑i=1k12∥yi−fw",{"2":{"196":1}}],["$",{"2":{"254":1,"295":1,"296":1}}],["$i",{"2":{"191":2,"279":1}}],["$name",{"2":{"23":1}}],["✓",{"2":{"187":94,"192":55,"199":7,"229":174,"240":20,"263":25,"276":474,"282":508}}],["∞",{"2":{"165":4,"166":4,"167":4,"168":4,"172":8}}],["∘",{"2":{"164":4,"167":1,"251":3,"272":2}}],["⋱",{"2":{"147":4}}],["⋮",{"2":{"147":8}}],["└──",{"2":{"237":3}}],["└──────────────────────────────┘",{"2":{"86":2}}],["└────────────────────────────────────────┘",{"2":{"58":34,"86":1}}],["└",{"2":{"126":1,"165":2,"204":2,"260":2,"274":1,"276":1,"282":1}}],["┌",{"2":{"126":1,"165":2,"204":2,"260":2,"274":1,"276":1,"282":1}}],["┌──────────────────────────────┐",{"2":{"86":2}}],["┌────────────────────────────────────────┐",{"2":{"58":34,"86":1}}],["↩︎",{"2":{"124":9}}],["❓",{"2":{"121":6}}],["❌",{"2":{"121":13}}],["∂w",{"2":{"251":2}}],["∂v",{"2":{"251":2}}],["∂y",{"2":{"251":4}}],["∂u",{"2":{"251":10}
}],["∂t",{"2":{"167":7,"251":2}}],["∂xyt",{"2":{"251":4}}],["∂x",{"2":{"165":7,"166":7,"168":7,"172":11,"251":4}}],["∂ps",{"2":{"118":4,"165":8,"166":8,"167":8,"168":8,"172":11}}],["∂f∂x",{"2":{"18":2}}],["✖",{"2":{"96":4}}],["✔️",{"2":{"121":17}}],["✔",{"2":{"96":8}}],["\\tval",{"2":{"260":28}}],["\\tfashionmnist\\ttraining",{"2":{"245":1}}],["\\tfashionmnist\\ttime",{"2":{"245":50}}],["\\ttraining",{"2":{"295":11}}],["\\ttest",{"2":{"234":45,"236":9,"245":102}}],["\\ttime",{"2":{"234":45,"236":9}}],["\\tloss",{"2":{"119":11,"204":50}}],["\\t",{"2":{"90":11,"213":60,"245":51,"253":200}}],["ω",{"2":{"84":2,"86":2}}],["⊡",{"2":{"80":1}}],["∇f",{"2":{"193":3}}],["∇filter",{"2":{"86":3}}],["∇offending",{"2":{"127":2}}],["∇depthwiseconv",{"2":{"86":4}}],["∇conv",{"2":{"86":4}}],["∇grid",{"2":{"82":1}}],["∇imrotate",{"2":{"79":1}}],["∇upsample",{"2":{"78":4}}],["θ|x",{"2":{"279":2}}],["θ",{"2":{"79":5,"86":3,"225":2,"278":2,"279":9,"286":3}}],["⊠",{"2":{"77":2,"80":7}}],["∑ᵢ",{"2":{"75":1}}],["÷",{"2":{"73":1,"79":4,"86":4,"200":4,"271":16,"273":2,"291":1,"292":1,"296":1}}],["qoi",{"2":{"276":1,"282":1}}],["qk",{"2":{"73":2}}],["q",{"2":{"73":6}}],["quantiles",{"2":{"278":1}}],["quadrupole",{"2":{"283":10}}],["quadratic",{"2":{"58":1,"264":1,"268":1}}],["quadgkenzymeext",{"2":{"282":1}}],["quadgk",{"2":{"276":1,"282":2}}],["questions",{"2":{"101":3,"164":1,"192":1}}],["query",{"2":{"73":4}}],["quick",{"0":{"186":1},"2":{"187":1}}],["quickstart",{"0":{"89":1},"2":{"101":1,"153":1}}],["quite",{"2":{"86":1,"98":1,"190":1,"192":1}}],["quoting",{"2":{"118":1}}],["quot",{"2":{"1":2,"8":2,"15":12,"34":4,"40":2,"41":2,"50":8,"52":2,"54":16,"56":2,"57":6,"58":32,"64":4,"65":1,"66":8,"75":2,"76":2,"78":2,"80":2,"84":2,"86":10,"89":4,"118":4,"140":2,"148":2,"158":2,"174":4,"181":8}}],["λβ",{"2":{"64":1}}],["λ=0",{"2":{"58":1}}],["λ",{"2":{"58":6}}],["`θ`",{"2":{"279":1}}],["`x`",{"2":{"279":1}}],["`p",{"2":{"231":1}}],["`ps",{"2":{"154":1}}],["`ps`",{"2":{"147":1}}],["`tasklocalrng
`",{"2":{"204":4,"260":2}}],["`training`",{"2":{"165":2,"260":1,"274":1}}],["`replicate`",{"2":{"204":2,"260":1}}],["`carry`",{"2":{"201":2}}],["`iterators",{"2":{"201":1}}],["`eachslice`",{"2":{"201":1}}],["`∇`",{"2":{"193":1}}],["`b`",{"2":{"174":2}}],["`val",{"2":{"165":2,"260":1,"274":1}}],["`zygote",{"2":{"165":1}}],["`denselayerparameters`",{"2":{"155":2}}],["`namedtuple`",{"2":{"155":2}}],["`model",{"2":{"154":1}}],["`lstm",{"2":{"201":1}}],["`linear",{"2":{"154":2}}],["`l",{"2":{"153":1}}],["`luxcore",{"2":{"165":2,"260":1,"274":1}}],["`luxcore`",{"2":{"153":1}}],["`lux",{"2":{"165":2,"260":1,"274":1}}],["`lux``",{"2":{"189":1}}],["`lux`",{"2":{"153":1}}],["```",{"2":{"149":1}}],["`st",{"2":{"154":1}}],["`st`",{"2":{"147":1,"154":1,"285":2}}],["`softmax",{"2":{"74":1}}],["`autoforwarddiff",{"2":{"166":1}}],["`autozygote",{"2":{"166":1}}],["`apply`",{"2":{"133":1}}],["`a`",{"2":{"58":1,"174":3}}],["`octavian",{"2":{"65":1}}],["`",{"2":{"58":1,"74":1,"89":2,"95":1,"149":1,"153":1,"154":1,"165":2,"166":2,"193":1,"260":1,"274":1}}],["`u",{"2":{"58":1}}],["π",{"2":{"58":1,"86":3,"283":2,"284":1,"291":1}}],["√",{"2":{"58":1,"283":2}}],["√t",{"2":{"15":2}}],["⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀200⠀",{"2":{"86":1}}],["⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀100⠀",{"2":{"86":2}}],["⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀",{"2":{"58":1}}],["⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀",{"2":{"58":32}}],["⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x",{"2":{"58":1}}],["⠀",{"2":{"58":34}}],["│",{"2":{"237":6,"276":1,"282":1}}],["│⣇⣇⣸⣀⣸⣀⣀⣟⣀⣀⣸⣃⣀⣀⣀⣿⣀⣀⣀⣀⣀⣿⣀⣀⣀⣀⣀⣀⣈⣇⣀⣀⣀⣀⣀⣀⣀⣀⣀⣱│",{"2":{"86":1}}],["│⡇⡇⢸⠇⢸⡇⠀⣿⠀⠀⢣⡇⠀⠀⠸⣄⠇⠀⠀⠀⠸⡀⡇⠀⠀⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡄│",{"2":{"86":1}}],["│⡇⢸⢸⡇⡇⡇⠸⣰⠃⠀⡇⡸⠀⠀⢸⠀⡜⠀⠀⠀⢣⠀⢸⠁⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠈⢇⠀│",{"2":{"86":1}}],["│⡇⢸⡇⡇⡇⡇⢸⠀⡇⠈⡆⢰⠁⠀⡇⠀⢰⠁⠀⠈⡆⠀⠀⡎⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀│",{"2":{"86":1}}],["│⢸⠀⡇⢸⠀⠀⡇⠀⠀⡇⠀⠀⠀⡇⠀⠀⠀⠀⣿⠀⠀⠀⠀⠀⠀⣿⠀⠀⠀⠀⠀⠀⠀⠀⢱⡀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⢸⢸⡇⡇⡇⡸⢸⠀⡎⢸⠀⠀⡇⠈⡆⠀⠀⡇⠀⢸⠀⠀⠀⢰⠁⠀⠘⡆⠀⠀⠀⠀⠀⠀⠀⠀⠸⡄⠀⠀│",{"2":{"86":1}}],["│⢸⢸⡇⡸⡇⢸⡇⠀⢸⢸⠀⠀⡜⢸⠀⠀⠀⢸⠀⡇⠀⠀⠀⠀⡎⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠘⡆⠀⠀⠀│",{"2":{"86":1}}],["│⢸⢸⡇⢸⠀⢸⡇⠀⢸⡇⠀⠀⢸⢇⠀⠀⠀⢀⠿⡀⠀⠀⠀⠀⢰⠛⡄⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀│",{"2":{"86":1}}],["│⢸⢸
⡇⢸⠀⢸⡇⠀⢸⡇⠀⠀⢸⡎⠀⠀⠀⠈⣶⠁⠀⠀⠀⠀⠸⣤⠃⠀⠀⠀⠀⠀⠀⠘⡆⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⢸⢸⡇⢱⡇⢸⡇⠀⢸⢸⠀⠀⢣⢸⠀⠀⠀⢸⠀⡇⠀⠀⠀⠀⢇⠀⡜⠀⠀⠀⠀⠀⠈⢇⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⢠⢻⡇⡇⡇⢱⢸⠀⢇⢸⠀⠀⡇⢀⠇⠀⠀⡇⠀⢸⠀⠀⠀⠸⡀⠀⢠⠇⠀⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⡠⠴⠒⠊⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠒⠒⠢⠤⢄⣀⡀⠀⠀⠀⠀⠱⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠎⠀│",{"2":{"58":1}}],["│⠒⠒⠒⠒⠒⠊⠉⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⣒⣒⣒⣒⣒⣊⣉⣉⣉⣉⣁⣀⣀⡠⠤⠒⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠊⣉⠤⠔⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠤⠤⠖⠒⠒⠒⠒⠒⠒⠒⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⣤⡤⠤⠤⠤⠤⠤⠤⡷⠶⠶⠶⠶⠶⠮⠭⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"58":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⣤⡤⠤⠤⠤⠤⠤⠤⡧⠤⠤⠤⠤⠶⠮⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"58":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⣤⣤⡤⡧⠴⠶⠯⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"58":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡯⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"58":2}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡤⡷⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"58":2}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡤⡧⠶⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"58":2}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⣤⣤⡤⡧⠶⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"58":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡤⠤⣤⣤⢤⣤⣤⠤⠤⠤⢼⠮⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"58":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⣤⣤⣤⣤⡤⠤⠤⠤⠤⠤⠤⡷⠶⠶⠶⠶⠶⠾⠿⠯⠭⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"58":1}}],["│⠤⠤⠤⠤⠔⠒⠒⠒⠊⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠃⠉⠙⠘⠃⠈⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⣰⢀⣆⡄⣄⡄⡠⡰⠦⠷⡜⢢⠷⠳⠢⠊⠉⠉⠀⠀⠁⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⡤⠖⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠉⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉│",{"2":{"86":1}}],["│⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⢣⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡜│",{"2":{"58":1}}],["│⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⣉⠭⠛⡏⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│",{"2":{"58":1}}],["│⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠒⠒⠤⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸│",{"2":{"58":1}}],["│⠉⠒⠒⠒⠒⠉⠉⠉⠉⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠉⠑⠒⠒⠒⠒⠒⠒⠒⠒⠒⠒⠉⠉⠉⠉⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠉⠢⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠋│",{"2":{"58":1}}],["│⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠢⣄⠀⠀⠈⠣⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠁⠀⠀⣀⠔│",{"2":{"58":1}}],["│⠢⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔│",{"2":{"58":2}}],["│⠀⣿⡇⡇⡇⡇⢸⠀⡇⢀⠇⠸⡀⠀⡇⠀⠸⡀⠀⢀⠇⠀⠀⢇⠀⠀⠀⡸⠀⠀⠀⠸⡄⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⣿⢸⡇⡇⡇⢰⠹⡄⠀⡇⢱⠀⠀⢸⠀⢣⠀⠀⠀⡜⠀⢸⡀⠀⠀⠀⢀⠇⠀⠈⡇⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⡇⢸⡆⢸⡇⠀⣿⠀⠀⡜⡇⠀⠀⢰⠋⡆⠀⠀⠀⢰⠁⡇⠀⠀⠀⠀⠀⡸⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⡀⢸⠀⢸⠀⠀⣧⠀⠀⢸⡄⠀⠀⠀⣷⠀⠀⠀⠀⠀⣷⠀⠀⠀⠀⠀⠀⢀⣿⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⢀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣀⠀│",{"2":{"86":1}}],["│⠀⢀⣀⡠⠤⠖⢒⣋⠭⠗⠒⠉⠁⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⣀⠤⠔⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⢀⠴⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⢄⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⢠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠣⡀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⣀⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠀⠀⠀⠑⠢⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠊⠁⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⢀⠎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢦⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠈⠣⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠁⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⡰⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠱⡀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⢀⡜⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢇⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⣀⡠⠴⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⢰⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠱⡄⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⠈⠉⠑⠒⠦⢄⣘⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡴⠃⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⢀⠞⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢆⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⢀⣀⠤⠖⠒⠉⠁⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⢀⡤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠑⠦⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠊⠁⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⣠⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⡀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠑⢆⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⢰⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠱⡀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⡎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢦⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⢀⡎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢣⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⠤⠤⠒⠒⠋⠉⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⠦⣌⡓⢄⡀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⢀⡠⠖⣁⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⡀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠞⠀⠀⠀⠀⠀⠀⠀⠀⠈⢆⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⠦⡀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠢⡄⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⠔⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],[
"│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠎⠁⠀⠀⠀⠀⠀⠈⢢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⠤⠤⠒⢋⠕⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⠤⠤⠒⠋⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠒⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⢄⠀⠀⠀⠀⠀⠀⠀⠀⢠⡇⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡔⠁⠀⠀⠀⠀⠀⠘⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⢀⣀⠤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠖⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠔⠒⠉⠁⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠔⠒⠉│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⠦⡀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⢀⡤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⢄⡀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⢀⡠⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠚⠉⠉⠉⠢⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⠚⠉⠉⠉⠢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"86":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⠪⠷⣦⣄⣀⣀⣇⣀⣀⣤⠶⠕⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⡏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⣠⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠴⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠑⡖⠦⢄⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⢔⠏⠁⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠓⣄⠀⠀⠀⠀⠀⢠⡞⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⡤⠤⠒⠊⠉⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠊⠉⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠑⠢⢄⣀⣀⣇⣀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⠦⣄⣀⣀⣇⣀⣀⠤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠣⣄⠀⠉⠑⠒⠦⠤⢄⣀⣀⣀⣀⡠⠤⠖⣊⠕⠁⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠊⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡴⠃⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠁⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":
1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠓⠤⡀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠁⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⡧⠤⠒⠊⣉⠥⠚⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⡧⠤⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⡗⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠒⠢⠤⠤⠔⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣧⡞⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⣀⡠⠤⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⢀⣀⠤⠖⠊⠉⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠤⠒│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⡔⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⢀⡤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⢀⡤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⣀⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⡔⣃⡤⠖⠒⠋⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⡔⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⡠⠔⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⣀⡤⠖⠒⠋⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⣀⠤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":3}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⡠⠖⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡤⠖⠊⠉⠉⠉⣉⣉⣉⣉⣉⠭⠭⠭⠭⠭│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡤⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⢀⣀⡤⠔⠊⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⢀⡠⠔⠊⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⣀⡤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⣀⡠⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⣀⡠⢤⡲⠝⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⣀⡠⠤⠒⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⣀⡤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⡠⠎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⡠⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⢀⡠⠖⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⡤⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⠔⠋⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⡠⠤⠖⠒⠒⠋⠉⠉⠉⠉⠉⠉│",{"2":{"58":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⡤⠖⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":3}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⣀⡤⠤⣒⣋⠥⠤⠒⠊⠉⠁⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⣀⡤⠤⠒⠋⠁⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⣀⡠⠤⠒⠊⠉⠁⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋⠉⠉⠉⠉⠉⠉⠉⠉│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠖⠋⠁⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⣀⣀⠤⠤⠤⠤⠤│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣤⡴⠞⠋⠁⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠔⠊⠁⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠎⠉⠉⠉⠉⠉⠉⠉⠉│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠔⠋⠁⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠊⠁⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠉⠀⠀⠀⠀│",{"2":{"58":3}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⡤⠔⠒⣉⡡│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⡤⠔⠒⠉⠁│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠤⠖⠊│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":8}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠋│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉│",{"2":{"58":3}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋⠁⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠊⠁⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⡴⠞⠋⠁│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠒⠁⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠊⠁⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠤⠤⠒⠒⠒⠊⠉⠉⠉│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡧⠤⠔⠒⠒⠒⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⡏⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⠇⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠖⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡔⠋⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠓⢄⣀⣀⡤⢣⠃⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠖⠋⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠊⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠔⠊⠁⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠋⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⠤⠔⠒⠋⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠉⠉⠉⠉⠉⠉⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠉⠢⢄⡀⠉⠢⡄⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⠔⠋⠀⡠⠔⠋⠁⠀⠀⠀⠀│",{"2":{"58":1}}],["│⠀⠀⠀⠀⠀⠈⠉⠒⠤⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⢆⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⠁│",{"2":{"58":1}}],["│⠀⠈⠑⠢⣀⠀⠀⠑⢆⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⣀⠔⠊⠁⠀│",{"2":{"58":1}}],["│⠀⠈⠑⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠊⠁⠀│",{"2":{"58":2}}],["│⣀⣀⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢤⣀│",{"2":{"86":1}}],["│⣀⣀⠤⠤⠒⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⣀⣀⣀⡠⠤⠤⠤⠖⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣠⣤⣤⣤⣤⣔⣒⣒⣚⣉⣉⣁⣀⣇⠴⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⣀⣀⣀⣀⣀⣀⣀⡠⠤⠤⠤⠤⠔⠒⠒⠚⠉⠉⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"
58":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣇⣀⣀⣀⣀⣀⣀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⡧⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣇⠔⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣧⣔⣊⣁⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀│",{"2":{"58":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣇⣤⣤⣖⣚⣉⣁⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀│",{"2":{"58":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⠔⠋⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⠤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":2}}],["│⣀⣀⣀⣀⣀⣀⣀⠤⠤⠤⠒⠊⠉⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"58":4}}],["│⠑⠒⠢⠤⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇│",{"2":{"58":1}}],["│⣤⣤⣤⣤⣤⣤⣤⣤⡤⠤⠤⠤⠤⠤⠤⠤⣤⣤⣤⡤⡧⠶⠶⠭⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"58":1}}],["σ=identity",{"2":{"66":1}}],["σ",{"2":{"58":13,"60":10,"62":5,"63":3,"65":3,"66":7,"84":4,"161":3,"271":3}}],["^bb0",{"2":{"147":2}}],["^2",{"2":{"75":2,"284":2,"285":1}}],["^",{"2":{"56":1,"66":9,"75":1,"90":1,"118":1,"119":1,"273":1,"283":2}}],["δ",{"2":{"50":1,"78":12,"82":2,"127":5,"223":2}}],["ϵ",{"2":{"50":1,"271":2}}],["ŷ",{"2":{"50":6}}],["α=0",{"2":{"86":1}}],["α=1",{"2":{"58":2,"80":1}}],["α",{"2":{"50":3,"58":5,"64":3,"73":1,"80":1,"86":3,"223":2,"279":3}}],["∈",{"2":{"50":2,"75":3,"80":1,"283":1}}],["∗y+α∗size",{"2":{"50":1}}],["∗y+α∗0",{"2":{"50":1}}],["∗y^−logσ",{"2":{"50":1}}],["∗log⁡",{"2":{"50":1}}],["−j2πωkn",{"2":{"86":1}}],["−log⁡",{"2":{"50":1}}],["−∑y~log⁡",{"2":{"50":1}}],["−",{"2":{"50":1}}],["−y~∗log⁡",{"2":{"50":1}}],["−x",{"2":{"15":1}}],["≈∑θ∼p",{"2":{"279":1}}],["≈",{"2":{"50":25,"56":1,"58":2,"75":2,"86":3,"149":1,"150":1,"191":2}}],["β=0",{"2":{"80":1,"86":1}}],["β",{"2":{"41":1,"66":4,"80":1,"86":3,"223":2}}],["γ=0",{"2":{"50":2}}],["γ",{"2":{"41":1,"50":1,"66":4,"223":2}}],["⋅n+z⋅hprevarguments",{"2":{"38":1}}],["↗",{"2":{"34":2}}],["↘",{"2":{"34":2}}],["→",{"2":{"34":5,"187":32,"192":27,"199":5,"229":83,"240":17,"263":12,"276":149,"282":179}}],["8\\ttrain",{"2":{"260":1}}],["8c79",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["8x8x16x4xi1>",{"2":{
"147":2}}],["8x8x16x4xf32>",{"2":{"147":8}}],["863690\\tthroughput",{"2":{"295":1}}],["8638596993155374e",{"2":{"287":1}}],["863989f",{"2":{"285":1}}],["863200",{"2":{"274":1}}],["865",{"2":{"276":1}}],["860",{"2":{"282":1}}],["8603811",{"2":{"264":1}}],["86081326",{"2":{"264":1}}],["8601828553599953",{"2":{"191":1}}],["8614",{"2":{"278":1}}],["86135",{"2":{"260":1}}],["861023e",{"2":{"167":1}}],["862671f",{"2":{"285":1}}],["862364",{"2":{"274":1}}],["862",{"2":{"260":3}}],["86299",{"2":{"147":1}}],["86290836",{"2":{"143":1}}],["86",{"2":{"245":1,"260":12}}],["8667",{"2":{"278":1}}],["86614174",{"2":{"264":2}}],["8664s\\ttraining",{"2":{"236":1}}],["86624",{"2":{"147":1}}],["869344056483262e",{"2":{"287":1}}],["869451829649755e",{"2":{"287":1}}],["8690624f",{"2":{"285":1}}],["86975616",{"2":{"264":1}}],["869",{"2":{"260":6}}],["869632f",{"2":{"285":1}}],["869632",{"2":{"165":1}}],["86957",{"2":{"147":1}}],["867202",{"2":{"165":1}}],["867646",{"2":{"147":1}}],["855777f",{"2":{"285":1}}],["8581494",{"2":{"264":1}}],["8587",{"2":{"199":1}}],["851399415276393e",{"2":{"287":1}}],["851279000101203e",{"2":{"287":1}}],["851474f",{"2":{"285":1}}],["8516684f",{"2":{"285":1}}],["851",{"2":{"282":1}}],["8515625",{"2":{"274":1}}],["85107",{"2":{"260":1}}],["851857311674287e",{"2":{"287":1}}],["85185",{"2":{"234":1}}],["8532",{"2":{"278":1}}],["8536",{"2":{"278":1}}],["853",{"2":{"260":6}}],["8564453",{"2":{"274":1}}],["856428",{"2":{"147":1}}],["856",{"2":{"260":1,"276":2}}],["85",{"2":{"213":1,"234":4,"245":1,"260":16}}],["8502940490249345e",{"2":{"287":1}}],["8500568f",{"2":{"285":1}}],["8509",{"2":{"278":1}}],["850",{"2":{"210":2,"260":1,"276":2,"282":1}}],["8574219",{"2":{"274":1}}],["85747254",{"2":{"264":1}}],["857412",{"2":{"185":1}}],["8571",{"2":{"260":4}}],["857",{"2":{"246":1,"276":1}}],["8573820474674845",{"2":{"188":1}}],["852089209801681e",{"2":{"287":1}}],["852038f",{"2":{"285":1}}],["852565f",{"2":{"285":1}}],["852",{"2":{"187":1}}],["859",{"2":{"260":6,"276":2,"
282":1}}],["859405",{"2":{"167":1,"168":1}}],["8595328f",{"2":{"118":1}}],["8541596",{"2":{"165":1}}],["895298045758524e",{"2":{"287":1}}],["8955353f",{"2":{"285":1}}],["895",{"2":{"282":1}}],["895668",{"2":{"147":1}}],["892",{"2":{"282":1}}],["894373769798364e",{"2":{"287":1}}],["8946",{"2":{"278":1}}],["894248",{"2":{"147":1}}],["8936058075982555e",{"2":{"287":1}}],["89308852718539e",{"2":{"287":1}}],["893078",{"2":{"274":1}}],["893225f",{"2":{"285":1}}],["893",{"2":{"276":1}}],["890239988291987e",{"2":{"287":1}}],["8901034f",{"2":{"285":1}}],["890",{"2":{"260":6}}],["89779611791719e",{"2":{"287":1}}],["8979631020290376e",{"2":{"287":1}}],["8976737f",{"2":{"285":1}}],["8976378",{"2":{"264":2}}],["8971",{"2":{"278":1}}],["897",{"2":{"260":20}}],["89",{"2":{"245":6,"260":11}}],["89651568563434e",{"2":{"287":1}}],["8966704f",{"2":{"285":1}}],["896",{"2":{"210":2,"260":14}}],["8990987847486295e",{"2":{"287":1}}],["8990184509333085e",{"2":{"287":1}}],["8990106f",{"2":{"285":1}}],["8991",{"2":{"282":1}}],["899",{"2":{"260":23,"282":1}}],["8997",{"2":{"253":1}}],["899766",{"2":{"188":1}}],["899997",{"2":{"168":1}}],["891376670121949e",{"2":{"287":1}}],["891489f",{"2":{"285":1}}],["8916016",{"2":{"274":1}}],["891",{"2":{"187":1,"282":1}}],["891006",{"2":{"147":1}}],["898",{"2":{"260":20}}],["89859",{"2":{"167":1}}],["89874",{"2":{"147":1}}],["826360644620271e",{"2":{"287":1}}],["826957070286014e",{"2":{"287":1}}],["826447923283547e",{"2":{"287":1}}],["826203f",{"2":{"285":1}}],["826293f",{"2":{"285":1}}],["822291952395642e",{"2":{"287":1}}],["822155f",{"2":{"285":1}}],["825738874388089e",{"2":{"287":1}}],["825",{"2":{"282":1}}],["8219",{"2":{"276":1}}],["821545",{"2":{"260":1}}],["821541",{"2":{"260":1}}],["821538",{"2":{"260":1}}],["821535",{"2":{"260":1}}],["821532",{"2":{"260":1}}],["821529",{"2":{"260":1}}],["821526",{"2":{"260":1}}],["821524",{"2":{"260":1}}],["821521",{"2":{"260":1}}],["821518",{"2":{"260":1}}],["821504",{"2":{"260":1}}],["828942388516075e",{"2":{"2
87":1}}],["8282",{"2":{"282":1}}],["828",{"2":{"276":1}}],["82805836",{"2":{"143":1}}],["82379496",{"2":{"264":1}}],["827098f",{"2":{"285":1}}],["827675",{"2":{"260":1}}],["827670",{"2":{"260":1}}],["827667",{"2":{"260":1}}],["827664",{"2":{"260":1}}],["827662",{"2":{"260":1}}],["827659",{"2":{"260":1}}],["827656",{"2":{"260":1}}],["827653",{"2":{"260":1}}],["827650",{"2":{"260":1}}],["827647",{"2":{"260":1}}],["827634",{"2":{"260":1}}],["827",{"2":{"229":1}}],["824340979952633e",{"2":{"287":1}}],["8243717f",{"2":{"285":1}}],["82443f",{"2":{"285":1}}],["82478",{"2":{"282":1}}],["824589",{"2":{"260":1}}],["824583",{"2":{"260":1}}],["824580",{"2":{"260":1}}],["824577",{"2":{"260":1}}],["824574",{"2":{"260":1}}],["824571",{"2":{"260":1}}],["824569",{"2":{"260":1}}],["824566",{"2":{"260":1}}],["824563",{"2":{"260":1}}],["824560",{"2":{"260":1}}],["824546",{"2":{"260":1}}],["824",{"2":{"229":2,"282":1}}],["82",{"2":{"213":4,"234":7,"236":2,"245":4,"260":4,"276":1,"282":1}}],["8204",{"2":{"276":1}}],["8202727",{"2":{"267":1}}],["820",{"2":{"187":1}}],["8290443f",{"2":{"285":1}}],["829",{"2":{"192":1}}],["829671",{"2":{"147":1}}],["82945836",{"2":{"89":1}}],["871226225663575e",{"2":{"287":1}}],["876",{"2":{"282":1}}],["876615f",{"2":{"118":1}}],["8795",{"2":{"295":1}}],["879939f",{"2":{"285":1}}],["8791",{"2":{"278":1}}],["87961",{"2":{"260":3}}],["8786",{"2":{"278":1}}],["878222",{"2":{"147":1}}],["8747025",{"2":{"264":1}}],["8740157",{"2":{"264":2}}],["874",{"2":{"260":6}}],["8727",{"2":{"278":1}}],["8720",{"2":{"276":1}}],["872",{"2":{"260":4,"276":1,"282":1}}],["8731666",{"2":{"264":1}}],["873",{"2":{"229":1}}],["870913f",{"2":{"285":1}}],["870",{"2":{"282":1}}],["870240e",{"2":{"225":1}}],["870364",{"2":{"147":1}}],["877",{"2":{"213":1}}],["877497",{"2":{"194":1,"195":1}}],["875",{"2":{"187":1,"260":6,"282":1}}],["87",{"2":{"126":1,"213":1,"245":3,"260":25}}],["8f",{"2":{"119":1,"196":1}}],["8177127425844396e",{"2":{"287":1}}],["817894f",{"2":{"285":1}}],["8174334f",{
"2":{"285":1}}],["81735",{"2":{"276":1}}],["8173828",{"2":{"274":1}}],["814800770563814e",{"2":{"287":1}}],["81481",{"2":{"234":3,"236":1}}],["81441371488035e",{"2":{"287":1}}],["8147690734371506e",{"2":{"287":1}}],["8149544f",{"2":{"285":1}}],["8145673f",{"2":{"285":1}}],["819",{"2":{"282":1}}],["8191s\\ttraining",{"2":{"234":1}}],["8191676f",{"2":{"118":1}}],["8166",{"2":{"278":1,"295":1}}],["8164148",{"2":{"264":1}}],["8121114425815524e",{"2":{"287":1}}],["812194",{"2":{"260":1}}],["8123564f",{"2":{"285":1}}],["812267460526811e",{"2":{"287":1}}],["812237f",{"2":{"285":1}}],["812236",{"2":{"260":1}}],["812231",{"2":{"260":1}}],["812228",{"2":{"260":1}}],["812225",{"2":{"260":1}}],["812223",{"2":{"260":1}}],["812220",{"2":{"260":1}}],["812217",{"2":{"260":1}}],["812214",{"2":{"260":1}}],["812211",{"2":{"260":1}}],["812208",{"2":{"260":1}}],["81240",{"2":{"204":1}}],["81175237060008e",{"2":{"287":1}}],["811489f",{"2":{"285":1}}],["8110236",{"2":{"264":2}}],["811",{"2":{"260":4}}],["8112s\\ttraining",{"2":{"234":1}}],["811523",{"2":{"147":1}}],["81",{"2":{"213":2,"229":1,"234":6,"236":1,"245":36,"260":29}}],["8180064396950286e",{"2":{"287":1}}],["8187343541857908e",{"2":{"287":1}}],["81879705",{"2":{"264":1}}],["8186989f",{"2":{"285":1}}],["818480",{"2":{"260":1}}],["818475",{"2":{"260":1}}],["818472",{"2":{"260":1}}],["818470",{"2":{"260":1}}],["818467",{"2":{"260":1}}],["818464",{"2":{"260":1}}],["818461",{"2":{"260":1}}],["818458",{"2":{"260":1}}],["818455",{"2":{"260":1}}],["818452",{"2":{"260":1}}],["818439",{"2":{"260":1}}],["81842",{"2":{"253":1}}],["818",{"2":{"187":1,"282":1}}],["818807",{"2":{"147":1}}],["810913164675587e",{"2":{"287":1}}],["810568f",{"2":{"285":1}}],["8106497f",{"2":{"285":1}}],["810",{"2":{"187":1,"192":1}}],["81012277e",{"2":{"90":1}}],["813661273152041e",{"2":{"287":1}}],["813745335309354e",{"2":{"287":1}}],["81377im",{"2":{"185":1}}],["813143f",{"2":{"285":1}}],["8130",{"2":{"278":1}}],["813359",{"2":{"147":1}}],["8151585f",{"2":{"285"
:1}}],["8158957f",{"2":{"285":1}}],["8158",{"2":{"276":1}}],["815",{"2":{"276":2,"282":1}}],["815383",{"2":{"260":1}}],["815379",{"2":{"260":1}}],["815376",{"2":{"260":1}}],["815373",{"2":{"260":1}}],["815370",{"2":{"260":1}}],["815367",{"2":{"260":1}}],["815365",{"2":{"260":1}}],["815362",{"2":{"260":1}}],["815357",{"2":{"260":1}}],["815354",{"2":{"260":1}}],["815341",{"2":{"260":1}}],["815336",{"2":{"168":1}}],["815659f",{"2":{"118":1}}],["8837704278719262e",{"2":{"287":1}}],["8837891",{"2":{"274":1}}],["883097888152574e",{"2":{"287":1}}],["883672f",{"2":{"285":1}}],["8834435",{"2":{"267":1}}],["881671372602889e",{"2":{"287":1}}],["881702f",{"2":{"285":1}}],["881",{"2":{"260":15}}],["88135",{"2":{"147":1}}],["8893",{"2":{"278":1}}],["889",{"2":{"260":4,"282":1}}],["884",{"2":{"260":3,"282":2}}],["884919f",{"2":{"118":1}}],["88",{"2":{"213":2,"245":14,"260":12}}],["8804327300928263e",{"2":{"287":1}}],["880328395936734e",{"2":{"287":1}}],["8808594",{"2":{"274":1}}],["880",{"2":{"192":1,"260":6,"282":1}}],["8827359f",{"2":{"285":1}}],["8827085f",{"2":{"285":1}}],["882",{"2":{"282":1}}],["88214755",{"2":{"168":1}}],["8828",{"2":{"147":1}}],["886005136988138e",{"2":{"287":1}}],["886018",{"2":{"147":1}}],["886",{"2":{"229":1,"260":6,"276":1,"282":1}}],["886353",{"2":{"147":1}}],["88897",{"2":{"147":1}}],["88889",{"2":{"78":2}}],["8873",{"2":{"278":1}}],["887229e",{"2":{"225":1}}],["887089",{"2":{"147":1}}],["8875",{"2":{"147":1}}],["8874373",{"2":{"134":1}}],["885580955170764e",{"2":{"287":1}}],["8853016f",{"2":{"285":1}}],["885355",{"2":{"118":1}}],["8854318f",{"2":{"285":1}}],["885",{"2":{"260":5}}],["8852951933960912e",{"2":{"287":1}}],["8852217",{"2":{"168":1}}],["885272",{"2":{"147":1}}],["8321",{"2":{"282":1}}],["83289f",{"2":{"89":1}}],["839611\\tthroughput",{"2":{"295":1}}],["839537467418251e",{"2":{"287":1}}],["8394504f",{"2":{"285":1}}],["839",{"2":{"282":1}}],["8393",{"2":{"188":1}}],["8393034",{"2":{"188":1}}],["839303",{"2":{"188":2}}],["8353",{"2":{"295":1
}}],["835763589894574e",{"2":{"287":1}}],["8358964f",{"2":{"285":1}}],["835",{"2":{"276":1}}],["8314",{"2":{"282":1}}],["831206",{"2":{"274":1}}],["8317274f",{"2":{"196":1}}],["8385266",{"2":{"267":1}}],["838",{"2":{"260":3,"282":1}}],["838790",{"2":{"260":1}}],["838756f",{"2":{"118":1}}],["8381",{"2":{"253":3}}],["83",{"2":{"213":1,"234":6,"236":1,"245":2,"260":5}}],["8364",{"2":{"278":1}}],["836937f",{"2":{"285":1}}],["8369",{"2":{"253":2}}],["836",{"2":{"192":1,"210":2}}],["836232",{"2":{"147":1}}],["83084594331987e",{"2":{"287":1}}],["830899",{"2":{"260":1}}],["830987f",{"2":{"285":1}}],["830940",{"2":{"260":1}}],["830936",{"2":{"260":1}}],["830933",{"2":{"260":1}}],["830930",{"2":{"260":1}}],["830927",{"2":{"260":1}}],["830924",{"2":{"260":1}}],["830921",{"2":{"260":1}}],["830918",{"2":{"260":1}}],["830916",{"2":{"260":1}}],["830913",{"2":{"260":1}}],["830",{"2":{"192":1,"282":1}}],["837076912639522e",{"2":{"287":1}}],["837447",{"2":{"260":1}}],["837443",{"2":{"260":1}}],["837440",{"2":{"260":1}}],["837437",{"2":{"260":1}}],["837434",{"2":{"260":1}}],["837431",{"2":{"260":1}}],["837429",{"2":{"260":1}}],["837424",{"2":{"260":1}}],["837421",{"2":{"260":1}}],["837418",{"2":{"260":1}}],["837405",{"2":{"260":1}}],["837",{"2":{"187":1,"282":1}}],["8379912",{"2":{"168":1}}],["8344706f",{"2":{"285":1}}],["8346457",{"2":{"264":2}}],["834233",{"2":{"260":1}}],["834228",{"2":{"260":1}}],["834225",{"2":{"260":1}}],["834222",{"2":{"260":1}}],["834220",{"2":{"260":1}}],["834217",{"2":{"260":1}}],["834214",{"2":{"260":1}}],["834211",{"2":{"260":1}}],["834208",{"2":{"260":1}}],["834205",{"2":{"260":1}}],["834",{"2":{"260":3}}],["834190",{"2":{"260":1}}],["8341",{"2":{"229":1}}],["834073",{"2":{"168":1}}],["834789",{"2":{"147":1}}],["8338686637164163e",{"2":{"287":1}}],["833807204501072e",{"2":{"287":1}}],["8338072f",{"2":{"285":1}}],["8336450325969847e",{"2":{"287":1}}],["833785f",{"2":{"285":1}}],["8337032",{"2":{"143":1}}],["8331",{"2":{"278":1}}],["833",{"2":{"192":1,"276"
:1,"282":1}}],["83333",{"2":{"78":1}}],["833333333333333",{"2":{"50":1}}],["8039465805291092e",{"2":{"287":1}}],["803389029209924e",{"2":{"287":1}}],["8038461f",{"2":{"285":1}}],["803404f",{"2":{"285":1}}],["803491",{"2":{"147":1}}],["803",{"2":{"276":1}}],["8031496",{"2":{"264":2}}],["8022490603825117e",{"2":{"287":1}}],["8022136f",{"2":{"285":1}}],["802866041404844e",{"2":{"287":1}}],["8027405f",{"2":{"285":1}}],["8027763f",{"2":{"118":1}}],["8021",{"2":{"278":1}}],["802",{"2":{"276":1}}],["802970",{"2":{"260":1}}],["802966",{"2":{"260":1}}],["802963",{"2":{"260":1}}],["802959",{"2":{"260":1}}],["802956",{"2":{"260":1}}],["802953",{"2":{"260":1}}],["802950",{"2":{"260":1}}],["802947",{"2":{"260":1}}],["802944",{"2":{"260":1}}],["802941",{"2":{"260":1}}],["802928",{"2":{"260":1}}],["808",{"2":{"260":1}}],["808093",{"2":{"147":1}}],["804",{"2":{"276":2}}],["804977f",{"2":{"285":1}}],["804968",{"2":{"274":1}}],["8049s\\ttraining",{"2":{"236":1}}],["80465555",{"2":{"196":1}}],["8071",{"2":{"295":1}}],["8076464949467985e",{"2":{"287":1}}],["8079",{"2":{"278":1}}],["8079s\\ttraining",{"2":{"234":1}}],["807",{"2":{"260":3,"276":1}}],["807818e",{"2":{"225":1}}],["80",{"2":{"213":2,"234":10,"236":2,"238":1,"245":2,"246":1,"260":1,"277":1}}],["8069",{"2":{"278":1}}],["806015",{"2":{"260":1}}],["806010",{"2":{"260":1}}],["806008",{"2":{"260":1}}],["806005",{"2":{"260":1}}],["806002",{"2":{"260":1}}],["8068",{"2":{"240":1}}],["8067697",{"2":{"167":1}}],["80644",{"2":{"147":1}}],["80128",{"2":{"147":1}}],["8002180440631902e",{"2":{"287":1}}],["8002308f",{"2":{"285":1}}],["8000",{"2":{"260":7,"295":1}}],["800118310282789e",{"2":{"287":1}}],["8001",{"2":{"196":1,"253":1}}],["8003641",{"2":{"167":1}}],["800",{"2":{"119":1,"260":3,"276":2,"296":1}}],["809259159832102e",{"2":{"287":1}}],["80922f",{"2":{"118":1}}],["809427047338184e",{"2":{"287":1}}],["809456",{"2":{"188":1}}],["8091490017820345e",{"2":{"287":1}}],["809125f",{"2":{"285":1}}],["8093298f",{"2":{"285":1}}],["809068",{"
",{"2":{"260":1}}],["999438",{"2":{"260":1}}],["999435",{"2":{"260":1}}],["999432",{"2":{"260":1}}],["999429",{"2":{"260":1}}],["999426",{"2":{"260":1}}],["999423",{"2":{"260":1}}],["999420",{"2":{"260":1}}],["999418",{"2":{"260":1}}],["999414",{"2":{"260":1}}],["999",{"2":{"89":7,"266":1,"268":2}}],["9999881",{"2":{"127":2}}],["999986",{"2":{"126":1}}],["99998605",{"2":{"126":1}}],["9999092f0",{"2":{"58":1}}],["9999546f0",{"2":{"58":2}}],["9",{"2":{"34":2,"50":4,"58":4,"76":34,"77":1,"78":3,"80":6,"81":1,"84":1,"89":9,"90":1,"114":1,"118":1,"143":2,"146":2,"147":2,"165":1,"167":1,"168":2,"187":8,"188":1,"192":4,"196":1,"199":2,"204":2,"209":1,"212":1,"213":2,"229":17,"230":1,"234":52,"236":10,"240":1,"241":2,"245":2,"253":3,"260":9,"263":2,"266":1,"268":2,"274":2,"276":52,"278":4,"282":60,"285":62,"287":87}}],["×dn−2×1×1",{"2":{"66":1}}],["×dn−2×1×1`",{"2":{"41":1}}],["×dn−2×1×dn",{"2":{"41":1,"66":1}}],["×",{"2":{"19":6,"197":1,"200":2,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["≥",{"2":{"19":2,"51":1,"58":4,"86":1,"253":1,"293":1,"295":1}}],["zstd",{"2":{"276":1,"282":1}}],["znver2",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["zlib",{"2":{"192":1}}],["zipfile",{"2":{"229":1}}],["zip",{"2":{"119":1,"253":1,"258":1,"293":2}}],["zips",{"2":{"23":1}}],["zyg",{"2":{"118":2}}],["zygotevjp",{"2":{"237":13}}],["zygotecolorsext",{"2":{"229":2,"240":2,"282":2}}],["zygotetrackerext",{"2":{"229":1,"282":1}}],["zygoterules",{"2":{"192":1,"276":1,"282":1}}],["zygote",{"2":{"18":1,"19":1,"49":1,"50":1,"56":2,"84":4,"89":3,"96":1,"100":1,"118":3,"121":1,"122":3,"127":2,"164":3,"165":1,"166":1,"167":3,"168":3,"172":3,"173":2,"174":2,"187":1,"190":1,"192":5,"193":4,"198":1,"208":1,"229":4,"240":2,"282":3}}],["z=",{"2":{"96":1}}],["z=σ",{"2":{"38":1}}],["zenodo",{"2":{"71":3}}],["zeroing",{"2":{"23":4,"52":1}}],["zero",{"2":{"15":3,"23":3,"35":1,"36":3,"39"
:1,"50":2,"51":2,"52":9,"58":1,"81":2,"85":2,"166":1,"188":1}}],["zeroed",{"2":{"15":3}}],["zerosc64",{"2":{"16":1}}],["zerosc32",{"2":{"16":1}}],["zerosc16",{"2":{"16":1}}],["zeros64",{"2":{"16":1}}],["zeros32",{"2":{"16":1,"153":2}}],["zeros16",{"2":{"16":1}}],["zeros",{"2":{"15":3,"16":6,"75":3,"76":5,"77":1,"79":2,"82":7,"86":2,"188":1,"252":3,"277":1,"278":1,"293":1}}],["z",{"2":{"39":3,"77":4,"86":5,"96":2,"271":8,"273":3,"274":2,"278":2,"279":8,"291":7,"296":7}}],["≤",{"2":{"15":2,"39":2,"73":1,"86":1,"96":2,"293":2}}],["ylabel=",{"2":{"254":1,"264":1,"268":1,"277":1,"284":1,"285":1,"288":2,"291":1}}],["ys",{"2":{"254":6}}],["y∈",{"2":{"252":1}}],["yᵢ",{"2":{"119":2}}],["y=x2−2x",{"2":{"264":1}}],["y=x−e",{"2":{"41":1,"66":1}}],["y==0",{"2":{"86":1}}],["yrot",{"2":{"86":3}}],["yes",{"2":{"86":1,"99":1}}],["year",{"2":{"71":2}}],["yet",{"2":{"45":1,"53":1,"86":1,"257":1}}],["yuxin",{"2":{"66":1}}],["yann",{"2":{"50":1}}],["y+ϵ",{"2":{"50":1}}],["yi∈rm",{"2":{"196":1}}],["yi",{"2":{"50":2,"196":1}}],["y~=",{"2":{"50":2}}],["y~",{"2":{"50":4}}],["y^2+y∗max",{"2":{"50":1}}],["y^−y∗log⁡",{"2":{"50":1}}],["y^−y",{"2":{"50":1}}],["y^",{"2":{"50":7}}],["y^+ϵ",{"2":{"50":3}}],["ŷ",{"2":{"50":11,"83":8,"165":2,"166":2,"167":2,"203":2,"204":5}}],["y3",{"2":{"34":1}}],["y2",{"2":{"34":1,"77":1,"277":2}}],["y1",{"2":{"34":1,"77":1,"277":2}}],["y",{"2":{"15":3,"19":5,"34":8,"35":2,"37":9,"39":10,"40":6,"41":4,"50":63,"51":3,"52":6,"55":1,"56":10,"61":2,"62":2,"66":1,"73":1,"77":4,"78":2,"82":4,"83":3,"84":5,"86":25,"89":1,"90":6,"96":3,"118":11,"119":2,"127":2,"134":4,"153":3,"154":4,"163":5,"165":7,"166":7,"196":11,"200":4,"201":5,"202":3,"203":12,"204":12,"209":6,"211":2,"212":2,"223":4,"230":6,"233":2,"234":2,"241":4,"242":2,"244":2,"245":2,"252":7,"254":1,"259":9,"264":5,"268":5,"271":5,"273":3,"277":1,"283":6,"285":3,"291":5,"292":11}}],["yoshua",{"2":{"15":2}}],["yourself",{"2":{"220":1}}],["your",{"0":{"137":1},"2":{"41":3,"45":2,"56":2,"58":1,"94":1,"102":1,"125":2
,"127":1,"141":1,"151":1,"153":2,"154":2,"155":3,"160":1,"164":1,"165":2,"177":2,"178":1,"180":2,"205":1}}],["you",{"2":{"2":1,"6":2,"24":2,"45":1,"49":3,"56":5,"58":4,"68":3,"71":2,"73":1,"84":1,"86":5,"88":3,"89":1,"90":1,"91":1,"94":1,"95":1,"114":1,"118":1,"119":2,"124":1,"125":3,"126":2,"127":2,"128":1,"137":2,"141":1,"144":1,"148":3,"151":1,"153":9,"154":6,"155":5,"159":1,"160":2,"164":5,"165":4,"166":1,"173":2,"177":2,"180":2,"185":1,"188":1,"189":3,"190":1,"192":1,"198":1,"202":1,"205":1,"219":1,"220":1,"234":2,"242":1,"252":1,"260":1,"265":1,"274":1,"285":1}}],["5g",{"2":{"268":1}}],["5gb",{"2":{"238":1,"246":1}}],["5\\ttrain",{"2":{"260":1}}],["5+0",{"2":{"238":1,"246":1}}],["5fs",{"2":{"245":1}}],["5f",{"2":{"204":3,"234":2}}],["5f0",{"2":{"58":8,"86":2,"143":1,"146":1,"172":1,"203":1,"277":12}}],["5662005521181286e",{"2":{"287":1}}],["5665761476211666e",{"2":{"287":1}}],["5661135f",{"2":{"285":1}}],["5667161f",{"2":{"285":1}}],["5649924437308832e",{"2":{"287":1}}],["5646903f",{"2":{"285":1}}],["5643",{"2":{"282":1}}],["564",{"2":{"282":1}}],["5641",{"2":{"147":1}}],["563",{"2":{"276":1}}],["5621",{"2":{"295":1}}],["562357",{"2":{"264":1}}],["5627227",{"2":{"165":1}}],["561498414358847e",{"2":{"287":1}}],["5615",{"2":{"240":1}}],["5610",{"2":{"240":1}}],["5613574f",{"2":{"285":1}}],["5613",{"2":{"229":1}}],["5613775f",{"2":{"118":1}}],["56",{"2":{"213":1,"234":1,"245":1,"260":23}}],["56s",{"2":{"213":2}}],["5604907793802504e",{"2":{"287":1}}],["5608524f",{"2":{"285":1}}],["56080",{"2":{"204":1}}],["5609",{"2":{"282":1}}],["5606484f",{"2":{"285":1}}],["5606",{"2":{"276":1}}],["560",{"2":{"238":1,"246":1,"282":1}}],["5601s\\ttraining",{"2":{"234":1}}],["56055",{"2":{"188":1}}],["565",{"2":{"192":1}}],["56542",{"2":{"147":1}}],["5676858f",{"2":{"285":1}}],["5678337f",{"2":{"285":1}}],["567987955479808e",{"2":{"287":1}}],["5679",{"2":{"278":1}}],["5675557670686644",{"2":{"191":1}}],["5677332440145157e",{"2":{"287":1}}],["56779",{"2":{"188":1}}],["567794",{"2"
:{"188":2}}],["567746",{"2":{"185":1}}],["5683594",{"2":{"274":1}}],["5681",{"2":{"282":1}}],["5681s\\ttraining",{"2":{"236":1}}],["56817",{"2":{"147":1}}],["5688s\\ttraining",{"2":{"234":1}}],["568",{"2":{"229":1}}],["568298",{"2":{"147":1}}],["569339",{"2":{"295":1}}],["569962f",{"2":{"285":1}}],["569024",{"2":{"274":1}}],["569012",{"2":{"147":1}}],["569522",{"2":{"260":1}}],["56926",{"2":{"147":1}}],["5727478828392096e",{"2":{"287":1}}],["5729056f",{"2":{"285":1}}],["572",{"2":{"282":1}}],["5799",{"2":{"278":1}}],["579",{"2":{"276":1}}],["5797",{"2":{"147":1,"260":1}}],["575985f",{"2":{"285":1}}],["575",{"2":{"276":1}}],["57555",{"2":{"147":1}}],["57115207947323e",{"2":{"287":1}}],["5717224f",{"2":{"285":1}}],["5712",{"2":{"278":1}}],["571",{"2":{"260":1}}],["5714",{"2":{"260":6}}],["571858973546631e",{"2":{"287":1}}],["5718",{"2":{"147":1,"283":2}}],["5766973440404484e",{"2":{"287":1}}],["5766s\\ttraining",{"2":{"234":1}}],["576768109372526e",{"2":{"287":1}}],["57671577",{"2":{"155":1}}],["576559290565641e",{"2":{"287":1}}],["5763206457895183e",{"2":{"287":1}}],["5763341929826572e",{"2":{"287":1}}],["576856f",{"2":{"285":1}}],["5768614f",{"2":{"285":1}}],["5768s\\ttraining",{"2":{"234":1}}],["5760326f",{"2":{"285":1}}],["576",{"2":{"282":1}}],["5761932f",{"2":{"285":1}}],["5761",{"2":{"240":1}}],["57",{"2":{"234":4,"245":2,"260":745,"276":1,"282":1}}],["57s",{"2":{"213":2}}],["574338144268075e",{"2":{"287":1}}],["5748880569342254e",{"2":{"287":1}}],["574478f",{"2":{"285":1}}],["57498033675986e",{"2":{"287":1}}],["5749885f",{"2":{"285":1}}],["57492\\taccuracy",{"2":{"204":1}}],["5746s\\ttraining",{"2":{"234":1}}],["574128f",{"2":{"118":1}}],["5731",{"2":{"240":1}}],["5735s\\ttraining",{"2":{"234":1}}],["5732",{"2":{"229":1}}],["573",{"2":{"192":1,"229":1}}],["570074834024823e",{"2":{"287":1}}],["570324",{"2":{"264":1}}],["570",{"2":{"192":1,"276":1,"282":2}}],["577362477045988e",{"2":{"287":1}}],["5772078f",{"2":{"285":1}}],["5772s\\ttraining",{"2":{"234":1}}],["
577",{"2":{"276":1}}],["5776s\\ttraining",{"2":{"234":1}}],["5776052f0",{"2":{"50":1}}],["5771s\\ttraining",{"2":{"234":1}}],["577181",{"2":{"188":1}}],["5788",{"2":{"276":1}}],["578243732",{"2":{"253":2}}],["578",{"2":{"187":2,"260":3,"282":1}}],["578686",{"2":{"147":1}}],["591865110448923e",{"2":{"287":1}}],["591725f",{"2":{"285":1}}],["591793",{"2":{"147":1}}],["591",{"2":{"276":1}}],["596659763544528e",{"2":{"287":1}}],["596",{"2":{"276":1}}],["596730",{"2":{"274":1}}],["596318",{"2":{"147":1}}],["5900952330766533e",{"2":{"287":1}}],["5900598f",{"2":{"285":1}}],["5905511",{"2":{"264":2}}],["590",{"2":{"229":1}}],["598391712",{"2":{"253":2}}],["598",{"2":{"229":1,"260":1,"276":1}}],["598188",{"2":{"147":1}}],["59s",{"2":{"213":1}}],["5956465206446694e",{"2":{"287":1}}],["595683448733404e",{"2":{"287":1}}],["595747f",{"2":{"285":1}}],["595",{"2":{"276":3}}],["595323",{"2":{"274":1}}],["59519",{"2":{"204":1}}],["5954",{"2":{"187":1}}],["59307770899759e",{"2":{"287":1}}],["593513526473096e",{"2":{"287":1}}],["59397485454242e",{"2":{"287":1}}],["5931658f",{"2":{"285":1}}],["5932255f",{"2":{"285":1}}],["5938493f",{"2":{"285":1}}],["593876f",{"2":{"285":1}}],["59338\\taccuracy",{"2":{"204":1}}],["593",{"2":{"187":1,"260":2,"282":1}}],["59377456",{"2":{"147":1}}],["59",{"2":{"168":1,"245":3,"260":159,"276":1,"278":1,"282":1}}],["5947882966072905e",{"2":{"287":1}}],["5947266",{"2":{"274":1}}],["594025f",{"2":{"285":1}}],["5946554f",{"2":{"285":1}}],["594664",{"2":{"147":1}}],["5943626",{"2":{"264":1}}],["594",{"2":{"260":1,"276":1,"282":1}}],["594261",{"2":{"168":1}}],["597",{"2":{"229":1}}],["5979974",{"2":{"165":1}}],["59751064",{"2":{"89":1}}],["599908f",{"2":{"285":1}}],["5999135f",{"2":{"285":1}}],["5999714",{"2":{"143":1}}],["5993",{"2":{"282":1}}],["599",{"2":{"276":1}}],["599632",{"2":{"147":1}}],["5929",{"2":{"278":1}}],["592982",{"2":{"264":1}}],["592",{"2":{"260":3}}],["59259",{"2":{"236":1}}],["592152",{"2":{"147":1}}],["59277534",{"2":{"118":1}}],["580",{"2"
:{"276":1}}],["5800781",{"2":{"274":1}}],["5865717918577946e",{"2":{"287":1}}],["5866031f",{"2":{"285":1}}],["5869141",{"2":{"274":1}}],["5868",{"2":{"199":1}}],["582927190651587e",{"2":{"287":1}}],["582538f",{"2":{"285":1}}],["582",{"2":{"276":3,"282":1}}],["5826772",{"2":{"264":2}}],["5822",{"2":{"229":1}}],["5848538f",{"2":{"285":1}}],["5846",{"2":{"282":1}}],["584657\\ttrain",{"2":{"260":1}}],["584067\\tval",{"2":{"260":1}}],["5847s\\ttraining",{"2":{"234":1}}],["58",{"2":{"234":5,"236":1,"245":2,"260":418}}],["58s",{"2":{"213":1}}],["581",{"2":{"192":1}}],["581354",{"2":{"147":1}}],["58715504",{"2":{"264":1}}],["5871s\\ttraining",{"2":{"236":1}}],["587",{"2":{"260":4,"282":1}}],["58721",{"2":{"188":1}}],["58720636",{"2":{"188":1}}],["58720636f0",{"2":{"188":1}}],["587206",{"2":{"188":2}}],["587816",{"2":{"167":1}}],["585199986335995e",{"2":{"287":1}}],["5857",{"2":{"229":1}}],["585",{"2":{"187":1,"260":3}}],["5889s\\ttraining",{"2":{"234":1}}],["5880s\\ttraining",{"2":{"234":1}}],["5880",{"2":{"229":1}}],["588045",{"2":{"185":1}}],["588",{"2":{"187":1,"282":1}}],["5887876639323259e",{"2":{"287":1}}],["588749",{"2":{"147":1}}],["58873f",{"2":{"89":1}}],["589929\\tthroughput",{"2":{"295":1}}],["5892942382952084e",{"2":{"287":1}}],["5895315f",{"2":{"285":1}}],["5897",{"2":{"282":1}}],["589075",{"2":{"274":1}}],["5890444",{"2":{"147":1}}],["589",{"2":{"187":1,"276":1,"282":1}}],["589177",{"2":{"147":1}}],["58912802",{"2":{"119":1}}],["5837817231014925e",{"2":{"287":1}}],["5831402597941594e",{"2":{"287":1}}],["583191",{"2":{"147":1}}],["583649f",{"2":{"285":1}}],["5835",{"2":{"278":1}}],["583",{"2":{"187":1}}],["58338",{"2":{"147":1}}],["58333",{"2":{"78":1}}],["58328676",{"2":{"147":1}}],["530390\\tthroughput",{"2":{"295":1}}],["530237f",{"2":{"285":1}}],["530",{"2":{"282":1}}],["5306",{"2":{"278":1}}],["5301181",{"2":{"147":1}}],["5352",{"2":{"282":1}}],["53522",{"2":{"260":1}}],["5354",{"2":{"278":1}}],["535",{"2":{"276":1}}],["534692355878928e",{"2":{"287":1}}],
["5345895f",{"2":{"285":1}}],["5345728",{"2":{"89":1}}],["5347075f",{"2":{"285":1}}],["5349",{"2":{"282":1}}],["534",{"2":{"282":1}}],["534099",{"2":{"274":1}}],["534014",{"2":{"260":1}}],["534009",{"2":{"260":1}}],["534006",{"2":{"260":1}}],["534003",{"2":{"260":1}}],["534000",{"2":{"260":1}}],["5338686867779225e",{"2":{"287":1}}],["533",{"2":{"282":1}}],["533903f",{"2":{"285":1}}],["5339577f",{"2":{"285":1}}],["533997",{"2":{"260":1}}],["533994",{"2":{"260":1}}],["533991",{"2":{"260":1}}],["533988",{"2":{"260":1}}],["533985",{"2":{"260":1}}],["533971",{"2":{"260":1}}],["53333336f0",{"2":{"58":1}}],["53s",{"2":{"213":1}}],["531988234051425e",{"2":{"287":1}}],["531859f",{"2":{"285":1}}],["53181f",{"2":{"89":1}}],["531536",{"2":{"260":1}}],["531531",{"2":{"260":1}}],["531529",{"2":{"260":1}}],["531526",{"2":{"260":1}}],["531523",{"2":{"260":1}}],["531520",{"2":{"260":1}}],["531517",{"2":{"260":1}}],["531514",{"2":{"260":1}}],["531511",{"2":{"260":1}}],["531508",{"2":{"260":1}}],["531492",{"2":{"260":1}}],["5310851",{"2":{"196":1}}],["538837f",{"2":{"285":1}}],["5385857",{"2":{"264":1}}],["538962693164685e",{"2":{"287":1}}],["538966",{"2":{"260":1}}],["538961",{"2":{"260":1}}],["538958",{"2":{"260":1}}],["538955",{"2":{"260":1}}],["538953",{"2":{"260":1}}],["538950",{"2":{"260":1}}],["538947",{"2":{"260":1}}],["538944",{"2":{"260":1}}],["538941",{"2":{"260":1}}],["538936",{"2":{"260":1}}],["538923",{"2":{"260":1}}],["5380",{"2":{"192":1}}],["538",{"2":{"192":1,"282":1}}],["5386295f",{"2":{"118":1}}],["536155\\tthroughput",{"2":{"295":1}}],["5362437709569422e",{"2":{"287":1}}],["536979088568644e",{"2":{"287":1}}],["536662873406219e",{"2":{"287":1}}],["536482f",{"2":{"285":1}}],["536461",{"2":{"260":1}}],["536457",{"2":{"260":1}}],["536454",{"2":{"260":1}}],["536451",{"2":{"260":1}}],["536448",{"2":{"260":1}}],["536445",{"2":{"260":1}}],["536442",{"2":{"260":1}}],["536439",{"2":{"260":1}}],["536436",{"2":{"260":1}}],["536433",{"2":{"260":1}}],["536420",{"2":{"260":1}}],
["5365167",{"2":{"168":1}}],["5367613",{"2":{"167":1}}],["539704648946548e",{"2":{"287":1}}],["53978",{"2":{"147":1}}],["5395718f",{"2":{"285":1}}],["539",{"2":{"260":3,"276":1}}],["53926",{"2":{"147":1}}],["539333",{"2":{"147":1}}],["537610976983005e",{"2":{"287":1}}],["5372",{"2":{"229":1,"282":1}}],["537",{"2":{"187":1,"276":1}}],["537721",{"2":{"147":1}}],["53753",{"2":{"147":1}}],["53789175",{"2":{"89":1}}],["532076\\tthroughput",{"2":{"295":1}}],["5320771",{"2":{"147":1}}],["532156710375631e",{"2":{"287":1}}],["532192f",{"2":{"285":1}}],["532822182890829e",{"2":{"287":1}}],["532685340028104e",{"2":{"287":1}}],["5327217f",{"2":{"285":1}}],["532458",{"2":{"274":1}}],["532",{"2":{"187":1}}],["532299",{"2":{"147":1}}],["5325065f",{"2":{"118":1}}],["53",{"2":{"127":1,"213":2,"234":1,"245":3,"260":20}}],["5468509822384623e",{"2":{"287":1}}],["5467599735927566e",{"2":{"287":1}}],["546791",{"2":{"260":1}}],["546786",{"2":{"260":1}}],["546783",{"2":{"260":1}}],["546780",{"2":{"260":1}}],["546777",{"2":{"260":1}}],["546773",{"2":{"260":1}}],["546770",{"2":{"260":1}}],["546767",{"2":{"260":1}}],["546764",{"2":{"260":1}}],["546760",{"2":{"260":1}}],["546747",{"2":{"260":1}}],["5477080885073506e",{"2":{"287":1}}],["5477656f",{"2":{"285":1}}],["5474579507849773e",{"2":{"287":1}}],["5478373f",{"2":{"285":1}}],["5479672f",{"2":{"285":1}}],["5476s\\ttraining",{"2":{"234":1}}],["5476424498276177",{"2":{"191":4}}],["547642",{"2":{"188":1}}],["541933f",{"2":{"285":1}}],["5419",{"2":{"282":1}}],["5413",{"2":{"278":1}}],["541",{"2":{"276":1,"282":1}}],["541474\\ttrain",{"2":{"260":1}}],["541474",{"2":{"260":1}}],["541469",{"2":{"260":1}}],["541466",{"2":{"260":1}}],["541463",{"2":{"260":1}}],["541460",{"2":{"260":1}}],["541457",{"2":{"260":1}}],["541454",{"2":{"260":1}}],["541451",{"2":{"260":1}}],["541448",{"2":{"260":1}}],["541445",{"2":{"260":1}}],["541431",{"2":{"260":1}}],["545261094679247e",{"2":{"287":1}}],["545",{"2":{"276":1}}],["545164\\tval",{"2":{"260":1}}],["5450",{"2"
:{"229":1}}],["549212847121244e",{"2":{"287":1}}],["549813f",{"2":{"285":1}}],["54983395f0",{"2":{"58":2}}],["549408",{"2":{"260":1}}],["549403",{"2":{"260":1}}],["549400",{"2":{"260":1}}],["5494s\\ttraining",{"2":{"236":1}}],["549355",{"2":{"274":1}}],["549397",{"2":{"260":1}}],["549395",{"2":{"260":1}}],["549392",{"2":{"260":1}}],["549389",{"2":{"260":1}}],["549386",{"2":{"260":1}}],["549382",{"2":{"260":1}}],["549379",{"2":{"260":1}}],["549367",{"2":{"260":1}}],["54035504467461e",{"2":{"287":1}}],["54055652041523e",{"2":{"287":1}}],["5401716f",{"2":{"285":1}}],["540226f",{"2":{"285":1}}],["54082f",{"2":{"285":1}}],["5406592",{"2":{"264":1}}],["5406784f",{"2":{"118":1}}],["540",{"2":{"237":8,"282":1}}],["5404s\\ttraining",{"2":{"236":1}}],["5480553444632644e",{"2":{"287":1}}],["548012",{"2":{"147":1}}],["5487106811123998e",{"2":{"287":1}}],["5488362f",{"2":{"285":1}}],["5484",{"2":{"278":1}}],["5489093",{"2":{"264":1}}],["5489s\\ttraining",{"2":{"234":1}}],["544",{"2":{"282":1}}],["544118",{"2":{"260":1}}],["544114",{"2":{"260":1}}],["544111",{"2":{"260":1}}],["544108",{"2":{"260":1}}],["544105",{"2":{"260":1}}],["544102",{"2":{"260":1}}],["544099",{"2":{"260":1}}],["544096",{"2":{"260":1}}],["544093",{"2":{"260":1}}],["544090",{"2":{"260":1}}],["544077",{"2":{"260":1}}],["5449s\\ttraining",{"2":{"236":1}}],["5447",{"2":{"229":1}}],["544659f",{"2":{"285":1}}],["54465",{"2":{"147":1}}],["5436s\\ttraining",{"2":{"234":1}}],["543",{"2":{"187":1,"276":1,"282":1}}],["542704",{"2":{"295":1}}],["542682",{"2":{"295":1}}],["542653263354542e",{"2":{"287":1}}],["5420694598712656e",{"2":{"287":1}}],["5424996f",{"2":{"285":1}}],["54248",{"2":{"185":1}}],["542308\\ttrain",{"2":{"260":1}}],["542",{"2":{"187":1,"282":2}}],["54",{"2":{"86":1,"213":1,"260":6,"263":1,"282":1,"295":19}}],["5230",{"2":{"276":1}}],["523",{"2":{"276":1}}],["5234375",{"2":{"274":1}}],["5239",{"2":{"263":1}}],["5238",{"2":{"240":1}}],["52",{"2":{"245":1,"260":14,"278":1}}],["5211193f",{"2":{"285":1}}],["5
217",{"2":{"282":1}}],["521358337055153e",{"2":{"287":1}}],["521389f",{"2":{"285":1}}],["5213",{"2":{"278":1}}],["521244771291087e",{"2":{"287":1}}],["5212",{"2":{"278":1}}],["521",{"2":{"260":2,"276":1}}],["521610",{"2":{"260":1}}],["521605",{"2":{"260":1}}],["521602",{"2":{"260":1}}],["521599",{"2":{"260":1}}],["521596",{"2":{"260":1}}],["521593",{"2":{"260":1}}],["521590",{"2":{"260":1}}],["521586",{"2":{"260":1}}],["521583",{"2":{"260":1}}],["521580",{"2":{"260":1}}],["521568",{"2":{"260":1}}],["5218s\\ttraining",{"2":{"234":1}}],["5210",{"2":{"229":1}}],["52001508315218e",{"2":{"287":1}}],["520434940565415e",{"2":{"287":1}}],["5204",{"2":{"278":1}}],["5206",{"2":{"276":1}}],["520",{"2":{"276":1}}],["5201556f",{"2":{"285":1}}],["5201",{"2":{"229":1}}],["5202",{"2":{"229":1}}],["5203194",{"2":{"147":1}}],["5263672",{"2":{"274":1}}],["526597",{"2":{"260":1}}],["526593",{"2":{"260":1}}],["526590",{"2":{"260":1}}],["526586",{"2":{"260":1}}],["526583",{"2":{"260":1}}],["526580",{"2":{"260":1}}],["526577",{"2":{"260":1}}],["526574",{"2":{"260":1}}],["526570",{"2":{"260":1}}],["526566",{"2":{"260":1}}],["526553",{"2":{"260":1}}],["526",{"2":{"187":1,"229":1}}],["524228",{"2":{"295":1}}],["524828941426518e",{"2":{"287":1}}],["524844f",{"2":{"285":1}}],["524952397367383e",{"2":{"287":1}}],["524983f",{"2":{"285":1}}],["524154",{"2":{"260":1}}],["524149",{"2":{"260":1}}],["524147",{"2":{"260":1}}],["524144",{"2":{"260":1}}],["524141",{"2":{"260":1}}],["524138",{"2":{"260":1}}],["524135",{"2":{"260":1}}],["524132",{"2":{"260":1}}],["524129",{"2":{"260":1}}],["524126",{"2":{"260":1}}],["524110",{"2":{"260":1}}],["524008",{"2":{"188":1}}],["524",{"2":{"187":1}}],["5247808",{"2":{"168":1}}],["5283301511037967e",{"2":{"287":1}}],["5283302f",{"2":{"285":1}}],["528",{"2":{"282":1}}],["528999",{"2":{"260":1}}],["528996",{"2":{"260":1}}],["528993",{"2":{"260":1}}],["528990",{"2":{"260":1}}],["528978",{"2":{"260":1}}],["528563",{"2":{"147":1}}],["5281",{"2":{"71":2}}],["529957649664
991e",{"2":{"287":1}}],["529877794622476e",{"2":{"287":1}}],["529488f",{"2":{"285":1}}],["529",{"2":{"282":1}}],["529020",{"2":{"260":1}}],["529015",{"2":{"260":1}}],["529011",{"2":{"260":1}}],["529008",{"2":{"260":1}}],["529005",{"2":{"260":1}}],["529002",{"2":{"260":1}}],["5293",{"2":{"253":1}}],["529258",{"2":{"147":1}}],["529701",{"2":{"147":1}}],["522",{"2":{"260":2,"276":1}}],["5221656",{"2":{"196":1}}],["522138",{"2":{"168":1}}],["522541",{"2":{"147":1}}],["52272",{"2":{"147":1}}],["522238f",{"2":{"118":1}}],["5259",{"2":{"282":1}}],["525",{"2":{"229":1,"276":1}}],["52564013",{"2":{"147":1}}],["5255064f",{"2":{"118":1}}],["527",{"2":{"282":1}}],["527559",{"2":{"264":2}}],["52772",{"2":{"204":1}}],["52778",{"2":{"78":1}}],["527171",{"2":{"167":1}}],["5271416",{"2":{"134":1}}],["5×3",{"2":{"188":8}}],["5×3×1",{"2":{"77":1}}],["5×10×1×1",{"2":{"78":1}}],["5×9",{"2":{"76":1}}],["5×5",{"2":{"15":1,"85":1,"188":3}}],["5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀",{"2":{"58":12}}],["5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀0⠀",{"2":{"58":1}}],["5x5x6x16xf32>",{"2":{"147":3}}],["5x5x1x6xf32>",{"2":{"147":3}}],["5x",{"2":{"58":1}}],["5∗δ",{"2":{"50":1}}],["5∗|y−y^|2if",{"2":{"50":1}}],["5528",{"2":{"282":1}}],["5525",{"2":{"278":1}}],["5527s\\ttraining",{"2":{"236":1}}],["5529s\\ttraining",{"2":{"234":1}}],["5526",{"2":{"229":1}}],["551206",{"2":{"274":1}}],["5511811",{"2":{"264":2}}],["5519",{"2":{"229":1}}],["55131584",{"2":{"118":1}}],["55s",{"2":{"213":2}}],["555169",{"2":{"274":1}}],["5553797706980106",{"2":{"191":1}}],["555",{"2":{"187":1}}],["55556",{"2":{"78":2,"234":1}}],["5571",{"2":{"282":1}}],["557116",{"2":{"274":1}}],["5579",{"2":{"278":1}}],["557",{"2":{"187":1}}],["55744",{"2":{"147":1}}],["5574339",{"2":{"147":1}}],["5592",{"2":{"295":1}}],["5592724323713863e",{"2":{"287":1}}],["559237f",{"2":{"285":1}}],["559977517444896e",{"2":{"287":1}}],["5598",{"2":{"278":1}}],["55989707",{"2":{"168":1}}],["5593",{"2":{"278":1}}],["559",{"2":{"276":1}}],["559134",{"2":{
"274":1}}],["5590551",{"2":{"264":2}}],["5595843665394066",{"2":{"191":1}}],["5597971",{"2":{"147":1}}],["5536540909339406e",{"2":{"287":1}}],["553633",{"2":{"167":1}}],["553039f",{"2":{"285":1}}],["55306894",{"2":{"167":1}}],["553",{"2":{"276":1,"282":1}}],["5532",{"2":{"188":1}}],["554369125297988e",{"2":{"287":1}}],["5540923167334566e",{"2":{"287":1}}],["554008",{"2":{"147":1}}],["5547153f",{"2":{"285":1}}],["554",{"2":{"187":1}}],["55419f",{"2":{"89":1}}],["550202116087691e",{"2":{"287":1}}],["5502",{"2":{"187":1}}],["550164",{"2":{"147":1}}],["5505513",{"2":{"147":1}}],["5581151733521386e",{"2":{"287":1}}],["5583524f",{"2":{"285":1}}],["558612e",{"2":{"225":1}}],["55864",{"2":{"147":1}}],["55841",{"2":{"147":1}}],["556140",{"2":{"295":1}}],["556567",{"2":{"274":1}}],["5565324",{"2":{"118":1}}],["556467",{"2":{"118":1}}],["55",{"2":{"50":1,"236":1,"260":2,"278":1}}],["5where",{"2":{"50":1}}],["5d",{"2":{"42":1,"77":2,"225":1,"286":1}}],["517026293625028e",{"2":{"287":1}}],["5170",{"2":{"278":1}}],["5171875",{"2":{"264":1}}],["519",{"2":{"276":2,"282":1}}],["519665",{"2":{"264":1}}],["51968503",{"2":{"264":2}}],["519053",{"2":{"260":1}}],["519049",{"2":{"260":1}}],["519046",{"2":{"260":1}}],["519043",{"2":{"260":1}}],["519040",{"2":{"260":1}}],["519037",{"2":{"260":1}}],["519034",{"2":{"260":1}}],["519031",{"2":{"260":1}}],["5190274940628986e",{"2":{"287":1}}],["519028",{"2":{"260":1}}],["519025",{"2":{"260":1}}],["519012",{"2":{"260":1}}],["5136448472676424e",{"2":{"287":1}}],["5135049f",{"2":{"285":1}}],["5133",{"2":{"276":1}}],["513",{"2":{"260":3}}],["51385",{"2":{"147":1}}],["51678905f",{"2":{"285":1}}],["516426352893122e",{"2":{"287":1}}],["516466f",{"2":{"285":1}}],["5164032",{"2":{"196":1}}],["516620473784184e",{"2":{"287":1}}],["516616",{"2":{"260":1}}],["516611",{"2":{"260":1}}],["516608",{"2":{"260":1}}],["516605",{"2":{"260":1}}],["516602",{"2":{"260":1}}],["516599",{"2":{"260":1}}],["516597",{"2":{"260":1}}],["516594",{"2":{"260":1}}],["516591",{"2":
{"260":1}}],["516588",{"2":{"260":1}}],["516575",{"2":{"260":1}}],["516",{"2":{"229":1,"260":1,"276":2,"282":1}}],["5160",{"2":{"229":1}}],["515155012284174e",{"2":{"287":1}}],["515243f",{"2":{"285":1}}],["51524514",{"2":{"196":1}}],["515",{"2":{"276":1,"282":1}}],["5159",{"2":{"240":1}}],["5155236",{"2":{"147":1}}],["510",{"2":{"192":1}}],["5188946f",{"2":{"285":1}}],["5188",{"2":{"276":1}}],["5189",{"2":{"276":1}}],["51852",{"2":{"234":1}}],["518",{"2":{"187":1}}],["518659",{"2":{"147":1}}],["514967f",{"2":{"285":1}}],["514167",{"2":{"260":1}}],["514162",{"2":{"260":1}}],["514160",{"2":{"260":1}}],["514157",{"2":{"260":1}}],["514154",{"2":{"260":1}}],["514151",{"2":{"260":1}}],["514148",{"2":{"260":1}}],["514145",{"2":{"260":1}}],["514142",{"2":{"260":1}}],["514139",{"2":{"260":1}}],["514123",{"2":{"260":1}}],["514",{"2":{"187":1}}],["514301",{"2":{"147":1}}],["5119455425749796e",{"2":{"287":1}}],["5118846f",{"2":{"285":1}}],["511745855540573e",{"2":{"287":1}}],["5117813f",{"2":{"285":1}}],["511727",{"2":{"260":1}}],["511722",{"2":{"260":1}}],["511719",{"2":{"260":1}}],["511716",{"2":{"260":1}}],["511714",{"2":{"260":1}}],["511711",{"2":{"260":1}}],["511708",{"2":{"260":1}}],["511705",{"2":{"260":1}}],["511702",{"2":{"260":1}}],["511699",{"2":{"260":1}}],["511686",{"2":{"260":1}}],["511",{"2":{"229":1}}],["511156",{"2":{"147":1}}],["51152f",{"2":{"89":1}}],["512984\\tthroughput",{"2":{"295":1}}],["51291\\taccuracy",{"2":{"204":1}}],["512155374745555e",{"2":{"287":1}}],["512126\\ttrain",{"2":{"260":1}}],["512039242859044e",{"2":{"287":1}}],["5128111835583855e",{"2":{"287":1}}],["51285f",{"2":{"89":1}}],["5126857f",{"2":{"285":1}}],["512374",{"2":{"274":1}}],["512",{"2":{"56":3,"252":1,"276":1}}],["51",{"2":{"41":3,"126":1,"225":2,"229":1,"245":1,"260":9,"278":1,"295":19}}],["503976283525156e",{"2":{"287":1}}],["503973f",{"2":{"285":1}}],["50322153009969e",{"2":{"287":1}}],["503237f",{"2":{"285":1}}],["503683617983947e",{"2":{"287":1}}],["503",{"2":{"276":1}}],["503
78",{"2":{"147":1}}],["506",{"2":{"282":1}}],["506811",{"2":{"260":1}}],["506806",{"2":{"260":1}}],["506804",{"2":{"260":1}}],["506801",{"2":{"260":1}}],["506798",{"2":{"260":1}}],["506795",{"2":{"260":1}}],["506792",{"2":{"260":1}}],["506789",{"2":{"260":1}}],["506786",{"2":{"260":1}}],["506783",{"2":{"260":1}}],["506769",{"2":{"260":1}}],["506982",{"2":{"147":1}}],["50885f",{"2":{"285":1}}],["5080",{"2":{"278":1}}],["508",{"2":{"260":3,"276":1,"282":2}}],["5083343",{"2":{"147":1}}],["507743\\tval",{"2":{"260":1}}],["507",{"2":{"260":4}}],["5070",{"2":{"240":1}}],["507333im",{"2":{"185":1}}],["504321",{"2":{"260":1}}],["504316",{"2":{"260":1}}],["504313",{"2":{"260":1}}],["504310",{"2":{"260":1}}],["504307",{"2":{"260":1}}],["5042135f",{"2":{"285":1}}],["504293",{"2":{"260":1}}],["504290",{"2":{"260":1}}],["504287",{"2":{"260":1}}],["504284",{"2":{"260":1}}],["504281",{"2":{"260":1}}],["504267",{"2":{"260":1}}],["5047",{"2":{"229":1}}],["504",{"2":{"229":2,"276":1}}],["504491",{"2":{"147":1}}],["5025314906802652e",{"2":{"287":1}}],["5021853f",{"2":{"285":1}}],["5026965",{"2":{"267":1}}],["502",{"2":{"187":1,"253":2,"260":1}}],["502312852219817",{"2":{"50":1}}],["509196323442113e",{"2":{"287":1}}],["5098",{"2":{"276":1}}],["509",{"2":{"276":1,"282":1}}],["509272",{"2":{"260":1}}],["509268",{"2":{"260":1}}],["509265",{"2":{"260":1}}],["509262",{"2":{"260":1}}],["509259",{"2":{"260":1}}],["509256",{"2":{"260":1}}],["509253",{"2":{"260":1}}],["509250",{"2":{"260":1}}],["509247",{"2":{"260":1}}],["509244",{"2":{"260":1}}],["509228",{"2":{"260":1}}],["509239",{"2":{"147":1}}],["509792+0",{"2":{"185":1}}],["5052",{"2":{"295":1}}],["50520\\taccuracy",{"2":{"204":1}}],["5051",{"2":{"282":1}}],["505684",{"2":{"274":1}}],["505",{"2":{"192":1,"282":1}}],["50536",{"2":{"147":1}}],["50502",{"2":{"147":1}}],["501534957362108e",{"2":{"287":1}}],["501381f",{"2":{"285":1}}],["50136f",{"2":{"89":1}}],["501",{"2":{"282":1}}],["501735",{"2":{"260":1}}],["501730",{"2":{"260":1}}],["5017
28",{"2":{"260":1}}],["501725",{"2":{"260":1}}],["501722",{"2":{"260":1}}],["501719",{"2":{"260":1}}],["501716",{"2":{"260":1}}],["501713",{"2":{"260":1}}],["501710",{"2":{"260":1}}],["501707",{"2":{"260":1}}],["501694",{"2":{"260":1}}],["501695",{"2":{"147":1}}],["5016018",{"2":{"167":1}}],["50063384",{"2":{"264":1}}],["5005703",{"2":{"264":1}}],["5000×30×1",{"2":{"278":1}}],["50000",{"2":{"253":50}}],["5000",{"2":{"253":1,"278":5,"295":1}}],["5001",{"2":{"196":1,"253":1}}],["50096",{"2":{"147":1}}],["500",{"2":{"81":1,"119":1,"200":2,"276":2}}],["50",{"2":{"35":2,"41":3,"81":1,"90":1,"204":1,"213":3,"245":108,"253":1,"260":1,"268":1,"274":3,"278":1,"295":1}}],["5",{"2":{"15":8,"23":2,"35":2,"41":6,"47":4,"50":15,"52":5,"56":11,"58":33,"66":9,"71":1,"74":2,"75":3,"76":24,"77":7,"78":47,"80":5,"81":14,"82":4,"85":4,"86":5,"89":11,"114":1,"118":55,"121":1,"132":5,"134":1,"146":1,"147":13,"167":2,"168":2,"185":6,"187":6,"188":29,"189":1,"190":3,"192":6,"194":3,"196":6,"204":2,"210":14,"213":2,"223":1,"229":24,"234":6,"235":2,"236":1,"237":40,"238":2,"240":4,"245":2,"246":2,"258":1,"260":15,"263":3,"264":9,"268":2,"271":1,"274":3,"276":45,"277":4,"278":20,"279":1,"282":49,"283":4,"284":2,"285":773,"287":741,"288":3,"293":2}}],["6\\ttrain",{"2":{"260":1}}],["6f",{"2":{"260":3,"274":2,"295":2}}],["6e",{"2":{"225":1}}],["6d",{"2":{"196":1,"253":2,"295":2}}],["6xf32>",{"2":{"147":2}}],["6x1x5x5xf32>",{"2":{"147":2}}],["656",{"2":{"276":1,"282":3}}],["6566s",{"2":{"274":1}}],["6567s",{"2":{"274":1}}],["6563s",{"2":{"274":1}}],["6569s",{"2":{"274":1}}],["653298854635155e",{"2":{"287":1}}],["6532s",{"2":{"274":1}}],["653386f",{"2":{"285":1}}],["653",{"2":{"282":1}}],["6538s",{"2":{"274":1}}],["6535434",{"2":{"264":2}}],["658397",{"2":{"295":1}}],["658390",{"2":{"295":1}}],["658388",{"2":{"295":1}}],["658385",{"2":{"295":1}}],["658382",{"2":{"295":1}}],["658380",{"2":{"295":1}}],["658377",{"2":{"295":1}}],["658374",{"2":{"295":1}}],["658371",{"2":{"295":1}}],["658368",{"2":{"2
95":1}}],["658324",{"2":{"295":1}}],["6583s",{"2":{"274":1}}],["658011318295868e",{"2":{"287":1}}],["658151f",{"2":{"285":1}}],["6584",{"2":{"278":1}}],["658",{"2":{"276":1}}],["6582s",{"2":{"274":1}}],["6589s",{"2":{"274":1}}],["6599",{"2":{"295":1}}],["659",{"2":{"282":1}}],["6593s",{"2":{"274":1}}],["6592s",{"2":{"274":1}}],["6590s",{"2":{"274":1}}],["6595012",{"2":{"147":1}}],["6573602f",{"2":{"285":1}}],["6579s",{"2":{"274":1}}],["6574s",{"2":{"274":1}}],["6578s",{"2":{"274":1}}],["6570s",{"2":{"274":1}}],["6576s",{"2":{"274":1}}],["6575044",{"2":{"264":1}}],["657",{"2":{"260":4,"276":1,"282":1}}],["6572",{"2":{"253":1}}],["65",{"2":{"245":4,"260":35,"274":1}}],["6512847f",{"2":{"285":1}}],["6514s",{"2":{"274":1}}],["651417\\tval",{"2":{"260":1}}],["6515s",{"2":{"274":2}}],["651333",{"2":{"274":1}}],["651",{"2":{"192":1,"229":1,"276":2}}],["65178",{"2":{"167":1}}],["65176255",{"2":{"147":1}}],["6546s",{"2":{"274":1}}],["6541s",{"2":{"274":2}}],["6548s",{"2":{"274":2}}],["65445",{"2":{"260":3}}],["6543",{"2":{"253":3}}],["654",{"2":{"187":1,"276":1,"282":1}}],["6528442208404164e",{"2":{"287":1}}],["6528s",{"2":{"274":1}}],["6527603912049874e",{"2":{"287":1}}],["6527s",{"2":{"274":1}}],["6529734f",{"2":{"285":1}}],["6529s",{"2":{"274":1}}],["6526057f",{"2":{"285":1}}],["6526s",{"2":{"274":1}}],["6524s",{"2":{"274":1}}],["6524158",{"2":{"264":1}}],["6523s",{"2":{"274":1}}],["652",{"2":{"187":1,"276":2}}],["6555s",{"2":{"274":1}}],["6558s",{"2":{"274":1}}],["655881",{"2":{"188":1}}],["6556s",{"2":{"274":1}}],["6556360448565952",{"2":{"191":1}}],["6554s",{"2":{"274":1}}],["6559s",{"2":{"274":2}}],["655222\\ttrain",{"2":{"260":1}}],["655",{"2":{"187":1,"276":2,"282":3}}],["650270326476416e",{"2":{"287":1}}],["650203",{"2":{"147":1}}],["650909f",{"2":{"285":1}}],["650403f",{"2":{"285":1}}],["6503886f",{"2":{"285":1}}],["65056934399944e",{"2":{"287":1}}],["650591",{"2":{"274":1}}],["65058047",{"2":{"118":1}}],["650",{"2":{"41":3,"282":3}}],["695066f",{"2":{"285":1}}],[
"695",{"2":{"282":2}}],["6953125",{"2":{"274":1}}],["691833957271307e",{"2":{"287":1}}],["6914683f",{"2":{"285":1}}],["691",{"2":{"276":1,"282":2}}],["6913",{"2":{"229":1}}],["6913705",{"2":{"165":1}}],["699421",{"2":{"274":1}}],["692015f",{"2":{"285":1}}],["692",{"2":{"282":1,"295":1}}],["692657",{"2":{"267":1}}],["6923s\\ttraining",{"2":{"234":1}}],["6909118577842545e",{"2":{"287":1}}],["6907826f",{"2":{"285":1}}],["690",{"2":{"260":2,"276":1,"282":1}}],["693878051564067e",{"2":{"287":1}}],["693",{"2":{"276":3,"282":1}}],["693584",{"2":{"274":1}}],["6932",{"2":{"253":3}}],["693396",{"2":{"147":1}}],["6949367775572175e",{"2":{"287":1}}],["6949s\\ttraining",{"2":{"234":1}}],["6942s",{"2":{"274":1}}],["69444",{"2":{"78":1}}],["6983095f",{"2":{"285":1}}],["698",{"2":{"229":1,"276":2}}],["697920104542528e",{"2":{"287":1}}],["697",{"2":{"229":1,"276":1,"282":1}}],["697421",{"2":{"147":1}}],["696",{"2":{"187":1}}],["69",{"2":{"132":1,"213":4,"229":1,"234":2,"260":11,"282":1}}],["689",{"2":{"282":1}}],["6893",{"2":{"278":1}}],["68927",{"2":{"264":1}}],["68156701906618e",{"2":{"287":1}}],["681582f",{"2":{"285":1}}],["681149567304587e",{"2":{"287":1}}],["681",{"2":{"282":1}}],["6812",{"2":{"278":1}}],["681681f",{"2":{"118":1}}],["682935f",{"2":{"285":1}}],["68299204",{"2":{"147":1}}],["682",{"2":{"276":2}}],["6809688f",{"2":{"285":1}}],["6806641",{"2":{"274":1}}],["680748",{"2":{"143":1}}],["684587368797963e",{"2":{"287":1}}],["6845874f",{"2":{"285":1}}],["684913f",{"2":{"285":1}}],["684",{"2":{"276":2,"282":1}}],["684370",{"2":{"274":1}}],["6846315",{"2":{"147":1}}],["68",{"2":{"234":3,"236":2,"245":9,"260":15}}],["683",{"2":{"276":2,"282":3}}],["6830945313415872",{"2":{"191":1}}],["683456",{"2":{"147":1}}],["6865234",{"2":{"274":1}}],["6868",{"2":{"263":1}}],["686",{"2":{"187":1,"282":2}}],["6867144",{"2":{"168":1}}],["68639",{"2":{"147":1}}],["686369",{"2":{"147":1}}],["685778831020177e",{"2":{"287":1}}],["6854",{"2":{"282":1}}],["685",{"2":{"276":1}}],["68504354805617e"
,{"2":{"287":1}}],["6850394",{"2":{"264":2}}],["685079",{"2":{"188":1}}],["68539",{"2":{"168":1}}],["685639f",{"2":{"285":1}}],["6856",{"2":{"147":1}}],["68522483",{"2":{"118":1}}],["688039926016194e",{"2":{"287":1}}],["6882",{"2":{"278":1}}],["6882201",{"2":{"147":1}}],["688",{"2":{"276":1}}],["688665",{"2":{"168":1}}],["6886741",{"2":{"143":1}}],["687",{"2":{"276":1}}],["687611",{"2":{"185":1}}],["687264",{"2":{"168":1}}],["687716",{"2":{"147":1}}],["6877121",{"2":{"147":1}}],["6875213",{"2":{"147":1}}],["611469264328724e",{"2":{"287":1}}],["6113275216954894e",{"2":{"287":1}}],["6118604339732293e",{"2":{"287":1}}],["611635649463381e",{"2":{"287":1}}],["611232f",{"2":{"285":1}}],["6112404f",{"2":{"285":1}}],["611762f",{"2":{"285":1}}],["611929e",{"2":{"225":1}}],["619",{"2":{"276":1,"282":1}}],["6198",{"2":{"276":1}}],["619768767771663e",{"2":{"287":1}}],["6197855343288158519",{"2":{"260":1}}],["61977077",{"2":{"118":1}}],["6186",{"2":{"278":1}}],["618",{"2":{"276":2}}],["6184501",{"2":{"264":1}}],["61754943877683e",{"2":{"287":1}}],["6170902f",{"2":{"285":1}}],["61726f",{"2":{"285":1}}],["617",{"2":{"276":2,"282":1}}],["617886",{"2":{"274":1}}],["61768365",{"2":{"267":1}}],["6139583049556656e",{"2":{"287":1}}],["613088178118954e",{"2":{"287":1}}],["61369f",{"2":{"285":1}}],["6132812",{"2":{"274":1}}],["61375",{"2":{"260":3}}],["61",{"2":{"213":1,"245":2,"260":4}}],["615229599441948e",{"2":{"287":1}}],["615318f",{"2":{"285":1}}],["615",{"2":{"192":1,"282":1}}],["6168021665799215e",{"2":{"287":1}}],["616",{"2":{"187":1,"276":2}}],["6146540471095255e",{"2":{"287":1}}],["6140875f",{"2":{"285":1}}],["6149562f",{"2":{"285":1}}],["61417323",{"2":{"264":2}}],["614",{"2":{"187":1,"276":1}}],["6143286",{"2":{"143":1}}],["6129",{"2":{"295":1}}],["6122958f",{"2":{"285":1}}],["6123",{"2":{"263":1}}],["61216",{"2":{"260":3}}],["612",{"2":{"187":1,"276":3,"282":2}}],["6120",{"2":{"15":1,"229":1}}],["6103516",{"2":{"274":1}}],["610",{"2":{"168":1,"276":3,"282":1}}],["629205261624
368e",{"2":{"287":1}}],["629363f",{"2":{"285":1}}],["629",{"2":{"276":2}}],["62963",{"2":{"234":2}}],["624201452816441e",{"2":{"287":1}}],["62417f",{"2":{"285":1}}],["624",{"2":{"276":2}}],["6240234",{"2":{"274":1}}],["624383955429775",{"2":{"191":1}}],["624384",{"2":{"188":1}}],["6225935144048385e",{"2":{"287":1}}],["622511",{"2":{"147":1}}],["622627307946056e",{"2":{"287":1}}],["6227158659132254e",{"2":{"287":1}}],["6224945f",{"2":{"285":1}}],["6223264f",{"2":{"285":1}}],["622",{"2":{"276":1}}],["6220472",{"2":{"264":2}}],["62603364893066e",{"2":{"287":1}}],["626418012775144e",{"2":{"287":1}}],["626418f",{"2":{"285":1}}],["6268",{"2":{"278":1}}],["626",{"2":{"260":3,"282":2}}],["62625877100596",{"2":{"188":1}}],["62626415",{"2":{"147":1}}],["628840",{"2":{"274":1}}],["6288941",{"2":{"264":1}}],["628038",{"2":{"274":1}}],["62822",{"2":{"274":1}}],["628",{"2":{"229":1,"260":5,"276":1}}],["628994",{"2":{"147":1}}],["627685744253288e",{"2":{"287":1}}],["62748625805241e",{"2":{"287":1}}],["627555f",{"2":{"285":1}}],["627",{"2":{"229":1,"260":1,"276":2,"282":2}}],["62",{"2":{"213":1,"245":7,"260":22,"278":1}}],["6216467706072427e",{"2":{"287":1}}],["6212781f",{"2":{"285":1}}],["62129945",{"2":{"264":1}}],["6211785f",{"2":{"285":1}}],["621380068550216e",{"2":{"287":1}}],["6213",{"2":{"282":1}}],["621333",{"2":{"147":1}}],["6210448f",{"2":{"285":1}}],["6210",{"2":{"278":1}}],["621",{"2":{"229":1,"276":3}}],["621958",{"2":{"188":1}}],["6238537f",{"2":{"285":1}}],["623899\\ttrain",{"2":{"260":1}}],["6231955f",{"2":{"285":1}}],["623290\\ttrain",{"2":{"260":1}}],["623",{"2":{"187":1,"260":3,"276":1,"282":2}}],["620508\\tval",{"2":{"260":1}}],["620",{"2":{"260":5,"276":1,"282":1}}],["620041e",{"2":{"225":1}}],["6207278",{"2":{"147":1}}],["62043f",{"2":{"89":1}}],["625939637504688e",{"2":{"287":1}}],["62505480474385e",{"2":{"287":1}}],["625782f",{"2":{"285":1}}],["625876f",{"2":{"285":1}}],["625334f",{"2":{"285":1}}],["625",{"2":{"50":1,"276":1,"282":1}}],["601073688181039e",{"
2":{"287":1}}],["6018",{"2":{"282":1}}],["601",{"2":{"276":1,"282":1}}],["60171676",{"2":{"267":1}}],["60148f",{"2":{"118":1}}],["60632f",{"2":{"285":1}}],["606",{"2":{"260":3,"276":3}}],["606911e",{"2":{"225":1}}],["6054181790606307e",{"2":{"287":1}}],["60513983915291e",{"2":{"287":1}}],["6051288f",{"2":{"285":1}}],["605625",{"2":{"274":1}}],["605930510955933e",{"2":{"287":1}}],["6059",{"2":{"253":2}}],["6055s\\ttraining",{"2":{"234":1}}],["605",{"2":{"229":1,"276":1}}],["60",{"2":{"213":2,"245":2}}],["6090427493554216e",{"2":{"287":1}}],["6099132932389313e",{"2":{"287":1}}],["609813f",{"2":{"285":1}}],["6091885f",{"2":{"285":1}}],["6091344",{"2":{"267":1}}],["609",{"2":{"187":1,"260":3,"276":1,"282":1}}],["603031f",{"2":{"285":1}}],["603305f",{"2":{"285":1}}],["60395515",{"2":{"264":1}}],["6039",{"2":{"229":1}}],["603",{"2":{"187":1,"276":1}}],["603439",{"2":{"147":1}}],["608799107770618e",{"2":{"287":1}}],["608888f",{"2":{"285":1}}],["608",{"2":{"276":2,"282":1}}],["6083984",{"2":{"274":1}}],["608668im",{"2":{"185":1}}],["608401f",{"2":{"118":1}}],["604986f",{"2":{"285":1}}],["6049256",{"2":{"168":1}}],["6046",{"2":{"282":1}}],["604",{"2":{"276":1}}],["604107",{"2":{"147":1}}],["602995307396797e",{"2":{"287":1}}],["6025",{"2":{"278":1}}],["602513",{"2":{"147":1}}],["602",{"2":{"192":1,"276":2}}],["60234f",{"2":{"89":1}}],["600511467716606e",{"2":{"287":1}}],["600941f",{"2":{"285":1}}],["6000",{"2":{"260":5,"295":1}}],["6001",{"2":{"196":1,"253":1}}],["600764",{"2":{"147":1}}],["600",{"2":{"119":1,"187":1,"263":1,"276":1,"282":1}}],["6077032f",{"2":{"118":1}}],["6076053f0",{"2":{"50":2}}],["633",{"2":{"260":1,"276":3,"282":1}}],["63391256",{"2":{"147":1}}],["636",{"2":{"260":2,"282":1}}],["637136882686671e",{"2":{"287":1}}],["6371056f",{"2":{"285":1}}],["637",{"2":{"260":1,"276":1,"282":1}}],["637219",{"2":{"168":1}}],["635142714310196e",{"2":{"287":1}}],["635",{"2":{"260":5,"276":1,"282":1}}],["63550",{"2":{"204":1}}],["63599\\taccuracy",{"2":{"204":1}}],["63537f
",{"2":{"89":1}}],["63",{"2":{"192":1,"213":1,"245":1,"260":32}}],["6308156f",{"2":{"285":1}}],["6309045795196e",{"2":{"287":1}}],["6309",{"2":{"229":1}}],["630",{"2":{"187":1,"229":1}}],["630092",{"2":{"147":1}}],["6346849",{"2":{"264":1}}],["6340362477836592",{"2":{"191":1}}],["634",{"2":{"187":1,"192":1,"260":1,"276":1}}],["634469",{"2":{"147":1}}],["6328125",{"2":{"274":1}}],["632952",{"2":{"185":1}}],["6323408",{"2":{"143":1}}],["6391403514363544e",{"2":{"287":1}}],["6396589f",{"2":{"285":1}}],["6390075f",{"2":{"285":1}}],["639",{"2":{"282":2}}],["63936",{"2":{"147":1}}],["6395181",{"2":{"147":1}}],["63952f",{"2":{"89":1}}],["631494225744535e",{"2":{"287":1}}],["6317964f",{"2":{"285":1}}],["63175",{"2":{"147":1}}],["6312",{"2":{"278":1}}],["631217",{"2":{"185":1}}],["6319",{"2":{"278":1}}],["631519e",{"2":{"225":1}}],["63161",{"2":{"147":1}}],["638329053405804e",{"2":{"287":1}}],["6383875",{"2":{"167":1}}],["638298f",{"2":{"285":1}}],["63823",{"2":{"147":1}}],["638",{"2":{"260":6,"276":2,"282":1}}],["638731",{"2":{"147":1}}],["63889",{"2":{"78":1}}],["673023779076256e",{"2":{"287":1}}],["673312f",{"2":{"285":1}}],["673",{"2":{"282":2}}],["673229",{"2":{"147":1}}],["67326",{"2":{"58":1}}],["6750",{"2":{"282":1}}],["675",{"2":{"282":2}}],["67555165",{"2":{"264":1}}],["672533479973743e",{"2":{"287":1}}],["672923f",{"2":{"285":1}}],["6720",{"2":{"278":1}}],["672666\\tval",{"2":{"260":1}}],["6726513",{"2":{"165":1}}],["67",{"2":{"213":2,"234":1,"245":1,"260":23}}],["6706038",{"2":{"264":1}}],["670",{"2":{"187":1,"282":1}}],["6702257",{"2":{"165":1}}],["674519045286662e",{"2":{"287":1}}],["6745504f",{"2":{"285":1}}],["6746",{"2":{"229":1}}],["67469",{"2":{"147":1}}],["67404\\taccuracy",{"2":{"204":1}}],["674",{"2":{"187":1,"282":1}}],["676",{"2":{"260":6}}],["67689",{"2":{"147":1}}],["6766358",{"2":{"118":1}}],["671",{"2":{"260":3,"282":1}}],["6715s\\ttraining",{"2":{"234":1}}],["671869",{"2":{"147":1}}],["671201",{"2":{"147":1}}],["677",{"2":{"276":1}}],["6774449",{
"2":{"264":1}}],["6771653",{"2":{"264":2}}],["6776220912718907",{"2":{"191":1}}],["67728",{"2":{"147":1}}],["677237",{"2":{"118":1}}],["677041",{"2":{"147":1}}],["678",{"2":{"276":1,"282":2}}],["6784",{"2":{"199":1}}],["67841595",{"2":{"89":1}}],["67861",{"2":{"188":1}}],["67852753",{"2":{"147":1}}],["679257403614656e",{"2":{"287":1}}],["67922f",{"2":{"118":1}}],["67937f",{"2":{"285":1}}],["679",{"2":{"276":1,"282":1}}],["67909",{"2":{"188":1}}],["67908657",{"2":{"188":1}}],["679087",{"2":{"188":2}}],["679885",{"2":{"167":1}}],["6795253",{"2":{"118":1}}],["6×3×1",{"2":{"78":1}}],["6×3×3",{"2":{"76":1}}],["6×10",{"2":{"76":1}}],["6×6",{"2":{"76":3}}],["665340941317205e",{"2":{"287":1}}],["6651759850291906e",{"2":{"287":1}}],["6654986f",{"2":{"285":1}}],["665816f",{"2":{"285":1}}],["665",{"2":{"282":1}}],["6652409557748218",{"2":{"74":1}}],["665241",{"2":{"50":10,"74":1}}],["66761144101045e",{"2":{"287":1}}],["667853154760091e",{"2":{"287":1}}],["6674011431911173e",{"2":{"287":1}}],["6677524f",{"2":{"285":1}}],["66772f",{"2":{"285":1}}],["667544211269212e",{"2":{"287":1}}],["6675755f",{"2":{"285":1}}],["6675",{"2":{"282":1}}],["667",{"2":{"276":1,"282":1}}],["6679688",{"2":{"274":1}}],["6684",{"2":{"278":1}}],["668",{"2":{"276":1,"282":2}}],["660799079704119e",{"2":{"287":1}}],["660697f",{"2":{"285":1}}],["6606s",{"2":{"274":1}}],["660",{"2":{"282":2}}],["6609",{"2":{"278":1}}],["660927",{"2":{"274":1}}],["6601s",{"2":{"274":1}}],["66010",{"2":{"204":1}}],["6602s",{"2":{"274":1}}],["6629385f",{"2":{"285":1}}],["6629s",{"2":{"274":1}}],["6623s",{"2":{"274":1}}],["6625s",{"2":{"274":1}}],["6621094",{"2":{"274":1}}],["6624s",{"2":{"274":1}}],["6693",{"2":{"278":1}}],["6692604",{"2":{"264":1}}],["669",{"2":{"260":1,"276":1}}],["6694499f",{"2":{"118":1}}],["66",{"2":{"234":1,"245":1,"260":10,"285":1}}],["664509724924789e",{"2":{"287":1}}],["664",{"2":{"187":1,"229":1,"276":1}}],["666430943848252e",{"2":{"287":1}}],["666274\\tval",{"2":{"260":1}}],["666",{"2":{"187":1,"276"
:1,"282":2}}],["6663245",{"2":{"167":1}}],["6666821",{"2":{"167":1}}],["6666666666666666",{"2":{"84":1}}],["666667",{"2":{"58":1}}],["66667",{"2":{"58":1,"78":2,"234":15,"236":4}}],["6638",{"2":{"295":1}}],["663284709544917e",{"2":{"287":1}}],["663223",{"2":{"147":1}}],["6639077f",{"2":{"285":1}}],["663",{"2":{"282":1}}],["6637254",{"2":{"167":1}}],["661779832089018e",{"2":{"287":1}}],["6611s",{"2":{"274":1}}],["6618s",{"2":{"274":1}}],["6612s",{"2":{"274":1}}],["6610s",{"2":{"274":1}}],["661",{"2":{"187":1,"282":1}}],["661647f",{"2":{"285":1}}],["6616",{"2":{"147":1}}],["6615898",{"2":{"118":1}}],["64443829436409e",{"2":{"287":1}}],["644718f",{"2":{"285":1}}],["644",{"2":{"282":2}}],["6449",{"2":{"278":1}}],["6433519943567903e",{"2":{"287":1}}],["6433369f",{"2":{"285":1}}],["643",{"2":{"276":1}}],["643172",{"2":{"185":1}}],["648571783425076e",{"2":{"287":1}}],["64850235",{"2":{"118":1}}],["6489815553295507e",{"2":{"287":1}}],["6482256f",{"2":{"285":1}}],["6483664f",{"2":{"285":1}}],["648",{"2":{"276":2}}],["6486",{"2":{"253":1}}],["6493658260210587e",{"2":{"287":1}}],["6495205f",{"2":{"285":1}}],["649",{"2":{"192":1}}],["641312518775373e",{"2":{"287":1}}],["6414843f",{"2":{"285":1}}],["641402",{"2":{"147":1}}],["641",{"2":{"187":1,"260":5,"276":1,"282":2}}],["6467251869835914e",{"2":{"287":1}}],["6461134977154854e",{"2":{"287":1}}],["6468507f",{"2":{"285":1}}],["64625f",{"2":{"285":1}}],["646",{"2":{"187":1,"276":1,"282":2}}],["6469914",{"2":{"143":1}}],["6423318f",{"2":{"285":1}}],["6429",{"2":{"278":1}}],["64293",{"2":{"147":1}}],["642",{"2":{"229":1,"276":1,"282":1}}],["6428",{"2":{"192":1}}],["64241976",{"2":{"167":1}}],["6403528731881814e",{"2":{"287":1}}],["640222f",{"2":{"285":1}}],["640",{"2":{"257":1,"282":2}}],["6401",{"2":{"229":1}}],["640935",{"2":{"167":1}}],["640625",{"2":{"147":1}}],["6478636f",{"2":{"285":1}}],["6475",{"2":{"276":1}}],["647",{"2":{"276":1,"282":3}}],["6477456",{"2":{"165":1}}],["64721453",{"2":{"147":1}}],["645584",{"2":{"295":1}}],
["645574",{"2":{"295":1}}],["645571",{"2":{"295":1}}],["645567",{"2":{"295":1}}],["645564",{"2":{"295":1}}],["645561",{"2":{"295":1}}],["645504",{"2":{"295":1}}],["645",{"2":{"282":1}}],["6456693",{"2":{"264":2}}],["6456416023530093",{"2":{"191":1}}],["645233",{"2":{"147":1}}],["6454607f",{"2":{"118":1}}],["64",{"2":{"41":18,"65":1,"90":1,"197":2,"206":2,"213":1,"214":2,"227":2,"229":1,"238":2,"240":1,"243":2,"246":2,"255":2,"260":22,"261":2,"269":2,"274":2,"275":2,"280":3,"285":1,"289":3,"297":2}}],["6",{"2":{"15":1,"34":3,"47":2,"58":7,"76":20,"77":9,"78":24,"81":7,"84":1,"89":10,"118":25,"147":9,"165":1,"167":5,"168":7,"172":1,"187":10,"188":2,"190":3,"192":4,"196":1,"197":1,"199":2,"204":4,"206":1,"210":8,"213":2,"214":1,"227":1,"229":24,"234":6,"236":1,"238":6,"240":5,"245":2,"246":6,"255":1,"260":26,"261":1,"263":4,"264":5,"269":1,"274":2,"275":1,"276":51,"278":10,"279":6,"280":1,"282":59,"284":2,"285":147,"287":146,"289":1,"297":1}}],["+=",{"2":{"204":5,"211":2,"233":2,"244":2,"253":1,"260":1,"274":2,"278":1,"291":1,"293":2,"295":1}}],["+exp",{"2":{"86":1}}],["+ϵ∗γ+βand",{"2":{"66":1}}],["+ϵ∗γ+βwhere",{"2":{"41":1}}],["+",{"2":{"15":3,"23":1,"34":5,"35":4,"37":6,"39":4,"42":1,"45":1,"49":1,"52":1,"56":4,"58":9,"63":1,"65":1,"77":1,"78":3,"79":4,"81":5,"84":1,"86":6,"96":2,"105":1,"121":1,"122":2,"153":2,"161":2,"165":1,"166":1,"188":4,"196":3,"200":2,"223":1,"251":4,"252":3,"254":1,"264":1,"271":1,"273":8,"277":4,"278":1,"283":13,"284":1,"285":2,"291":3,"292":2}}],["krylov",{"2":{"229":1,"282":1}}],["knet",{"2":{"86":1,"98":1}}],["know",{"2":{"123":1,"126":1,"164":1,"285":1}}],["knows",{"2":{"84":1}}],["known",{"0":{"141":1},"2":{"15":2,"49":1,"75":1,"84":1,"123":1}}],["kv",{"2":{"73":5}}],["kwarg",{"2":{"114":2,"115":1}}],["kwargsstorage",{"2":{"237":4}}],["kwargs",{"2":{"16":24,"24":1,"28":2,"34":9,"40":2,"56":4,"77":1,"86":1,"96":4,"115":1,"185":2,"186":2,"212":1,"231":7,"234":2,"235":4,"237":4,"258":5,"291":2}}],["kws",{"2":{"77":3}}],["kw",{"2":{"56":2}}
],["kldiv",{"2":{"273":3}}],["kldivergenceloss",{"2":{"50":5}}],["klu",{"2":{"229":1,"282":1}}],["klambauer",{"2":{"64":1}}],["kl",{"2":{"50":1}}],["kullback",{"2":{"50":1}}],["kiros",{"2":{"66":1}}],["ki−1",{"2":{"35":1,"37":3}}],["kind",{"0":{"143":1},"2":{"25":1,"34":1,"47":1,"56":2,"126":1,"143":1}}],["kinds",{"2":{"8":1}}],["k",{"2":{"35":6,"39":4,"73":2,"75":26,"80":26,"81":16,"86":5,"196":1,"231":2,"235":2}}],["kaiming",{"2":{"15":4,"35":2,"39":1,"66":1,"185":6}}],["keith",{"2":{"281":1}}],["keep",{"2":{"176":1}}],["kept",{"2":{"173":1}}],["kerneldensity",{"2":{"263":1,"276":1,"282":1}}],["kernelabstractions",{"2":{"67":1,"79":1,"187":3,"192":1,"276":4,"282":4}}],["kernels",{"2":{"63":1,"86":2}}],["kernel",{"2":{"15":1,"35":5,"63":1,"67":1,"75":1,"77":6,"86":6}}],["keys",{"2":{"114":1,"132":1,"156":2}}],["keypath=keypath",{"2":{"24":1}}],["keypath",{"2":{"23":7,"114":1,"115":1,"126":4,"127":10,"144":2,"145":4}}],["key",{"0":{"140":1},"2":{"10":2,"73":4,"136":1}}],["keywords",{"2":{"42":1,"56":1,"79":1,"186":1}}],["keyword",{"2":{"2":1,"10":1,"24":2,"34":5,"35":2,"36":4,"37":3,"38":2,"39":4,"40":2,"41":4,"42":2,"45":1,"49":2,"74":1,"78":3,"81":1,"84":1,"85":1,"86":5,"95":1,"96":2,"104":1,"116":1,"117":1,"119":1,"186":1}}],["hₓ",{"2":{"283":4}}],["h₊",{"2":{"283":4}}],["h12",{"2":{"283":12}}],["h11",{"2":{"283":12}}],["h22",{"2":{"283":12}}],["hmc",{"2":{"278":2}}],["hcat",{"2":{"252":1,"278":1}}],["hn",{"2":{"242":2}}],["hnew",{"2":{"38":6}}],["hnew=activation",{"2":{"38":1}}],["hnew=",{"2":{"38":1}}],["hypergeometricfunctions",{"2":{"263":1,"276":1,"282":1}}],["hypernet",{"0":{"242":1,"243":1},"2":{"242":4,"243":1}}],["hypernetwork",{"0":{"239":1},"1":{"240":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1}}],["hyperbolic",{"2":{"58":1}}],["httpext",{"2":{"229":1}}],["http",{"2":{"229":1}}],["https",{"2":{"15":1,"38":1,"40":1,"68":1,"71":1,"78":1,"86":1,"194":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"283":2,"2
89":1,"297":1}}],["hdf5",{"2":{"229":2}}],["hwloctrees",{"2":{"276":2,"282":2}}],["hwloc",{"2":{"187":2,"276":3,"282":3}}],["hlo",{"2":{"119":5,"147":6}}],["hz",{"2":{"86":2}}],["hostcpufeatures",{"2":{"282":1}}],["hosts",{"2":{"91":1}}],["home",{"2":{"155":1}}],["hold",{"2":{"89":1,"99":1}}],["holds",{"2":{"86":2}}],["hops",{"2":{"86":1}}],["hop",{"2":{"86":5}}],["hot",{"2":{"52":1}}],["how",{"0":{"68":1,"157":1},"1":{"158":1,"159":1,"160":1,"161":1,"162":1,"163":1},"2":{"34":2,"38":3,"41":8,"47":1,"50":2,"52":1,"80":2,"81":1,"86":2,"88":1,"89":1,"90":1,"100":1,"118":1,"119":1,"125":1,"127":2,"142":1,"147":1,"153":1,"155":1,"160":1,"164":1,"168":1,"174":1,"178":2,"188":2,"194":1,"207":1,"221":1,"231":1,"242":1,"276":1,"278":1,"279":1,"290":1}}],["however",{"2":{"8":1,"11":1,"34":1,"52":1,"56":1,"79":1,"80":1,"86":1,"89":1,"98":1,"116":1,"124":1,"128":1,"140":1,"168":1,"184":1,"190":2,"192":1,"219":1,"221":3,"231":1,"247":1,"283":1,"285":2}}],["hutchinson",{"0":{"169":1},"1":{"170":1,"171":1,"172":1},"2":{"169":2,"170":3,"171":2,"172":8}}],["huge",{"2":{"86":1}}],["huber",{"2":{"50":1}}],["huberloss",{"2":{"50":3}}],["human",{"2":{"15":2}}],["hh",{"2":{"38":6,"117":1}}],["h",{"2":{"37":6,"38":3,"42":4,"78":4,"82":7,"86":1,"209":1,"230":1,"258":2,"283":14}}],["historical",{"2":{"231":1}}],["historically",{"2":{"21":1}}],["hinton",{"2":{"66":1}}],["hinge",{"2":{"50":2}}],["hingeloss",{"2":{"50":2}}],["hi=1",{"2":{"58":1}}],["hierarchically",{"2":{"56":1}}],["hidden",{"2":{"38":43,"201":6,"202":3,"250":11,"253":2,"260":2,"278":1,"293":7,"295":2}}],["highlight",{"2":{"86":1,"140":1}}],["highlights",{"2":{"86":1}}],["highly",{"2":{"50":1}}],["highest",{"2":{"63":1,"123":1,"158":1,"279":1}}],["higher",{"2":{"42":1,"158":1,"167":1}}],["high",{"2":{"34":1,"78":1,"188":1,"194":1}}],["hi",{"2":{"15":2,"58":1}}],["hit",{"2":{"8":1}}],["heed",{"2":{"84":1}}],["heatmap",{"2":{"254":1}}],["head",{"2":{"100":1,"201":1}}],["heads",{"2":{"73":1}}],["heavily",{"2":{"55":1}}],["heavy"
,{"2":{"6":1}}],["hence",{"2":{"45":1,"52":5,"86":2,"94":1,"170":1,"174":1,"176":1,"191":1,"192":1,"213":1,"225":1,"231":1,"285":1}}],["height=10",{"2":{"86":2}}],["height=7",{"2":{"58":28}}],["height",{"2":{"35":1,"58":2,"273":6}}],["helps",{"2":{"86":1,"231":1}}],["helpful",{"2":{"77":1}}],["helper",{"0":{"29":1,"40":1,"67":1,"211":1,"259":1,"273":1,"294":1},"2":{"49":1,"54":1,"86":4,"279":1}}],["helpers",{"0":{"17":1},"1":{"18":1,"19":1,"20":1},"2":{"51":1}}],["help",{"2":{"21":1,"22":1,"23":1,"24":2,"34":6,"35":2,"37":3,"38":2,"40":1,"41":4,"42":1,"50":2,"54":1,"56":1,"125":1,"160":1}}],["here",{"2":{"21":1,"24":1,"49":2,"51":1,"56":3,"80":1,"86":1,"89":1,"98":1,"123":1,"131":1,"137":1,"154":1,"163":1,"165":3,"166":1,"167":1,"178":1,"188":3,"194":2,"225":1,"231":1,"250":1,"257":1,"277":1,"278":2,"291":1}}],["hessian",{"2":{"15":1}}],["he",{"2":{"15":2,"66":1}}],["hamiltonian",{"2":{"278":2}}],["hamming",{"2":{"86":8}}],["harfbuzz",{"2":{"276":1,"282":1}}],["harder",{"2":{"100":1,"125":1}}],["hardware",{"2":{"99":1}}],["hardswish",{"2":{"58":6}}],["hardsigmoid",{"2":{"58":3}}],["hardtanh",{"2":{"58":3}}],["hardσ",{"2":{"58":5}}],["hard",{"2":{"11":1,"50":1,"58":2,"127":1,"153":1,"197":1,"206":1,"214":1,"227":1,"238":2,"246":2,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["hat",{"2":{"226":2}}],["had",{"2":{"192":1,"276":1,"282":1}}],["hadsell",{"2":{"50":1}}],["happening",{"2":{"163":1,"166":1}}],["happened",{"2":{"158":1}}],["happens",{"2":{"126":1,"163":1,"164":1,"182":1}}],["hann",{"2":{"86":9}}],["hand",{"2":{"188":1}}],["handwriting",{"2":{"83":1}}],["handling",{"2":{"51":1,"54":1,"151":1,"158":1,"182":1}}],["handles",{"2":{"86":1,"100":1,"202":1}}],["handled",{"2":{"80":1,"82":2}}],["handle",{"2":{"38":1,"148":1,"231":1,"278":1}}],["half",{"2":{"84":1}}],["haven",{"2":{"45":1}}],["have",{"2":{"15":1,"19":1,"21":2,"35":1,"38":1,"40":1,"73":2,"75":1,"77":1,"82":1,"100":2,"101":1,"107":1,"111":2,"114":3,"115":1,"116":2,"117":1,"123":3,"124":2,"12
6":1,"127":2,"128":2,"141":1,"153":1,"154":1,"155":1,"158":1,"164":2,"165":1,"166":1,"172":1,"173":2,"177":1,"189":1,"190":1,"191":1,"192":3,"201":3,"225":1,"243":1,"278":1,"279":2,"283":1}}],["having",{"2":{"6":1,"25":1,"56":1}}],["hasharraymappedtries",{"2":{"192":1,"282":1}}],["has",{"2":{"8":3,"22":1,"35":1,"37":3,"42":1,"45":2,"52":5,"56":1,"58":1,"80":3,"84":1,"86":4,"98":1,"100":1,"104":2,"107":6,"111":1,"114":7,"115":4,"116":1,"117":1,"125":1,"126":4,"136":1,"139":1,"147":1,"154":1,"168":1,"187":1,"188":2,"192":1,"278":1,"281":1,"285":1}}],["2π",{"2":{"293":1}}],["2πnn−1",{"2":{"86":2}}],["2\\ttrain",{"2":{"260":1}}],["2f",{"2":{"212":2,"245":4}}],["2fs",{"2":{"212":1}}],["2f0",{"2":{"58":3}}],["2`",{"2":{"154":1}}],["297712633224983e",{"2":{"287":1}}],["297576f",{"2":{"285":1}}],["2970",{"2":{"278":1}}],["2931",{"2":{"282":1}}],["2934628",{"2":{"264":1}}],["293867e",{"2":{"225":1}}],["2909",{"2":{"278":1}}],["29001",{"2":{"253":1}}],["2904677",{"2":{"146":1}}],["2969671403684794e",{"2":{"287":1}}],["296879f",{"2":{"285":1}}],["2968750",{"2":{"274":1}}],["296",{"2":{"260":6}}],["29630",{"2":{"234":2,"236":1}}],["2960",{"2":{"229":1}}],["296496",{"2":{"188":1}}],["2959317481558755e",{"2":{"287":1}}],["2956682f",{"2":{"285":1}}],["29561827",{"2":{"196":1}}],["295229",{"2":{"264":1}}],["295489e",{"2":{"225":1}}],["29916334",{"2":{"264":1}}],["299262",{"2":{"264":1}}],["2992126",{"2":{"264":2}}],["2999034",{"2":{"264":1}}],["2999",{"2":{"260":1}}],["2990853",{"2":{"196":1}}],["29944375",{"2":{"196":1}}],["29955",{"2":{"185":1}}],["2921823837799265e",{"2":{"287":1}}],["292193f",{"2":{"285":1}}],["292291699997855e",{"2":{"287":1}}],["2929",{"2":{"260":1}}],["292002",{"2":{"185":1}}],["292533",{"2":{"147":1}}],["29",{"2":{"147":2,"234":1,"245":4,"260":15,"274":2,"276":1,"278":1,"282":1}}],["291827381615741e",{"2":{"287":1}}],["291926f",{"2":{"285":1}}],["2913",{"2":{"147":1}}],["291513",{"2":{"147":1}}],["29156f",{"2":{"89":1}}],["2984",{"2":{"278":1}}],["2986",{"2
":{"253":2}}],["2986417",{"2":{"118":1}}],["29828635",{"2":{"196":1}}],["298787",{"2":{"168":1}}],["29872745",{"2":{"118":1}}],["298099",{"2":{"147":1}}],["2x",{"2":{"78":1}}],["26\\ttrain",{"2":{"260":1}}],["269966081634124e",{"2":{"287":1}}],["269968",{"2":{"147":1}}],["2695312",{"2":{"274":1}}],["269",{"2":{"260":3}}],["2636593513605062e",{"2":{"287":1}}],["2636719",{"2":{"274":1}}],["2633750109011224e",{"2":{"287":1}}],["2632495f",{"2":{"285":1}}],["2632644",{"2":{"147":1}}],["2630",{"2":{"278":1}}],["263",{"2":{"260":1}}],["26397",{"2":{"253":1}}],["2611",{"2":{"278":1}}],["26118",{"2":{"274":1}}],["2614254",{"2":{"264":1}}],["2614398",{"2":{"147":1}}],["261",{"2":{"260":1}}],["261636",{"2":{"185":1}}],["2615936",{"2":{"167":1}}],["26124424",{"2":{"147":1}}],["266",{"2":{"260":8}}],["266054",{"2":{"147":1}}],["2668537",{"2":{"147":1}}],["26615036",{"2":{"147":1}}],["2666",{"2":{"229":1}}],["26664025",{"2":{"147":1}}],["2666627",{"2":{"147":1}}],["266974",{"2":{"147":1}}],["267436708794679e",{"2":{"287":1}}],["2672967f",{"2":{"285":1}}],["26771653",{"2":{"264":2}}],["26774603",{"2":{"147":1}}],["267",{"2":{"260":1}}],["267644",{"2":{"167":1,"168":1}}],["267658",{"2":{"147":1}}],["2670698",{"2":{"147":1}}],["2642746f",{"2":{"285":1}}],["2644",{"2":{"278":1}}],["264",{"2":{"260":1}}],["26468",{"2":{"147":1}}],["26493692",{"2":{"147":1}}],["26477206",{"2":{"147":1}}],["2640756f",{"2":{"118":1}}],["262992371380514e",{"2":{"287":1}}],["262435397975399e",{"2":{"287":1}}],["262534f",{"2":{"285":1}}],["26255828",{"2":{"89":1}}],["262729f",{"2":{"285":1}}],["2627",{"2":{"282":1}}],["262",{"2":{"260":1}}],["26229796",{"2":{"147":1}}],["2687107f",{"2":{"285":1}}],["2687",{"2":{"276":1}}],["26867408",{"2":{"147":1}}],["268992",{"2":{"147":1}}],["268941",{"2":{"50":2,"74":2}}],["26858228",{"2":{"147":1}}],["2603",{"2":{"278":1}}],["260295",{"2":{"274":1}}],["260",{"2":{"260":1}}],["26001",{"2":{"253":1}}],["260692",{"2":{"168":1}}],["26076",{"2":{"147":1}}],["26072562",{"2":
{"147":1}}],["26055062",{"2":{"147":1}}],["2609411",{"2":{"147":1}}],["26012868",{"2":{"118":1}}],["2601667",{"2":{"118":1}}],["2655",{"2":{"295":1}}],["26553962",{"2":{"147":1}}],["2658",{"2":{"276":1}}],["265788",{"2":{"185":1}}],["265372",{"2":{"185":1}}],["26543173",{"2":{"147":1}}],["26545027",{"2":{"147":1}}],["26520002",{"2":{"147":1}}],["2656273",{"2":{"147":1}}],["26564768",{"2":{"147":1}}],["265",{"2":{"90":1,"260":1,"263":1}}],["26",{"2":{"76":2,"147":2,"225":2,"234":1,"245":3,"260":5,"274":2,"282":3}}],["223541",{"2":{"295":1}}],["22379348",{"2":{"89":1}}],["22m\\u001b",{"2":{"276":1,"282":1}}],["221234696760506e",{"2":{"287":1}}],["221974939600825e",{"2":{"287":1}}],["2213036f",{"2":{"285":1}}],["221388f",{"2":{"285":1}}],["221458257489788e",{"2":{"287":1}}],["2214",{"2":{"278":1}}],["2217",{"2":{"276":1}}],["22157605",{"2":{"147":1}}],["22\\ttrain",{"2":{"260":1}}],["228262487933607e",{"2":{"287":1}}],["2282474f",{"2":{"285":1}}],["22827768",{"2":{"267":1}}],["228",{"2":{"260":16}}],["2207",{"2":{"282":1}}],["2207031",{"2":{"274":1}}],["220057f",{"2":{"285":1}}],["22005874",{"2":{"264":1}}],["22001",{"2":{"253":1}}],["22034",{"2":{"260":1}}],["2203s\\ttraining",{"2":{"234":1}}],["220",{"2":{"237":6}}],["22094624",{"2":{"147":1}}],["2295862f",{"2":{"285":1}}],["2299",{"2":{"260":1}}],["22990",{"2":{"204":1}}],["229",{"2":{"260":12}}],["2293",{"2":{"147":1}}],["22900078",{"2":{"147":1}}],["226081620359892e",{"2":{"287":1}}],["226003e",{"2":{"225":1}}],["2261687f",{"2":{"285":1}}],["22680467",{"2":{"267":1}}],["226",{"2":{"225":1,"260":13}}],["22635892",{"2":{"196":1}}],["226222",{"2":{"147":1}}],["22677277",{"2":{"147":1}}],["227292452852085e",{"2":{"287":1}}],["2279077f",{"2":{"285":1}}],["2278",{"2":{"278":1}}],["227698",{"2":{"274":1}}],["227",{"2":{"260":13}}],["2277",{"2":{"229":1}}],["227513",{"2":{"185":1}}],["227502",{"2":{"147":1}}],["22752927",{"2":{"147":1}}],["227338",{"2":{"147":1}}],["22719024",{"2":{"147":1}}],["222266760543011e",{"2":{"28
7":1}}],["2222222222222222",{"2":{"84":1}}],["22222",{"2":{"78":2,"234":2,"236":1}}],["222127f",{"2":{"285":1}}],["2221471",{"2":{"168":1}}],["2220735f",{"2":{"285":1}}],["22202058",{"2":{"147":1}}],["2226562",{"2":{"274":1}}],["2224",{"2":{"199":1,"260":1}}],["2223",{"2":{"192":1}}],["22230405",{"2":{"147":1}}],["222528",{"2":{"168":1}}],["222885",{"2":{"147":1}}],["2257",{"2":{"278":1}}],["225307",{"2":{"274":1}}],["22529833",{"2":{"267":1}}],["22521684",{"2":{"147":1}}],["225",{"2":{"260":13}}],["22517107",{"2":{"147":1}}],["22542597",{"2":{"89":1}}],["22474170833147e",{"2":{"287":1}}],["224895f",{"2":{"285":1}}],["2242",{"2":{"282":1}}],["22421768",{"2":{"118":1}}],["2245107",{"2":{"264":1}}],["224",{"2":{"260":9,"276":1}}],["2246",{"2":{"192":1}}],["224029",{"2":{"147":1}}],["22409724",{"2":{"147":1}}],["22412",{"2":{"147":1}}],["22439213",{"2":{"118":1}}],["22",{"2":{"76":2,"78":8,"147":2,"165":1,"204":2,"245":3,"253":2,"260":27,"274":2,"278":5,"283":14,"295":1}}],["272293383704488e",{"2":{"287":1}}],["2724",{"2":{"278":1}}],["27245",{"2":{"260":3}}],["27\\ttrain",{"2":{"260":1}}],["271314255991458e",{"2":{"287":1}}],["2713294f",{"2":{"285":1}}],["2716914f",{"2":{"285":1}}],["27176",{"2":{"260":2}}],["2714",{"2":{"253":1}}],["2714335",{"2":{"147":1}}],["2755905",{"2":{"264":2}}],["275711e",{"2":{"225":1}}],["27561936",{"2":{"147":1}}],["270106271046873e",{"2":{"287":1}}],["270124f",{"2":{"285":1}}],["270612433918453e",{"2":{"287":1}}],["270076586635192e",{"2":{"287":1}}],["27001",{"2":{"253":1}}],["2702188f",{"2":{"285":1}}],["2707014f",{"2":{"285":1}}],["270",{"2":{"260":2}}],["27035674",{"2":{"196":1}}],["27051234",{"2":{"196":1}}],["2708",{"2":{"147":1,"257":1}}],["2798",{"2":{"276":1}}],["27947",{"2":{"274":1}}],["2794292",{"2":{"147":1}}],["279297",{"2":{"147":1}}],["278961614077199e",{"2":{"287":1}}],["2789",{"2":{"260":1}}],["2787244f",{"2":{"285":1}}],["2787",{"2":{"229":1}}],["27841",{"2":{"147":1}}],["27800062",{"2":{"147":1}}],["27834624",{"2":{"147
":1}}],["276676488472023e",{"2":{"287":1}}],["27664563",{"2":{"147":1}}],["27664083",{"2":{"147":1}}],["276136709954718e",{"2":{"287":1}}],["27611518",{"2":{"147":1}}],["276277f",{"2":{"285":1}}],["276532101549441e",{"2":{"287":1}}],["276532f",{"2":{"285":1}}],["2765746f",{"2":{"285":1}}],["27658862",{"2":{"147":1}}],["2768",{"2":{"278":1}}],["27682805",{"2":{"147":1}}],["2763796",{"2":{"264":1}}],["276",{"2":{"225":1}}],["276725",{"2":{"147":1}}],["2764903",{"2":{"147":1}}],["2749320911528578e",{"2":{"287":1}}],["2741",{"2":{"278":1}}],["274107",{"2":{"147":1}}],["274203",{"2":{"147":1}}],["2742818",{"2":{"147":1}}],["274844",{"2":{"147":1}}],["273113699313643e",{"2":{"287":1}}],["27319488",{"2":{"147":1}}],["27392295f",{"2":{"285":1}}],["2736335926799843e",{"2":{"287":1}}],["2736",{"2":{"276":1}}],["2734375",{"2":{"274":1}}],["27343664",{"2":{"147":1}}],["2737",{"2":{"187":1}}],["27373654",{"2":{"143":1}}],["2773438",{"2":{"274":1}}],["2771789",{"2":{"196":1}}],["2774315",{"2":{"165":1}}],["2779469",{"2":{"147":1}}],["27778",{"2":{"78":1}}],["27",{"2":{"76":3,"147":2,"234":3,"245":2,"260":5,"274":2,"295":1}}],["27th",{"2":{"15":1}}],["2n",{"2":{"76":4}}],["2nd",{"0":{"20":1},"2":{"86":2,"121":1,"132":1,"249":2,"251":1}}],["2×10",{"2":{"85":1}}],["2×4",{"2":{"81":1,"190":1}}],["2×5",{"2":{"81":2,"185":6}}],["2×32",{"2":{"118":2}}],["2×3×4×1",{"2":{"78":1}}],["2×3×1×1",{"2":{"78":1}}],["2×3",{"2":{"74":2}}],["2×2×1×1",{"2":{"82":1}}],["2×2",{"2":{"50":2,"76":1,"188":1}}],["2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀4⠀",{"2":{"58":1}}],["2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀",{"2":{"58":1}}],["2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀",{"2":{"58":12}}],["25\\ttrain",{"2":{"260":1}}],["252268265139791e",{"2":{"287":1}}],["2523938f",{"2":{"285":1}}],["2523668",{"2":{"168":1}}],["252781f",{"2":{"285":1}}],["25278285",{"2":{"196":1}}],["2527",{"2":{"282":1}}],["2520",{"2":{"229":1}}],["25263\\taccuracy",{"2":{"204":1}}],["252981",{"2":{"168":1}}],["2529244",{"2":{"168":1}}]
,["2585123352069816e",{"2":{"287":1}}],["2583",{"2":{"282":1}}],["2588",{"2":{"276":1}}],["258",{"2":{"260":7}}],["25842525f",{"2":{"285":1}}],["2584",{"2":{"229":1}}],["258473",{"2":{"147":1}}],["25860527",{"2":{"147":1}}],["2536",{"2":{"278":1}}],["2539",{"2":{"278":1}}],["2533830637509336e",{"2":{"287":1}}],["25338f",{"2":{"89":1}}],["253308f",{"2":{"285":1}}],["2533",{"2":{"229":1}}],["253364im",{"2":{"185":1}}],["253224",{"2":{"147":1}}],["250108141467968e",{"2":{"287":1}}],["2503962f",{"2":{"285":1}}],["2503847f",{"2":{"285":1}}],["250362",{"2":{"147":1}}],["2502",{"2":{"282":1}}],["25020984",{"2":{"147":1}}],["2505",{"2":{"282":1}}],["25055695",{"2":{"194":1,"195":1}}],["2506",{"2":{"278":1}}],["250",{"2":{"268":2,"274":1,"279":1,"284":1}}],["25001",{"2":{"253":1}}],["250818",{"2":{"147":1}}],["2542",{"2":{"282":1}}],["2545",{"2":{"276":1}}],["25483933",{"2":{"264":1}}],["2548422f0",{"2":{"58":1}}],["254",{"2":{"260":3}}],["2541723",{"2":{"147":1}}],["2512",{"2":{"278":1}}],["2519531",{"2":{"274":1}}],["251",{"2":{"225":1}}],["2516714",{"2":{"147":1}}],["2514698",{"2":{"147":1}}],["2550850082862792e",{"2":{"287":1}}],["255203",{"2":{"185":1}}],["25543177",{"2":{"147":1}}],["2553826",{"2":{"147":1}}],["2556785f",{"2":{"118":1}}],["259650",{"2":{"274":1}}],["25966",{"2":{"204":1}}],["25988",{"2":{"260":1}}],["259867",{"2":{"147":1}}],["259",{"2":{"260":1}}],["25926",{"2":{"234":3,"236":1}}],["259247",{"2":{"147":1}}],["25912565",{"2":{"147":1}}],["25911948",{"2":{"118":1}}],["2597034",{"2":{"147":1}}],["2571447035863186e",{"2":{"287":1}}],["2573255f",{"2":{"285":1}}],["257021",{"2":{"274":1}}],["25700033",{"2":{"147":1}}],["25723198",{"2":{"147":1}}],["257",{"2":{"89":1,"260":7}}],["25",{"2":{"76":2,"78":10,"85":17,"147":3,"188":1,"204":3,"213":3,"225":2,"245":40,"253":2,"260":4,"274":2,"278":1,"295":2}}],["2560985612034067e",{"2":{"287":1}}],["2561137f",{"2":{"285":1}}],["2569513",{"2":{"267":1}}],["256x128xf32>",{"2":{"147":2}}],["2564",{"2":{"147":1}}],["256
54957",{"2":{"147":1}}],["2567",{"2":{"278":1}}],["2567473",{"2":{"147":1}}],["25673f",{"2":{"89":1}}],["25662464",{"2":{"143":1}}],["256",{"2":{"47":1,"56":1,"86":2,"89":6,"147":2,"210":3,"243":2,"260":1,"263":1}}],["24\\ttrain",{"2":{"260":1}}],["24x24x6x4xi1>",{"2":{"147":2}}],["24x24x6x4xf32>",{"2":{"147":8}}],["243",{"2":{"282":1}}],["2433",{"2":{"282":1}}],["2436",{"2":{"229":1}}],["2431",{"2":{"192":1}}],["24321",{"2":{"147":1}}],["24344929",{"2":{"147":1}}],["24349861",{"2":{"147":1}}],["2411233f",{"2":{"285":1}}],["241428f",{"2":{"285":1}}],["2412109",{"2":{"274":1}}],["2416",{"2":{"229":1}}],["241526616127701e",{"2":{"287":1}}],["2415",{"2":{"276":1}}],["2415926",{"2":{"168":1}}],["24154694",{"2":{"147":1}}],["24199852",{"2":{"147":1}}],["247",{"2":{"260":3,"282":1}}],["24736\\taccuracy",{"2":{"204":1}}],["2470898",{"2":{"147":1}}],["24719",{"2":{"118":1}}],["248169734470736e",{"2":{"287":1}}],["24817711",{"2":{"147":1}}],["248057f",{"2":{"285":1}}],["248016",{"2":{"274":1}}],["24878",{"2":{"147":1}}],["24873069",{"2":{"147":1}}],["248654",{"2":{"147":1}}],["2483782",{"2":{"147":1}}],["246465",{"2":{"274":1}}],["246899\\ttrain",{"2":{"260":1}}],["24626082",{"2":{"147":1}}],["24695547",{"2":{"147":1}}],["24597",{"2":{"282":1}}],["245771\\ttrain",{"2":{"260":1}}],["24539067",{"2":{"147":1}}],["24503033",{"2":{"147":1}}],["2421457534822557e",{"2":{"287":1}}],["242492f",{"2":{"285":1}}],["242468",{"2":{"147":1}}],["242",{"2":{"260":3}}],["242777",{"2":{"147":1}}],["2427716f",{"2":{"118":1}}],["24203484",{"2":{"147":1}}],["24238436",{"2":{"147":1}}],["249626132759039e",{"2":{"287":1}}],["249485f",{"2":{"285":1}}],["2492",{"2":{"278":1}}],["2497",{"2":{"260":1}}],["24978368",{"2":{"147":1}}],["24991",{"2":{"147":1}}],["24997774",{"2":{"147":1}}],["24982567",{"2":{"147":1}}],["2441",{"2":{"282":1}}],["2440945",{"2":{"264":2}}],["244",{"2":{"260":6}}],["2449",{"2":{"253":2}}],["24490844",{"2":{"147":1}}],["24472847105479764",{"2":{"74":1}}],["244728",{"2":{"50":10
,"74":1}}],["24",{"2":{"76":2,"90":1,"132":2,"147":2,"204":3,"206":2,"213":1,"214":2,"238":3,"245":2,"246":3,"253":10,"255":2,"260":643,"261":2,"269":2,"274":3,"275":2,"295":4,"297":2}}],["240757279009177e",{"2":{"287":1}}],["24079",{"2":{"147":1}}],["2403401179566026e",{"2":{"287":1}}],["2409",{"2":{"278":1}}],["24001",{"2":{"253":1}}],["2402",{"2":{"192":1}}],["24045505",{"2":{"147":1}}],["24046357",{"2":{"147":1}}],["2408",{"2":{"276":1}}],["24080847",{"2":{"147":1}}],["24086508",{"2":{"147":1}}],["24063988",{"2":{"147":1}}],["2401375",{"2":{"147":1}}],["240",{"2":{"41":3}}],["2838407f",{"2":{"285":1}}],["2834571",{"2":{"118":1}}],["28\\ttrain",{"2":{"260":1}}],["284646729293942e",{"2":{"287":1}}],["2843574f",{"2":{"285":1}}],["2840",{"2":{"282":1}}],["284967",{"2":{"168":1}}],["284991",{"2":{"147":1}}],["2842",{"2":{"167":1}}],["2817",{"2":{"278":1}}],["2810",{"2":{"276":1}}],["2810528",{"2":{"147":1}}],["281516",{"2":{"274":1}}],["281978",{"2":{"264":1}}],["28131",{"2":{"260":1}}],["28114",{"2":{"147":1}}],["2869993f",{"2":{"285":1}}],["28691",{"2":{"118":1}}],["286811f",{"2":{"285":1}}],["2863",{"2":{"260":1}}],["2862",{"2":{"187":1}}],["2864517",{"2":{"168":1}}],["286642",{"2":{"147":1}}],["2857",{"2":{"260":5}}],["285777",{"2":{"147":1}}],["28524",{"2":{"147":1}}],["285972",{"2":{"147":1}}],["2873740367823404e",{"2":{"287":1}}],["287034787956824e",{"2":{"287":1}}],["287775663569804e",{"2":{"287":1}}],["2876113f",{"2":{"285":1}}],["28766",{"2":{"147":1}}],["287874f",{"2":{"285":1}}],["28780195",{"2":{"118":1}}],["288978",{"2":{"295":1}}],["2889",{"2":{"278":1}}],["2880859",{"2":{"274":1}}],["2880",{"2":{"240":1}}],["2880513",{"2":{"147":1}}],["288641",{"2":{"147":1}}],["28×28×1×4",{"2":{"147":1}}],["2896",{"2":{"278":1}}],["2896144",{"2":{"167":1}}],["289039",{"2":{"147":1}}],["28952244",{"2":{"147":1}}],["28993338",{"2":{"147":1}}],["28296",{"2":{"282":1}}],["28209427",{"2":{"147":1}}],["2821546",{"2":{"147":1}}],["28283697",{"2":{"118":1}}],["28288874",{"2"
:{"89":1}}],["2801",{"2":{"278":1}}],["2801026",{"2":{"264":1}}],["2807",{"2":{"278":1}}],["28001",{"2":{"253":1}}],["280658825955565e",{"2":{"287":1}}],["2806",{"2":{"253":1}}],["2806326",{"2":{"147":1}}],["280846",{"2":{"147":1}}],["28",{"2":{"47":4,"147":10,"210":2,"229":1,"234":3,"237":2,"241":4,"245":2,"260":122,"274":2,"278":3,"282":1}}],["21\\ttrain",{"2":{"260":1}}],["2177",{"2":{"295":1}}],["217717f",{"2":{"285":1}}],["2173",{"2":{"276":1}}],["217873236547994e",{"2":{"287":1}}],["2178",{"2":{"276":1}}],["2170",{"2":{"229":1}}],["21799661",{"2":{"147":1}}],["21796629",{"2":{"147":1}}],["21s",{"2":{"213":1}}],["2135",{"2":{"274":1}}],["2132",{"2":{"274":3}}],["2133",{"2":{"274":2}}],["2138",{"2":{"274":1}}],["2139",{"2":{"274":2}}],["21398924",{"2":{"147":1}}],["2131",{"2":{"274":2}}],["2136",{"2":{"274":5}}],["213618",{"2":{"147":1}}],["2137",{"2":{"274":2}}],["2130",{"2":{"229":1,"274":3}}],["21301",{"2":{"147":1}}],["2145",{"2":{"274":3}}],["2140",{"2":{"274":3}}],["2148",{"2":{"274":5}}],["2149378f",{"2":{"285":1}}],["2149",{"2":{"274":1}}],["2146",{"2":{"274":3}}],["21469435",{"2":{"147":1}}],["2147",{"2":{"274":4}}],["2147609",{"2":{"147":1}}],["2143",{"2":{"274":2}}],["2144",{"2":{"274":3}}],["2141",{"2":{"274":4}}],["2142",{"2":{"274":3}}],["2154",{"2":{"274":1}}],["21542016",{"2":{"147":1}}],["2155",{"2":{"274":1}}],["2156",{"2":{"274":3}}],["2150",{"2":{"274":3}}],["21510",{"2":{"282":1}}],["2151",{"2":{"274":3}}],["2157",{"2":{"274":4}}],["2158",{"2":{"274":4}}],["2159",{"2":{"274":2}}],["215",{"2":{"260":3}}],["2152",{"2":{"260":1,"274":2}}],["21524015",{"2":{"89":1}}],["2153",{"2":{"260":1,"274":3}}],["2122032456671046e",{"2":{"287":1}}],["2127",{"2":{"274":1}}],["21271591",{"2":{"147":1}}],["2128",{"2":{"274":1}}],["2129",{"2":{"274":2}}],["21295139",{"2":{"147":1}}],["212384f",{"2":{"285":1}}],["2123",{"2":{"274":1}}],["2124",{"2":{"274":1,"295":1}}],["2125984",{"2":{"264":2}}],["2125",{"2":{"260":2,"274":3,"282":1}}],["212",{"2":{"260":3}}],["
2126",{"2":{"187":1,"274":3}}],["21215f",{"2":{"89":1}}],["2108",{"2":{"278":1}}],["2108695",{"2":{"147":1}}],["21014",{"2":{"260":2}}],["21001",{"2":{"253":1}}],["21004838",{"2":{"147":1}}],["210",{"2":{"229":1,"237":18}}],["21096227",{"2":{"147":1}}],["2199",{"2":{"282":1}}],["219455132171401e",{"2":{"287":1}}],["2194",{"2":{"229":1}}],["2197981041108443",{"2":{"188":1}}],["2192001",{"2":{"168":1}}],["21957569",{"2":{"147":1}}],["21983564",{"2":{"118":1}}],["211551f",{"2":{"285":1}}],["2112",{"2":{"282":1}}],["21121861",{"2":{"147":1}}],["2114",{"2":{"278":1}}],["2111",{"2":{"278":1}}],["2111297",{"2":{"168":1}}],["21178",{"2":{"274":1}}],["21178308",{"2":{"147":1}}],["211",{"2":{"237":6}}],["211378",{"2":{"147":1}}],["218106335564008e",{"2":{"287":1}}],["2184350002970956e",{"2":{"287":1}}],["2184",{"2":{"282":1}}],["218031f",{"2":{"285":1}}],["2180",{"2":{"276":1}}],["2186722f",{"2":{"285":1}}],["2186",{"2":{"276":1}}],["21886098",{"2":{"147":1}}],["21833563",{"2":{"147":1}}],["21899778",{"2":{"147":1}}],["21854877",{"2":{"119":1}}],["2169",{"2":{"278":1}}],["21690917",{"2":{"196":1}}],["2161",{"2":{"274":3}}],["216502",{"2":{"274":1}}],["216",{"2":{"260":7}}],["2162",{"2":{"274":3,"282":1}}],["21623",{"2":{"229":1}}],["21624334",{"2":{"118":1}}],["21636114",{"2":{"147":1}}],["21634498",{"2":{"147":1}}],["21686329",{"2":{"147":1}}],["21672015",{"2":{"143":1}}],["21608135",{"2":{"89":1}}],["21",{"2":{"41":1,"76":2,"78":8,"95":1,"147":2,"197":1,"204":2,"206":1,"214":1,"227":1,"238":1,"245":3,"246":1,"253":2,"255":1,"260":18,"261":1,"269":1,"274":4,"275":1,"278":2,"280":1,"282":1,"289":1,"297":1}}],["20\\ttrain",{"2":{"260":1}}],["20930",{"2":{"274":1}}],["2093402",{"2":{"147":1}}],["20977046",{"2":{"264":1}}],["209",{"2":{"253":1}}],["2096",{"2":{"229":1}}],["209906",{"2":{"147":1}}],["208012920082947e",{"2":{"287":1}}],["20853f",{"2":{"285":1}}],["2083753100961748e",{"2":{"287":1}}],["208378",{"2":{"147":1}}],["2083",{"2":{"282":1}}],["2081",{"2":{"282":1}}],["208
2",{"2":{"276":1}}],["2088",{"2":{"276":1}}],["20881107",{"2":{"143":1}}],["2084",{"2":{"240":1}}],["20862025",{"2":{"147":1}}],["2048003336182092e",{"2":{"287":1}}],["20487879",{"2":{"147":1}}],["2046675f",{"2":{"285":1}}],["2043",{"2":{"278":1}}],["20436017",{"2":{"147":1}}],["20472442",{"2":{"264":2}}],["204039",{"2":{"253":1}}],["204033",{"2":{"253":1}}],["204028",{"2":{"253":1}}],["204024",{"2":{"253":1}}],["204020",{"2":{"253":1}}],["204016",{"2":{"253":1}}],["204012",{"2":{"253":1}}],["204007",{"2":{"253":1}}],["204003",{"2":{"253":1}}],["20448",{"2":{"147":1}}],["206293052628927e",{"2":{"287":1}}],["206204f",{"2":{"285":1}}],["20625",{"2":{"274":1}}],["2065",{"2":{"282":1}}],["20656037",{"2":{"147":1}}],["206",{"2":{"260":4}}],["2068",{"2":{"278":1}}],["206822",{"2":{"253":1}}],["206815",{"2":{"253":1}}],["206811",{"2":{"253":1}}],["206807",{"2":{"253":1}}],["206802",{"2":{"253":1}}],["20685",{"2":{"204":1}}],["206798",{"2":{"253":1}}],["206794",{"2":{"253":1}}],["206790",{"2":{"253":1}}],["206785",{"2":{"253":1}}],["206781",{"2":{"253":1}}],["206760",{"2":{"253":1}}],["2066",{"2":{"192":1}}],["20668766",{"2":{"167":1}}],["206476",{"2":{"185":1}}],["206943",{"2":{"147":1}}],["20694472",{"2":{"147":1}}],["20617373",{"2":{"147":1}}],["20636114",{"2":{"147":1}}],["205903",{"2":{"295":1}}],["2051990163880824e",{"2":{"287":1}}],["20511f",{"2":{"285":1}}],["205001286278172e",{"2":{"287":1}}],["2053907f",{"2":{"285":1}}],["2058",{"2":{"282":1}}],["20585205",{"2":{"147":1}}],["205519e",{"2":{"225":1}}],["205559",{"2":{"147":1}}],["20548",{"2":{"185":1}}],["20567945",{"2":{"147":1}}],["20561251",{"2":{"147":1}}],["2078617f",{"2":{"285":1}}],["2074",{"2":{"276":1}}],["20779805",{"2":{"147":1}}],["20753737",{"2":{"147":1}}],["207107",{"2":{"79":2}}],["2020",{"2":{"276":1}}],["20206891",{"2":{"147":1}}],["2021",{"2":{"276":1,"281":1}}],["2028",{"2":{"274":1}}],["2029",{"2":{"274":1}}],["20296505",{"2":{"147":1}}],["2024",{"2":{"238":1,"246":1}}],["20277858",{"2":{"196":
1}}],["20266858",{"2":{"147":1}}],["20265023",{"2":{"147":1}}],["2025",{"2":{"90":1,"197":1,"204":1,"206":1,"213":1,"214":1,"227":1,"238":1,"246":1,"253":8,"255":1,"260":114,"261":1,"269":1,"274":1,"275":1,"280":1,"289":1,"295":3,"297":1}}],["2023",{"2":{"71":2}}],["20388553650612e",{"2":{"287":1}}],["2037",{"2":{"282":1}}],["2036",{"2":{"282":1}}],["2039007f",{"2":{"285":1}}],["203999",{"2":{"253":1}}],["2039916195886688483",{"2":{"213":1}}],["203979",{"2":{"253":1}}],["2035",{"2":{"192":1}}],["2034",{"2":{"282":1}}],["20346",{"2":{"147":1}}],["20343f",{"2":{"89":1}}],["2031",{"2":{"278":1}}],["20310251",{"2":{"147":1}}],["20316577",{"2":{"147":1}}],["20314303",{"2":{"89":1}}],["2004",{"2":{"260":2}}],["2001",{"2":{"196":1,"253":1}}],["20033634",{"2":{"147":1}}],["20025302",{"2":{"147":1}}],["2002995",{"2":{"147":1}}],["2008716",{"2":{"147":1}}],["20071083",{"2":{"147":1}}],["20001",{"2":{"253":1}}],["20001543",{"2":{"196":1}}],["2000",{"2":{"81":1,"260":5,"276":1,"295":1}}],["200",{"2":{"81":1,"119":1,"237":18}}],["2009",{"2":{"58":1}}],["20061705",{"2":{"267":1}}],["2006",{"2":{"50":2,"83":1}}],["20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠀",{"2":{"58":1}}],["20",{"2":{"34":1,"41":3,"58":1,"73":2,"76":2,"81":3,"89":1,"133":1,"147":3,"167":1,"188":2,"196":3,"204":2,"232":5,"237":24,"245":2,"253":2,"260":2,"271":1,"274":2,"278":7,"295":4}}],["201",{"2":{"225":1,"237":18}}],["20192719",{"2":{"147":1}}],["201250",{"2":{"253":1}}],["201243",{"2":{"253":1}}],["201239",{"2":{"253":1}}],["201234",{"2":{"253":1}}],["201230",{"2":{"253":1}}],["201226",{"2":{"253":1}}],["201222",{"2":{"253":1}}],["201218",{"2":{"253":1}}],["201213",{"2":{"253":1}}],["201209",{"2":{"253":1}}],["2012",{"2":{"83":1}}],["2018",{"2":{"66":1}}],["201184",{"2":{"253":1}}],["20114",{"2":{"188":1}}],["201145",{"2":{"188":2}}],["2011",{"2":{"58":1}}],["2017526",{"2":{"147":1}}],["2017",{"2":{"50":2,"64":1,"295":1}}],["20167\\taccuracy",{"2":{"204":1}}],["2016",{"2":{"41":1,"50":2,"66":2,"78":1}}],["2014"
,{"2":{"15":1,"64":1}}],["2015",{"2":{"15":2,"66":1}}],["20101166",{"2":{"147":1}}],["2010",{"2":{"15":3}}],["23\\ttrain",{"2":{"260":1}}],["239868215126814e",{"2":{"287":1}}],["23987544",{"2":{"147":1}}],["2395",{"2":{"278":1}}],["239018905309913e",{"2":{"287":1}}],["2390",{"2":{"278":1}}],["239",{"2":{"276":1}}],["239494",{"2":{"274":1}}],["2393",{"2":{"229":1}}],["23992883",{"2":{"196":1}}],["23921251",{"2":{"167":1}}],["23893f",{"2":{"285":1}}],["2382",{"2":{"276":1}}],["238232",{"2":{"147":1}}],["238",{"2":{"260":2}}],["238426",{"2":{"147":1}}],["23847201",{"2":{"147":1}}],["2343750",{"2":{"274":1}}],["234",{"2":{"260":4}}],["2347",{"2":{"229":1}}],["234211",{"2":{"147":1}}],["23402624",{"2":{"118":1}}],["2316677721223581e",{"2":{"287":1}}],["23162602",{"2":{"147":1}}],["2314043f",{"2":{"285":1}}],["231",{"2":{"260":7}}],["23159832",{"2":{"147":1}}],["23192994",{"2":{"147":1}}],["2308",{"2":{"276":1,"278":1}}],["230853",{"2":{"274":1}}],["23084445",{"2":{"147":1}}],["23001",{"2":{"253":1}}],["2302",{"2":{"229":1}}],["230",{"2":{"201":1,"224":2,"231":1,"237":18,"250":1,"260":12}}],["23095",{"2":{"188":1}}],["230954",{"2":{"188":2}}],["23037",{"2":{"147":1}}],["2304493",{"2":{"147":1}}],["23049244",{"2":{"147":1}}],["2305717f",{"2":{"118":1}}],["232221386335118e",{"2":{"287":1}}],["23224601",{"2":{"147":1}}],["232324",{"2":{"274":1}}],["23234344",{"2":{"147":1}}],["232",{"2":{"260":11}}],["23242",{"2":{"167":1}}],["23258851",{"2":{"147":1}}],["2353",{"2":{"278":1}}],["2353516",{"2":{"274":1}}],["23534852",{"2":{"89":1}}],["235",{"2":{"260":13}}],["23598626",{"2":{"147":1}}],["23552088",{"2":{"147":1}}],["23564403",{"2":{"147":1}}],["233662342517915e",{"2":{"287":1}}],["2336775f",{"2":{"285":1}}],["2335",{"2":{"282":1}}],["23352",{"2":{"147":1}}],["2334",{"2":{"278":1}}],["233",{"2":{"260":11}}],["23389685",{"2":{"147":1}}],["23372321",{"2":{"147":1}}],["23395808",{"2":{"147":1}}],["23392133",{"2":{"118":1}}],["23325463",{"2":{"143":1}}],["236274638880334e",{"2":{
"287":1}}],["236221",{"2":{"274":1}}],["23622048",{"2":{"264":2}}],["2361865f",{"2":{"285":1}}],["2361472",{"2":{"143":1}}],["23641",{"2":{"274":1}}],["2365",{"2":{"260":1}}],["23608004",{"2":{"147":1}}],["23669082",{"2":{"147":1}}],["2363781",{"2":{"135":1}}],["2375",{"2":{"282":1}}],["23710957",{"2":{"155":1}}],["23715453",{"2":{"118":1}}],["2379414",{"2":{"147":1}}],["23791258",{"2":{"147":1}}],["23735927",{"2":{"147":1}}],["23707482",{"2":{"147":1}}],["2370692",{"2":{"147":1}}],["23703608",{"2":{"118":1}}],["23",{"2":{"34":2,"76":2,"78":8,"147":2,"204":3,"213":1,"245":2,"253":2,"260":31,"274":2,"278":1,"295":3}}],["2=dense",{"2":{"23":1}}],["2d",{"0":{"247":1},"1":{"248":1,"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1},"2":{"15":2,"35":3,"36":2,"37":3,"77":2,"86":4,"212":2,"247":1}}],["2",{"2":{"5":5,"15":5,"19":2,"22":5,"23":4,"25":8,"34":30,"35":9,"37":15,"38":11,"39":2,"40":15,"41":6,"42":3,"45":4,"47":4,"50":22,"56":13,"58":43,"73":3,"74":10,"75":4,"76":51,"77":11,"78":78,"79":11,"80":14,"81":13,"82":14,"84":2,"85":3,"86":19,"89":34,"90":5,"95":1,"96":1,"118":39,"119":5,"121":2,"126":21,"127":41,"132":1,"133":1,"143":3,"144":2,"145":2,"146":4,"147":58,"149":1,"150":1,"153":3,"154":12,"155":2,"158":5,"165":7,"166":4,"167":14,"168":13,"172":1,"173":6,"174":6,"185":8,"187":17,"188":20,"190":4,"191":3,"192":10,"193":1,"194":1,"199":3,"200":5,"201":1,"202":1,"204":3,"205":1,"206":2,"209":1,"210":18,"212":1,"213":2,"214":2,"223":6,"225":13,"226":5,"229":23,"230":1,"231":1,"232":1,"234":5,"236":1,"237":67,"238":3,"240":8,"241":1,"243":1,"245":4,"246":4,"251":4,"252":10,"254":4,"255":2,"258":1,"260":29,"261":2,"263":2,"264":14,"265":1,"267":6,"268":5,"269":2,"271":13,"272":1,"273":3,"274":3,"275":2,"276":44,"277":2,"278":34,"282":71,"283":50,"284":8,"285":95,"287":96,"291":4,"292":2,"293":4,"296":1,"297":2}}],["v=v",{"2":{"251":1}}],["vtav",{"2":{"169":1}}],["vvt",{"2":{"169":1}}],["v∈rd",{"2":{"169":1}}],["vscodeserver",{"2":{"274":1}}],["vscode",{"2":{"2
74":3}}],["vs",{"2":{"151":1,"225":1}}],["v0",{"2":{"52":5,"71":1,"114":1,"235":1}}],["voila",{"2":{"126":1}}],["volterra",{"2":{"223":2}}],["vol",{"2":{"50":1}}],["volumetric",{"2":{"50":1}}],["vocabulary",{"2":{"39":2}}],["vcat",{"2":{"38":1,"200":1,"258":1,"274":1,"283":1}}],["vae",{"0":{"270":1},"1":{"271":1,"272":1,"273":1,"274":1,"275":1}}],["vanilla",{"2":{"118":1}}],["var=dense",{"2":{"271":1}}],["various",{"2":{"86":1,"91":1,"100":1,"120":1,"221":1}}],["variational",{"2":{"270":1}}],["variationalhiddendropout",{"2":{"36":5}}],["variants",{"2":{"161":1}}],["variance",{"2":{"41":6,"64":2,"66":8,"278":1}}],["variable",{"2":{"56":2,"114":1,"119":1,"278":1}}],["variables",{"2":{"49":2,"56":1,"86":1,"89":1,"100":2,"250":1,"278":1,"283":1}}],["var",{"2":{"41":7,"56":1,"66":5,"84":1,"126":2,"127":2,"133":3,"143":1,"146":1,"165":2,"201":1,"204":2,"224":2,"231":1,"237":24,"250":1,"260":2,"271":1,"274":1}}],["validate",{"2":{"204":1}}],["validated",{"2":{"1":1}}],["validation",{"2":{"200":1,"204":51}}],["valid",{"2":{"41":1,"153":1,"181":1}}],["val",{"2":{"10":2,"24":3,"34":7,"36":1,"47":1,"50":5,"51":5,"52":4,"64":4,"66":2,"76":3,"84":2,"126":1,"127":2,"143":2,"146":2,"200":4,"201":1,"202":1,"204":2,"224":1,"237":13,"252":8,"254":3,"260":20}}],["valued",{"2":{"193":1}}],["value>",{"2":{"178":3}}],["value",{"2":{"3":1,"8":1,"10":2,"24":1,"30":2,"34":1,"38":5,"41":6,"50":6,"51":1,"52":3,"56":6,"64":1,"66":4,"73":3,"75":1,"76":1,"79":1,"81":1,"85":1,"86":5,"97":1,"127":1,"166":2,"196":16,"200":1,"250":1,"253":4,"278":1,"279":4}}],["valuestorage",{"2":{"237":3}}],["values",{"2":{"3":2,"28":2,"42":1,"49":4,"50":1,"52":1,"53":4,"76":6,"81":4,"82":6,"83":1,"86":3,"267":1,"271":1}}],["v",{"2":{"18":2,"41":1,"50":1,"51":3,"73":4,"107":1,"170":5,"171":5,"172":10,"193":4,"194":2,"195":1,"250":4,"251":10,"283":21}}],["vjp",{"0":{"18":1},"2":{"18":1,"170":7,"172":16,"192":3,"195":4,"212":2,"268":3}}],["visualization",{"0":{"279":1}}],["visualize",{"2":{"264":1,"279":1,"288":1,"29
1":1}}],["visualizing",{"0":{"254":1,"288":1,"296":1}}],["vision",{"2":{"15":2,"50":4,"66":1}}],["virtual",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["vi",{"2":{"169":1}}],["vitj",{"2":{"169":1}}],["video",{"2":{"78":1}}],["victor",{"2":{"41":1,"66":1}}],["viewaxis",{"2":{"237":68}}],["views",{"2":{"231":2,"251":1,"283":1}}],["view",{"2":{"40":4,"86":3,"190":1,"273":1,"278":1,"279":1}}],["via",{"0":{"147":1},"2":{"3":1,"5":2,"6":1,"7":1,"15":1,"26":1,"49":2,"54":2,"65":1,"91":2,"100":2,"114":1,"115":1,"117":1,"118":1,"124":1,"137":1,"140":1,"148":1,"191":1,"193":4,"194":1,"278":1}}],["vec",{"2":{"80":6,"172":2,"201":1,"202":1,"231":4,"242":1,"252":1,"254":1}}],["vectorizationbase",{"2":{"282":1}}],["vectorization",{"0":{"184":1},"2":{"67":1,"162":1,"184":1}}],["vectorize",{"2":{"60":1}}],["vectors",{"2":{"38":1,"39":2,"41":3,"74":1,"80":2,"81":1,"169":1,"191":6,"276":1}}],["vector",{"0":{"170":1,"171":1,"194":1,"195":1},"2":{"15":2,"18":9,"25":2,"38":16,"39":5,"50":3,"56":2,"58":1,"62":1,"65":1,"74":1,"80":3,"81":7,"86":7,"89":9,"95":1,"100":1,"133":2,"164":2,"169":4,"170":1,"171":1,"188":4,"191":2,"192":5,"193":1,"194":3,"195":1,"196":1,"205":1,"224":1,"231":3,"234":1,"271":1,"272":1,"273":1,"278":4,"279":1,"283":1,"293":1}}],["vendor",{"2":{"67":1}}],["vedaldi",{"2":{"41":1,"66":1}}],["verbatim",{"2":{"56":1}}],["verified",{"2":{"172":1}}],["verification",{"2":{"8":1}}],["verify",{"2":{"52":1,"165":1,"166":1,"167":1,"168":1,"172":1}}],["versioninfo",{"2":{"197":3,"206":3,"214":3,"227":3,"238":3,"246":3,"255":3,"261":3,"269":3,"275":3,"280":3,"289":3,"297":3}}],["versioning",{"2":{"165":2}}],["versions",{"2":{"114":1,"162":1,"184":2}}],["version",{"2":{"22":1,"49":2,"52":1,"56":1,"58":2,"68":3,"71":1,"78":4,"81":2,"86":1,"88":1,"107":1,"131":1,"153":1,"170":1,"197":1,"206":1,"214":1,"220":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["very",{"2":{"7":1,"39
":1,"58":1,"89":1,"100":1,"118":1,"125":1,"126":1,"127":2,"173":1,"174":1,"187":1,"188":1,"283":1,"287":1}}],["v1",{"0":{"102":1},"1":{"103":1,"104":1,"105":1,"106":1,"107":1,"108":1,"109":1,"110":1,"111":1,"112":1,"113":1,"114":1,"115":1,"116":1,"117":1},"2":{"7":1,"35":2,"52":5,"88":4,"102":2,"105":1}}],["||",{"2":{"86":4,"90":1,"119":1,"196":1,"253":1,"268":1,"274":2,"295":2}}],["|>",{"2":{"74":1,"80":9,"86":2,"89":2,"90":2,"118":5,"119":2,"147":2,"149":1,"150":3,"163":2,"196":1,"204":3,"212":2,"224":1,"225":2,"226":1,"232":2,"234":1,"242":2,"245":3,"253":3,"260":2,"267":1,"268":1,"272":1,"273":3,"274":4,"285":1,"293":1,"295":3}}],["|x|",{"2":{"58":1}}],["|y^−y|",{"2":{"50":1}}],["|y−y^|−0",{"2":{"50":1}}],["|y−y^|≤δδ∗",{"2":{"50":1}}],["|p|−di×",{"2":{"35":1,"37":3}}],["|",{"2":{"3":4,"112":1,"126":5,"127":13,"149":1,"254":1}}],["x~",{"2":{"279":1}}],["x~|θ",{"2":{"279":1}}],["x~|x",{"2":{"279":1}}],["xml2",{"2":{"276":1,"282":1}}],["xz",{"2":{"276":1,"282":1}}],["xorg",{"2":{"276":8,"282":8}}],["xoshiro",{"2":{"15":3,"25":1,"56":5,"126":1,"143":2,"158":1,"191":3,"245":1,"274":1}}],["x∈",{"2":{"252":1}}],["xyt",{"2":{"251":17,"252":18,"253":10}}],["xi∈rn",{"2":{"196":1}}],["xi",{"2":{"196":2}}],["xᵢ",{"2":{"119":4}}],["xᵢ^p",{"2":{"75":1}}],["xdev",{"2":{"118":5,"119":3,"248":1,"253":3,"256":1,"260":2,"267":2,"268":3,"270":1,"274":4,"290":1,"295":2}}],["x=",{"2":{"96":1}}],["xrot",{"2":{"86":3}}],["xlabel=",{"2":{"254":1,"264":1,"268":1,"277":1,"284":1,"285":1,"288":2,"291":1}}],["xla",{"0":{"70":1,"92":1},"2":{"90":4,"92":1,"118":2,"204":3,"208":1,"213":3,"253":17,"260":229,"274":3,"295":7}}],["xlogy",{"2":{"51":1}}],["xlogx",{"2":{"51":1}}],["xt0s",{"2":{"277":5}}],["xt1s",{"2":{"277":5}}],["xtrans",{"2":{"276":1,"282":1}}],["xt",{"2":{"66":2,"134":1,"135":1}}],["x86",{"2":{"65":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["xslt",{"2":{"276":1,"282":1}}],["xs",{"2":{"58":2,"67":2,"74":1,"254":6,"
277":1,"278":3}}],["x3",{"2":{"34":1}}],["x3c",{"2":{"3":3,"4":2,"7":2,"28":3,"35":4,"38":2,"63":1,"65":1,"78":1,"80":2,"86":5,"132":2,"134":2,"147":111,"153":1,"154":1,"174":1,"178":6,"193":1,"201":1,"225":1,"231":2,"235":2,"237":2,"250":1,"253":2,"260":1,"271":3,"272":1,"283":2,"292":5,"293":2}}],["x2s",{"2":{"277":8}}],["x2",{"2":{"34":3,"277":2,"279":18}}],["x265",{"2":{"276":1,"282":1}}],["x264",{"2":{"276":1,"282":1}}],["x26",{"2":{"25":6,"71":1,"119":2,"144":2,"145":4,"149":1,"197":4,"206":4,"214":4,"225":2,"227":4,"238":4,"246":4,"253":4,"255":4,"261":4,"269":4,"274":2,"275":4,"280":4,"283":4,"289":4,"291":2,"293":2,"295":4,"297":4}}],["x1s",{"2":{"277":8}}],["x1",{"2":{"34":3,"277":2,"279":18}}],["xavier",{"2":{"15":4}}],["x",{"2":{"3":12,"5":12,"7":3,"8":12,"11":2,"15":7,"18":6,"19":10,"34":24,"35":23,"36":9,"37":27,"38":20,"39":15,"40":30,"41":13,"42":14,"45":2,"47":2,"50":2,"51":18,"52":16,"55":2,"56":31,"58":151,"60":10,"61":2,"62":10,"63":3,"64":8,"65":3,"66":26,"67":1,"73":3,"74":14,"75":21,"76":26,"77":17,"78":28,"80":3,"82":7,"84":25,"86":43,"89":4,"90":14,"96":5,"118":14,"119":3,"126":4,"127":10,"130":3,"132":9,"133":2,"134":9,"138":3,"143":3,"146":2,"147":10,"149":4,"150":9,"153":6,"154":5,"155":7,"158":3,"161":3,"163":5,"164":6,"165":10,"166":12,"168":8,"170":2,"171":2,"172":9,"173":6,"174":10,"188":7,"189":2,"190":4,"193":7,"194":5,"195":1,"196":11,"200":6,"201":8,"202":8,"203":2,"204":4,"209":6,"211":2,"212":4,"223":4,"230":6,"231":10,"233":2,"234":2,"235":2,"237":13,"241":6,"242":2,"244":2,"245":2,"252":7,"254":1,"258":10,"259":2,"264":9,"268":9,"271":12,"273":7,"274":6,"277":1,"278":5,"279":4,"283":6,"287":2,"291":5,"292":13,"293":14,"294":4,"295":4}}],["3\\ttrain",{"2":{"260":1}}],["3rd",{"0":{"220":1},"2":{"167":1}}],["378",{"2":{"282":1}}],["3781595",{"2":{"264":1}}],["375",{"2":{"276":1}}],["375862",{"2":{"147":1}}],["371",{"2":{"282":1}}],["3711",{"2":{"278":1}}],["371470",{"2":{"274":1}}],["3719285",{"2":{"168":1}}],["3749",{"2":{"282":
1}}],["3741",{"2":{"276":1}}],["37486005",{"2":{"264":1}}],["374202\\ttrain",{"2":{"260":1}}],["3742405e",{"2":{"196":1}}],["3736",{"2":{"278":1}}],["3734",{"2":{"278":1}}],["373",{"2":{"260":4,"276":3}}],["373847",{"2":{"147":1}}],["3708",{"2":{"278":1}}],["3700787",{"2":{"264":2}}],["37001",{"2":{"253":1}}],["37037",{"2":{"234":3}}],["3707023",{"2":{"165":1}}],["372412246981049e",{"2":{"287":1}}],["3723640324634174e",{"2":{"287":1}}],["372",{"2":{"229":1,"276":1,"282":1}}],["3729237",{"2":{"167":1}}],["376099",{"2":{"274":1}}],["376",{"2":{"225":1,"260":2}}],["376386",{"2":{"168":1}}],["37",{"2":{"187":1,"213":1,"234":5,"236":2,"245":2,"260":10,"274":2,"278":2,"295":1}}],["3796747490796324e",{"2":{"287":1}}],["379",{"2":{"276":1}}],["37979",{"2":{"147":1}}],["3798853",{"2":{"147":1}}],["37745",{"2":{"147":1}}],["3777897",{"2":{"147":1}}],["37783703",{"2":{"119":1}}],["377829f",{"2":{"118":1}}],["361868910225326e",{"2":{"287":1}}],["3617685f",{"2":{"285":1}}],["361",{"2":{"276":1,"282":1}}],["3610642",{"2":{"267":1}}],["3613516",{"2":{"264":1}}],["361142",{"2":{"147":1}}],["3644614f",{"2":{"285":1}}],["364984f",{"2":{"285":1}}],["3642",{"2":{"278":1}}],["36428708",{"2":{"165":1}}],["3643",{"2":{"276":1}}],["364877\\ttrain",{"2":{"260":1}}],["36485",{"2":{"260":2}}],["366520085372586e",{"2":{"287":1}}],["3666626f",{"2":{"285":1}}],["3663",{"2":{"278":1}}],["3667717",{"2":{"264":1}}],["366",{"2":{"229":1,"276":3}}],["3626",{"2":{"276":1}}],["36220473",{"2":{"264":2}}],["362",{"2":{"229":1}}],["36859601190498e",{"2":{"287":1}}],["3683410436090745e",{"2":{"287":1}}],["368685f",{"2":{"285":1}}],["3684947f",{"2":{"285":1}}],["368",{"2":{"276":3}}],["3689325",{"2":{"264":1}}],["3688",{"2":{"253":3}}],["368127e",{"2":{"225":1}}],["36829436",{"2":{"196":1}}],["36315868151055e",{"2":{"287":1}}],["363001f",{"2":{"285":1}}],["363",{"2":{"282":1}}],["36367\\taccuracy",{"2":{"204":1}}],["36348",{"2":{"147":1}}],["36",{"2":{"204":1,"245":2,"260":10,"274":2,"295":1}}],["3698266478
16108e",{"2":{"287":1}}],["369528854202042e",{"2":{"287":1}}],["3695437",{"2":{"147":1}}],["369725f",{"2":{"285":1}}],["3692395f",{"2":{"285":1}}],["3690",{"2":{"282":1}}],["369",{"2":{"229":1}}],["3691778381831775",{"2":{"191":1}}],["369178",{"2":{"188":2}}],["36918",{"2":{"188":1}}],["360",{"2":{"276":1}}],["36056",{"2":{"274":1}}],["36001",{"2":{"253":1}}],["360745",{"2":{"185":1}}],["36023283",{"2":{"167":1}}],["36027783",{"2":{"118":1}}],["365927299203859e",{"2":{"287":1}}],["365063374261197e",{"2":{"287":1}}],["365",{"2":{"276":1}}],["3654292",{"2":{"167":1}}],["365734",{"2":{"147":1}}],["367264588929118e",{"2":{"287":1}}],["367246",{"2":{"147":1}}],["367465193983327e",{"2":{"287":1}}],["36745527",{"2":{"264":1}}],["3677445f",{"2":{"285":1}}],["367747f",{"2":{"118":1}}],["367604",{"2":{"147":1}}],["354076315468418e",{"2":{"287":1}}],["35452",{"2":{"147":1}}],["3505",{"2":{"282":1}}],["35001",{"2":{"253":1}}],["35005f",{"2":{"118":1}}],["355",{"2":{"276":1,"282":1}}],["355299",{"2":{"147":1}}],["35299227",{"2":{"264":1}}],["3523193",{"2":{"196":1}}],["3539435f",{"2":{"285":1}}],["3537",{"2":{"282":1}}],["353753",{"2":{"147":1}}],["353",{"2":{"260":3,"276":1}}],["357",{"2":{"229":1,"260":3,"282":1}}],["3574",{"2":{"229":1}}],["35",{"2":{"213":1,"229":1,"238":2,"245":2,"246":2,"260":12,"274":2,"278":1,"282":1,"295":1}}],["35938",{"2":{"204":1}}],["3595447f",{"2":{"118":1}}],["3519s\\ttraining",{"2":{"234":1}}],["3519",{"2":{"229":1}}],["351",{"2":{"225":1,"260":3,"276":3,"282":1}}],["35149138733595564",{"2":{"191":4}}],["351491",{"2":{"188":1}}],["351028",{"2":{"185":1}}],["35181466",{"2":{"143":1}}],["35638645",{"2":{"167":1}}],["35652733",{"2":{"147":1}}],["3588",{"2":{"282":1}}],["358",{"2":{"260":3,"276":1}}],["35817",{"2":{"188":1}}],["3585896f",{"2":{"118":1}}],["35837248",{"2":{"89":1}}],["312790183742453e",{"2":{"287":1}}],["3127225f",{"2":{"285":1}}],["3129",{"2":{"282":1}}],["313870474872496e",{"2":{"287":1}}],["3130923f",{"2":{"285":1}}],["3136",{"2":{
"260":1}}],["3198",{"2":{"282":1}}],["3194",{"2":{"282":1}}],["3182815518693234e",{"2":{"287":1}}],["318407908821865e",{"2":{"287":1}}],["3185188f",{"2":{"285":1}}],["318028f",{"2":{"285":1}}],["3181444f",{"2":{"285":1}}],["3181",{"2":{"278":1}}],["3186",{"2":{"276":1}}],["31861955",{"2":{"165":1}}],["3188",{"2":{"276":1}}],["3112",{"2":{"276":1}}],["317887030791724e",{"2":{"287":1}}],["3170407f",{"2":{"285":1}}],["317928",{"2":{"274":1}}],["31730",{"2":{"204":1}}],["31040",{"2":{"274":1}}],["3103005",{"2":{"267":1}}],["31001",{"2":{"253":1}}],["3108878",{"2":{"167":1}}],["310851790191412072",{"2":{"90":1}}],["31",{"2":{"147":2,"213":4,"245":2,"260":16,"274":2,"276":1,"278":1,"282":1,"295":1}}],["315277203181784e",{"2":{"287":1}}],["315267",{"2":{"147":1}}],["3155",{"2":{"276":1}}],["31578436",{"2":{"264":1}}],["315031",{"2":{"147":1}}],["316887",{"2":{"295":1}}],["316060460946486e",{"2":{"287":1}}],["3160548f",{"2":{"285":1}}],["31615327877986e",{"2":{"287":1}}],["31615f",{"2":{"118":1}}],["316324f",{"2":{"285":1}}],["3164",{"2":{"276":1}}],["316552",{"2":{"147":1}}],["31697956",{"2":{"147":1}}],["314014654576705e",{"2":{"287":1}}],["314001f",{"2":{"285":1}}],["314",{"2":{"165":2}}],["3143208",{"2":{"147":1}}],["3149016",{"2":{"147":1}}],["31473723",{"2":{"89":1}}],["348735",{"2":{"295":1}}],["3487583f",{"2":{"285":1}}],["34899f",{"2":{"285":1}}],["348267",{"2":{"147":1}}],["349336258699577e",{"2":{"287":1}}],["3490",{"2":{"282":1}}],["3498",{"2":{"278":1}}],["349",{"2":{"276":2}}],["34999675",{"2":{"267":1}}],["3465",{"2":{"282":1}}],["34652048",{"2":{"267":1}}],["3468",{"2":{"276":1,"282":1}}],["346",{"2":{"260":11,"276":1}}],["3463",{"2":{"253":2}}],["34001",{"2":{"253":1}}],["3408",{"2":{"187":1}}],["3471",{"2":{"282":1}}],["34717593",{"2":{"196":1}}],["3478",{"2":{"253":3,"276":1}}],["3470s\\ttraining",{"2":{"234":1}}],["341700104606488e",{"2":{"287":1}}],["34179",{"2":{"229":1}}],["341451f",{"2":{"285":1}}],["341854f",{"2":{"285":1}}],["3410565",{"2":{"154":1
}}],["345266",{"2":{"295":1}}],["3456563f",{"2":{"285":1}}],["345",{"2":{"260":10,"276":1}}],["3459356238423456e",{"2":{"287":1}}],["3459",{"2":{"229":1}}],["345026",{"2":{"168":1}}],["344276f",{"2":{"285":1}}],["344",{"2":{"229":1,"282":1}}],["3444",{"2":{"192":1}}],["34457564",{"2":{"147":1}}],["34254",{"2":{"188":1}}],["34253937",{"2":{"188":1}}],["342539",{"2":{"188":2}}],["342532",{"2":{"147":1}}],["343452340083523e",{"2":{"287":1}}],["3430977f",{"2":{"285":1}}],["3431",{"2":{"282":1}}],["34351700231383653",{"2":{"191":1}}],["34336",{"2":{"147":1}}],["34383082",{"2":{"143":1}}],["34",{"2":{"77":2,"213":2,"234":1,"245":2,"260":7,"263":1,"274":2,"295":1}}],["384012527694463e",{"2":{"287":1}}],["384433283804282e",{"2":{"287":1}}],["3844414",{"2":{"264":1}}],["3842498f",{"2":{"285":1}}],["384344f",{"2":{"285":1}}],["384",{"2":{"276":2}}],["3869",{"2":{"295":1}}],["3860414f",{"2":{"285":1}}],["386",{"2":{"229":1,"260":3,"282":2}}],["3878",{"2":{"276":1}}],["3875027",{"2":{"264":1}}],["387",{"2":{"229":1,"282":1}}],["38s",{"2":{"213":1}}],["380266433500444e",{"2":{"287":1}}],["380525838099007e",{"2":{"287":1}}],["380421f",{"2":{"285":1}}],["380",{"2":{"276":2}}],["3800323",{"2":{"267":1}}],["38001",{"2":{"253":1}}],["3803",{"2":{"229":1}}],["38036",{"2":{"204":1}}],["380777f0",{"2":{"166":1}}],["380776f0",{"2":{"165":1}}],["3885872f",{"2":{"285":1}}],["3886",{"2":{"278":1}}],["388",{"2":{"276":3}}],["388315",{"2":{"185":1}}],["38889",{"2":{"78":1}}],["389825789777324e",{"2":{"287":1}}],["3895364f",{"2":{"285":1}}],["389",{"2":{"276":1}}],["389748",{"2":{"147":1}}],["389206",{"2":{"147":1}}],["3821",{"2":{"276":1}}],["3828125",{"2":{"274":3}}],["38241944",{"2":{"168":1}}],["382491",{"2":{"147":1}}],["382574",{"2":{"147":1}}],["38355059160567e",{"2":{"287":1}}],["3836946f",{"2":{"285":1}}],["383414f",{"2":{"285":1}}],["383",{"2":{"276":1}}],["383347\\ttrain",{"2":{"260":1}}],["383948",{"2":{"147":1}}],["383777",{"2":{"147":1}}],["385739248055888e",{"2":{"287":1}}],["38
50",{"2":{"282":1}}],["3850993",{"2":{"143":1}}],["385",{"2":{"229":1,"276":1}}],["385878e",{"2":{"225":1}}],["385454",{"2":{"147":1}}],["381259f",{"2":{"285":1}}],["381158854147211e",{"2":{"287":1}}],["3811",{"2":{"276":1}}],["381141",{"2":{"147":1}}],["3818",{"2":{"278":1}}],["3818359",{"2":{"274":1}}],["38187847",{"2":{"143":1}}],["381",{"2":{"229":1,"276":1}}],["381774",{"2":{"185":1}}],["38",{"2":{"77":2,"245":10,"260":5,"274":2,"278":3,"295":1}}],["3⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀3⠀",{"2":{"58":3}}],["3⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀",{"2":{"58":1}}],["327073429994369e",{"2":{"287":1}}],["3273107f",{"2":{"285":1}}],["3276",{"2":{"278":1}}],["32m\\u001b",{"2":{"276":1,"282":1}}],["325605743832841e",{"2":{"287":1}}],["325653",{"2":{"204":1}}],["325071008673865e",{"2":{"287":1}}],["3257634f",{"2":{"285":1}}],["325159f",{"2":{"285":1}}],["3252",{"2":{"282":1}}],["325",{"2":{"276":1}}],["3234898220635266e",{"2":{"287":1}}],["323332f",{"2":{"285":1}}],["323",{"2":{"276":1}}],["32322538",{"2":{"143":1}}],["322638002478898e",{"2":{"287":1}}],["3227635f",{"2":{"285":1}}],["3227286",{"2":{"264":1}}],["322",{"2":{"276":2}}],["3222656",{"2":{"274":1}}],["3224781f",{"2":{"118":1}}],["324",{"2":{"260":1,"274":1,"276":1}}],["324167",{"2":{"147":1}}],["321295036392848e",{"2":{"287":1}}],["3212891",{"2":{"274":1}}],["3211414f",{"2":{"285":1}}],["3215",{"2":{"282":1}}],["321506",{"2":{"185":1}}],["321",{"2":{"237":6,"276":1}}],["32001",{"2":{"253":1}}],["320",{"2":{"237":6}}],["329490981379316e",{"2":{"287":1}}],["329333f",{"2":{"285":1}}],["329390",{"2":{"274":1}}],["3297678f",{"2":{"285":1}}],["3299",{"2":{"229":1}}],["32968\\taccuracy",{"2":{"204":1}}],["329143",{"2":{"147":1}}],["32983455",{"2":{"89":1}}],["326113606052651e",{"2":{"287":1}}],["3262465f",{"2":{"285":1}}],["32623518",{"2":{"143":1}}],["326315",{"2":{"274":1}}],["326",{"2":{"225":1}}],["32698",{"2":{"147":1}}],["32651654",{"2":{"147":1}}],["32649475",{"2":{"118":1}}],["3266256",{"2":{"118":1}}],[
"3285",{"2":{"295":1}}],["3285531315531493e",{"2":{"287":1}}],["3282",{"2":{"278":1}}],["328268",{"2":{"118":1}}],["3289",{"2":{"276":1}}],["328",{"2":{"229":1,"260":2}}],["32838f",{"2":{"89":1}}],["32",{"2":{"45":2,"56":2,"80":10,"90":13,"118":13,"119":2,"126":1,"127":1,"147":4,"197":1,"210":2,"223":1,"225":4,"227":1,"241":1,"243":2,"245":2,"253":1,"260":1,"265":1,"274":2,"276":1,"278":1,"280":1,"285":8,"289":1,"295":1}}],["3=dense",{"2":{"23":1}}],["3dv",{"2":{"50":1}}],["3d",{"2":{"19":1,"50":1,"66":1,"77":4,"86":5,"166":1,"204":1,"245":2,"260":1,"268":1}}],["3088836405277338e",{"2":{"287":1}}],["308156f",{"2":{"285":1}}],["308743f",{"2":{"285":1}}],["3085938",{"2":{"274":1}}],["30828551671183e",{"2":{"287":1}}],["3082",{"2":{"229":1}}],["30316323",{"2":{"264":1}}],["302074f",{"2":{"285":1}}],["3028",{"2":{"282":1}}],["3025",{"2":{"278":1}}],["3027344",{"2":{"274":1}}],["302",{"2":{"260":3}}],["30215",{"2":{"188":1}}],["309344717396738e",{"2":{"287":1}}],["3096",{"2":{"282":1}}],["3095950067347489316",{"2":{"204":1}}],["3097294",{"2":{"196":1}}],["309233",{"2":{"147":1}}],["307632537657983e",{"2":{"287":1}}],["3074916f",{"2":{"285":1}}],["307147f",{"2":{"285":1}}],["3071294",{"2":{"168":1}}],["3070866",{"2":{"264":2}}],["307010141362492e",{"2":{"287":1}}],["30701",{"2":{"204":1}}],["30797",{"2":{"260":1}}],["30733",{"2":{"192":1}}],["3077",{"2":{"187":1}}],["3063251135287365e",{"2":{"287":1}}],["3066",{"2":{"278":1}}],["3066406",{"2":{"274":1}}],["306641",{"2":{"185":1}}],["3068",{"2":{"276":1}}],["30674052",{"2":{"196":1}}],["3060458f",{"2":{"285":1}}],["3060",{"2":{"187":1}}],["3048",{"2":{"278":1}}],["30481228",{"2":{"147":1}}],["3040",{"2":{"276":1}}],["304",{"2":{"260":6,"276":1,"282":1}}],["304635",{"2":{"147":1}}],["305216318208326e",{"2":{"287":1}}],["30521\\taccuracy",{"2":{"204":1}}],["3052518f",{"2":{"285":1}}],["305874",{"2":{"274":1}}],["305844",{"2":{"167":1,"168":1}}],["30573",{"2":{"147":1}}],["30556",{"2":{"78":1}}],["301961504875446e",{"2":{"287
":1}}],["301137882728573e",{"2":{"287":1}}],["30112675",{"2":{"118":1}}],["301449155436491e",{"2":{"287":1}}],["301429",{"2":{"147":1}}],["3010124f",{"2":{"285":1}}],["30172",{"2":{"147":1}}],["3013572",{"2":{"147":1}}],["301",{"2":{"89":1,"225":1}}],["30052654065131e",{"2":{"287":1}}],["300263f",{"2":{"285":1}}],["300629",{"2":{"274":1}}],["3001",{"2":{"196":1,"253":1}}],["30001",{"2":{"253":1}}],["30000",{"2":{"81":1}}],["3000",{"2":{"81":1,"295":1}}],["300",{"2":{"81":1,"119":1}}],["30",{"2":{"15":1,"64":1,"73":2,"147":2,"245":4,"260":3,"274":2}}],["3×6×1",{"2":{"78":1}}],["3×3×3",{"2":{"76":1}}],["3×3×1×1",{"2":{"15":1,"79":3}}],["3×3",{"2":{"76":4}}],["3×2×1×1",{"2":{"82":2}}],["3×2",{"2":{"56":2}}],["3×5",{"2":{"50":4,"56":1,"81":1}}],["3×7",{"2":{"5":2}}],["3×13",{"2":{"5":4}}],["337",{"2":{"276":1,"282":1}}],["337320",{"2":{"274":1}}],["3379242",{"2":{"147":1}}],["337936",{"2":{"118":1}}],["332449584825098e",{"2":{"287":1}}],["3327632552891016e",{"2":{"287":1}}],["3325788f",{"2":{"285":1}}],["3329135f",{"2":{"285":1}}],["332801040283146e",{"2":{"287":1}}],["3328961f",{"2":{"285":1}}],["33281517",{"2":{"267":1}}],["332",{"2":{"276":1}}],["3320",{"2":{"276":1}}],["33232",{"2":{"229":1}}],["330812336045713e",{"2":{"287":1}}],["330157224740251e",{"2":{"287":1}}],["330723f",{"2":{"285":1}}],["33070865",{"2":{"264":2}}],["33001",{"2":{"253":1}}],["33091125",{"2":{"118":1}}],["334891794723312e",{"2":{"287":1}}],["334861150363281e",{"2":{"287":1}}],["334953718427553e",{"2":{"287":1}}],["334923f",{"2":{"285":1}}],["334508f",{"2":{"285":1}}],["3345535f",{"2":{"285":1}}],["334f",{"2":{"285":1}}],["3341",{"2":{"282":1}}],["334205898848534e",{"2":{"287":1}}],["334287866120473e",{"2":{"287":1}}],["3342",{"2":{"282":1}}],["3343",{"2":{"276":1}}],["3346",{"2":{"229":1}}],["3344336",{"2":{"168":1}}],["335427362644808e",{"2":{"287":1}}],["3352965f",{"2":{"285":1}}],["3350",{"2":{"282":1}}],["3358",{"2":{"276":1}}],["3351",{"2":{"229":1}}],["3357",{"2":{"229":1}}],["3359975f",
{"2":{"118":1}}],["3393624949406e",{"2":{"287":1}}],["339904254855787e",{"2":{"287":1}}],["339651894155353e",{"2":{"287":1}}],["339667f",{"2":{"285":1}}],["339",{"2":{"276":1}}],["3391",{"2":{"229":1}}],["339081889086414e",{"2":{"287":1}}],["339081",{"2":{"168":1}}],["339815",{"2":{"147":1}}],["3335",{"2":{"282":1}}],["333503",{"2":{"147":1}}],["3331",{"2":{"282":1}}],["33335",{"2":{"147":1}}],["33333",{"2":{"78":2,"85":10,"234":15,"236":2}}],["3333333333333335",{"2":{"278":1}}],["3333333333333333",{"2":{"84":1}}],["333333",{"2":{"58":2,"74":3}}],["3316176",{"2":{"264":1}}],["3316886f0",{"2":{"58":1}}],["3313",{"2":{"229":1,"282":1}}],["331483",{"2":{"147":1}}],["3363196f",{"2":{"285":1}}],["3363",{"2":{"282":1}}],["3369141",{"2":{"274":1}}],["336721",{"2":{"147":1}}],["3367505",{"2":{"143":1}}],["336659",{"2":{"147":1}}],["338275f",{"2":{"285":1}}],["338237f",{"2":{"285":1}}],["33821836",{"2":{"118":1}}],["338620955923004e",{"2":{"287":1}}],["3386925f",{"2":{"285":1}}],["338672",{"2":{"147":1}}],["3384s",{"2":{"274":1}}],["3385826",{"2":{"264":2}}],["3385",{"2":{"187":1,"253":1}}],["33802274",{"2":{"118":1}}],["33",{"2":{"5":1,"89":2,"90":1,"213":2,"245":2,"260":5,"274":2,"278":2,"282":1,"295":1}}],["3965558726716256e",{"2":{"287":1}}],["3967105f",{"2":{"285":1}}],["3968381f",{"2":{"285":1}}],["396",{"2":{"282":2}}],["396293f",{"2":{"118":1}}],["391832128667507e",{"2":{"287":1}}],["391863f",{"2":{"285":1}}],["391",{"2":{"276":1,"282":1}}],["39m",{"2":{"276":1,"282":1}}],["3998023316198665e",{"2":{"287":1}}],["399842685473047e",{"2":{"287":1}}],["399849f",{"2":{"285":1}}],["399361775239503e",{"2":{"287":1}}],["3993003f",{"2":{"285":1}}],["3991748111659956e",{"2":{"287":1}}],["399997f",{"2":{"285":1}}],["3994141",{"2":{"274":1}}],["39950222",{"2":{"147":1}}],["392799152137513e",{"2":{"287":1}}],["392712f",{"2":{"285":1}}],["392921863727165e",{"2":{"287":1}}],["392",{"2":{"276":2}}],["392132",{"2":{"260":1}}],["392102",{"2":{"260":1}}],["3920",{"2":{"282":1}}],["39209
8",{"2":{"260":1}}],["392095",{"2":{"260":1}}],["392091",{"2":{"260":1}}],["392087",{"2":{"260":1}}],["392083",{"2":{"260":1}}],["392079",{"2":{"260":1}}],["392074",{"2":{"260":1}}],["392070",{"2":{"260":1}}],["392006",{"2":{"260":1}}],["3928175",{"2":{"165":1}}],["394340723686492e",{"2":{"287":1}}],["3943094f",{"2":{"285":1}}],["394",{"2":{"276":1,"282":2}}],["39495495",{"2":{"267":1}}],["394958",{"2":{"147":1}}],["39456",{"2":{"204":1}}],["393211306206346e",{"2":{"287":1}}],["393282",{"2":{"274":1}}],["3934051240202256e",{"2":{"287":1}}],["393426\\ttrain",{"2":{"260":1}}],["393352f",{"2":{"285":1}}],["393586f",{"2":{"285":1}}],["39370078",{"2":{"264":2}}],["3930344f",{"2":{"285":1}}],["3930",{"2":{"187":1}}],["3939897",{"2":{"168":1}}],["398893\\tval",{"2":{"260":1}}],["398",{"2":{"260":3,"276":1,"282":2}}],["398356",{"2":{"260":1}}],["398348",{"2":{"260":1}}],["398344",{"2":{"260":1}}],["398340",{"2":{"260":1}}],["398336",{"2":{"260":1}}],["398332",{"2":{"260":1}}],["398329",{"2":{"260":1}}],["398324",{"2":{"260":1}}],["398320",{"2":{"260":1}}],["398316",{"2":{"260":1}}],["398296",{"2":{"260":1}}],["398299",{"2":{"147":1}}],["3980046",{"2":{"168":1}}],["3955424941657595e",{"2":{"287":1}}],["395641f",{"2":{"285":1}}],["395",{"2":{"282":1}}],["3957",{"2":{"229":1}}],["395306",{"2":{"165":1}}],["39599",{"2":{"147":1}}],["3970",{"2":{"278":1}}],["397",{"2":{"276":1,"282":2}}],["3977316",{"2":{"168":1}}],["397588",{"2":{"147":1}}],["397521",{"2":{"147":1}}],["3971015841932906e",{"2":{"287":1}}],["3971",{"2":{"147":1}}],["39763013",{"2":{"147":1}}],["390788427300087e",{"2":{"287":1}}],["390676f",{"2":{"285":1}}],["39068",{"2":{"147":1}}],["390",{"2":{"276":1}}],["39001",{"2":{"253":1}}],["39049",{"2":{"147":1}}],["39",{"2":{"2":1,"4":2,"8":1,"10":1,"16":1,"23":1,"25":1,"34":1,"38":1,"40":1,"41":3,"42":1,"45":1,"49":1,"50":1,"51":1,"53":1,"54":3,"56":7,"58":1,"60":2,"63":1,"65":1,"72":1,"84":2,"85":1,"86":7,"89":2,"90":1,"94":1,"100":2,"107":2,"110":1,"114":1,"118":2,"1
19":1,"122":1,"123":3,"124":1,"126":1,"127":1,"128":1,"130":1,"131":1,"138":2,"140":1,"141":1,"143":1,"153":5,"154":1,"155":3,"156":1,"158":1,"164":4,"165":4,"166":1,"167":2,"168":1,"172":3,"174":2,"176":1,"177":1,"187":5,"188":8,"190":2,"192":2,"193":1,"194":1,"196":1,"201":3,"203":1,"205":2,"221":2,"224":1,"231":1,"234":1,"242":1,"245":4,"260":8,"264":1,"265":1,"268":1,"274":2,"277":1,"278":4,"283":1,"284":1,"285":2,"291":1,"295":1}}],["3",{"2":{"2":1,"5":3,"15":5,"23":7,"34":17,"35":2,"36":2,"38":4,"40":4,"41":3,"45":2,"47":1,"49":1,"50":30,"52":5,"56":27,"58":14,"74":6,"75":3,"76":60,"77":24,"78":86,"79":8,"80":21,"81":15,"82":10,"84":4,"85":11,"86":13,"89":4,"90":4,"95":1,"118":24,"121":7,"126":12,"127":7,"132":3,"133":1,"135":1,"143":4,"146":6,"147":23,"149":1,"150":1,"153":1,"155":1,"165":4,"167":10,"168":8,"187":8,"188":23,"189":1,"190":3,"191":7,"192":9,"197":1,"199":1,"201":1,"202":1,"204":2,"206":1,"209":1,"210":9,"212":1,"213":3,"214":1,"223":1,"225":3,"227":1,"229":21,"230":1,"232":2,"234":5,"236":1,"237":62,"238":8,"240":5,"245":7,"246":7,"250":3,"251":2,"252":1,"253":2,"255":1,"260":30,"261":1,"263":3,"264":9,"267":1,"269":1,"271":17,"273":2,"274":3,"275":1,"276":49,"278":18,"279":1,"280":1,"282":42,"283":12,"284":1,"285":99,"287":103,"289":1,"295":1,"296":2,"297":1}}],["4\\ttrain",{"2":{"260":1}}],["4f",{"2":{"260":4,"274":1}}],["4fs",{"2":{"234":1,"274":1}}],["4t",{"2":{"252":1}}],["4th",{"2":{"144":1}}],["4+0",{"2":{"238":1,"246":1}}],["4x",{"2":{"213":1}}],["4x256xf32>",{"2":{"147":2}}],["4x2xi64>",{"2":{"147":2}}],["4x4x16x4xf32>",{"2":{"147":2}}],["4x16x4x4xf32>",{"2":{"147":2}}],["4x10xf32>",{"2":{"147":3}}],["4x1x28x28xf32>",{"2":{"147":2}}],["484",{"2":{"282":1}}],["489316",{"2":{"274":1}}],["48990166",{"2":{"264":1}}],["489898",{"2":{"260":1}}],["489893",{"2":{"260":1}}],["489890",{"2":{"260":1}}],["489887",{"2":{"260":1}}],["489884",{"2":{"260":1}}],["489881",{"2":{"260":1}}],["489878",{"2":{"260":1}}],["489875",{"2":{"260":1}}],["489872",{
"2":{"260":1}}],["489869",{"2":{"260":1}}],["489855",{"2":{"260":1}}],["48983f",{"2":{"89":1}}],["4868877f",{"2":{"285":1}}],["486858",{"2":{"260":1}}],["4862",{"2":{"278":1}}],["486214",{"2":{"185":1}}],["486367\\tval",{"2":{"260":1}}],["486948",{"2":{"260":1}}],["486938",{"2":{"260":1}}],["486936",{"2":{"260":1}}],["486932",{"2":{"260":1}}],["486929",{"2":{"260":1}}],["486926",{"2":{"260":1}}],["486923",{"2":{"260":1}}],["486920",{"2":{"260":1}}],["486917",{"2":{"260":1}}],["486914",{"2":{"260":1}}],["486",{"2":{"229":1}}],["4865",{"2":{"229":1}}],["48s",{"2":{"213":1}}],["4837950239943873e",{"2":{"287":1}}],["483655f",{"2":{"285":1}}],["483",{"2":{"260":2,"276":1}}],["48351598",{"2":{"196":1}}],["4831388f",{"2":{"118":1}}],["48",{"2":{"192":1,"206":3,"213":1,"214":3,"229":1,"238":3,"245":3,"246":3,"255":3,"260":5,"261":3,"269":3,"274":2,"275":3,"282":1,"297":3}}],["48259094",{"2":{"264":1}}],["48257408",{"2":{"147":1}}],["482939",{"2":{"260":1}}],["482934",{"2":{"260":1}}],["482932",{"2":{"260":1}}],["482929",{"2":{"260":1}}],["482926",{"2":{"260":1}}],["482923",{"2":{"260":1}}],["482920",{"2":{"260":1}}],["482917",{"2":{"260":1}}],["482914",{"2":{"260":1}}],["482911",{"2":{"260":1}}],["482900",{"2":{"260":1}}],["48263642",{"2":{"196":1}}],["482",{"2":{"187":1,"276":2,"282":2}}],["4859247611038911e",{"2":{"287":1}}],["4856",{"2":{"278":1}}],["485262",{"2":{"274":1}}],["48525",{"2":{"147":1}}],["485",{"2":{"260":1}}],["485744f",{"2":{"285":1}}],["485745",{"2":{"260":1}}],["485741",{"2":{"260":1}}],["485738",{"2":{"260":1}}],["485735",{"2":{"260":1}}],["485732",{"2":{"260":1}}],["485729",{"2":{"260":1}}],["485726",{"2":{"260":1}}],["485723",{"2":{"260":1}}],["485720",{"2":{"260":1}}],["485717",{"2":{"260":1}}],["485704",{"2":{"260":1}}],["4851",{"2":{"253":1}}],["4851346",{"2":{"155":1}}],["487177087047751e",{"2":{"287":1}}],["4876",{"2":{"278":1}}],["487",{"2":{"276":2,"282":1}}],["4875",{"2":{"229":1}}],["487558",{"2":{"147":1}}],["487275",{"2":{"147":1}}],["4882
347427618372e",{"2":{"287":1}}],["488033f",{"2":{"285":1}}],["48801818",{"2":{"118":1}}],["488",{"2":{"282":2}}],["488888",{"2":{"274":1}}],["488551",{"2":{"260":1}}],["488547",{"2":{"260":1}}],["488544",{"2":{"260":1}}],["488541",{"2":{"260":1}}],["488538",{"2":{"260":1}}],["488535",{"2":{"260":1}}],["488533",{"2":{"260":1}}],["488530",{"2":{"260":1}}],["488527",{"2":{"260":1}}],["488524",{"2":{"260":1}}],["488509",{"2":{"260":1}}],["4883",{"2":{"240":1}}],["488387",{"2":{"147":1}}],["4884",{"2":{"229":1}}],["488948e",{"2":{"225":1}}],["488623",{"2":{"168":1}}],["488164088046866e",{"2":{"287":1}}],["4881458f",{"2":{"285":1}}],["48818898",{"2":{"264":2}}],["488105",{"2":{"167":1,"168":1}}],["48815",{"2":{"147":1}}],["480875\\tthroughput",{"2":{"295":1}}],["480403248319472e",{"2":{"287":1}}],["4807",{"2":{"282":1}}],["480708",{"2":{"274":1}}],["480797",{"2":{"147":1}}],["480152",{"2":{"260":1}}],["480147",{"2":{"260":1}}],["480144",{"2":{"260":1}}],["480142",{"2":{"260":1}}],["480139",{"2":{"260":1}}],["480136",{"2":{"260":1}}],["480133",{"2":{"260":1}}],["480130",{"2":{"260":1}}],["480127",{"2":{"260":1}}],["480124f",{"2":{"285":1}}],["480124",{"2":{"260":1}}],["480112",{"2":{"260":1}}],["48001",{"2":{"253":1}}],["48005",{"2":{"147":1}}],["480",{"2":{"237":3,"276":1}}],["481",{"2":{"276":1}}],["48148",{"2":{"234":6,"236":1}}],["48194",{"2":{"204":1}}],["48193252",{"2":{"165":1}}],["4810884",{"2":{"165":1}}],["48107117",{"2":{"147":1}}],["48137343",{"2":{"143":1}}],["418052855666789e",{"2":{"287":1}}],["4180174f",{"2":{"285":1}}],["418238897852649e",{"2":{"287":1}}],["4183755f",{"2":{"285":1}}],["418",{"2":{"282":1}}],["418557",{"2":{"274":1}}],["4189453",{"2":{"274":1}}],["41815332",{"2":{"147":1}}],["4174761f",{"2":{"285":1}}],["417206f",{"2":{"285":1}}],["4172",{"2":{"278":1}}],["417",{"2":{"260":3,"282":1}}],["41799",{"2":{"147":1}}],["413612418088823e",{"2":{"287":1}}],["413648f",{"2":{"285":1}}],["413",{"2":{"260":1,"276":3,"282":1}}],["4137662",{"2":{"118":1}}
],["410894651933961e",{"2":{"287":1}}],["4107821f",{"2":{"285":1}}],["410",{"2":{"276":1}}],["41001",{"2":{"253":1}}],["4101562",{"2":{"274":1}}],["4101036",{"2":{"167":1}}],["41017",{"2":{"147":1}}],["415748045851735e",{"2":{"287":1}}],["415284267488991e",{"2":{"287":1}}],["415172f",{"2":{"285":1}}],["415668490254715e",{"2":{"287":1}}],["4156461f",{"2":{"285":1}}],["4156",{"2":{"229":1}}],["415368",{"2":{"147":1}}],["4114325240316644e",{"2":{"287":1}}],["411143f",{"2":{"285":1}}],["4116536",{"2":{"267":1}}],["411",{"2":{"229":1}}],["41127",{"2":{"118":1}}],["41",{"2":{"213":3,"245":4,"260":13,"274":2,"295":1}}],["419",{"2":{"282":3}}],["41919118",{"2":{"196":1}}],["419611",{"2":{"147":1}}],["4142050549200193e",{"2":{"287":1}}],["414427f",{"2":{"285":1}}],["4144737",{"2":{"168":1}}],["4141",{"2":{"282":1}}],["41430124409814e",{"2":{"287":1}}],["4143623717935339e",{"2":{"287":1}}],["414312512077747e",{"2":{"287":1}}],["4143306f",{"2":{"285":1}}],["4143775f",{"2":{"285":1}}],["4143",{"2":{"276":1}}],["414804\\tval",{"2":{"260":1}}],["414769",{"2":{"188":1}}],["4122836f",{"2":{"285":1}}],["41229f",{"2":{"89":1}}],["412",{"2":{"260":4,"282":1}}],["412185",{"2":{"260":1}}],["412180",{"2":{"260":1}}],["412177",{"2":{"260":1}}],["412174",{"2":{"260":1}}],["412172",{"2":{"260":1}}],["412169",{"2":{"260":1}}],["412166",{"2":{"260":1}}],["412163",{"2":{"260":1}}],["412160",{"2":{"260":1}}],["412157",{"2":{"260":1}}],["412139",{"2":{"260":1}}],["41238895",{"2":{"167":1}}],["41261",{"2":{"147":1}}],["41246277",{"2":{"143":1}}],["4162",{"2":{"282":1}}],["41628727",{"2":{"267":1}}],["41623f",{"2":{"89":1}}],["416",{"2":{"210":2}}],["416461",{"2":{"260":1}}],["416457",{"2":{"260":1}}],["416454",{"2":{"260":1}}],["416451",{"2":{"260":1}}],["41645882",{"2":{"147":1}}],["416448",{"2":{"260":1}}],["416445",{"2":{"260":1}}],["416441",{"2":{"260":1}}],["416438",{"2":{"260":1}}],["416435",{"2":{"260":1}}],["416432",{"2":{"260":1}}],["416418",{"2":{"260":1}}],["4164f",{"2":{"89":1}}],["41
613227",{"2":{"118":1}}],["41667",{"2":{"78":1}}],["4782520729532884e",{"2":{"287":1}}],["478237f",{"2":{"285":1}}],["4786",{"2":{"278":2}}],["478",{"2":{"260":1}}],["47872233",{"2":{"165":1}}],["474",{"2":{"276":2}}],["474457",{"2":{"260":1}}],["474450",{"2":{"260":1}}],["474445",{"2":{"260":1}}],["474439",{"2":{"260":1}}],["474435",{"2":{"260":1}}],["474431",{"2":{"260":1}}],["474427",{"2":{"260":1}}],["474423",{"2":{"260":1}}],["474418",{"2":{"260":1}}],["474414",{"2":{"260":1}}],["474398",{"2":{"260":1}}],["474701",{"2":{"260":1}}],["474634",{"2":{"295":1}}],["474696",{"2":{"260":1}}],["474694",{"2":{"260":1}}],["474689",{"2":{"260":1}}],["474686",{"2":{"260":1}}],["474683",{"2":{"260":1}}],["474681",{"2":{"260":1}}],["474678",{"2":{"260":1}}],["474675",{"2":{"260":1}}],["474672",{"2":{"260":1}}],["474657",{"2":{"260":1}}],["47692233985983e",{"2":{"287":1}}],["476671f",{"2":{"285":1}}],["47632f",{"2":{"285":1}}],["4762",{"2":{"282":1}}],["476",{"2":{"225":1}}],["47s",{"2":{"213":4}}],["47",{"2":{"210":2,"213":1,"229":2,"245":2,"253":77,"260":11,"274":2,"276":1,"282":2}}],["479349799113199e",{"2":{"287":1}}],["479381f",{"2":{"285":1}}],["479",{"2":{"192":1,"260":12,"276":1,"282":1}}],["4714",{"2":{"282":1}}],["471448",{"2":{"260":1}}],["4716797",{"2":{"274":1}}],["471634",{"2":{"147":1}}],["471568",{"2":{"260":1}}],["471556",{"2":{"260":1}}],["471552",{"2":{"260":1}}],["471548",{"2":{"260":1}}],["471544",{"2":{"260":1}}],["471540",{"2":{"260":1}}],["471536",{"2":{"260":1}}],["471532",{"2":{"260":1}}],["471527",{"2":{"260":1}}],["471523",{"2":{"260":1}}],["471",{"2":{"192":1,"260":3,"282":1}}],["47174177",{"2":{"118":1}}],["473",{"2":{"276":3,"282":3}}],["473799\\ttrain",{"2":{"260":1}}],["4736",{"2":{"240":1}}],["473179",{"2":{"147":1}}],["47329637",{"2":{"118":1}}],["477",{"2":{"276":2,"282":2}}],["4775391",{"2":{"274":1}}],["477324",{"2":{"260":1}}],["477329",{"2":{"147":1}}],["477318",{"2":{"260":1}}],["477314",{"2":{"260":1}}],["477310",{"2":{"260":1}}],["477
306",{"2":{"260":1}}],["477302",{"2":{"260":1}}],["477298",{"2":{"260":1}}],["477294",{"2":{"260":1}}],["477290",{"2":{"260":1}}],["477285",{"2":{"260":1}}],["477270",{"2":{"260":1}}],["47787",{"2":{"147":1}}],["47792822",{"2":{"118":1}}],["4707031",{"2":{"274":1}}],["47001",{"2":{"253":1}}],["47006413",{"2":{"118":1}}],["470567",{"2":{"260":1}}],["470562",{"2":{"260":1}}],["470559",{"2":{"260":1}}],["470556",{"2":{"260":1}}],["470554",{"2":{"260":1}}],["470551",{"2":{"260":1}}],["470548",{"2":{"260":1}}],["470545",{"2":{"260":1}}],["470542",{"2":{"260":1}}],["470539",{"2":{"260":1}}],["470526",{"2":{"260":1}}],["4705",{"2":{"253":1}}],["470",{"2":{"229":1,"282":2}}],["47092",{"2":{"147":1}}],["470282",{"2":{"147":1}}],["4755604f",{"2":{"285":1}}],["4755f",{"2":{"89":1}}],["475",{"2":{"276":1,"282":3}}],["475696\\tval",{"2":{"260":1}}],["4751",{"2":{"240":1}}],["47512773",{"2":{"147":1}}],["4754237662873872e",{"2":{"287":1}}],["4754",{"2":{"147":1}}],["4727",{"2":{"278":1}}],["47222",{"2":{"78":1}}],["472",{"2":{"38":1,"282":2}}],["4677493f",{"2":{"285":1}}],["467271f",{"2":{"285":1}}],["467",{"2":{"282":1}}],["466266393733491e",{"2":{"287":1}}],["466793041196418e",{"2":{"287":1}}],["466",{"2":{"276":1}}],["466387",{"2":{"260":1}}],["466382",{"2":{"260":1}}],["466379",{"2":{"260":1}}],["466377",{"2":{"260":1}}],["466374",{"2":{"260":1}}],["466371",{"2":{"260":1}}],["466368",{"2":{"260":1}}],["466365",{"2":{"260":1}}],["466362",{"2":{"260":1}}],["466359",{"2":{"260":1}}],["466346",{"2":{"260":1}}],["46469590415659e",{"2":{"287":1}}],["464696f",{"2":{"285":1}}],["4645595558477864e",{"2":{"287":1}}],["4645909f",{"2":{"285":1}}],["4645",{"2":{"276":1}}],["464567",{"2":{"264":2}}],["464",{"2":{"276":1}}],["46497717",{"2":{"264":1}}],["4631066",{"2":{"264":1}}],["463610\\tval",{"2":{"260":1}}],["465376845004567e",{"2":{"287":1}}],["465",{"2":{"260":3,"282":1}}],["46514034",{"2":{"196":1}}],["4621996649483235e",{"2":{"287":1}}],["46211714f0",{"2":{"58":1}}],["46211717f0",{
"2":{"58":1}}],["462",{"2":{"282":1}}],["462248",{"2":{"260":1}}],["462243",{"2":{"260":1}}],["462240",{"2":{"260":1}}],["462238",{"2":{"260":1}}],["462235",{"2":{"260":1}}],["462232",{"2":{"260":1}}],["462229",{"2":{"260":1}}],["462226",{"2":{"260":1}}],["462223",{"2":{"260":1}}],["462220",{"2":{"260":1}}],["462205",{"2":{"260":1}}],["46s",{"2":{"213":3}}],["460",{"2":{"276":1,"282":1}}],["46079",{"2":{"260":3}}],["46001",{"2":{"253":1}}],["4604826",{"2":{"168":1}}],["46096706",{"2":{"147":1}}],["4619",{"2":{"282":1}}],["461908",{"2":{"188":1}}],["4611",{"2":{"278":1}}],["4614",{"2":{"278":1}}],["461",{"2":{"276":1}}],["461399",{"2":{"167":1}}],["46127",{"2":{"147":1}}],["469730\\tthroughput",{"2":{"295":1}}],["46976f",{"2":{"89":1}}],["469015f",{"2":{"285":1}}],["469",{"2":{"260":5}}],["469687",{"2":{"147":1}}],["4683293f",{"2":{"285":1}}],["468",{"2":{"282":1}}],["46878588",{"2":{"168":1}}],["468792",{"2":{"147":1}}],["468023",{"2":{"147":1}}],["46",{"2":{"86":1,"213":1,"245":2,"253":1,"260":8,"274":2,"282":1}}],["432399",{"2":{"295":1}}],["43256",{"2":{"282":1}}],["432",{"2":{"276":1,"282":1}}],["432701",{"2":{"147":1}}],["436",{"2":{"282":1}}],["436213549089752e",{"2":{"287":1}}],["4362",{"2":{"278":1}}],["436969",{"2":{"295":1}}],["4369135",{"2":{"267":1}}],["43697",{"2":{"260":2}}],["43676856",{"2":{"167":1}}],["437089f",{"2":{"285":1}}],["437085f",{"2":{"118":1}}],["437",{"2":{"282":1}}],["437804",{"2":{"260":1}}],["437800",{"2":{"260":1}}],["437797",{"2":{"260":1}}],["437794",{"2":{"260":1}}],["437791",{"2":{"260":1}}],["437788",{"2":{"260":1}}],["437785",{"2":{"260":1}}],["437783",{"2":{"260":1}}],["437780",{"2":{"260":1}}],["437777",{"2":{"260":1}}],["437762",{"2":{"260":1}}],["4308017918943016e",{"2":{"287":1}}],["4307137f",{"2":{"285":1}}],["430",{"2":{"276":4}}],["43001",{"2":{"253":1}}],["4305115e",{"2":{"172":1}}],["4305781",{"2":{"143":1}}],["4347078545641728e",{"2":{"287":1}}],["434771",{"2":{"147":1}}],["434535116622432e",{"2":{"287":1}}],["434434
7f",{"2":{"285":1}}],["4344",{"2":{"282":1}}],["434",{"2":{"229":1,"282":1}}],["43af",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["4384",{"2":{"278":1}}],["43846482",{"2":{"196":1}}],["43846",{"2":{"185":1}}],["43817532",{"2":{"264":1}}],["4389",{"2":{"40":1}}],["433194207789489e",{"2":{"287":1}}],["4330709",{"2":{"264":2}}],["433094\\tval",{"2":{"260":1}}],["433259\\tval",{"2":{"260":1}}],["433",{"2":{"260":4,"276":3}}],["433677",{"2":{"260":1}}],["433672",{"2":{"260":1}}],["433669",{"2":{"260":1}}],["433667",{"2":{"260":1}}],["433664",{"2":{"260":1}}],["433661",{"2":{"260":1}}],["433658",{"2":{"260":1}}],["433655",{"2":{"260":1}}],["433652",{"2":{"260":1}}],["433656",{"2":{"185":1}}],["433649",{"2":{"260":1}}],["433634",{"2":{"260":1}}],["4339262",{"2":{"196":1}}],["43350345",{"2":{"167":1}}],["433858",{"2":{"147":1}}],["439070565787211e",{"2":{"287":1}}],["439068",{"2":{"264":1}}],["439207f",{"2":{"285":1}}],["4399",{"2":{"278":1}}],["43998",{"2":{"147":1}}],["439167\\ttrain",{"2":{"260":1}}],["4397957",{"2":{"165":1}}],["435445\\tthroughput",{"2":{"295":1}}],["435",{"2":{"276":1}}],["4359436",{"2":{"267":1}}],["43599",{"2":{"147":1}}],["435064\\tval",{"2":{"260":1}}],["4350",{"2":{"229":1}}],["435143",{"2":{"147":1}}],["431165551615428e",{"2":{"287":1}}],["43112\\taccuracy",{"2":{"204":1}}],["431847907588169e",{"2":{"287":1}}],["4318977",{"2":{"143":1}}],["431008f",{"2":{"285":1}}],["43156f",{"2":{"285":1}}],["43153",{"2":{"86":1}}],["431",{"2":{"282":1}}],["4312565",{"2":{"267":1}}],["4319346874291314",{"2":{"188":1}}],["431456",{"2":{"168":1}}],["43",{"2":{"107":1,"245":4,"260":17,"263":1,"274":2,"278":1,"282":1}}],["45s",{"2":{"213":1}}],["452792877103415e",{"2":{"287":1}}],["452793f",{"2":{"285":1}}],["452",{"2":{"260":18,"276":1,"282":3}}],["45247704",{"2":{"196":1}}],["45298168",{"2":{"89":1}}],["4597326f",{"2":{"285":1}}],["459599749307891e",{"2":{"287":1}}],["4595859",{"2":{"267":1}}],
49f",{"2":{"89":1}}],["026323957",{"2":{"118":1}}],["026362807",{"2":{"118":1}}],["026577514",{"2":{"118":1}}],["026588852",{"2":{"118":1}}],["026523968",{"2":{"89":1}}],["026677115",{"2":{"118":1}}],["024457",{"2":{"260":1}}],["024453",{"2":{"260":1}}],["024450",{"2":{"260":1}}],["024447",{"2":{"260":1}}],["024444",{"2":{"260":1}}],["024441",{"2":{"260":1}}],["024442466",{"2":{"253":1}}],["024438",{"2":{"260":1}}],["024435",{"2":{"260":1}}],["024432",{"2":{"260":1}}],["024429",{"2":{"260":1}}],["024415",{"2":{"260":1}}],["02441",{"2":{"204":1}}],["024854",{"2":{"260":1}}],["02489s\\ttraining",{"2":{"245":1}}],["02483s\\ttraining",{"2":{"245":1}}],["0248332",{"2":{"89":1}}],["02405s\\ttraining",{"2":{"245":1}}],["02402s\\ttraining",{"2":{"245":1}}],["02439s\\ttraining",{"2":{"245":2}}],["02453s\\ttraining",{"2":{"245":1}}],["02457s\\ttraining",{"2":{"245":1}}],["02454s\\ttraining",{"2":{"245":2}}],["0245608",{"2":{"147":1}}],["02468s\\ttraining",{"2":{"245":1}}],["02416s\\ttraining",{"2":{"245":1}}],["024163416",{"2":{"118":1}}],["02413s\\ttraining",{"2":{"245":1}}],["024228286",{"2":{"147":1}}],["024228822",{"2":{"118":1}}],["0242259",{"2":{"89":1}}],["024",{"2":{"89":1}}],["02",{"2":{"58":1,"225":21,"245":1,"260":6}}],["062694965062129e",{"2":{"287":1}}],["062983f",{"2":{"285":1}}],["0629448f",{"2":{"285":1}}],["0627",{"2":{"278":1}}],["06272482",{"2":{"147":1}}],["0622",{"2":{"278":1}}],["06208062",{"2":{"167":1}}],["06217661",{"2":{"147":1}}],["062121924",{"2":{"147":1}}],["062463008",{"2":{"147":1}}],["064214923865348e",{"2":{"287":1}}],["0644777f",{"2":{"285":1}}],["06445622",{"2":{"147":1}}],["06445969",{"2":{"147":1}}],["06459",{"2":{"204":1}}],["064508988746292e",{"2":{"287":1}}],["06450",{"2":{"66":1}}],["06419\\taccuracy",{"2":{"204":1}}],["064331025",{"2":{"147":1}}],["0619403023447958e",{"2":{"287":1}}],["0619532739907314e",{"2":{"287":1}}],["0619725",{"2":{"147":1}}],["0617996f",{"2":{"285":1}}],["061296",{"2":{"274":1}}],["061299384",{"2":{"147":1}}],
["061889123",{"2":{"147":1}}],["061646424",{"2":{"118":1}}],["066790344558352e",{"2":{"287":1}}],["0667989",{"2":{"118":1}}],["0666490256237582e",{"2":{"287":1}}],["066353",{"2":{"274":1}}],["0661273",{"2":{"168":1}}],["06680218",{"2":{"147":1}}],["06682308",{"2":{"147":1}}],["06697127",{"2":{"118":1}}],["068241773624291e",{"2":{"287":1}}],["068596f",{"2":{"285":1}}],["06886f",{"2":{"285":1}}],["06885",{"2":{"147":1}}],["0688707",{"2":{"118":1}}],["0684376",{"2":{"147":1}}],["0687270352484084e",{"2":{"287":1}}],["068773107643253e",{"2":{"287":1}}],["06877028",{"2":{"118":1}}],["06879873",{"2":{"147":1}}],["06875219",{"2":{"118":1}}],["0671366f",{"2":{"285":1}}],["06715196",{"2":{"147":1}}],["0679537f",{"2":{"285":1}}],["067",{"2":{"260":1}}],["067008324",{"2":{"147":1}}],["06704105",{"2":{"118":1}}],["06777584",{"2":{"264":1}}],["06777759",{"2":{"147":1}}],["06776868",{"2":{"118":1}}],["06740383",{"2":{"147":1}}],["06782",{"2":{"260":1}}],["06781",{"2":{"147":1}}],["06784943",{"2":{"118":1}}],["06788022",{"2":{"89":1}}],["063875977513542e",{"2":{"287":1}}],["063735f",{"2":{"285":1}}],["063340",{"2":{"213":1}}],["063393466",{"2":{"118":1}}],["06339173",{"2":{"118":1}}],["06352646",{"2":{"147":1}}],["06315138",{"2":{"147":1}}],["0634295im",{"2":{"185":1}}],["063480295",{"2":{"147":1}}],["06347208",{"2":{"89":1}}],["063676804",{"2":{"147":1}}],["06368081",{"2":{"147":1}}],["0636671",{"2":{"118":1}}],["0694620713912318e",{"2":{"287":1}}],["06942153",{"2":{"89":1}}],["0697642f",{"2":{"285":1}}],["06973958",{"2":{"147":1}}],["0692",{"2":{"278":1}}],["0692472",{"2":{"147":1}}],["0698316",{"2":{"264":1}}],["069884226",{"2":{"147":1}}],["0698748",{"2":{"89":1}}],["06953073",{"2":{"118":1}}],["060891",{"2":{"274":1}}],["06080796",{"2":{"147":1}}],["06066203",{"2":{"267":1}}],["06063513",{"2":{"147":1}}],["060633026",{"2":{"147":1}}],["060551327",{"2":{"147":1}}],["06052852",{"2":{"89":1}}],["06003239",{"2":{"118":1}}],["0601",{"2":{"90":1}}],["06026435",{"2":{"89":1}}],["0602
76944",{"2":{"89":1}}],["06592303",{"2":{"147":1}}],["06528456",{"2":{"147":1}}],["06574188",{"2":{"118":1}}],["06582927",{"2":{"118":1}}],["065509446",{"2":{"89":1}}],["065325744",{"2":{"89":1}}],["06",{"2":{"50":1,"213":2,"260":6,"295":1}}],["0549",{"2":{"278":1}}],["05486\\taccuracy",{"2":{"204":1}}],["054846566",{"2":{"147":1}}],["05f0",{"2":{"253":2,"254":1}}],["0538714545726616e",{"2":{"287":1}}],["053941",{"2":{"274":1}}],["053120755",{"2":{"264":1}}],["053432",{"2":{"185":1}}],["053398557",{"2":{"147":1}}],["0537844f",{"2":{"285":1}}],["053759247",{"2":{"147":1}}],["05379219",{"2":{"147":1}}],["0581",{"2":{"278":1}}],["058104068",{"2":{"147":1}}],["058062823383844e",{"2":{"287":1}}],["058020362240716e",{"2":{"287":1}}],["0580",{"2":{"278":1}}],["05877",{"2":{"147":1}}],["058901027",{"2":{"147":1}}],["0526",{"2":{"278":1}}],["05262428",{"2":{"147":1}}],["05227\\taccuracy",{"2":{"204":1}}],["05218964",{"2":{"196":1}}],["0523839",{"2":{"147":1}}],["05290228",{"2":{"147":1}}],["052803323",{"2":{"147":1}}],["052086722",{"2":{"118":1}}],["0592",{"2":{"278":1}}],["059224278",{"2":{"147":1}}],["05951088",{"2":{"264":1}}],["059521634",{"2":{"147":1}}],["0594",{"2":{"253":3}}],["059475474",{"2":{"147":1}}],["059655108",{"2":{"147":1}}],["059671473",{"2":{"147":1}}],["05998",{"2":{"147":1}}],["059937198",{"2":{"147":1}}],["059965573",{"2":{"118":1}}],["059055757",{"2":{"147":1}}],["059382",{"2":{"89":1}}],["051292453112726e",{"2":{"287":1}}],["0514471f",{"2":{"285":1}}],["0514504",{"2":{"185":1}}],["0516141f",{"2":{"285":1}}],["051605415",{"2":{"118":1}}],["0513",{"2":{"278":1}}],["051",{"2":{"260":3}}],["0517578e",{"2":{"168":1}}],["051994912",{"2":{"147":1}}],["051825043",{"2":{"147":1}}],["0515989503581901e",{"2":{"287":1}}],["051536214",{"2":{"118":1}}],["05158",{"2":{"78":1}}],["0579333f",{"2":{"285":1}}],["057005208",{"2":{"194":1,"195":1}}],["0572769",{"2":{"147":1}}],["05721",{"2":{"147":1}}],["05742457",{"2":{"147":1}}],["0574222",{"2":{"118":1}}],["057571076"
,{"2":{"147":1}}],["057713665",{"2":{"147":1}}],["05779423",{"2":{"143":1}}],["057887085",{"2":{"118":1}}],["050334737",{"2":{"147":1}}],["05095075",{"2":{"147":1}}],["0501",{"2":{"90":1}}],["050780848",{"2":{"147":1}}],["0507623",{"2":{"89":1}}],["05070",{"2":{"58":1}}],["0567329539117285e",{"2":{"287":1}}],["056754f",{"2":{"285":1}}],["0566074f",{"2":{"285":1}}],["05622",{"2":{"204":1}}],["05634",{"2":{"204":1}}],["05657739601024536",{"2":{"191":1}}],["05659819",{"2":{"147":1}}],["0568757",{"2":{"147":1}}],["056034062",{"2":{"147":1}}],["056",{"2":{"90":3,"285":1}}],["0551181",{"2":{"264":2}}],["0551505",{"2":{"147":1}}],["05522",{"2":{"147":1}}],["055663344",{"2":{"147":1}}],["05560146",{"2":{"119":1}}],["055411585",{"2":{"147":1}}],["055422388",{"2":{"147":1}}],["055999022",{"2":{"147":1}}],["055362783",{"2":{"147":1}}],["055573903",{"2":{"147":1}}],["05556",{"2":{"78":1}}],["055711076",{"2":{"89":1}}],["05",{"2":{"50":1,"90":3,"147":1,"245":1,"260":10,"274":1,"278":1}}],["00",{"2":{"245":58,"260":143}}],["009721650",{"2":{"253":1}}],["009720063",{"2":{"147":1}}],["009044455",{"2":{"253":1}}],["0090858545",{"2":{"147":1}}],["009084041f0",{"2":{"50":1}}],["009156350",{"2":{"253":1}}],["00914141",{"2":{"196":1}}],["009989802",{"2":{"253":1}}],["009982477",{"2":{"253":1}}],["009686996",{"2":{"253":1}}],["009353531",{"2":{"253":1}}],["009397319",{"2":{"118":1}}],["00959\\taccuracy",{"2":{"204":1}}],["00944",{"2":{"204":1}}],["009416734",{"2":{"118":1}}],["009401777",{"2":{"118":1}}],["009405821",{"2":{"118":1}}],["00942f",{"2":{"118":1}}],["008369f",{"2":{"285":1}}],["00835054",{"2":{"147":1}}],["008499",{"2":{"260":1}}],["008495",{"2":{"260":1}}],["008492",{"2":{"260":1}}],["008489",{"2":{"260":1}}],["008485",{"2":{"260":1}}],["008468",{"2":{"260":1}}],["008",{"2":{"260":4}}],["008794412",{"2":{"253":1}}],["008508",{"2":{"260":1}}],["008505",{"2":{"260":1}}],["008502",{"2":{"260":1}}],["008573188",{"2":{"253":1}}],["008559611",{"2":{"253":1}}],["008516",{"2":{"260"
:1}}],["008512",{"2":{"260":1}}],["00851",{"2":{"204":1}}],["008988982",{"2":{"253":1}}],["008673832",{"2":{"253":1}}],["00863\\taccuracy",{"2":{"204":1}}],["008215369033789e",{"2":{"287":1}}],["008210877",{"2":{"253":1}}],["00823197",{"2":{"188":1}}],["008269919",{"2":{"147":1}}],["0081296405",{"2":{"147":1}}],["008840925",{"2":{"147":1}}],["008846691",{"2":{"147":1}}],["008825431",{"2":{"118":1}}],["0080251f",{"2":{"285":1}}],["008014115",{"2":{"118":1}}],["008014375",{"2":{"118":1}}],["00803444",{"2":{"89":1}}],["007146059789129e",{"2":{"287":1}}],["0071106f",{"2":{"285":1}}],["00715",{"2":{"204":1}}],["007330933",{"2":{"253":1}}],["0073688403",{"2":{"147":1}}],["007559542",{"2":{"253":1}}],["00722\\taccuracy",{"2":{"204":1}}],["0072900057",{"2":{"147":1}}],["007961722",{"2":{"253":1}}],["007984848",{"2":{"167":1}}],["007904152",{"2":{"147":1}}],["007675517",{"2":{"253":1}}],["0076573063",{"2":{"147":1}}],["0076395273",{"2":{"89":1}}],["0078942604616312e",{"2":{"287":1}}],["0078154",{"2":{"264":1}}],["00785\\taccuracy",{"2":{"204":1}}],["0078206",{"2":{"147":1}}],["007873881",{"2":{"89":1}}],["00744",{"2":{"147":1}}],["0074208006",{"2":{"118":1}}],["007434898",{"2":{"118":1}}],["007744140",{"2":{"253":1}}],["007789471",{"2":{"253":1}}],["00778",{"2":{"204":1}}],["007705185",{"2":{"147":1}}],["00775477",{"2":{"119":1}}],["0077551096",{"2":{"118":1}}],["007762789",{"2":{"118":1}}],["0043",{"2":{"278":1}}],["004308250",{"2":{"253":1}}],["004308246",{"2":{"253":1}}],["004727511",{"2":{"253":1}}],["004753132",{"2":{"118":1}}],["004263383",{"2":{"253":1}}],["004263131",{"2":{"253":1}}],["004831538",{"2":{"253":1}}],["004818310",{"2":{"253":1}}],["004148528",{"2":{"253":1}}],["0041270256",{"2":{"89":1}}],["004527f",{"2":{"285":1}}],["004531536",{"2":{"253":1}}],["004589280",{"2":{"253":1}}],["004514552",{"2":{"253":1}}],["0045406874",{"2":{"89":1}}],["00407581",{"2":{"196":1}}],["004908773",{"2":{"147":1}}],["004666437",{"2":{"253":1}}],["00461",{"2":{"147":1}}],["00462
9241",{"2":{"147":1}}],["0046932907",{"2":{"118":1}}],["004689035",{"2":{"118":1}}],["0044566749965056e",{"2":{"287":1}}],["004457889",{"2":{"118":1}}],["0044614216",{"2":{"118":1}}],["00443555",{"2":{"89":1}}],["004425203",{"2":{"89":1}}],["006389363",{"2":{"253":1}}],["006852372",{"2":{"253":1}}],["0068602865",{"2":{"147":1}}],["006111934",{"2":{"253":1}}],["00610\\taccuracy",{"2":{"204":1}}],["006038338",{"2":{"253":1}}],["006062332",{"2":{"89":1}}],["006546173",{"2":{"253":1}}],["006596491",{"2":{"253":1}}],["00659",{"2":{"204":1}}],["006657102",{"2":{"253":1}}],["006656272",{"2":{"89":1}}],["00664\\taccuracy",{"2":{"204":1}}],["0066712513",{"2":{"196":1}}],["006780343",{"2":{"147":1}}],["0067291846",{"2":{"118":1}}],["0069064857",{"2":{"147":1}}],["006282",{"2":{"260":1}}],["0062883645",{"2":{"147":1}}],["006277",{"2":{"260":1}}],["006274",{"2":{"260":1}}],["006271",{"2":{"260":1}}],["006276353",{"2":{"253":1}}],["006268",{"2":{"260":1}}],["006265",{"2":{"260":1}}],["006262",{"2":{"260":1}}],["006259",{"2":{"260":1}}],["006256",{"2":{"260":1}}],["006253",{"2":{"260":1}}],["006239",{"2":{"260":1}}],["006239144",{"2":{"118":1}}],["006228818",{"2":{"253":1}}],["006220151",{"2":{"253":1}}],["006241375",{"2":{"253":1}}],["006240552",{"2":{"118":1}}],["0064240773",{"2":{"89":1}}],["002668173",{"2":{"253":1}}],["0026271285",{"2":{"127":1}}],["002202400",{"2":{"253":1}}],["002219696",{"2":{"253":1}}],["0022123815",{"2":{"89":1}}],["002241082",{"2":{"253":1}}],["002317760",{"2":{"253":1}}],["002314395",{"2":{"253":1}}],["00234933",{"2":{"89":1}}],["002419390",{"2":{"253":1}}],["002460307",{"2":{"253":1}}],["002489067",{"2":{"253":1}}],["0024843565",{"2":{"147":1}}],["002483191",{"2":{"147":1}}],["002503448",{"2":{"253":1}}],["002532476",{"2":{"253":1}}],["002512253",{"2":{"253":1}}],["002584497",{"2":{"253":1}}],["00258",{"2":{"147":1}}],["002045441",{"2":{"253":1}}],["002052142",{"2":{"253":1}}],["002033974",{"2":{"253":1}}],["0020273982",{"2":{"147":1}}],["00201115524
",{"2":{"90":1}}],["002898715",{"2":{"253":1}}],["0028994146",{"2":{"89":1}}],["002849674",{"2":{"253":1}}],["002823828",{"2":{"253":1}}],["0028807875",{"2":{"147":1}}],["002771034",{"2":{"253":1}}],["002704730",{"2":{"253":1}}],["002748450",{"2":{"253":1}}],["0027464977",{"2":{"89":1}}],["00271383",{"2":{"89":1}}],["0027350332",{"2":{"89":1}}],["002976",{"2":{"274":1}}],["002975470",{"2":{"253":1}}],["002933637",{"2":{"253":1}}],["0029306945",{"2":{"89":1}}],["002955514",{"2":{"253":1}}],["002917211",{"2":{"253":1}}],["002969833",{"2":{"253":1}}],["0029033197",{"2":{"89":1}}],["002143627",{"2":{"253":1}}],["0021180161",{"2":{"118":1}}],["0021197717",{"2":{"118":1}}],["0021157824",{"2":{"89":1}}],["0021741025",{"2":{"89":1}}],["002197485",{"2":{"89":1}}],["003979",{"2":{"260":1}}],["003974",{"2":{"260":1}}],["003971",{"2":{"260":1}}],["003958",{"2":{"260":1}}],["003955",{"2":{"260":1}}],["003952",{"2":{"260":1}}],["003949",{"2":{"260":1}}],["003933",{"2":{"260":1}}],["003967f",{"2":{"285":1}}],["003968",{"2":{"260":1}}],["003965",{"2":{"260":1}}],["003961",{"2":{"260":1}}],["003963022",{"2":{"253":1}}],["0039605754",{"2":{"89":1}}],["003234812",{"2":{"253":1}}],["003274539",{"2":{"253":1}}],["00326585",{"2":{"89":1}}],["003183292",{"2":{"253":1}}],["003130429",{"2":{"253":1}}],["0031301186",{"2":{"89":1}}],["003135182",{"2":{"253":1}}],["0031371699",{"2":{"89":1}}],["003878513381596e",{"2":{"287":1}}],["003816272",{"2":{"253":1}}],["0038155357",{"2":{"89":1}}],["003847382",{"2":{"253":1}}],["003096203",{"2":{"253":1}}],["003098861",{"2":{"253":1}}],["003023159",{"2":{"253":1}}],["00303222",{"2":{"89":1}}],["0035893882",{"2":{"267":1}}],["003585879",{"2":{"89":1}}],["003575745",{"2":{"253":1}}],["0035986663",{"2":{"147":1}}],["003485207",{"2":{"147":1}}],["0034407252",{"2":{"89":1}}],["003336374",{"2":{"253":1}}],["003328452",{"2":{"253":1}}],["00332033+0",{"2":{"185":1}}],["003325696",{"2":{"118":1}}],["0033274868",{"2":{"118":1}}],["003383829",{"2":{"89":1}}],["003
730775",{"2":{"253":1}}],["0037318026",{"2":{"89":1}}],["00376332",{"2":{"89":1}}],["003750000000000005",{"2":{"50":1}}],["001965113",{"2":{"253":1}}],["001906899",{"2":{"253":1}}],["001910425",{"2":{"253":1}}],["001986223",{"2":{"253":1}}],["00198415",{"2":{"196":1}}],["001348543",{"2":{"253":1}}],["001342141",{"2":{"253":1}}],["001363893",{"2":{"253":1}}],["001365761",{"2":{"253":1}}],["001390206",{"2":{"253":1}}],["001392871",{"2":{"253":1}}],["001394353",{"2":{"253":1}}],["001394610",{"2":{"253":1}}],["001306025",{"2":{"253":1}}],["001308756",{"2":{"253":1}}],["001301785",{"2":{"253":1}}],["001351097",{"2":{"253":1}}],["001203460",{"2":{"253":1}}],["001202732",{"2":{"253":1}}],["001224924",{"2":{"253":1}}],["001220055",{"2":{"253":1}}],["001242174",{"2":{"253":1}}],["001248004",{"2":{"253":1}}],["001258306",{"2":{"253":1}}],["001217105",{"2":{"253":1}}],["001213797",{"2":{"253":1}}],["001211297",{"2":{"253":1}}],["001270391",{"2":{"253":1}}],["001271797",{"2":{"253":1}}],["00127967",{"2":{"89":1}}],["001261572",{"2":{"253":1}}],["001231446",{"2":{"253":1}}],["001293942",{"2":{"253":1}}],["001",{"2":{"225":1,"260":1}}],["0018046f",{"2":{"285":1}}],["001803906",{"2":{"253":1}}],["001886792",{"2":{"253":1}}],["001881554",{"2":{"253":1}}],["001840178",{"2":{"253":1}}],["001836546",{"2":{"253":1}}],["0018348376",{"2":{"147":1}}],["001818714",{"2":{"253":1}}],["001854344",{"2":{"253":1}}],["001857541",{"2":{"253":1}}],["001822183",{"2":{"253":1}}],["0018621369",{"2":{"89":1}}],["001512",{"2":{"274":1}}],["001518319",{"2":{"253":1}}],["001575661",{"2":{"253":1}}],["001594038",{"2":{"253":1}}],["001524673",{"2":{"253":1}}],["0015270555",{"2":{"89":1}}],["001584558",{"2":{"253":1}}],["0015867107",{"2":{"147":1}}],["001559482",{"2":{"253":1}}],["001530622",{"2":{"253":1}}],["0015469915",{"2":{"147":1}}],["001f0",{"2":{"90":1,"119":1,"234":1,"245":1}}],["001465745",{"2":{"253":1}}],["00146704",{"2":{"89":1}}],["001444635",{"2":{"253":1}}],["001451151",{"2":{"253":1}}],["00
1458327",{"2":{"253":1}}],["001458622",{"2":{"253":1}}],["001430126",{"2":{"253":1}}],["001479890",{"2":{"253":1}}],["001470311",{"2":{"253":1}}],["001489272",{"2":{"253":1}}],["001482069",{"2":{"253":1}}],["001429706",{"2":{"253":1}}],["001413772",{"2":{"253":1}}],["001491809",{"2":{"253":1}}],["001496360",{"2":{"253":1}}],["0014993",{"2":{"89":1}}],["001681",{"2":{"260":1}}],["0016859076",{"2":{"147":1}}],["001676",{"2":{"260":1}}],["001673",{"2":{"260":1}}],["001670",{"2":{"260":1}}],["001667",{"2":{"260":1}}],["001664",{"2":{"260":1}}],["001661",{"2":{"260":1}}],["001659",{"2":{"260":1}}],["001656",{"2":{"260":1}}],["001652",{"2":{"260":1}}],["001657575",{"2":{"253":1}}],["001637",{"2":{"260":1}}],["00163654",{"2":{"89":1}}],["001625932",{"2":{"253":1}}],["001649175",{"2":{"253":1}}],["001640767",{"2":{"253":1}}],["001645100",{"2":{"253":1}}],["001697728",{"2":{"253":1}}],["00169459",{"2":{"89":1}}],["00160158",{"2":{"89":1}}],["001773861737288e",{"2":{"287":1}}],["001789718",{"2":{"253":1}}],["001738792",{"2":{"253":1}}],["0017325052",{"2":{"147":1}}],["001700793",{"2":{"253":1}}],["001797066",{"2":{"253":1}}],["0017983615",{"2":{"89":1}}],["001751710",{"2":{"253":1}}],["00174489",{"2":{"89":1}}],["001056366",{"2":{"253":1}}],["001054667",{"2":{"253":1}}],["001096051",{"2":{"253":1}}],["001001803",{"2":{"253":1}}],["001007635",{"2":{"253":1}}],["001007139",{"2":{"253":1}}],["001022693",{"2":{"253":1}}],["001014963",{"2":{"253":1}}],["0010140468",{"2":{"118":1}}],["00101147",{"2":{"196":1}}],["0010442184",{"2":{"118":1}}],["001037403",{"2":{"253":1}}],["00103705",{"2":{"89":1}}],["001038662",{"2":{"253":1}}],["00103866",{"2":{"89":1}}],["00106235",{"2":{"89":1}}],["001149551",{"2":{"253":1}}],["001149394",{"2":{"253":1}}],["001148664",{"2":{"253":1}}],["001140883",{"2":{"253":1}}],["001145163",{"2":{"253":1}}],["001126388",{"2":{"253":1}}],["0011292367",{"2":{"89":1}}],["001197141",{"2":{"253":1}}],["001113268",{"2":{"253":1}}],["001112666",{"2":{"253":1}}],["00
11154839",{"2":{"89":1}}],["001130134",{"2":{"253":1}}],["0011872598",{"2":{"147":1}}],["00118357129",{"2":{"90":1}}],["00110328",{"2":{"89":1}}],["005546124688226e",{"2":{"287":1}}],["0055776797",{"2":{"147":1}}],["005886041",{"2":{"253":1}}],["0058369",{"2":{"118":1}}],["00569",{"2":{"260":2}}],["005691605",{"2":{"253":1}}],["005651589",{"2":{"253":1}}],["005671175",{"2":{"253":1}}],["0056276177",{"2":{"89":1}}],["005f0",{"2":{"253":1}}],["005",{"2":{"225":1,"260":6}}],["005070512",{"2":{"253":1}}],["005017307",{"2":{"253":1}}],["005086349",{"2":{"147":1}}],["005000000000000009",{"2":{"50":1}}],["005350559",{"2":{"147":1}}],["005353872",{"2":{"118":1}}],["005372945",{"2":{"118":1}}],["0053392933",{"2":{"89":1}}],["005158901",{"2":{"267":1}}],["005170135",{"2":{"253":1}}],["005173992",{"2":{"89":1}}],["0051055951",{"2":{"90":1}}],["00571",{"2":{"85":5}}],["000",{"2":{"279":1,"291":1,"295":2,"296":1}}],["0000",{"2":{"260":10}}],["000058015",{"2":{"253":1}}],["000058747",{"2":{"253":1}}],["000072045",{"2":{"253":1}}],["00000",{"2":{"204":47,"234":17,"236":3}}],["000000e+00>",{"2":{"147":4}}],["000245804313270806",{"2":{"287":1}}],["00024534184506921366",{"2":{"287":1}}],["0002416021787459457",{"2":{"287":1}}],["00024160316",{"2":{"285":1}}],["00024395707",{"2":{"285":1}}],["00024751409822006056",{"2":{"287":1}}],["00024751297",{"2":{"285":1}}],["00024739783644049055",{"2":{"287":1}}],["00024739647",{"2":{"285":1}}],["0002463296995581305",{"2":{"287":1}}],["00024633258",{"2":{"285":1}}],["00024677252581862267",{"2":{"287":1}}],["00024676963",{"2":{"285":1}}],["000249964",{"2":{"253":1}}],["00022932567297481477",{"2":{"287":1}}],["00022161423433166277",{"2":{"287":1}}],["00022161243",{"2":{"285":1}}],["00022037337",{"2":{"285":1}}],["0002236010605176565",{"2":{"287":1}}],["00022305499",{"2":{"285":1}}],["00022359965",{"2":{"285":1}}],["000226719",{"2":{"253":1}}],["00026180337971381206",{"2":{"287":1}}],["000261034",{"2":{"253":1}}],["0002628853403574167",{"2":{"287":1
}}],["00026288297",{"2":{"285":1}}],["0002652609671686e",{"2":{"287":1}}],["000264422",{"2":{"253":1}}],["00028733240285908827",{"2":{"287":1}}],["00028733205",{"2":{"285":1}}],["000281135",{"2":{"253":1}}],["000281832268",{"2":{"90":1}}],["000288372",{"2":{"253":1}}],["00023470185140701995",{"2":{"287":1}}],["00023419227770470874",{"2":{"287":1}}],["00023419409",{"2":{"285":1}}],["0002336549263445295",{"2":{"287":1}}],["000233010",{"2":{"253":1}}],["00023748613259022187",{"2":{"287":1}}],["00023747998",{"2":{"285":1}}],["00023296587503694646",{"2":{"287":1}}],["000232967",{"2":{"253":1}}],["00023203238661056113",{"2":{"287":1}}],["00023203099",{"2":{"285":1}}],["00023663114036301372",{"2":{"287":1}}],["00023663025",{"2":{"285":1}}],["00023637286531921677",{"2":{"287":1}}],["00023636897",{"2":{"285":1}}],["0002301521017214964",{"2":{"287":1}}],["0002301521",{"2":{"285":1}}],["0002746316613115953",{"2":{"287":1}}],["0002788537879161968",{"2":{"287":1}}],["00027885364",{"2":{"285":1}}],["0002782496928917827",{"2":{"287":1}}],["00027824836",{"2":{"285":1}}],["000278991",{"2":{"253":1}}],["000271820",{"2":{"253":1}}],["000279625",{"2":{"253":1}}],["000279479",{"2":{"253":1}}],["000276357",{"2":{"118":1}}],["0002966060395107406",{"2":{"287":1}}],["000290614",{"2":{"253":1}}],["00029203",{"2":{"196":1}}],["00025969017269987537",{"2":{"287":1}}],["00025430398188500445",{"2":{"287":1}}],["0002543031",{"2":{"285":1}}],["00025132792762114984",{"2":{"287":1}}],["0002513219",{"2":{"285":1}}],["00025890653984722865",{"2":{"287":1}}],["00025890366",{"2":{"285":1}}],["0002538888216697285",{"2":{"287":1}}],["00025388983",{"2":{"285":1}}],["00025326014",{"2":{"166":1}}],["0002528183136094774",{"2":{"287":1}}],["0002528165",{"2":{"285":1}}],["000252445",{"2":{"253":1}}],["00020724653688725098",{"2":{"287":1}}],["00020263389821266977",{"2":{"287":1}}],["000202413",{"2":{"253":1}}],["00020513723141489166",{"2":{"287":1}}],["00020513852",{"2":{"285":1}}],["00020498916317610683",{"2":{"2
87":1}}],["00020499196",{"2":{"285":1}}],["0002098619435244305",{"2":{"287":1}}],["00020986063",{"2":{"285":1}}],["000209119",{"2":{"253":1}}],["00020312480676484222",{"2":{"287":1}}],["00020312569",{"2":{"285":1}}],["0002030581951020879",{"2":{"287":1}}],["00020306083",{"2":{"285":1}}],["000203011135",{"2":{"90":1}}],["00020137881271728322",{"2":{"287":1}}],["00020137579",{"2":{"285":1}}],["00020127723",{"2":{"285":1}}],["000201709",{"2":{"253":1}}],["00020825366743085208",{"2":{"287":1}}],["00020825498",{"2":{"285":1}}],["000208197156015778",{"2":{"287":1}}],["00020819326",{"2":{"285":1}}],["0002084372189612855",{"2":{"287":1}}],["00020843332",{"2":{"285":1}}],["00020849705",{"2":{"166":1}}],["0002009630081543254",{"2":{"287":1}}],["00020095686",{"2":{"285":1}}],["00020041184687265496",{"2":{"287":1}}],["00020041059",{"2":{"285":1}}],["000200418",{"2":{"253":1}}],["00020055473",{"2":{"118":1}}],["00021100041385111674",{"2":{"287":1}}],["00021180017764961743",{"2":{"287":1}}],["00021180407",{"2":{"285":1}}],["00021278356374514274",{"2":{"287":1}}],["00021278443",{"2":{"285":1}}],["00021316138392515777",{"2":{"287":1}}],["00021315523",{"2":{"285":1}}],["00021031218",{"2":{"285":1}}],["00021099804",{"2":{"285":1}}],["000210634",{"2":{"253":1}}],["00021599596737085198",{"2":{"287":1}}],["00021599686",{"2":{"285":1}}],["000215553516548378",{"2":{"287":1}}],["00021555215",{"2":{"285":1}}],["000219414",{"2":{"253":1}}],["00021798005",{"2":{"118":1}}],["00021804248",{"2":{"118":1}}],["0007703236351394529",{"2":{"287":1}}],["0007736237",{"2":{"196":1}}],["000773079",{"2":{"89":1}}],["0007602384872897437",{"2":{"287":1}}],["0007321562545599134",{"2":{"287":1}}],["000732422",{"2":{"147":1}}],["0007068354548441024",{"2":{"287":1}}],["0007052501604153043",{"2":{"287":1}}],["000707322",{"2":{"253":1}}],["000709339",{"2":{"253":1}}],["0007234091484616137",{"2":{"287":1}}],["000723396",{"2":{"253":1}}],["0007208689380831659",{"2":{"287":1}}],["000722710",{"2":{"253":1}}],["000725
332",{"2":{"253":1}}],["000789952",{"2":{"253":1}}],["000786213",{"2":{"253":1}}],["000740890",{"2":{"253":1}}],["0007467641",{"2":{"118":1}}],["00074701244",{"2":{"118":1}}],["0007594082672826099",{"2":{"287":1}}],["0007518903236338871",{"2":{"286":1}}],["000750440",{"2":{"253":1}}],["00075",{"2":{"85":2}}],["0006387189687910598",{"2":{"287":1}}],["000636658",{"2":{"253":1}}],["0006420515608012984",{"2":{"287":1}}],["0006416307649416789",{"2":{"287":1}}],["000641221",{"2":{"253":1}}],["0006872855625840169",{"2":{"287":1}}],["0006863294593320278",{"2":{"287":1}}],["00068473816",{"2":{"165":1}}],["0006294064772027735",{"2":{"287":1}}],["000625095",{"2":{"253":1}}],["000622213",{"2":{"253":1}}],["0006970124261547597",{"2":{"287":1}}],["0006929681398377475",{"2":{"287":1}}],["000694363",{"2":{"253":1}}],["000693607",{"2":{"253":1}}],["0006786424064657307",{"2":{"287":1}}],["000678301",{"2":{"253":1}}],["000671515387886372",{"2":{"287":1}}],["000676658",{"2":{"253":1}}],["000612216",{"2":{"253":1}}],["000618911",{"2":{"89":1}}],["0006040130617433301",{"2":{"287":1}}],["000604900",{"2":{"253":1}}],["00060117245",{"2":{"118":1}}],["000657654",{"2":{"253":1}}],["000657073",{"2":{"253":1}}],["000656006",{"2":{"253":1}}],["000659175",{"2":{"89":1}}],["0006668962852593428",{"2":{"287":1}}],["000661293486180718",{"2":{"287":1}}],["0006699663759985883",{"2":{"287":1}}],["000669667",{"2":{"253":1}}],["000665410",{"2":{"253":1}}],["000665883",{"2":{"89":1}}],["0003474788350635215",{"2":{"287":1}}],["000343106",{"2":{"253":1}}],["0003",{"2":{"278":1}}],["0003323131938604503",{"2":{"287":1}}],["000339579",{"2":{"253":1}}],["000339070",{"2":{"253":1}}],["000336276",{"2":{"253":1}}],["000337431",{"2":{"253":1}}],["00031432995686404174",{"2":{"287":1}}],["000312800",{"2":{"253":1}}],["000310050",{"2":{"253":1}}],["000317998",{"2":{"253":1}}],["000319502",{"2":{"253":1}}],["000319554",{"2":{"253":1}}],["000315701",{"2":{"253":1}}],["000311622",{"2":{"253":1}}],["000318603",{"2":{"253":
1}}],["000353080",{"2":{"253":1}}],["000355202",{"2":{"253":1}}],["0003253273352741355",{"2":{"287":1}}],["00032724125691788327",{"2":{"287":1}}],["00032723838",{"2":{"285":1}}],["000321687",{"2":{"253":1}}],["000320880",{"2":{"253":1}}],["000366133",{"2":{"253":1}}],["000364929",{"2":{"253":1}}],["000363118",{"2":{"253":1}}],["000365587",{"2":{"89":1}}],["00037111824754254026",{"2":{"287":1}}],["000370217",{"2":{"253":1}}],["000375299",{"2":{"253":1}}],["000373781",{"2":{"253":1}}],["000376609",{"2":{"253":1}}],["000377079",{"2":{"253":1}}],["0003947422343828539",{"2":{"287":1}}],["000392687",{"2":{"253":1}}],["000395160",{"2":{"253":1}}],["000397676",{"2":{"253":1}}],["000391039",{"2":{"89":1}}],["0005686196674731176",{"2":{"287":1}}],["000584702750938777",{"2":{"287":1}}],["000584197",{"2":{"89":1}}],["000547062580375197",{"2":{"287":1}}],["000542178",{"2":{"253":1}}],["000529087",{"2":{"253":1}}],["000520651",{"2":{"89":1}}],["000555564038915714",{"2":{"287":1}}],["000559338",{"2":{"253":1}}],["000554989",{"2":{"253":1}}],["000551110",{"2":{"253":1}}],["000556399",{"2":{"253":1}}],["000550666",{"2":{"253":1}}],["000534889",{"2":{"253":1}}],["00053332",{"2":{"196":1}}],["0005f0",{"2":{"253":1}}],["0005770834146635521",{"2":{"287":1}}],["0005767567490700625",{"2":{"287":1}}],["00057825993",{"2":{"147":1}}],["000574605",{"2":{"89":1}}],["000500438",{"2":{"253":1}}],["000508558",{"2":{"253":1}}],["000505303",{"2":{"253":1}}],["000503887",{"2":{"253":1}}],["000504208321",{"2":{"90":1}}],["000501535",{"2":{"89":1}}],["0004252697652671806",{"2":{"287":1}}],["000427971",{"2":{"253":1}}],["0004",{"2":{"278":1,"295":1}}],["0004470814936101512",{"2":{"287":1}}],["000441957",{"2":{"253":1}}],["000449356",{"2":{"253":1}}],["000449867",{"2":{"89":1}}],["0004115588958664411",{"2":{"287":1}}],["0004115561",{"2":{"285":1}}],["000411240",{"2":{"253":1}}],["000418792",{"2":{"253":1}}],["000412173",{"2":{"253":1}}],["000485165",{"2":{"253":1}}],["000480712",{"2":{"89":1}}],["000480
834",{"2":{"89":1}}],["000450692",{"2":{"253":1}}],["000458097",{"2":{"253":1}}],["000451727",{"2":{"253":1}}],["000438084",{"2":{"253":1}}],["000438617",{"2":{"89":1}}],["000439273",{"2":{"253":1}}],["000491625",{"2":{"253":1}}],["000495484",{"2":{"89":1}}],["00046617747547874494",{"2":{"287":1}}],["000466423",{"2":{"253":1}}],["000466388",{"2":{"253":1}}],["000461125",{"2":{"253":1}}],["00046014786",{"2":{"165":1}}],["000405950",{"2":{"253":1}}],["000401148",{"2":{"253":1}}],["00040024254",{"2":{"89":1}}],["00047666396",{"2":{"147":1}}],["000470662",{"2":{"89":1}}],["0008284032113006212",{"2":{"287":1}}],["0008243604406214613",{"2":{"287":1}}],["0008470960064196446",{"2":{"287":1}}],["000844118",{"2":{"253":1}}],["000894072",{"2":{"253":1}}],["000834267",{"2":{"253":1}}],["000882140",{"2":{"253":1}}],["000884497",{"2":{"253":1}}],["000887432",{"2":{"253":1}}],["00088779256",{"2":{"118":1}}],["00088798604",{"2":{"118":1}}],["000818373",{"2":{"253":1}}],["000819583",{"2":{"253":1}}],["000813131",{"2":{"89":1}}],["000859504",{"2":{"253":1}}],["000857728",{"2":{"253":1}}],["000856754",{"2":{"253":1}}],["0008679954776256519",{"2":{"287":1}}],["000868318",{"2":{"89":1}}],["000860063",{"2":{"89":1}}],["0008762945",{"2":{"89":1}}],["0009673660790493529",{"2":{"287":1}}],["000964010",{"2":{"253":1}}],["000991343",{"2":{"253":1}}],["000994991",{"2":{"253":1}}],["000992391663909964",{"2":{"50":2}}],["000918692",{"2":{"253":1}}],["000912517",{"2":{"253":1}}],["000916688",{"2":{"253":1}}],["000959212",{"2":{"253":1}}],["000936380",{"2":{"253":1}}],["000983181",{"2":{"253":1}}],["000927515",{"2":{"253":1}}],["000926178",{"2":{"253":1}}],["000926728",{"2":{"89":1}}],["000900613",{"2":{"253":1}}],["000908301",{"2":{"89":1}}],["000977637",{"2":{"89":1}}],["00014995426587898148",{"2":{"287":1}}],["00014995539",{"2":{"285":1}}],["00014921267395239928",{"2":{"287":1}}],["00014921126",{"2":{"285":1}}],["00014257342379206514",{"2":{"287":1}}],["00014257689",{"2":{"285":1}}],["000142941
78509251964",{"2":{"287":1}}],["00014293577",{"2":{"285":1}}],["00014276416593171276",{"2":{"287":1}}],["00014276597",{"2":{"285":1}}],["00014830955658708142",{"2":{"287":1}}],["00014831097",{"2":{"285":1}}],["00014810692490824082",{"2":{"287":1}}],["00014811082",{"2":{"285":1}}],["00014893794",{"2":{"285":1}}],["00014570788116276455",{"2":{"287":1}}],["0001457063",{"2":{"285":1}}],["00014588151163755285",{"2":{"287":1}}],["00014588062",{"2":{"285":1}}],["0001451950643542815",{"2":{"287":1}}],["00014519217",{"2":{"285":1}}],["0001448503304618146",{"2":{"287":1}}],["00014485297",{"2":{"285":1}}],["00014460866251213585",{"2":{"287":1}}],["00014460477",{"2":{"285":1}}],["0001445886271263188",{"2":{"287":1}}],["00014458633",{"2":{"285":1}}],["00014458733",{"2":{"285":1}}],["00014067009244447858",{"2":{"287":1}}],["0001406692",{"2":{"285":1}}],["0001405256499738763",{"2":{"287":1}}],["00014052702",{"2":{"285":1}}],["0001404190799556239",{"2":{"287":1}}],["0001404181",{"2":{"285":1}}],["00014026918827615586",{"2":{"287":1}}],["00014026738",{"2":{"285":1}}],["00014710546113878698",{"2":{"287":1}}],["00014710434",{"2":{"285":1}}],["00014762198564927372",{"2":{"287":1}}],["00014762214",{"2":{"285":1}}],["00014780753960773508",{"2":{"287":1}}],["00014780935",{"2":{"285":1}}],["00014737185768606465",{"2":{"287":1}}],["00014737186748719914",{"2":{"287":1}}],["00014737055",{"2":{"285":1}}],["00014736797",{"2":{"285":1}}],["000147021",{"2":{"253":1}}],["0001460198115291046",{"2":{"287":1}}],["00014601879",{"2":{"285":1}}],["0001463368188608353",{"2":{"287":1}}],["00014633292",{"2":{"285":1}}],["0001463257",{"2":{"285":1}}],["00014665237",{"2":{"285":1}}],["000146818",{"2":{"89":1}}],["0001437923515708472",{"2":{"287":1}}],["00014379322",{"2":{"285":1}}],["00014343801924007225",{"2":{"287":1}}],["00014343837",{"2":{"285":1}}],["00014328124023174605",{"2":{"287":1}}],["00014328213",{"2":{"285":1}}],["00014350317165092021",{"2":{"287":1}}],["0001435018",{"2":{"285":1}}],["0001438388
7173639653",{"2":{"287":1}}],["00014383887",{"2":{"285":1}}],["000143649",{"2":{"253":1}}],["000141316",{"2":{"253":1}}],["00015144216",{"2":{"285":1}}],["00015577589217132342",{"2":{"287":1}}],["0001557746",{"2":{"285":1}}],["0001553727006180973",{"2":{"287":1}}],["00015537383",{"2":{"285":1}}],["00015582490511340387",{"2":{"287":1}}],["00015582202",{"2":{"285":1}}],["0001537973348425927",{"2":{"287":1}}],["00015379698",{"2":{"285":1}}],["00015344942765331058",{"2":{"287":1}}],["00015344907",{"2":{"285":1}}],["00015366018782550243",{"2":{"287":1}}],["00015366152",{"2":{"285":1}}],["00015728436003772776",{"2":{"287":1}}],["0001572809",{"2":{"285":1}}],["00015748128882344428",{"2":{"287":1}}],["00015747514",{"2":{"285":1}}],["00015791032722585596",{"2":{"287":1}}],["00015791321",{"2":{"285":1}}],["00015675257645384697",{"2":{"287":1}}],["0001567502",{"2":{"285":1}}],["00015687008387541565",{"2":{"287":1}}],["00015687355",{"2":{"285":1}}],["00015631455643210496",{"2":{"287":1}}],["00015631424",{"2":{"285":1}}],["00015662540719112584",{"2":{"287":1}}],["0001566244",{"2":{"285":1}}],["00015698358826003438",{"2":{"287":1}}],["00015698241328174312",{"2":{"287":1}}],["00015698257",{"2":{"285":1}}],["00015698108",{"2":{"285":1}}],["00015813161772875286",{"2":{"287":1}}],["00015813061",{"2":{"285":1}}],["00015839677796321706",{"2":{"287":1}}],["0001583958",{"2":{"285":1}}],["00015856075248782845",{"2":{"287":1}}],["00015852405465016132",{"2":{"287":1}}],["00015852142",{"2":{"285":1}}],["00015855787",{"2":{"285":1}}],["00015254749575438378",{"2":{"287":1}}],["00015254486",{"2":{"285":1}}],["0001526995462245013",{"2":{"287":1}}],["000152633",{"2":{"253":1}}],["00015273282426144847",{"2":{"287":1}}],["00015272667",{"2":{"285":1}}],["00015270218",{"2":{"285":1}}],["00015405409250853318",{"2":{"287":1}}],["00015405445",{"2":{"285":1}}],["00015492507150450375",{"2":{"287":1}}],["00015492218",{"2":{"285":1}}],["0001545632985649061",{"2":{"287":1}}],["0001545633",{"2":{"285":1}}],["
00015478948",{"2":{"285":1}}],["00015922881427245944",{"2":{"287":1}}],["00015922752",{"2":{"285":1}}],["000159858",{"2":{"253":1}}],["000159133",{"2":{"253":1}}],["00018346638828957787",{"2":{"287":1}}],["00018346768",{"2":{"285":1}}],["0001822830788747122",{"2":{"287":1}}],["00018228179",{"2":{"285":1}}],["00018978464723017397",{"2":{"287":1}}],["00018978622",{"2":{"285":1}}],["00018196736714670334",{"2":{"287":1}}],["00018187299507846564",{"2":{"287":1}}],["0001818743",{"2":{"285":1}}],["00018153626841048174",{"2":{"287":1}}],["00018153591",{"2":{"285":1}}],["0001815078014620828",{"2":{"287":1}}],["0001815075",{"2":{"285":1}}],["0001859879255141768",{"2":{"287":1}}],["00018598446",{"2":{"285":1}}],["0001851385028766427",{"2":{"287":1}}],["00018513748",{"2":{"285":1}}],["00018530846425421303",{"2":{"287":1}}],["00018530748",{"2":{"285":1}}],["0001849428463352618",{"2":{"287":1}}],["0001849432",{"2":{"285":1}}],["00018461362394226968",{"2":{"287":1}}],["0001846106",{"2":{"285":1}}],["0001871437805714198",{"2":{"287":1}}],["00018713776",{"2":{"285":1}}],["00018775681507582261",{"2":{"287":1}}],["00018775418",{"2":{"285":1}}],["00018817123738681168",{"2":{"287":1}}],["00018816844",{"2":{"285":1}}],["0001883290400383472",{"2":{"287":1}}],["0001883264",{"2":{"285":1}}],["00018854346635367353",{"2":{"287":1}}],["00018854189",{"2":{"285":1}}],["00018855169220806766",{"2":{"287":1}}],["00018855205",{"2":{"285":1}}],["00018647255025936032",{"2":{"287":1}}],["00018647601",{"2":{"285":1}}],["00018631656627738009",{"2":{"287":1}}],["00018631369",{"2":{"285":1}}],["000186986",{"2":{"253":1}}],["00018010626160250601",{"2":{"287":1}}],["00018010757",{"2":{"285":1}}],["0001803648829907284",{"2":{"287":1}}],["00018036878",{"2":{"285":1}}],["00018031502258926727",{"2":{"287":1}}],["00018031502",{"2":{"285":1}}],["000180287",{"2":{"253":1}}],["00016678719760673813",{"2":{"287":1}}],["00016678589",{"2":{"285":1}}],["00016218015626896033",{"2":{"287":1}}],["00016218047",{"2":{"285":1}
}],["00016787764960231856",{"2":{"287":1}}],["00016784511",{"2":{"285":1}}],["0001678838",{"2":{"285":1}}],["0001642421836811525",{"2":{"287":1}}],["00016420734329533772",{"2":{"287":1}}],["00016420889",{"2":{"285":1}}],["00016423872",{"2":{"285":1}}],["00016022111170795186",{"2":{"287":1}}],["00016022213",{"2":{"285":1}}],["000160382622218286",{"2":{"287":1}}],["00016038121",{"2":{"285":1}}],["00016830684960076193",{"2":{"287":1}}],["00016830383",{"2":{"285":1}}],["00016878",{"2":{"196":1}}],["0001639669261108711",{"2":{"287":1}}],["00016396995",{"2":{"285":1}}],["00016358889554538193",{"2":{"287":1}}],["00016358988",{"2":{"285":1}}],["00016506753305175245",{"2":{"287":1}}],["00016514066258313396",{"2":{"287":1}}],["0001651372",{"2":{"285":1}}],["00016531819770708206",{"2":{"287":1}}],["00016531718",{"2":{"285":1}}],["00016583880873204335",{"2":{"287":1}}],["00016583968",{"2":{"285":1}}],["0001657753725888879",{"2":{"287":1}}],["00016577568",{"2":{"285":1}}],["00016522053294273028",{"2":{"287":1}}],["000165219",{"2":{"285":1}}],["00016961962976933575",{"2":{"287":1}}],["00016962096764438475",{"2":{"287":1}}],["00016962092",{"2":{"285":1}}],["00016962066",{"2":{"285":1}}],["00016922388123738445",{"2":{"287":1}}],["00016922234",{"2":{"285":1}}],["000169103",{"2":{"253":1}}],["00016143989967724203",{"2":{"287":1}}],["00016144144",{"2":{"285":1}}],["000161957",{"2":{"253":1}}],["00017717759912452464",{"2":{"287":1}}],["000177574",{"2":{"253":1}}],["00017139958577282128",{"2":{"287":1}}],["00017139669",{"2":{"285":1}}],["00017533509748934086",{"2":{"287":1}}],["00017533799",{"2":{"285":1}}],["00017535916",{"2":{"285":1}}],["0001733786607133871",{"2":{"287":1}}],["00017337779",{"2":{"285":1}}],["00017307313",{"2":{"285":1}}],["00017892397821689",{"2":{"287":1}}],["00017892485",{"2":{"285":1}}],["00017869890505656856",{"2":{"287":1}}],["00017869765",{"2":{"285":1}}],["0001788694347288481",{"2":{"287":1}}],["00017882947346387792",{"2":{"287":1}}],["0001788286",{"2":{"285":
1}}],["00017887045",{"2":{"285":1}}],["00017627767276980928",{"2":{"287":1}}],["00017627855",{"2":{"285":1}}],["00017634654",{"2":{"285":1}}],["000176694",{"2":{"253":1}}],["0001706220965378943",{"2":{"287":1}}],["0001706207",{"2":{"285":1}}],["00017082683916638447",{"2":{"287":1}}],["00017082684",{"2":{"285":1}}],["00017419038109674125",{"2":{"287":1}}],["0001741893730216401",{"2":{"287":1}}],["00017418951",{"2":{"285":1}}],["00017418849",{"2":{"285":1}}],["00017425688998593265",{"2":{"287":1}}],["00017425556",{"2":{"285":1}}],["0001744479",{"2":{"118":1}}],["00017982694702403543",{"2":{"287":1}}],["00017982828",{"2":{"285":1}}],["00017905233281384326",{"2":{"287":1}}],["00017904844",{"2":{"285":1}}],["00017946545",{"2":{"285":1}}],["00017930324791485817",{"2":{"287":1}}],["0001793029",{"2":{"285":1}}],["00017939445827067134",{"2":{"287":1}}],["00017939544",{"2":{"285":1}}],["00019021942681524766",{"2":{"287":1}}],["00019021974",{"2":{"285":1}}],["00019400278344487945",{"2":{"287":1}}],["00019560213793038666",{"2":{"287":1}}],["00019560126",{"2":{"285":1}}],["00019562244",{"2":{"118":1}}],["00019370720307998422",{"2":{"287":1}}],["00019370418",{"2":{"285":1}}],["00019399663",{"2":{"285":1}}],["000199072086771057",{"2":{"287":1}}],["00019906907",{"2":{"285":1}}],["00019974101012196015",{"2":{"287":1}}],["00019974002",{"2":{"285":1}}],["0001913838338885272",{"2":{"287":1}}],["00019138037",{"2":{"285":1}}],["00019176832379611924",{"2":{"287":1}}],["00019176696",{"2":{"285":1}}],["000191524",{"2":{"253":1}}],["00019252053318921704",{"2":{"287":1}}],["00019252152",{"2":{"285":1}}],["000192074",{"2":{"253":1}}],["000196180",{"2":{"253":1}}],["000196472",{"2":{"118":1}}],["00012429048071639927",{"2":{"287":1}}],["00012428919",{"2":{"285":1}}],["00012492937172939317",{"2":{"287":1}}],["00012493068",{"2":{"285":1}}],["00012419714688146732",{"2":{"287":1}}],["00012419584",{"2":{"285":1}}],["00012447183471255168",{"2":{"287":1}}],["00012447026",{"2":{"285":1}}],["000124408642
38766146",{"2":{"287":1}}],["00012441004",{"2":{"285":1}}],["0001293715713186908",{"2":{"287":1}}],["0001293727",{"2":{"285":1}}],["0001299821471066502",{"2":{"287":1}}],["00012998351",{"2":{"285":1}}],["00012188276662780651",{"2":{"287":1}}],["00012188146",{"2":{"285":1}}],["00012109893613137458",{"2":{"287":1}}],["00012109862",{"2":{"285":1}}],["00012135534017238758",{"2":{"287":1}}],["00012135565",{"2":{"285":1}}],["00012128945757533569",{"2":{"287":1}}],["00012129234",{"2":{"285":1}}],["00012153430822973407",{"2":{"287":1}}],["00012153343",{"2":{"285":1}}],["00012151897",{"2":{"118":1}}],["00012225274870505853",{"2":{"287":1}}],["00012224929",{"2":{"285":1}}],["0001224293046283855",{"2":{"287":1}}],["00012243056",{"2":{"285":1}}],["000122820",{"2":{"253":1}}],["00012519860517423711",{"2":{"287":1}}],["00012519558",{"2":{"285":1}}],["00012531393",{"2":{"285":1}}],["0001257858046340239",{"2":{"287":1}}],["0001257690287759273",{"2":{"287":1}}],["00012576614",{"2":{"285":1}}],["00012577978",{"2":{"285":1}}],["00012561317016270552",{"2":{"287":1}}],["00012561919",{"2":{"285":1}}],["000125641",{"2":{"253":1}}],["000125642",{"2":{"253":1}}],["00012584605385182484",{"2":{"287":1}}],["00012584326",{"2":{"285":1}}],["00012585469186696105",{"2":{"287":1}}],["00012585316",{"2":{"285":1}}],["00012357169119976596",{"2":{"287":1}}],["00012357271",{"2":{"285":1}}],["00012332559453693195",{"2":{"287":1}}],["00012333161",{"2":{"285":1}}],["00012305155643651454",{"2":{"287":1}}],["00012305002",{"2":{"285":1}}],["00012368748",{"2":{"285":1}}],["000123195",{"2":{"253":1}}],["00012722638313084693",{"2":{"287":1}}],["0001272267",{"2":{"285":1}}],["0001273455293263569",{"2":{"287":1}}],["00012734522",{"2":{"285":1}}],["000127603817525491",{"2":{"287":1}}],["00012760228",{"2":{"285":1}}],["0001270977998748817",{"2":{"287":1}}],["00012709879",{"2":{"285":1}}],["00012771800808746188",{"2":{"287":1}}],["00012771913",{"2":{"285":1}}],["00012774013380002224",{"2":{"287":1}}],["00012774303",{
"2":{"285":1}}],["00012823513296229805",{"2":{"287":1}}],["00012824128",{"2":{"285":1}}],["0001286015278063151",{"2":{"287":1}}],["00012860054",{"2":{"285":1}}],["00012882442",{"2":{"285":1}}],["00012800864546721013",{"2":{"287":1}}],["00012804550977499733",{"2":{"287":1}}],["00012804705",{"2":{"285":1}}],["00012801001",{"2":{"285":1}}],["00012875935864890145",{"2":{"287":1}}],["00012876061",{"2":{"285":1}}],["0001287f",{"2":{"285":1}}],["000128774",{"2":{"253":1}}],["00012002070494591742",{"2":{"287":1}}],["000120019686",{"2":{"285":1}}],["00012033743361808402",{"2":{"287":1}}],["00012033464",{"2":{"285":1}}],["00012073029734927432",{"2":{"287":1}}],["000120730605",{"2":{"285":1}}],["00012095384879359694",{"2":{"287":1}}],["000120952864",{"2":{"285":1}}],["00012061846275449036",{"2":{"287":1}}],["00012061757",{"2":{"285":1}}],["000120629",{"2":{"253":1}}],["00012611258244239548",{"2":{"287":1}}],["00012610656",{"2":{"285":1}}],["00012646582461508374",{"2":{"287":1}}],["00012646428",{"2":{"285":1}}],["00012697043350221043",{"2":{"287":1}}],["00012697169",{"2":{"285":1}}],["00012625653267639905",{"2":{"287":1}}],["00012625684",{"2":{"285":1}}],["0001268205189629806",{"2":{"287":1}}],["00012682185",{"2":{"285":1}}],["00012631515552424488",{"2":{"287":1}}],["000126315",{"2":{"285":1}}],["000126347542",{"2":{"90":1}}],["00013839561436766043",{"2":{"287":1}}],["0001383969",{"2":{"285":1}}],["00013892735421000216",{"2":{"287":1}}],["00013892472",{"2":{"285":1}}],["00013870627",{"2":{"285":1}}],["0001361160562741851",{"2":{"287":1}}],["000136122",{"2":{"253":1}}],["00013660281003431953",{"2":{"287":1}}],["00013660316",{"2":{"285":1}}],["00013608267654099173",{"2":{"287":1}}],["0001360887",{"2":{"285":1}}],["00013310800752116086",{"2":{"287":1}}],["00013310836",{"2":{"285":1}}],["00013397353610309097",{"2":{"287":1}}],["00013397221",{"2":{"285":1}}],["00013323872651408686",{"2":{"287":1}}],["00013323785",{"2":{"285":1}}],["00013321274353093578",{"2":{"287":1}}],["0001332155
4",{"2":{"285":1}}],["0001376391901228972",{"2":{"287":1}}],["0001376406",{"2":{"285":1}}],["00013785016378987656",{"2":{"287":1}}],["00013785157",{"2":{"285":1}}],["00013905988311074547",{"2":{"287":1}}],["00013918267161861086",{"2":{"287":1}}],["0001391857",{"2":{"285":1}}],["00013973686715257073",{"2":{"287":1}}],["00013973672",{"2":{"285":1}}],["00013939503257717642",{"2":{"287":1}}],["00013939405",{"2":{"285":1}}],["0001302785303120752",{"2":{"287":1}}],["00013027941",{"2":{"285":1}}],["00013047291307106862",{"2":{"287":1}}],["00013047179",{"2":{"285":1}}],["00013016852812108295",{"2":{"287":1}}],["00013016822",{"2":{"285":1}}],["0001308322",{"2":{"118":1}}],["00013231759597882783",{"2":{"287":1}}],["00013231862",{"2":{"285":1}}],["0001328824238684407",{"2":{"287":1}}],["0001328794",{"2":{"285":1}}],["00013252765310263048",{"2":{"287":1}}],["00013252866",{"2":{"285":1}}],["00013248406845412403",{"2":{"287":1}}],["00013248438",{"2":{"285":1}}],["00013299776471244751",{"2":{"287":1}}],["00013299647",{"2":{"285":1}}],["00013297566610974678",{"2":{"287":1}}],["00013297427",{"2":{"285":1}}],["00013208596099301942",{"2":{"287":1}}],["00013208737",{"2":{"285":1}}],["0001320207957178097",{"2":{"287":1}}],["0001320208",{"2":{"285":1}}],["00013109555111832344",{"2":{"287":1}}],["00013108953",{"2":{"285":1}}],["00013142848731350815",{"2":{"287":1}}],["00013142708",{"2":{"285":1}}],["0001311944364822806",{"2":{"287":1}}],["00013119343",{"2":{"285":1}}],["0001316468709501647",{"2":{"287":1}}],["00013164424",{"2":{"285":1}}],["00013184866737184743",{"2":{"287":1}}],["0001318502",{"2":{"285":1}}],["00013158180764107847",{"2":{"287":1}}],["00013158418",{"2":{"285":1}}],["00013151791100589748",{"2":{"287":1}}],["0001315169",{"2":{"285":1}}],["00013535372357250256",{"2":{"287":1}}],["0001353566",{"2":{"285":1}}],["00013549696292397282",{"2":{"287":1}}],["00013549681",{"2":{"285":1}}],["00013587971741000395",{"2":{"287":1}}],["00013582121532928907",{"2":{"287":1}}],["00013588318"
,{"2":{"285":1}}],["00013581832",{"2":{"285":1}}],["0001352817274780579",{"2":{"287":1}}],["00013528141",{"2":{"285":1}}],["00013528241490217056",{"2":{"287":1}}],["00013527626",{"2":{"285":1}}],["000135529",{"2":{"253":1}}],["000134952149724482",{"2":{"287":1}}],["00013445213917561414",{"2":{"287":1}}],["00013445372",{"2":{"285":1}}],["00013471883642775397",{"2":{"287":1}}],["00013471753",{"2":{"285":1}}],["00013468765776415066",{"2":{"287":1}}],["00013469068",{"2":{"285":1}}],["00013420232527813612",{"2":{"287":1}}],["00013420079",{"2":{"285":1}}],["00013405368287683735",{"2":{"287":1}}],["0001340508",{"2":{"285":1}}],["00013409055790199713",{"2":{"287":1}}],["00013409156",{"2":{"285":1}}],["00013431907",{"2":{"118":1}}],["00011405178289587279",{"2":{"287":1}}],["00011405091",{"2":{"285":1}}],["00011439576179622385",{"2":{"287":1}}],["00011439561",{"2":{"285":1}}],["00011722279989026201",{"2":{"287":1}}],["000117220006",{"2":{"285":1}}],["00011787768060475499",{"2":{"287":1}}],["000117880474",{"2":{"285":1}}],["00011697982280827048",{"2":{"287":1}}],["00011625968121560546",{"2":{"287":1}}],["0001162607",{"2":{"285":1}}],["00011604091029803909",{"2":{"287":1}}],["00011603789",{"2":{"285":1}}],["00011673296424153897",{"2":{"287":1}}],["00011673116",{"2":{"285":1}}],["00011539036435178243",{"2":{"287":1}}],["00011539138",{"2":{"285":1}}],["00011569386127735784",{"2":{"287":1}}],["00011569401",{"2":{"285":1}}],["00011516668527934832",{"2":{"287":1}}],["00011516987613302975",{"2":{"287":1}}],["00011516511",{"2":{"285":1}}],["000115171286",{"2":{"285":1}}],["00011523285029788361",{"2":{"287":1}}],["00011524183073657884",{"2":{"287":1}}],["00011524323",{"2":{"285":1}}],["00011522683",{"2":{"285":1}}],["00011999942428067517",{"2":{"287":1}}],["000119998535",{"2":{"285":1}}],["00011928112002967413",{"2":{"287":1}}],["00011928401",{"2":{"285":1}}],["00011939036595527266",{"2":{"287":1}}],["00011939068",{"2":{"285":1}}],["00011939605",{"2":{"285":1}}],["00011009070549723973"
,{"2":{"287":1}}],["000110084686",{"2":{"285":1}}],["00011010285077698089",{"2":{"287":1}}],["000110101304",{"2":{"285":1}}],["00011055657399078439",{"2":{"287":1}}],["00011055921",{"2":{"285":1}}],["00011085212",{"2":{"285":1}}],["0001109530814941399",{"2":{"287":1}}],["00011095441",{"2":{"285":1}}],["00011097497917925826",{"2":{"287":1}}],["00011097498",{"2":{"285":1}}],["00011099165567425477",{"2":{"287":1}}],["000110991656",{"2":{"285":1}}],["00011369781190261977",{"2":{"287":1}}],["000113697504",{"2":{"285":1}}],["00011335811527983256",{"2":{"287":1}}],["00011335675",{"2":{"285":1}}],["00011321939605001596",{"2":{"287":1}}],["00011322229",{"2":{"285":1}}],["000113272",{"2":{"253":1}}],["00011270459683944482",{"2":{"287":1}}],["000112701135",{"2":{"285":1}}],["00011255057094112734",{"2":{"287":1}}],["0001125497",{"2":{"285":1}}],["00011281472058936172",{"2":{"287":1}}],["00011281384",{"2":{"285":1}}],["00011238964486442804",{"2":{"287":1}}],["00011238701",{"2":{"285":1}}],["00011204144006756364",{"2":{"287":1}}],["000112042806",{"2":{"285":1}}],["0001124222845809427",{"2":{"287":1}}],["00011242075",{"2":{"285":1}}],["00011214958112290285",{"2":{"287":1}}],["00011214721",{"2":{"285":1}}],["00011210597069567706",{"2":{"287":1}}],["00011210508",{"2":{"285":1}}],["00011266387790009605",{"2":{"287":1}}],["00011262947921039688",{"2":{"287":1}}],["00011262807",{"2":{"285":1}}],["00011267003",{"2":{"285":1}}],["000112652",{"2":{"89":1}}],["00011892866349421466",{"2":{"287":1}}],["00011892835",{"2":{"285":1}}],["00011875635673180477",{"2":{"287":1}}],["00011875496",{"2":{"285":1}}],["00011885238020314611",{"2":{"287":1}}],["00011885097",{"2":{"285":1}}],["00011828132846848188",{"2":{"287":1}}],["00011828133",{"2":{"285":1}}],["00011844671532303873",{"2":{"287":1}}],["00011844542",{"2":{"285":1}}],["00011844933",{"2":{"118":1}}],["00010185048668330986",{"2":{"287":1}}],["00010185206",{"2":{"285":1}}],["00010156470573538287",{"2":{"287":1}}],["00010156559",{"2":{"285":1}}]
,["00010199950154891523",{"2":{"287":1}}],["00010199915",{"2":{"285":1}}],["00010118181715389958",{"2":{"287":1}}],["00010118461",{"2":{"285":1}}],["00010167457",{"2":{"285":1}}],["00010108948",{"2":{"118":1}}],["0001084117567178875",{"2":{"287":1}}],["00010841036",{"2":{"285":1}}],["00010805875158707556",{"2":{"287":1}}],["000108057386",{"2":{"285":1}}],["00010041868990420451",{"2":{"287":1}}],["00010042471",{"2":{"285":1}}],["00010001158627547407",{"2":{"287":1}}],["00010001018",{"2":{"285":1}}],["00010093530811966179",{"2":{"287":1}}],["00010093819",{"2":{"285":1}}],["00010095129",{"2":{"285":1}}],["00010012918210118536",{"2":{"287":1}}],["00010012918",{"2":{"285":1}}],["00010476830313545685",{"2":{"287":1}}],["00010476551",{"2":{"285":1}}],["000104514515151617",{"2":{"287":1}}],["00010451297",{"2":{"285":1}}],["000104994106",{"2":{"285":1}}],["00010427211005899125",{"2":{"287":1}}],["00010427124",{"2":{"285":1}}],["00010420034435775822",{"2":{"287":1}}],["00010420004",{"2":{"285":1}}],["00010422243651844865",{"2":{"287":1}}],["00010422342",{"2":{"285":1}}],["00010705817172666824",{"2":{"287":1}}],["00010705715",{"2":{"285":1}}],["00010749223081527115",{"2":{"287":1}}],["00010749312",{"2":{"285":1}}],["00010765778067886924",{"2":{"287":1}}],["00010765867",{"2":{"285":1}}],["00010788728956870217",{"2":{"287":1}}],["000107884654",{"2":{"285":1}}],["00010357977552140063",{"2":{"287":1}}],["00010358078",{"2":{"285":1}}],["0001039441485771382",{"2":{"287":1}}],["00010394415",{"2":{"285":1}}],["00010308505173401682",{"2":{"287":1}}],["00010308393",{"2":{"285":1}}],["00010306038361438175",{"2":{"287":1}}],["00010306175",{"2":{"285":1}}],["0001094549014553375",{"2":{"287":1}}],["00010945621",{"2":{"285":1}}],["00010978364928423146",{"2":{"287":1}}],["00010978453",{"2":{"285":1}}],["00010931399490818558",{"2":{"287":1}}],["00010931679",{"2":{"285":1}}],["00010955031413555335",{"2":{"287":1}}],["00010954416",{"2":{"285":1}}],["00010925839923197505",{"2":{"287":1}}],["00010
9258086",{"2":{"285":1}}],["00010920083438833616",{"2":{"287":1}}],["00010968822",{"2":{"285":1}}],["00010919985",{"2":{"285":1}}],["0001090765",{"2":{"118":1}}],["0001069477374287258",{"2":{"287":1}}],["000106949046",{"2":{"285":1}}],["00010618439613573993",{"2":{"287":1}}],["00010618409",{"2":{"285":1}}],["00010606452058434258",{"2":{"287":1}}],["00010606541",{"2":{"285":1}}],["00010656769449305567",{"2":{"287":1}}],["00010656902",{"2":{"285":1}}],["00010680919834006971",{"2":{"287":1}}],["00010680739",{"2":{"285":1}}],["00010678244259107603",{"2":{"287":1}}],["000106785905",{"2":{"285":1}}],["000106744425",{"2":{"285":1}}],["0001021441444549124",{"2":{"287":1}}],["000102145015",{"2":{"285":1}}],["00010242914175955702",{"2":{"287":1}}],["00010242883",{"2":{"285":1}}],["0001020175740459871",{"2":{"287":1}}],["000102018974",{"2":{"285":1}}],["00010227209751227683",{"2":{"287":1}}],["00010227109",{"2":{"285":1}}],["0001025977930537748",{"2":{"287":1}}],["00010268296217155972",{"2":{"287":1}}],["0001026845",{"2":{"285":1}}],["00010260069",{"2":{"285":1}}],["000102632",{"2":{"253":1}}],["00010286648873412568",{"2":{"287":1}}],["0001028683",{"2":{"285":1}}],["00010289252",{"2":{"118":1}}],["0001058072605439852",{"2":{"287":1}}],["00010581072",{"2":{"285":1}}],["00010574793639259105",{"2":{"287":1}}],["000105748826",{"2":{"285":1}}],["00010546855",{"2":{"196":1}}],["00010551",{"2":{"196":1}}],["0001",{"2":{"89":7,"90":1}}],["0001f0",{"2":{"89":1}}],["01f0",{"2":{"165":1,"166":1,"196":2,"204":1}}],["0188235444820348e",{"2":{"287":1}}],["018854173",{"2":{"147":1}}],["018549927622104e",{"2":{"287":1}}],["01855f",{"2":{"285":1}}],["0189255f",{"2":{"285":1}}],["018215429",{"2":{"253":1}}],["01803",{"2":{"204":1}}],["01860",{"2":{"204":1}}],["01832",{"2":{"147":1}}],["0183474",{"2":{"147":1}}],["018303769",{"2":{"147":1}}],["01872db4",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["018743665453639813",{"2":{"19
1":1}}],["018748894",{"2":{"118":1}}],["018774498",{"2":{"118":1}}],["018755732",{"2":{"118":1}}],["018708104",{"2":{"89":1}}],["016960934672413e",{"2":{"287":1}}],["0160",{"2":{"278":1}}],["01601\\taccuracy",{"2":{"204":1}}],["016400268",{"2":{"253":1}}],["016400939",{"2":{"253":1}}],["016156793",{"2":{"253":1}}],["016353965",{"2":{"253":1}}],["016386721",{"2":{"147":1}}],["016709540",{"2":{"253":1}}],["016621085",{"2":{"253":1}}],["016625574",{"2":{"253":1}}],["016287515",{"2":{"147":1}}],["0168695",{"2":{"165":1}}],["016865179",{"2":{"147":1}}],["016841749",{"2":{"89":1}}],["017007464442261e",{"2":{"287":1}}],["017019382",{"2":{"253":1}}],["017354f",{"2":{"285":1}}],["017370628",{"2":{"253":1}}],["017536",{"2":{"260":1}}],["017531",{"2":{"260":1}}],["017528",{"2":{"260":1}}],["017525",{"2":{"260":1}}],["017522",{"2":{"260":1}}],["017519",{"2":{"260":1}}],["017516",{"2":{"260":1}}],["017513",{"2":{"260":1}}],["017510",{"2":{"260":1}}],["017507",{"2":{"260":1}}],["017492",{"2":{"260":1}}],["017459199",{"2":{"147":1}}],["01761\\taccuracy",{"2":{"204":1}}],["017608581",{"2":{"147":1}}],["017935842",{"2":{"147":1}}],["017933888",{"2":{"147":1}}],["017866217",{"2":{"147":1}}],["017829532",{"2":{"147":1}}],["01717774",{"2":{"147":1}}],["0171592f",{"2":{"118":1}}],["0172084",{"2":{"89":1}}],["010739",{"2":{"260":1}}],["010734",{"2":{"260":1}}],["010731",{"2":{"260":1}}],["010729",{"2":{"260":1}}],["010726",{"2":{"260":1}}],["010723",{"2":{"260":1}}],["010720",{"2":{"260":1}}],["010717",{"2":{"260":1}}],["010714",{"2":{"260":1}}],["010711",{"2":{"260":1}}],["010695",{"2":{"260":1}}],["0106334835",{"2":{"147":1}}],["01038",{"2":{"260":1}}],["010303006",{"2":{"118":1}}],["010303013",{"2":{"118":1}}],["010282698",{"2":{"253":1}}],["0109437",{"2":{"264":1}}],["010943229",{"2":{"253":1}}],["010950223",{"2":{"147":1}}],["010573289",{"2":{"253":1}}],["010581179",{"2":{"253":1}}],["01058",{"2":{"204":1}}],["010567766",{"2":{"147":1}}],["010564294",{"2":{"118":1}}],["010891679",{"
2":{"253":1}}],["01084\\taccuracy",{"2":{"204":1}}],["0108668577150213",{"2":{"188":1}}],["0108781",{"2":{"147":1}}],["010494760",{"2":{"253":1}}],["010404",{"2":{"165":1}}],["010482817",{"2":{"147":1}}],["01003141",{"2":{"119":1}}],["0101",{"2":{"90":1}}],["015375865583452e",{"2":{"287":1}}],["0153607f",{"2":{"285":1}}],["01534216",{"2":{"147":1}}],["015748031",{"2":{"264":2}}],["015277",{"2":{"260":1}}],["015272",{"2":{"260":1}}],["015269",{"2":{"260":1}}],["015266",{"2":{"260":1}}],["015263",{"2":{"260":1}}],["015260",{"2":{"260":1}}],["015257",{"2":{"260":1}}],["015254",{"2":{"260":1}}],["015251",{"2":{"260":1}}],["015248",{"2":{"260":1}}],["015229",{"2":{"260":1}}],["015235063",{"2":{"253":1}}],["01596",{"2":{"204":1}}],["015937408",{"2":{"143":1,"146":1}}],["01546\\taccuracy",{"2":{"204":1}}],["015039141",{"2":{"147":1}}],["015896475",{"2":{"147":1}}],["015869793",{"2":{"118":1}}],["015173428",{"2":{"89":1}}],["014774345",{"2":{"264":1}}],["014730594",{"2":{"253":1}}],["014741955",{"2":{"147":1}}],["01482",{"2":{"204":1}}],["014004502",{"2":{"147":1}}],["014984946",{"2":{"253":1}}],["014992779",{"2":{"147":1}}],["01490508",{"2":{"147":1}}],["014606951",{"2":{"147":1}}],["014689862",{"2":{"147":1}}],["014139019",{"2":{"147":1}}],["014197593",{"2":{"118":1}}],["0141069945",{"2":{"89":1}}],["01431",{"2":{"204":1}}],["014333963",{"2":{"118":1}}],["014353451",{"2":{"118":1}}],["014224287",{"2":{"118":1}}],["014550619",{"2":{"253":1}}],["014522564",{"2":{"253":1}}],["014522501",{"2":{"118":1}}],["014524898",{"2":{"118":1}}],["014519578",{"2":{"118":1}}],["0192360717498933e",{"2":{"287":1}}],["019230088",{"2":{"89":1}}],["019794",{"2":{"260":1}}],["019789",{"2":{"260":1}}],["019787",{"2":{"260":1}}],["019784",{"2":{"260":1}}],["019781",{"2":{"260":1}}],["019778",{"2":{"260":1}}],["019775",{"2":{"260":1}}],["019770",{"2":{"260":1}}],["019767443146926e",{"2":{"287":1}}],["019767",{"2":{"260":1}}],["019764",{"2":{"260":1}}],["019748",{"2":{"260":1}}],["019912010",{"2":{
"253":1}}],["019320106",{"2":{"253":1}}],["019530077",{"2":{"253":1}}],["019876348",{"2":{"147":1}}],["019838588",{"2":{"147":1}}],["019675586",{"2":{"147":1}}],["01965834",{"2":{"143":1}}],["019472213",{"2":{"147":1}}],["01910517",{"2":{"119":1}}],["0191175",{"2":{"89":1}}],["012999",{"2":{"260":1}}],["012996",{"2":{"260":1}}],["012993",{"2":{"260":1}}],["012990",{"2":{"260":1}}],["012987",{"2":{"260":1}}],["012984",{"2":{"260":1}}],["012981",{"2":{"260":1}}],["012966",{"2":{"260":1}}],["012625601",{"2":{"253":1}}],["012438955",{"2":{"253":1}}],["012419771",{"2":{"253":1}}],["0124978",{"2":{"89":1}}],["012557827",{"2":{"253":1}}],["01252\\taccuracy",{"2":{"204":1}}],["01229",{"2":{"204":1}}],["0122697f",{"2":{"118":1}}],["0128",{"2":{"278":1}}],["0128983",{"2":{"147":1}}],["012866791",{"2":{"147":1}}],["012322948",{"2":{"253":1}}],["012399685",{"2":{"253":1}}],["012382892",{"2":{"147":1}}],["0123434",{"2":{"89":1}}],["012063608",{"2":{"147":1}}],["0121",{"2":{"278":1}}],["012151958",{"2":{"118":1}}],["012145694",{"2":{"118":1}}],["013593317",{"2":{"253":1}}],["013275324",{"2":{"253":1}}],["01325738",{"2":{"119":1}}],["0130",{"2":{"278":1}}],["013010",{"2":{"260":1}}],["013016323",{"2":{"253":1}}],["013005",{"2":{"260":1}}],["013002",{"2":{"260":1}}],["01309\\taccuracy",{"2":{"204":1}}],["01381\\taccuracy",{"2":{"204":1}}],["0134848",{"2":{"147":1}}],["013927094",{"2":{"118":1}}],["013915376",{"2":{"118":1}}],["013969757",{"2":{"89":1}}],["013654154",{"2":{"118":1}}],["013659927",{"2":{"118":1}}],["0136483535",{"2":{"89":1}}],["011476245",{"2":{"253":1}}],["0114351",{"2":{"89":1}}],["011310317",{"2":{"253":1}}],["011818888",{"2":{"253":1}}],["011811271",{"2":{"253":1}}],["011849238",{"2":{"253":1}}],["011975820",{"2":{"253":1}}],["011937321",{"2":{"196":1}}],["01117275",{"2":{"147":1}}],["011100831f0",{"2":{"50":1}}],["011206773",{"2":{"147":1}}],["011098074",{"2":{"147":1}}],["01100632",{"2":{"118":1}}],["011658882",{"2":{"253":1}}],["0116419",{"2":{"89":1}}],["011
6051175",{"2":{"89":1}}],["01174226",{"2":{"89":1}}],["01",{"2":{"15":3,"50":2,"58":2,"90":1,"196":1,"197":1,"204":1,"206":1,"213":1,"214":1,"225":4,"227":1,"238":1,"245":1,"246":1,"253":8,"255":1,"260":379,"261":1,"269":1,"274":1,"275":1,"280":1,"287":1,"289":1,"295":3,"297":1}}],["0",{"2":{"2":1,"4":1,"7":3,"11":1,"15":44,"25":2,"34":3,"36":3,"39":2,"40":10,"41":1,"50":122,"51":4,"52":5,"56":16,"58":71,"66":3,"71":1,"74":15,"75":1,"76":30,"77":22,"78":22,"79":143,"81":17,"82":20,"84":19,"85":43,"86":15,"89":204,"90":11,"96":2,"118":253,"119":10,"126":5,"127":37,"132":1,"133":1,"135":1,"137":1,"143":54,"146":25,"147":1153,"153":6,"154":1,"155":7,"158":1,"165":18,"166":5,"167":35,"168":23,"172":7,"185":59,"187":9,"188":155,"190":3,"191":62,"192":8,"193":12,"194":9,"195":4,"196":72,"197":3,"200":1,"203":1,"204":109,"206":3,"209":1,"210":2,"211":2,"212":5,"213":9,"214":3,"223":4,"225":2,"227":3,"229":23,"230":1,"231":2,"232":1,"233":2,"234":26,"235":1,"236":8,"237":19,"238":11,"240":1,"241":2,"244":2,"245":105,"246":11,"252":7,"253":470,"254":6,"255":3,"260":472,"261":3,"263":4,"264":138,"265":1,"266":4,"267":38,"268":10,"269":3,"271":1,"274":55,"275":3,"276":47,"277":4,"278":67,"280":3,"282":47,"283":4,"284":9,"285":495,"286":1,"287":396,"289":3,"291":1,"293":2,"295":34,"297":3}}],["1m",{"2":{"276":1,"282":1}}],["1\\ttrain",{"2":{"260":1}}],["1g",{"2":{"238":1,"246":1}}],["1e6",{"2":{"260":1,"274":1}}],["1e",{"2":{"225":1,"283":1}}],["1`",{"2":{"154":3}}],["1>",{"2":{"147":6}}],["1f",{"2":{"86":1}}],["1f0",{"2":{"41":2,"50":2,"56":1,"58":1,"66":3,"86":2,"90":1,"264":1}}],["1st",{"2":{"85":1,"86":1,"144":1}}],["17\\ttrain",{"2":{"260":1}}],["1799",{"2":{"260":6}}],["1798",{"2":{"260":9}}],["1797",{"2":{"260":9}}],["17973809",{"2":{"147":1}}],["1795204f",{"2":{"285":1}}],["1795",{"2":{"260":11}}],["179542",{"2":{"147":1}}],["1794",{"2":{"260":9}}],["1792",{"2":{"260":6}}],["1796",{"2":{"260":25}}],["179",{"2":{"260":3}}],["1790",{"2":{"260":6,"276":1}}],["1793",{"2":{"
260":9,"278":1,"282":1}}],["17932355638179910565",{"2":{"253":1}}],["17938598",{"2":{"147":1}}],["177699f",{"2":{"285":1}}],["177697",{"2":{"147":1}}],["1777894f",{"2":{"285":1}}],["177779",{"2":{"147":1}}],["17797",{"2":{"274":1}}],["1771",{"2":{"263":1}}],["17713173",{"2":{"147":1}}],["177",{"2":{"260":3}}],["177836f",{"2":{"285":1}}],["1778",{"2":{"260":8}}],["17785463",{"2":{"147":1}}],["17724\\taccuracy",{"2":{"204":1}}],["17706361",{"2":{"196":1}}],["17703351",{"2":{"147":1}}],["17755158",{"2":{"147":1}}],["171680893237082e",{"2":{"287":1}}],["17161606",{"2":{"264":1}}],["1719296687976183e",{"2":{"287":1}}],["171969",{"2":{"147":1}}],["17196824",{"2":{"147":1}}],["1714437f",{"2":{"285":1}}],["17184",{"2":{"274":1}}],["171756",{"2":{"147":1}}],["1717901f",{"2":{"118":1}}],["171212",{"2":{"147":1}}],["171224",{"2":{"147":1}}],["17124604",{"2":{"147":1}}],["17115343",{"2":{"147":1}}],["173466666243995e",{"2":{"287":1}}],["173435f",{"2":{"285":1}}],["17349",{"2":{"147":1}}],["17383",{"2":{"282":1}}],["17385432",{"2":{"147":1}}],["1735",{"2":{"282":1}}],["1737",{"2":{"278":1}}],["17322835",{"2":{"264":2}}],["17326018",{"2":{"147":1}}],["1730737",{"2":{"147":1}}],["17310219",{"2":{"147":1}}],["17337337",{"2":{"147":1}}],["17361793",{"2":{"147":1}}],["1700008f",{"2":{"285":1}}],["17001",{"2":{"253":1}}],["1706",{"2":{"276":1}}],["1705",{"2":{"276":1}}],["1709",{"2":{"257":1,"282":1}}],["170453e",{"2":{"225":1}}],["170342",{"2":{"188":1}}],["17039865",{"2":{"147":1}}],["17029831",{"2":{"147":1}}],["17029636",{"2":{"147":1}}],["17085029",{"2":{"147":1}}],["17083026",{"2":{"147":1}}],["17078549",{"2":{"147":1}}],["178580249843196e",{"2":{"287":1}}],["178710363661654e",{"2":{"287":1}}],["178612f",{"2":{"285":1}}],["1786895",{"2":{"267":1}}],["1780",{"2":{"282":1}}],["1781",{"2":{"276":1}}],["1782",{"2":{"229":1}}],["17829275",{"2":{"147":1}}],["17840558",{"2":{"147":1}}],["17886546",{"2":{"118":1}}],["1720706f",{"2":{"285":1}}],["172884",{"2":{"274":1}}],["1728516",{"2":
{"274":1}}],["1725",{"2":{"260":2}}],["17252",{"2":{"229":1}}],["172708",{"2":{"147":1}}],["17242633",{"2":{"147":1}}],["17211406",{"2":{"147":1}}],["172313",{"2":{"147":1}}],["174063185282471e",{"2":{"287":1}}],["1740324f",{"2":{"285":1}}],["1742",{"2":{"276":1}}],["174298",{"2":{"264":1}}],["1745",{"2":{"229":1,"260":2}}],["174513",{"2":{"147":1}}],["174993f",{"2":{"285":1}}],["1749",{"2":{"229":1,"276":1}}],["17441",{"2":{"188":1}}],["17440075",{"2":{"147":1}}],["1747854",{"2":{"147":1}}],["17461234",{"2":{"89":1}}],["17671",{"2":{"274":1}}],["1764",{"2":{"260":12}}],["1764085",{"2":{"143":1}}],["176",{"2":{"225":1,"229":1,"240":1}}],["1769388",{"2":{"167":1}}],["17696409",{"2":{"147":1}}],["176021",{"2":{"147":1}}],["17600718",{"2":{"147":1}}],["17604505",{"2":{"147":1}}],["17616239",{"2":{"147":1}}],["17617925",{"2":{"147":1}}],["176348",{"2":{"147":1}}],["17657f",{"2":{"89":1}}],["175882916596657e",{"2":{"287":1}}],["1753",{"2":{"276":1}}],["17538628",{"2":{"147":1}}],["1756",{"2":{"260":6}}],["1755",{"2":{"187":1}}],["17558427",{"2":{"147":1}}],["1752539",{"2":{"147":1}}],["1759",{"2":{"276":1}}],["1759687",{"2":{"147":1}}],["17598884",{"2":{"147":1}}],["17590503",{"2":{"118":1}}],["1751135",{"2":{"147":1}}],["17570448",{"2":{"118":1}}],["17",{"2":{"76":2,"80":7,"127":1,"147":2,"168":1,"192":1,"204":2,"213":1,"234":1,"245":4,"253":2,"260":3,"263":1,"265":1,"274":2,"276":1,"278":4,"282":1,"287":17,"295":2}}],["1494",{"2":{"282":1}}],["14941788",{"2":{"89":1}}],["1490",{"2":{"282":1}}],["1499",{"2":{"282":1}}],["1498",{"2":{"276":1}}],["1497",{"2":{"276":1}}],["149206",{"2":{"274":1}}],["1493",{"2":{"274":1,"278":1}}],["1496063",{"2":{"264":2}}],["14\\ttrain",{"2":{"260":1}}],["147160165764285e",{"2":{"287":1}}],["1477",{"2":{"282":1}}],["1470597f",{"2":{"285":1}}],["1470",{"2":{"276":1}}],["14702773",{"2":{"118":1}}],["14786",{"2":{"274":1}}],["1476",{"2":{"263":1}}],["14736821",{"2":{"147":1}}],["1483918692899636e",{"2":{"287":1}}],["1485044f",{"2":{"285":1}}
],["1480556f",{"2":{"285":1}}],["14895",{"2":{"274":1}}],["148901",{"2":{"167":1}}],["14881086",{"2":{"267":1}}],["14884971",{"2":{"147":1}}],["148",{"2":{"260":5}}],["1487s\\ttraining",{"2":{"234":1}}],["1482575",{"2":{"264":1}}],["14825",{"2":{"188":1}}],["148248",{"2":{"188":2}}],["14815",{"2":{"234":2}}],["148189",{"2":{"147":1}}],["148147",{"2":{"147":1}}],["140071858130575e",{"2":{"287":1}}],["14001",{"2":{"253":1}}],["140203f",{"2":{"285":1}}],["140299",{"2":{"147":1}}],["1401",{"2":{"278":1}}],["14012",{"2":{"274":1}}],["14012684",{"2":{"147":1}}],["1408",{"2":{"260":6}}],["140",{"2":{"257":1}}],["1405",{"2":{"187":1}}],["140772420179515e",{"2":{"287":1}}],["1407417f",{"2":{"285":1}}],["1407",{"2":{"276":2}}],["14079519",{"2":{"147":1}}],["14073242",{"2":{"147":1}}],["14033335",{"2":{"147":1}}],["14546",{"2":{"282":1}}],["1454",{"2":{"278":1}}],["14557",{"2":{"274":1}}],["14555879",{"2":{"147":1}}],["1450",{"2":{"260":3}}],["1453",{"2":{"260":6}}],["1451",{"2":{"260":4,"276":1}}],["14517665",{"2":{"147":1}}],["145",{"2":{"234":1}}],["1457",{"2":{"229":1,"282":2}}],["1452",{"2":{"260":1}}],["145296",{"2":{"185":1}}],["14523235",{"2":{"147":1}}],["14562015",{"2":{"147":1}}],["14593515",{"2":{"147":1}}],["14142",{"2":{"276":1}}],["1414",{"2":{"260":6}}],["1418",{"2":{"260":4,"282":1}}],["14199",{"2":{"260":1}}],["141",{"2":{"257":1}}],["14173229",{"2":{"264":2}}],["1417",{"2":{"253":1,"282":1}}],["1413",{"2":{"192":1,"260":3}}],["1413405",{"2":{"147":1}}],["1416",{"2":{"187":1,"282":1}}],["14163494",{"2":{"147":1}}],["14102985",{"2":{"147":1}}],["1415927",{"2":{"50":1}}],["14493",{"2":{"276":1}}],["14499082",{"2":{"147":1}}],["1445",{"2":{"276":1}}],["1441",{"2":{"260":3}}],["1440",{"2":{"260":6}}],["144009",{"2":{"147":1}}],["1448",{"2":{"260":6}}],["144",{"2":{"260":3,"276":1}}],["14421012",{"2":{"147":1}}],["14433007",{"2":{"118":1}}],["14392",{"2":{"282":1}}],["1431",{"2":{"282":1}}],["1438",{"2":{"278":1}}],["14324",{"2":{"274":1}}],["1437",{"2":{"260":1}}
],["1437841",{"2":{"196":1}}],["143",{"2":{"199":1,"263":1}}],["14359581",{"2":{"147":1}}],["14365605",{"2":{"147":1}}],["14361875",{"2":{"147":1}}],["14366318",{"2":{"147":1}}],["14336014",{"2":{"147":1}}],["1421",{"2":{"263":1}}],["1427",{"2":{"260":6}}],["14255",{"2":{"274":1}}],["1425",{"2":{"260":6}}],["142574623",{"2":{"90":1}}],["1429857",{"2":{"264":1}}],["1429",{"2":{"260":7,"276":1}}],["142",{"2":{"260":4}}],["1424",{"2":{"253":3}}],["14243387",{"2":{"147":1}}],["14238782",{"2":{"147":1}}],["1467258f",{"2":{"285":1}}],["14673097",{"2":{"147":1}}],["1465964f",{"2":{"285":1}}],["1461",{"2":{"282":1}}],["1461747",{"2":{"147":1}}],["1468383519459738e",{"2":{"287":1}}],["1468",{"2":{"260":1}}],["14634",{"2":{"229":1}}],["146",{"2":{"199":1,"229":1,"263":1,"282":1}}],["1466271862099168e",{"2":{"287":1}}],["1466",{"2":{"260":3}}],["14660794",{"2":{"147":1}}],["14663221",{"2":{"118":1}}],["14692664",{"2":{"118":1}}],["14646342",{"2":{"118":1}}],["14",{"2":{"76":2,"81":1,"147":2,"204":2,"213":2,"229":1,"234":1,"245":2,"253":6,"260":8,"274":2,"278":6,"282":1}}],["11\\ttrain",{"2":{"260":1}}],["112995f",{"2":{"285":1}}],["1124",{"2":{"282":1}}],["11249",{"2":{"274":1}}],["1128842f",{"2":{"285":1}}],["1128",{"2":{"282":1}}],["1127",{"2":{"282":1}}],["11216",{"2":{"274":1}}],["1121",{"2":{"260":7,"282":1}}],["112",{"2":{"260":17}}],["11223",{"2":{"274":1}}],["1122",{"2":{"260":2}}],["1126353",{"2":{"168":1}}],["11269632",{"2":{"147":1}}],["1120",{"2":{"260":2,"276":1}}],["112076",{"2":{"147":1}}],["112063006",{"2":{"147":1}}],["112344",{"2":{"147":1}}],["1155996263835813e",{"2":{"287":1}}],["115506485",{"2":{"267":1}}],["115507",{"2":{"264":1}}],["1150",{"2":{"282":1}}],["1152",{"2":{"276":1}}],["1158",{"2":{"260":4}}],["1157",{"2":{"229":1}}],["115",{"2":{"229":1,"260":17,"282":1}}],["11532391",{"2":{"147":1}}],["11549814",{"2":{"147":1}}],["11545f",{"2":{"89":1}}],["1114",{"2":{"260":6}}],["111452006",{"2":{"147":1}}],["11195",{"2":{"274":1}}],["1119",{"2":{"260":22}
}],["1115",{"2":{"260":1,"276":1}}],["11139",{"2":{"276":1}}],["1113",{"2":{"260":2}}],["1112",{"2":{"260":16}}],["1110",{"2":{"260":17}}],["11188",{"2":{"282":1}}],["1118",{"2":{"260":2}}],["1111",{"2":{"260":2}}],["11111",{"2":{"236":1}}],["1117",{"2":{"229":1,"260":13}}],["111",{"2":{"192":1,"229":1,"240":1,"282":1}}],["11169241",{"2":{"147":1}}],["111676365",{"2":{"147":1}}],["11441",{"2":{"274":1}}],["11415",{"2":{"274":1}}],["114184186",{"2":{"264":1}}],["114",{"2":{"229":1,"260":17,"276":1,"282":1}}],["1140",{"2":{"192":1}}],["1149755",{"2":{"167":1}}],["11454258",{"2":{"147":1}}],["114560924",{"2":{"147":1}}],["11476975",{"2":{"89":1}}],["1135971083830777e",{"2":{"287":1}}],["11353",{"2":{"274":1}}],["11395",{"2":{"282":1}}],["113952",{"2":{"185":1}}],["1133",{"2":{"276":1,"282":1}}],["11334",{"2":{"274":1}}],["1138",{"2":{"276":1}}],["1138737",{"2":{"147":1}}],["1137",{"2":{"276":1}}],["11371",{"2":{"274":1}}],["11373",{"2":{"167":1}}],["1134994511662673e",{"2":{"287":1}}],["11344",{"2":{"274":1}}],["11345",{"2":{"274":1}}],["1136",{"2":{"229":1}}],["11362071",{"2":{"147":1}}],["113",{"2":{"229":1,"240":1,"260":17}}],["113006115",{"2":{"147":1}}],["11325522",{"2":{"147":1}}],["113252416",{"2":{"118":1}}],["11326964",{"2":{"118":1}}],["11623",{"2":{"276":1}}],["11689",{"2":{"274":1}}],["11681",{"2":{"274":1}}],["116195",{"2":{"274":1}}],["116",{"2":{"192":1,"229":1,"260":17,"282":1}}],["116989",{"2":{"188":1}}],["116574",{"2":{"147":1}}],["11647",{"2":{"274":1}}],["11640526",{"2":{"147":1}}],["11645464",{"2":{"147":1}}],["11673186",{"2":{"147":1}}],["116768986",{"2":{"147":1}}],["1163871",{"2":{"147":1}}],["11631798",{"2":{"147":1}}],["1194113f",{"2":{"285":1}}],["11969",{"2":{"282":1}}],["1195",{"2":{"276":1}}],["11950846",{"2":{"147":1}}],["1193",{"2":{"260":2}}],["11933",{"2":{"147":1}}],["11902616938318e",{"2":{"287":1}}],["119011f",{"2":{"285":1}}],["1190",{"2":{"260":6}}],["1199",{"2":{"260":5}}],["11929058502883e",{"2":{"287":1}}],["11926",{"2":{"274"
:1}}],["1192",{"2":{"260":7}}],["119",{"2":{"260":17}}],["119758",{"2":{"147":1}}],["1198",{"2":{"260":19,"276":1}}],["11987625",{"2":{"147":1}}],["1198f",{"2":{"89":1}}],["1189",{"2":{"282":1}}],["118938f",{"2":{"285":1}}],["11893",{"2":{"276":1}}],["11826",{"2":{"274":1}}],["11805",{"2":{"274":1}}],["1180353",{"2":{"264":1}}],["11808148",{"2":{"147":1}}],["1181102",{"2":{"264":2}}],["118148",{"2":{"147":1}}],["1185",{"2":{"260":1}}],["1186",{"2":{"260":6}}],["1186073",{"2":{"147":1}}],["1184",{"2":{"260":3}}],["11841",{"2":{"229":1}}],["118",{"2":{"260":17,"282":1}}],["1183",{"2":{"229":1}}],["118356064",{"2":{"147":1}}],["1187",{"2":{"147":1,"260":7}}],["11882",{"2":{"282":1}}],["1188",{"2":{"282":1}}],["11883854",{"2":{"118":1}}],["118868664",{"2":{"118":1}}],["117948335315745e",{"2":{"287":1}}],["117329",{"2":{"274":1}}],["1173706",{"2":{"167":1}}],["1172",{"2":{"276":1}}],["11727",{"2":{"274":1}}],["11728277",{"2":{"147":1}}],["117711f",{"2":{"285":1}}],["1177",{"2":{"260":1,"282":1}}],["1178",{"2":{"260":5}}],["117853135",{"2":{"147":1}}],["1176",{"2":{"229":1}}],["117",{"2":{"229":1,"260":17}}],["1175",{"2":{"260":2,"278":1}}],["11750876",{"2":{"147":1}}],["11754602",{"2":{"118":1}}],["11740078",{"2":{"147":1}}],["11708575",{"2":{"147":1}}],["110828f",{"2":{"285":1}}],["1108",{"2":{"282":1}}],["1100",{"2":{"260":6}}],["11001",{"2":{"253":1}}],["11073",{"2":{"274":1}}],["1107",{"2":{"260":16,"276":1}}],["1104",{"2":{"260":15}}],["1106733621921516e",{"2":{"287":1}}],["11061",{"2":{"274":1}}],["11060",{"2":{"274":1}}],["1106",{"2":{"260":2,"278":1}}],["110",{"2":{"237":6,"282":2}}],["1101",{"2":{"229":1,"260":24}}],["11094",{"2":{"274":1}}],["1109",{"2":{"229":1,"253":2}}],["110962",{"2":{"147":1}}],["11096934",{"2":{"118":1}}],["1102453f",{"2":{"285":1}}],["11023622",{"2":{"264":2}}],["11023762",{"2":{"196":1}}],["1102",{"2":{"260":7,"276":1}}],["11028515",{"2":{"147":1}}],["110330954",{"2":{"147":1}}],["11050306",{"2":{"118":1}}],["110567965",{"2":{"118":1}}]
,["11",{"2":{"76":2,"78":9,"81":3,"147":2,"165":3,"166":2,"167":2,"168":1,"197":1,"204":2,"206":1,"214":1,"227":1,"237":3,"238":4,"245":2,"246":4,"253":13,"255":1,"260":9,"261":1,"269":1,"274":52,"275":1,"278":3,"280":1,"282":2,"289":1,"297":1}}],["19\\ttrain",{"2":{"260":1}}],["1988",{"2":{"283":2}}],["1985",{"2":{"282":1}}],["19847682",{"2":{"264":1}}],["198461",{"2":{"253":1}}],["1984668",{"2":{"165":1}}],["198454",{"2":{"253":1}}],["198450",{"2":{"253":1}}],["198446",{"2":{"253":1}}],["198442",{"2":{"253":1}}],["198436",{"2":{"253":1}}],["198431",{"2":{"253":1}}],["198427",{"2":{"253":1}}],["198423",{"2":{"253":1}}],["198418",{"2":{"253":1}}],["198396",{"2":{"253":1}}],["1983",{"2":{"253":1}}],["19899946",{"2":{"147":1}}],["1989196",{"2":{"147":1}}],["1920120623875877e",{"2":{"287":1}}],["1922554608678126e",{"2":{"287":1}}],["19226",{"2":{"147":1}}],["1921413f",{"2":{"285":1}}],["19213f",{"2":{"285":1}}],["1921258",{"2":{"167":1}}],["192794",{"2":{"253":1}}],["192788",{"2":{"253":1}}],["192781",{"2":{"253":1}}],["192775",{"2":{"253":1}}],["192769",{"2":{"253":1}}],["192762",{"2":{"253":1}}],["192756",{"2":{"253":1}}],["192749",{"2":{"253":1}}],["192718",{"2":{"253":1}}],["19286752",{"2":{"267":1}}],["192810",{"2":{"253":1}}],["192800",{"2":{"253":1}}],["1928",{"2":{"229":1}}],["192836",{"2":{"185":1}}],["19236",{"2":{"147":1}}],["19259712",{"2":{"147":1}}],["192992\\tval",{"2":{"260":1}}],["1929",{"2":{"64":1}}],["1955",{"2":{"276":1,"278":1}}],["1953125",{"2":{"274":1}}],["1953009",{"2":{"194":1,"195":1}}],["1957",{"2":{"260":1}}],["195673",{"2":{"253":1}}],["195666",{"2":{"253":1}}],["195662",{"2":{"253":1}}],["195657",{"2":{"253":1}}],["195653",{"2":{"253":1}}],["195649",{"2":{"253":1}}],["195645",{"2":{"253":1}}],["195641",{"2":{"253":1}}],["195636",{"2":{"253":1}}],["195632",{"2":{"253":1}}],["195610",{"2":{"253":1}}],["19597391412112541",{"2":{"191":1}}],["19547431",{"2":{"147":1}}],["1958431",{"2":{"196":1}}],["1958",{"2":{"64":1,"278":1}}],["197026351141
5903e",{"2":{"287":1}}],["197045",{"2":{"188":1}}],["197460488240333e",{"2":{"287":1}}],["19744",{"2":{"147":1}}],["19728f",{"2":{"285":1}}],["197747206383522e",{"2":{"287":1}}],["1977",{"2":{"282":1}}],["1978",{"2":{"276":1}}],["197381f",{"2":{"285":1}}],["1973",{"2":{"276":1}}],["1976",{"2":{"260":2}}],["19792819",{"2":{"147":1}}],["1964921f",{"2":{"285":1}}],["1964",{"2":{"229":1,"260":2}}],["19641913",{"2":{"147":1}}],["196581",{"2":{"185":1}}],["19623",{"2":{"147":1}}],["19601588",{"2":{"147":1}}],["1966",{"2":{"260":1}}],["19660722",{"2":{"147":1}}],["19666132",{"2":{"147":1}}],["19634563",{"2":{"147":1}}],["1961907f",{"2":{"118":1}}],["1961357",{"2":{"118":1}}],["191995109320312e",{"2":{"287":1}}],["1912",{"2":{"282":1}}],["19124654",{"2":{"165":1}}],["1918",{"2":{"263":1,"282":1}}],["19183022",{"2":{"147":1}}],["1915",{"2":{"187":1}}],["191424",{"2":{"147":1}}],["19138478",{"2":{"147":1}}],["190574",{"2":{"295":1}}],["19059215",{"2":{"147":1}}],["19001",{"2":{"253":1}}],["19080",{"2":{"229":1}}],["1907",{"2":{"229":1,"260":1}}],["1906",{"2":{"192":1}}],["19065982",{"2":{"147":1}}],["1904757",{"2":{"168":1}}],["1901",{"2":{"192":1}}],["19011",{"2":{"147":1}}],["19019358",{"2":{"147":1}}],["19096",{"2":{"260":3}}],["19099",{"2":{"147":1}}],["19094673",{"2":{"147":1}}],["19031122",{"2":{"118":1}}],["1943675f",{"2":{"285":1}}],["1943359",{"2":{"274":1}}],["1947",{"2":{"276":1}}],["1944",{"2":{"192":1,"276":2}}],["19444",{"2":{"78":1}}],["1941834",{"2":{"147":1}}],["19405301",{"2":{"147":1}}],["19498321",{"2":{"147":1}}],["1996",{"2":{"282":1}}],["19969921",{"2":{"147":1}}],["1992",{"2":{"276":1}}],["19920883",{"2":{"147":1}}],["199",{"2":{"260":3}}],["1993",{"2":{"260":1}}],["199387",{"2":{"165":1}}],["1998",{"2":{"260":1}}],["1994",{"2":{"260":5}}],["1990",{"2":{"169":1,"260":5}}],["1990666",{"2":{"147":1}}],["1995",{"2":{"260":1}}],["1995s\\ttraining",{"2":{"234":1}}],["19959326",{"2":{"147":1}}],["19954802",{"2":{"147":1}}],["19995327",{"2":{"147":1}}],["1997
513",{"2":{"89":1}}],["1938",{"2":{"278":1}}],["1930",{"2":{"276":1}}],["1934",{"2":{"229":1}}],["19343676",{"2":{"118":1}}],["19397983",{"2":{"196":1}}],["193291",{"2":{"188":1}}],["193263",{"2":{"147":1}}],["1936812",{"2":{"165":1}}],["19361475",{"2":{"147":1}}],["19373",{"2":{"147":1}}],["19339",{"2":{"147":1}}],["19313364",{"2":{"147":1}}],["19355828",{"2":{"118":1}}],["19",{"2":{"76":2,"133":1,"147":2,"197":1,"204":2,"206":1,"213":2,"214":1,"227":1,"238":1,"245":2,"246":1,"253":2,"255":1,"260":11,"261":1,"269":1,"274":2,"275":1,"278":3,"280":1,"289":1,"295":1,"297":1}}],["1×16",{"2":{"167":1,"168":1}}],["1×11",{"2":{"58":1}}],["1×5",{"2":{"85":2}}],["1×32",{"2":{"56":1,"90":1}}],["18\\ttrain",{"2":{"260":1}}],["1879",{"2":{"282":1}}],["18794",{"2":{"185":1}}],["1871",{"2":{"276":1}}],["187146",{"2":{"274":1}}],["1871622",{"2":{"193":4}}],["1872",{"2":{"260":5}}],["1878",{"2":{"260":2}}],["187646",{"2":{"185":1}}],["187688",{"2":{"147":1}}],["1822912f",{"2":{"285":1}}],["1822",{"2":{"282":2}}],["1828",{"2":{"278":1}}],["1823",{"2":{"260":5,"282":1}}],["182365",{"2":{"147":1}}],["1826",{"2":{"260":14,"276":1}}],["1821927318511916e",{"2":{"287":1}}],["1821",{"2":{"260":12}}],["1824",{"2":{"260":8}}],["1820",{"2":{"260":6}}],["18294385",{"2":{"147":1}}],["18292144",{"2":{"118":1}}],["18295963",{"2":{"118":1}}],["1852",{"2":{"260":2}}],["1851",{"2":{"276":1}}],["18519",{"2":{"234":2}}],["1851237",{"2":{"167":1}}],["185913",{"2":{"147":1}}],["18599004",{"2":{"147":1}}],["18574256",{"2":{"147":1}}],["18531604",{"2":{"147":1}}],["1869",{"2":{"260":1}}],["18695377",{"2":{"196":1}}],["1862",{"2":{"260":1}}],["18629253",{"2":{"147":1}}],["1866",{"2":{"260":5}}],["186",{"2":{"229":1,"285":1}}],["186341621659298e",{"2":{"287":1}}],["186377f",{"2":{"285":1}}],["1863",{"2":{"229":1,"282":1}}],["18652153",{"2":{"168":1}}],["18659353",{"2":{"147":1}}],["186449",{"2":{"168":1}}],["1804245262090085e",{"2":{"287":1}}],["1805",{"2":{"276":1}}],["18056087",{"2":{"196":1}}],["1802",{
"2":{"260":3}}],["180848922318268e",{"2":{"287":1}}],["1808",{"2":{"260":6,"276":1}}],["1801",{"2":{"260":9}}],["18018521",{"2":{"147":1}}],["1807",{"2":{"260":6}}],["1807569",{"2":{"147":1}}],["180383189070195e",{"2":{"287":1}}],["18033011",{"2":{"267":1}}],["1803",{"2":{"260":10}}],["1800",{"2":{"260":9}}],["18001",{"2":{"253":1}}],["18000072",{"2":{"147":1}}],["180649+0",{"2":{"185":1}}],["18095753",{"2":{"147":1}}],["1847",{"2":{"282":1}}],["18482",{"2":{"274":1}}],["184351978938409e",{"2":{"287":1}}],["184352f",{"2":{"285":1}}],["1843",{"2":{"240":1}}],["1844",{"2":{"240":1,"260":2}}],["18495357",{"2":{"147":1}}],["18450394",{"2":{"147":1}}],["1893516f",{"2":{"285":1}}],["18999",{"2":{"274":1}}],["18996632",{"2":{"147":1}}],["189808",{"2":{"253":1}}],["189788",{"2":{"253":1}}],["189782",{"2":{"253":1}}],["189776",{"2":{"253":1}}],["189769",{"2":{"253":1}}],["189763",{"2":{"253":1}}],["189756",{"2":{"253":1}}],["189750",{"2":{"253":1}}],["189743",{"2":{"253":1}}],["189735",{"2":{"253":1}}],["189670",{"2":{"253":1}}],["18960184",{"2":{"147":1}}],["189",{"2":{"229":1,"260":3}}],["18901739",{"2":{"147":1}}],["18949205",{"2":{"147":1}}],["1891",{"2":{"282":1}}],["18916532",{"2":{"147":1}}],["18919921",{"2":{"147":1}}],["1895644",{"2":{"89":1}}],["183615620969391e",{"2":{"287":1}}],["1836073988698543e",{"2":{"287":1}}],["183688",{"2":{"260":1}}],["1831",{"2":{"276":1}}],["1834864f",{"2":{"285":1}}],["1834",{"2":{"276":1}}],["1837484f",{"2":{"285":1}}],["1837",{"2":{"260":7}}],["1833",{"2":{"260":2}}],["183381",{"2":{"185":1}}],["183308",{"2":{"167":1}}],["18308182",{"2":{"143":1}}],["18356045",{"2":{"118":1}}],["188729752652418e",{"2":{"287":1}}],["1884504f",{"2":{"285":1}}],["18841726",{"2":{"196":1}}],["188388",{"2":{"282":1}}],["1883",{"2":{"229":1,"253":1}}],["1882",{"2":{"187":1}}],["188839",{"2":{"147":1}}],["18816021",{"2":{"147":1}}],["18810266",{"2":{"89":1}}],["18898767",{"2":{"147":1}}],["18803856",{"2":{"147":1}}],["18860114",{"2":{"143":1}}],["1814",{"2"
:{"278":1}}],["181",{"2":{"260":3,"282":1}}],["1811024",{"2":{"264":2}}],["1811",{"2":{"260":2}}],["1813",{"2":{"260":12}}],["1817",{"2":{"187":1,"260":14,"282":1}}],["1819797",{"2":{"147":1}}],["18102062",{"2":{"89":1}}],["18120264",{"2":{"89":1}}],["18",{"2":{"56":1,"76":2,"147":2,"204":4,"213":2,"245":2,"253":2,"260":7,"274":3,"276":1,"278":5,"282":3,"287":5,"295":1}}],["15\\ttrain",{"2":{"260":1}}],["15568",{"2":{"282":1}}],["1551",{"2":{"276":1,"282":1}}],["1559",{"2":{"260":3}}],["155488636408763e",{"2":{"287":1}}],["1554",{"2":{"260":3}}],["15587",{"2":{"274":1}}],["1558",{"2":{"260":6}}],["1550",{"2":{"260":2}}],["1552",{"2":{"229":1}}],["1557573",{"2":{"168":1}}],["1539017f",{"2":{"285":1}}],["153",{"2":{"282":1}}],["1534",{"2":{"282":1}}],["153377",{"2":{"274":1}}],["15332098",{"2":{"147":1}}],["15327",{"2":{"274":1}}],["153295",{"2":{"263":1}}],["1532",{"2":{"263":1}}],["1535290509728552e",{"2":{"287":1}}],["1535",{"2":{"260":3}}],["1537651397968044e",{"2":{"287":1}}],["1537",{"2":{"229":1}}],["1537897",{"2":{"147":1}}],["153843340464007e",{"2":{"287":1}}],["1538",{"2":{"229":1,"260":3}}],["158832851223743e",{"2":{"287":1}}],["158864f",{"2":{"285":1}}],["1582293f",{"2":{"285":1}}],["1582",{"2":{"282":1}}],["1585",{"2":{"260":6,"282":1}}],["1583839662139467e",{"2":{"287":1}}],["1583",{"2":{"260":1}}],["15830041",{"2":{"147":1}}],["1581",{"2":{"260":5}}],["158992767",{"2":{"253":2}}],["158975",{"2":{"147":1}}],["1580",{"2":{"229":1}}],["15877295",{"2":{"264":1}}],["15877534",{"2":{"147":1}}],["15875",{"2":{"147":1}}],["15864038",{"2":{"147":1}}],["1569",{"2":{"282":1}}],["1567",{"2":{"276":1}}],["1561",{"2":{"260":6}}],["1560",{"2":{"260":8}}],["15605855",{"2":{"147":1}}],["1562",{"2":{"260":1,"278":1}}],["1563",{"2":{"260":2}}],["1565",{"2":{"260":8}}],["15681",{"2":{"237":6}}],["15681f",{"2":{"89":1}}],["15680",{"2":{"237":6}}],["156",{"2":{"210":2,"253":70,"260":1025,"295":16}}],["15961446908625e",{"2":{"287":1}}],["15967444",{"2":{"147":1}}],["159751f",
{"2":{"285":1}}],["15978265",{"2":{"264":1}}],["159959f",{"2":{"285":1}}],["1599543",{"2":{"147":1}}],["1599",{"2":{"282":2}}],["15955",{"2":{"282":1}}],["1595",{"2":{"260":6}}],["1593",{"2":{"260":14}}],["1598851",{"2":{"253":70}}],["1590",{"2":{"240":1,"282":1}}],["15905891",{"2":{"89":1}}],["1592",{"2":{"229":1,"260":10}}],["152020937505254e",{"2":{"287":1}}],["1528068208956726e",{"2":{"287":1}}],["152354f",{"2":{"285":1}}],["152649f",{"2":{"285":1}}],["1526082f",{"2":{"285":1}}],["15262935",{"2":{"147":1}}],["1522966f",{"2":{"285":1}}],["15222053",{"2":{"147":1}}],["1524",{"2":{"276":1}}],["1525574e",{"2":{"172":1}}],["15276335",{"2":{"153":1}}],["15218197",{"2":{"147":1}}],["152131",{"2":{"118":1}}],["154449568322866e",{"2":{"287":1}}],["154565414289149e",{"2":{"287":1}}],["15452995f",{"2":{"285":1}}],["154516",{"2":{"168":1}}],["15451026",{"2":{"89":1}}],["1543625f",{"2":{"285":1}}],["1543",{"2":{"282":1}}],["1547",{"2":{"276":1}}],["1541",{"2":{"229":1,"278":1}}],["154041",{"2":{"188":1}}],["15499277",{"2":{"147":1}}],["154",{"2":{"133":1,"210":2,"229":1}}],["15101",{"2":{"282":1}}],["15105338",{"2":{"147":1}}],["15105f",{"2":{"89":1}}],["1519",{"2":{"282":3}}],["15199737",{"2":{"147":1}}],["15163",{"2":{"276":1}}],["1513672",{"2":{"274":1}}],["151290",{"2":{"274":1}}],["1512",{"2":{"229":1}}],["151",{"2":{"225":1}}],["151792",{"2":{"147":1}}],["1515675",{"2":{"118":1}}],["1515998",{"2":{"118":1}}],["1579",{"2":{"276":1}}],["1574",{"2":{"260":6}}],["15741",{"2":{"204":1}}],["1573865574959218e",{"2":{"287":1}}],["1573",{"2":{"260":6}}],["157368",{"2":{"147":1}}],["1572",{"2":{"260":6}}],["1571",{"2":{"260":4}}],["1575",{"2":{"260":12}}],["1576",{"2":{"229":1}}],["1570",{"2":{"260":3}}],["15701",{"2":{"237":6}}],["15700",{"2":{"237":12}}],["15707001",{"2":{"167":1}}],["15708946",{"2":{"147":1}}],["15708117",{"2":{"118":1}}],["15771388",{"2":{"147":1}}],["1577647",{"2":{"89":1}}],["15",{"2":{"50":1,"64":1,"76":2,"77":1,"81":2,"90":1,"147":2,"167":1,"168":3,"187"
:1,"188":4,"192":1,"199":1,"204":2,"234":2,"238":1,"245":2,"246":1,"260":34,"263":1,"274":2,"276":3,"278":7,"282":5}}],["1509360444755223e",{"2":{"287":1}}],["1505",{"2":{"282":1}}],["1506",{"2":{"276":1}}],["15082",{"2":{"274":1}}],["1507",{"2":{"229":1}}],["15001",{"2":{"253":1}}],["1500",{"2":{"209":1,"230":2,"272":2,"279":1}}],["15035425",{"2":{"147":1}}],["15026206",{"2":{"147":1}}],["15048876",{"2":{"118":1}}],["150",{"2":{"41":1}}],["1−cos⁡",{"2":{"86":1}}],["1−2∑yy^+α∑y2+∑y^2+α",{"2":{"50":1}}],["1−α",{"2":{"50":2}}],["1−y",{"2":{"50":1}}],["1−yy^",{"2":{"50":2}}],["1−y^+ϵ",{"2":{"50":1}}],["1−y~",{"2":{"50":2}}],["1−z",{"2":{"38":1}}],["16\\ttrain",{"2":{"260":1}}],["161539264384642e",{"2":{"287":1}}],["161048f",{"2":{"285":1}}],["16105938",{"2":{"147":1}}],["1619",{"2":{"282":1}}],["161437f",{"2":{"285":1}}],["1614",{"2":{"282":1}}],["1616",{"2":{"276":1}}],["16169",{"2":{"274":1}}],["161722",{"2":{"274":1}}],["16133",{"2":{"274":1}}],["161",{"2":{"199":1,"263":1}}],["162373576645326e",{"2":{"287":1}}],["1621",{"2":{"276":1}}],["1621094",{"2":{"274":3}}],["1622326f",{"2":{"285":1}}],["162235",{"2":{"274":1}}],["16220272",{"2":{"147":1}}],["16241",{"2":{"237":12}}],["16240",{"2":{"237":12}}],["1626",{"2":{"187":1}}],["16xf32>",{"2":{"147":2}}],["16x6x5x5xf32>",{"2":{"147":2}}],["1677",{"2":{"282":1}}],["1676",{"2":{"282":1}}],["1670",{"2":{"282":1}}],["16701514",{"2":{"147":1}}],["1673",{"2":{"276":1,"282":1}}],["167",{"2":{"199":1,"229":1,"240":1}}],["1678",{"2":{"147":1}}],["1678572",{"2":{"147":1}}],["16783f",{"2":{"89":1}}],["165149176073911e",{"2":{"287":1}}],["165442864734179e",{"2":{"287":1}}],["1653864f",{"2":{"285":1}}],["1650",{"2":{"263":1}}],["16504566",{"2":{"147":1}}],["165",{"2":{"192":1,"229":1,"282":1}}],["16566901",{"2":{"165":1}}],["165689",{"2":{"147":1}}],["16553",{"2":{"147":1}}],["16576298",{"2":{"147":1}}],["1689039843919384e",{"2":{"287":1}}],["1689453",{"2":{"274":1}}],["1685",{"2":{"278":1}}],["1682",{"2":{"276":1}}],["1686",{"2":
{"276":1,"278":1}}],["16869935",{"2":{"147":1}}],["16817337",{"2":{"147":1}}],["1687434",{"2":{"147":1}}],["164793256618043e",{"2":{"287":1}}],["164786",{"2":{"188":1}}],["164926f",{"2":{"285":1}}],["16442",{"2":{"204":1}}],["16447",{"2":{"147":1}}],["16447622",{"2":{"147":1}}],["1642132",{"2":{"167":1}}],["16428338",{"2":{"147":1}}],["16450",{"2":{"237":6}}],["1645",{"2":{"229":1}}],["16454",{"2":{"147":1}}],["1645548",{"2":{"147":1}}],["166584799164253e",{"2":{"287":1}}],["166",{"2":{"240":1}}],["1669",{"2":{"229":1}}],["1664",{"2":{"192":1}}],["16645215",{"2":{"147":1}}],["1666f",{"2":{"285":1}}],["1666",{"2":{"187":1}}],["16667",{"2":{"78":1}}],["163255608105006e",{"2":{"287":1}}],["1631074645864936e",{"2":{"287":1}}],["1631956f",{"2":{"285":1}}],["163493f",{"2":{"285":1}}],["1634",{"2":{"276":1}}],["16340032",{"2":{"147":1}}],["1635447",{"2":{"260":1025}}],["16363813",{"2":{"147":1}}],["1691",{"2":{"295":1}}],["1691734",{"2":{"264":1}}],["169871586116995e",{"2":{"287":1}}],["16943",{"2":{"274":1}}],["169",{"2":{"229":1,"240":1}}],["1696",{"2":{"192":1}}],["16969556",{"2":{"147":1}}],["1695192f",{"2":{"285":1}}],["16951457",{"2":{"118":1}}],["16954337",{"2":{"147":1}}],["16955197",{"2":{"118":1}}],["16f0",{"2":{"58":1}}],["160",{"2":{"282":1}}],["160099789366813e",{"2":{"287":1}}],["1600",{"2":{"260":6}}],["160011",{"2":{"274":1}}],["16001",{"2":{"253":1}}],["1601",{"2":{"229":1,"295":1}}],["16060\\taccuracy",{"2":{"204":1}}],["160876",{"2":{"185":1}}],["16083185f0",{"2":{"50":1}}],["160832f0",{"2":{"50":1}}],["16027175",{"2":{"147":1}}],["1609376",{"2":{"295":16}}],["160961072117094e",{"2":{"287":1}}],["16094846",{"2":{"165":1}}],["160986",{"2":{"147":1}}],["1609",{"2":{"78":1,"282":1}}],["1607",{"2":{"41":1,"66":2}}],["16",{"2":{"47":1,"56":3,"58":1,"76":2,"80":9,"126":13,"127":14,"132":1,"147":5,"167":1,"168":1,"188":1,"197":5,"204":2,"206":1,"210":3,"213":2,"214":1,"227":5,"229":1,"238":2,"245":3,"246":2,"252":1,"253":2,"255":1,"260":14,"261":1,"265":4,"268"
:4,"269":1,"274":3,"275":1,"278":5,"280":1,"282":1,"287":10,"289":1,"295":1,"297":1}}],["1th",{"2":{"41":3}}],["13\\ttrain",{"2":{"260":1}}],["13489",{"2":{"274":1}}],["13485748",{"2":{"147":1}}],["13444",{"2":{"274":1}}],["134",{"2":{"260":16,"276":1}}],["1341",{"2":{"199":1}}],["13462374",{"2":{"118":1}}],["1369",{"2":{"295":1}}],["136059123897974e",{"2":{"287":1}}],["136028f",{"2":{"285":1}}],["136754254263088e",{"2":{"287":1}}],["136785f",{"2":{"285":1}}],["13671",{"2":{"274":1}}],["1362",{"2":{"260":2}}],["13625823",{"2":{"147":1}}],["136",{"2":{"260":16}}],["136356",{"2":{"185":1}}],["13684514",{"2":{"147":1}}],["132360195473337e",{"2":{"287":1}}],["1323625",{"2":{"147":1}}],["1323294f",{"2":{"285":1}}],["1326",{"2":{"282":1}}],["1328",{"2":{"282":2}}],["13249940196448407136",{"2":{"295":1}}],["1324",{"2":{"278":2}}],["132",{"2":{"260":18}}],["132188335458648e",{"2":{"287":1}}],["1321",{"2":{"253":2,"282":1}}],["1322",{"2":{"229":1}}],["13276449",{"2":{"147":1}}],["1330",{"2":{"276":1}}],["1336",{"2":{"276":1}}],["1336206",{"2":{"147":1}}],["1335",{"2":{"276":1}}],["1337891",{"2":{"274":1}}],["1337921f",{"2":{"118":1}}],["13345",{"2":{"274":1}}],["133",{"2":{"260":17}}],["1333",{"2":{"229":1}}],["13399862",{"2":{"147":1}}],["1357575f",{"2":{"285":1}}],["1358",{"2":{"278":1,"282":1}}],["135",{"2":{"260":16}}],["13559",{"2":{"188":1}}],["1350523",{"2":{"167":1}}],["13513",{"2":{"147":1}}],["135206",{"2":{"147":1}}],["13540733",{"2":{"147":1}}],["13590635",{"2":{"147":1}}],["1377",{"2":{"276":1}}],["13775",{"2":{"274":1}}],["137507f",{"2":{"285":1}}],["1375",{"2":{"276":1}}],["13759731",{"2":{"267":1}}],["13751146",{"2":{"147":1}}],["137",{"2":{"260":16,"276":1}}],["13767318",{"2":{"147":1}}],["13760304",{"2":{"147":1}}],["137367074491195e",{"2":{"287":1}}],["1373",{"2":{"282":1}}],["1373128",{"2":{"147":1}}],["13732003",{"2":{"147":1}}],["1383",{"2":{"276":1}}],["1386",{"2":{"276":1}}],["13865",{"2":{"274":1}}],["13861585",{"2":{"119":1}}],["13873",{"2":{"260":1
}}],["1388",{"2":{"187":1,"282":1}}],["13889",{"2":{"78":1}}],["138228",{"2":{"147":1}}],["13825017",{"2":{"147":1}}],["13812806",{"2":{"147":1}}],["13819042",{"2":{"143":1}}],["131377f",{"2":{"285":1}}],["13182",{"2":{"282":1}}],["1315",{"2":{"282":1}}],["1315112",{"2":{"118":1}}],["13114",{"2":{"274":1}}],["13110895",{"2":{"147":1}}],["1319",{"2":{"260":2}}],["131",{"2":{"260":16,"282":1}}],["131726",{"2":{"167":1,"168":1}}],["13142236",{"2":{"147":1}}],["13164552",{"2":{"118":1}}],["1312",{"2":{"15":1,"229":1}}],["1395",{"2":{"276":1}}],["13950577",{"2":{"147":1}}],["13950492",{"2":{"118":1}}],["1396",{"2":{"276":1}}],["13994",{"2":{"274":1}}],["13996856",{"2":{"147":1}}],["1393543540180377e",{"2":{"287":1}}],["1393",{"2":{"260":6}}],["1392",{"2":{"260":4,"276":1}}],["1398",{"2":{"260":2}}],["1398194",{"2":{"118":1}}],["139433",{"2":{"168":1}}],["13940203",{"2":{"118":1}}],["139778",{"2":{"147":1}}],["13971032",{"2":{"118":1}}],["1308",{"2":{"276":1}}],["130161",{"2":{"274":1}}],["1303",{"2":{"260":6,"276":2}}],["1303978",{"2":{"147":1}}],["1307",{"2":{"260":6}}],["13073668",{"2":{"196":1}}],["13001",{"2":{"253":1}}],["130",{"2":{"199":1,"229":1,"260":16,"263":1,"278":1,"282":2}}],["1306752",{"2":{"168":1}}],["1309877241035206e",{"2":{"287":1}}],["1309025",{"2":{"147":1}}],["13093f",{"2":{"89":1}}],["13048346",{"2":{"118":1}}],["13051206",{"2":{"118":1}}],["13021im",{"2":{"185":1}}],["1302",{"2":{"40":1}}],["13",{"2":{"74":2,"76":2,"78":8,"81":2,"119":1,"147":2,"199":1,"204":2,"245":3,"253":3,"260":22,"263":1,"274":2,"278":3,"282":1}}],["1c",{"2":{"38":1}}],["1b",{"2":{"38":3}}],["1a",{"2":{"38":3}}],["10f",{"2":{"286":1}}],["10f0",{"2":{"58":11}}],["10\\ttrain",{"2":{"260":1}}],["1069",{"2":{"276":1}}],["1066",{"2":{"260":3,"282":1}}],["106541",{"2":{"282":1}}],["1065",{"2":{"253":3}}],["10642",{"2":{"282":1}}],["1064",{"2":{"253":3}}],["106482625",{"2":{"147":1}}],["1063",{"2":{"253":3}}],["10626",{"2":{"274":1}}],["1062",{"2":{"253":3}}],["1061",{"2":{"253":3,
"260":6}}],["1060",{"2":{"253":3}}],["106069",{"2":{"165":1}}],["106",{"2":{"229":1,"240":1,"263":1}}],["10676",{"2":{"147":1}}],["1081",{"2":{"282":1}}],["108170e",{"2":{"225":1}}],["1085",{"2":{"278":1}}],["1082",{"2":{"276":1}}],["1089",{"2":{"260":6}}],["1087",{"2":{"260":1}}],["10875653",{"2":{"147":1}}],["10848",{"2":{"274":1}}],["10844",{"2":{"274":1}}],["10843",{"2":{"274":1}}],["1084",{"2":{"260":20,"282":1}}],["10842768",{"2":{"147":1}}],["1088",{"2":{"260":20,"276":2}}],["10885",{"2":{"147":1}}],["1080",{"2":{"253":7,"260":118,"295":2}}],["1083",{"2":{"229":1,"260":4}}],["108",{"2":{"192":1,"229":1,"240":1,"260":4,"282":1}}],["10862",{"2":{"274":1}}],["1086",{"2":{"187":1,"260":1}}],["1022",{"2":{"282":1}}],["1021",{"2":{"276":1}}],["10200",{"2":{"274":1}}],["1020",{"2":{"260":6}}],["10206564",{"2":{"147":1}}],["1023",{"2":{"260":6}}],["10290",{"2":{"274":1}}],["1029",{"2":{"253":1,"260":3}}],["1028",{"2":{"253":1}}],["1027",{"2":{"253":1,"260":3,"282":1}}],["1026",{"2":{"253":1}}],["1025",{"2":{"253":1}}],["102",{"2":{"229":1}}],["1024",{"2":{"229":1,"241":1,"253":1}}],["10242631",{"2":{"147":1}}],["1047",{"2":{"253":1,"260":6}}],["1046784f",{"2":{"285":1}}],["1046",{"2":{"253":1}}],["10464323",{"2":{"147":1}}],["104591292834862e",{"2":{"287":1}}],["1045",{"2":{"253":1,"260":3,"276":1}}],["10442",{"2":{"274":1}}],["1044",{"2":{"253":1,"260":6,"276":1,"278":1}}],["1043",{"2":{"253":1}}],["10428",{"2":{"274":1}}],["1042",{"2":{"253":1}}],["10420467",{"2":{"147":1}}],["104191",{"2":{"264":1}}],["1041",{"2":{"253":1,"282":1}}],["104",{"2":{"229":1,"240":1}}],["104912",{"2":{"276":1}}],["10495",{"2":{"274":1}}],["1049",{"2":{"253":1,"260":1}}],["10494",{"2":{"147":1}}],["104936846",{"2":{"147":1}}],["1040",{"2":{"253":1,"282":1}}],["104042254",{"2":{"147":1}}],["10401063",{"2":{"147":1}}],["104892f",{"2":{"285":1}}],["1048",{"2":{"253":1,"260":5}}],["104844354",{"2":{"118":1}}],["104868725",{"2":{"118":1}}],["1078",{"2":{"276":1}}],["10743",{"2":{"274":1}}],[
"107406616",{"2":{"147":1}}],["10718",{"2":{"274":1}}],["10756",{"2":{"274":1}}],["10779",{"2":{"274":1}}],["1077",{"2":{"260":3}}],["1073",{"2":{"260":9}}],["10760",{"2":{"274":1}}],["1076",{"2":{"260":3}}],["1079",{"2":{"260":3}}],["107933",{"2":{"167":1}}],["107",{"2":{"199":1,"229":1}}],["10708179",{"2":{"147":1}}],["1095",{"2":{"260":5}}],["1094",{"2":{"260":16,"282":1}}],["1099",{"2":{"260":16}}],["10995198",{"2":{"118":1}}],["109195549017914e",{"2":{"287":1}}],["109124398922551e",{"2":{"287":1}}],["1091",{"2":{"260":18}}],["10914792",{"2":{"147":1}}],["10935",{"2":{"274":1}}],["1093",{"2":{"260":4}}],["10907f",{"2":{"285":1}}],["1090",{"2":{"260":7,"283":2}}],["1097",{"2":{"260":2}}],["109",{"2":{"229":2,"240":1,"282":1}}],["1098",{"2":{"187":1,"260":10}}],["1092134f",{"2":{"285":1}}],["10925",{"2":{"274":1}}],["1092",{"2":{"260":8,"276":1}}],["10923161",{"2":{"147":1}}],["109261915",{"2":{"143":1}}],["1036",{"2":{"276":1}}],["10361175",{"2":{"147":1}}],["1039",{"2":{"282":1}}],["10395",{"2":{"274":1}}],["103913486",{"2":{"147":1}}],["10349",{"2":{"274":1}}],["1034",{"2":{"260":9}}],["10346943",{"2":{"147":1}}],["1035",{"2":{"260":4}}],["1033",{"2":{"253":1,"276":2}}],["1032",{"2":{"253":1}}],["1031",{"2":{"253":1}}],["10316533",{"2":{"147":1}}],["10316813",{"2":{"147":1}}],["1030",{"2":{"253":1}}],["10375",{"2":{"274":1}}],["1037",{"2":{"187":1,"260":3}}],["103",{"2":{"126":1,"229":1,"240":1}}],["105428509522358e",{"2":{"287":1}}],["10529",{"2":{"274":1}}],["10521463",{"2":{"118":1}}],["105117f",{"2":{"285":1}}],["1051",{"2":{"260":6}}],["105397f",{"2":{"285":1}}],["1053067030159004e",{"2":{"287":1}}],["1053067f",{"2":{"285":1}}],["10530",{"2":{"274":1}}],["1053",{"2":{"260":1}}],["105389535",{"2":{"147":1}}],["105022684960374e",{"2":{"287":1}}],["1050152154502304e",{"2":{"287":1}}],["1050",{"2":{"260":11}}],["1059",{"2":{"253":3,"260":3}}],["1058",{"2":{"253":3,"260":9,"282":1}}],["10584956",{"2":{"118":1}}],["10579",{"2":{"274":1}}],["1057",{"2":{"253":3,"
260":2,"276":1}}],["10566",{"2":{"274":1}}],["1056",{"2":{"253":3,"260":9}}],["10561423",{"2":{"147":1}}],["10554",{"2":{"274":1}}],["1055",{"2":{"229":1,"260":1}}],["105",{"2":{"118":1,"229":2,"240":1,"282":2}}],["10^4",{"2":{"85":2}}],["10x4xf32>",{"2":{"147":4}}],["10xf32>",{"2":{"147":2}}],["10x",{"2":{"84":3}}],["1018",{"2":{"282":1}}],["1011",{"2":{"282":1}}],["10116895",{"2":{"147":1}}],["1014",{"2":{"276":1}}],["10148246",{"2":{"147":1}}],["10166",{"2":{"187":1}}],["1012",{"2":{"187":1}}],["10137743",{"2":{"147":1}}],["1019723",{"2":{"118":1}}],["101",{"2":{"81":1,"225":1,"237":6}}],["10i",{"2":{"78":1}}],["10⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠀",{"2":{"58":1}}],["10",{"2":{"25":2,"34":1,"41":12,"47":2,"58":7,"71":2,"73":2,"76":2,"78":2,"81":10,"84":7,"85":1,"86":5,"88":1,"89":5,"90":1,"96":1,"134":2,"135":1,"147":5,"165":2,"166":2,"168":2,"188":4,"191":2,"196":3,"204":2,"210":5,"212":1,"213":22,"232":5,"237":30,"238":2,"243":1,"245":2,"246":2,"253":26,"260":2,"271":1,"274":2,"276":1,"278":3,"279":1,"282":2,"283":2,"287":8,"291":1,"296":1}}],["1006",{"2":{"282":1}}],["10062",{"2":{"274":1}}],["1005",{"2":{"260":3}}],["100514844",{"2":{"147":1}}],["1001",{"2":{"196":1,"253":1}}],["100167036",{"2":{"147":1}}],["1007",{"2":{"187":1}}],["100737736",{"2":{"147":1}}],["1003911",{"2":{"155":1}}],["10097251",{"2":{"147":1}}],["10044085",{"2":{"118":1}}],["10022328",{"2":{"118":1}}],["100f0",{"2":{"58":1}}],["10001",{"2":{"253":1}}],["10000",{"2":{"196":2,"253":1,"295":12}}],["1000",{"2":{"56":1,"58":1,"81":3,"90":3,"119":14,"196":1,"253":1,"295":2}}],["100",{"2":{"15":1,"35":6,"77":7,"81":5,"86":2,"90":1,"96":2,"119":2,"197":1,"206":1,"212":2,"214":1,"227":1,"229":1,"234":2,"237":6,"238":2,"245":39,"246":2,"255":1,"259":1,"261":1,"269":1,"275":1,"276":1,"280":1,"282":1,"284":2,"289":1,"297":1}}],["12\\ttrain",{"2":{"260":1}}],["12s",{"2":{"245":2}}],["12abac4f24f6",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":
1,"297":1}}],["12x12x6x4xf32>",{"2":{"147":2}}],["121254065330963e",{"2":{"287":1}}],["1210",{"2":{"282":1}}],["1210938",{"2":{"274":1}}],["121125f",{"2":{"285":1}}],["1211",{"2":{"260":3}}],["12116281",{"2":{"147":1}}],["1216275895555067e",{"2":{"287":1}}],["1216",{"2":{"260":2}}],["12165",{"2":{"204":1}}],["12183",{"2":{"274":1}}],["1218",{"2":{"260":14,"276":1,"278":1,"282":1}}],["12180024",{"2":{"147":1}}],["12146",{"2":{"274":1}}],["1214",{"2":{"260":8}}],["12143",{"2":{"229":1}}],["1215386f",{"2":{"285":1}}],["12150",{"2":{"274":1}}],["12154174806817483476",{"2":{"274":1}}],["1215",{"2":{"260":5}}],["1213",{"2":{"260":18}}],["12138993",{"2":{"147":1}}],["121",{"2":{"229":1,"260":17}}],["1217",{"2":{"260":3}}],["12177",{"2":{"147":1}}],["121751495",{"2":{"147":1}}],["121998",{"2":{"147":1}}],["125",{"2":{"282":2}}],["12543",{"2":{"274":1}}],["12550",{"2":{"274":1}}],["12559004",{"2":{"147":1}}],["1250735475854338e",{"2":{"287":1}}],["12508307",{"2":{"267":1}}],["125019",{"2":{"260":1}}],["1259",{"2":{"260":12}}],["1252",{"2":{"260":2}}],["12589",{"2":{"196":1}}],["1251",{"2":{"192":1,"260":6}}],["125342",{"2":{"118":1}}],["1261",{"2":{"282":1}}],["1261990389976131",{"2":{"191":1}}],["1268860161522646e",{"2":{"287":1}}],["12688\\taccuracy",{"2":{"204":1}}],["1268",{"2":{"263":1}}],["1264",{"2":{"260":3}}],["12648",{"2":{"147":1}}],["1267",{"2":{"260":3,"282":2}}],["12620",{"2":{"274":1}}],["1262",{"2":{"260":4}}],["1269",{"2":{"260":12,"278":1}}],["1263",{"2":{"260":3}}],["126324",{"2":{"118":1}}],["126",{"2":{"225":1}}],["12650935",{"2":{"147":1}}],["1241507f",{"2":{"285":1}}],["124123",{"2":{"147":1}}],["12498",{"2":{"274":1}}],["12497909",{"2":{"147":1}}],["12481",{"2":{"274":1}}],["124",{"2":{"260":3}}],["124779207499906e",{"2":{"287":1}}],["12477568",{"2":{"196":1}}],["1247",{"2":{"260":2,"276":1}}],["1246",{"2":{"260":2}}],["124440040958759e",{"2":{"287":1}}],["124433f",{"2":{"285":1}}],["1244",{"2":{"229":1,"260":6}}],["1245009",{"2":{"167":1}}],["1243713
5",{"2":{"147":1}}],["1243891",{"2":{"147":1}}],["1225",{"2":{"263":1}}],["122567",{"2":{"147":1}}],["122563",{"2":{"147":1}}],["1224",{"2":{"263":1,"276":1}}],["12241076",{"2":{"147":1}}],["1222",{"2":{"260":2}}],["12229406",{"2":{"147":1}}],["1223",{"2":{"260":3}}],["1227",{"2":{"229":1}}],["122653805",{"2":{"147":1}}],["1220150083885688e",{"2":{"287":1}}],["1220458f",{"2":{"285":1}}],["1220",{"2":{"260":3}}],["12208768",{"2":{"147":1}}],["12209795",{"2":{"118":1}}],["122150525",{"2":{"147":1}}],["127181381241847e",{"2":{"287":1}}],["12715751",{"2":{"147":1}}],["127681f",{"2":{"285":1}}],["12766095",{"2":{"147":1}}],["1277",{"2":{"260":3}}],["1277556f0",{"2":{"50":1}}],["127307f",{"2":{"285":1}}],["1273",{"2":{"282":1}}],["12736",{"2":{"274":1}}],["127351",{"2":{"260":1}}],["12738f",{"2":{"89":1}}],["127250447859456e",{"2":{"287":1}}],["1272",{"2":{"260":6}}],["12724\\taccuracy",{"2":{"204":1}}],["1272793",{"2":{"147":1}}],["1279",{"2":{"187":1,"260":20}}],["127934",{"2":{"147":1}}],["1206",{"2":{"282":1}}],["120612435",{"2":{"147":1}}],["1204",{"2":{"260":1}}],["1202",{"2":{"260":15}}],["120217",{"2":{"147":1}}],["1201",{"2":{"260":3}}],["120147206",{"2":{"147":1}}],["1203",{"2":{"260":8}}],["1208",{"2":{"260":9}}],["120820366",{"2":{"147":1}}],["12003",{"2":{"274":1}}],["1200",{"2":{"260":6,"296":1}}],["12001",{"2":{"253":1}}],["12008221",{"2":{"89":1}}],["1205",{"2":{"229":1}}],["1207034757716905e",{"2":{"287":1}}],["1207",{"2":{"276":1,"282":2}}],["12077212",{"2":{"147":1}}],["12076889",{"2":{"147":1}}],["12078848",{"2":{"147":1}}],["12092318",{"2":{"147":1}}],["120917246",{"2":{"147":1}}],["120",{"2":{"77":1,"260":17}}],["129204601297713e",{"2":{"287":1}}],["12923913",{"2":{"118":1}}],["1293",{"2":{"263":1}}],["12936348",{"2":{"147":1}}],["12962",{"2":{"199":1}}],["1297",{"2":{"192":1,"260":6}}],["1295",{"2":{"147":1}}],["12900914",{"2":{"147":1}}],["1291",{"2":{"260":6,"282":1}}],["12912875",{"2":{"119":1}}],["12918535",{"2":{"118":1}}],["129",{"2":{"41":1,"
56":1,"260":16}}],["1284627f",{"2":{"285":1}}],["12843",{"2":{"274":1}}],["1287",{"2":{"282":1}}],["1289",{"2":{"282":1}}],["1289025f",{"2":{"285":1}}],["1289062",{"2":{"274":1}}],["1289033",{"2":{"267":1}}],["128296465116822e",{"2":{"287":1}}],["1282",{"2":{"276":1}}],["12825768",{"2":{"147":1}}],["1288",{"2":{"276":1}}],["128855",{"2":{"274":1}}],["12883",{"2":{"263":1}}],["1281",{"2":{"282":1}}],["12813",{"2":{"274":1}}],["12811285",{"2":{"147":1}}],["1286",{"2":{"260":3}}],["12839837338686e",{"2":{"287":1}}],["1283094f",{"2":{"285":1}}],["1283",{"2":{"260":8}}],["128x4xi1>",{"2":{"147":2}}],["128x4xf32>",{"2":{"147":8}}],["128x84xf32>",{"2":{"147":2}}],["128xf32>",{"2":{"147":2}}],["128",{"2":{"41":3,"47":2,"56":12,"89":4,"147":5,"197":1,"210":6,"212":1,"227":1,"234":1,"260":23,"264":4,"280":5,"289":5}}],["12",{"2":{"25":2,"76":2,"77":1,"78":8,"81":1,"90":1,"147":3,"153":1,"167":6,"168":7,"172":9,"188":2,"204":2,"213":10,"238":5,"245":14,"246":5,"253":15,"274":2,"278":4,"283":1}}],["1237",{"2":{"276":1}}],["1235",{"2":{"276":1}}],["12354",{"2":{"274":1}}],["1230",{"2":{"276":1}}],["12392",{"2":{"274":1}}],["12397134",{"2":{"147":1}}],["12345",{"2":{"264":1}}],["1234",{"2":{"260":2,"277":1}}],["12348884",{"2":{"147":1}}],["1238",{"2":{"260":5}}],["12336",{"2":{"274":1}}],["1233",{"2":{"260":2,"282":1}}],["1236",{"2":{"282":1}}],["12361",{"2":{"204":1}}],["123659",{"2":{"147":1}}],["123123",{"2":{"147":1}}],["123",{"2":{"15":3,"34":1,"260":3,"282":1}}],["1=dense",{"2":{"23":1}}],["1d",{"2":{"15":1,"38":1,"77":5,"78":1,"83":1,"86":4}}],["1",{"2":{"2":1,"4":1,"5":2,"7":1,"11":4,"15":48,"23":6,"34":27,"35":9,"36":1,"37":24,"39":7,"40":12,"41":13,"42":3,"47":5,"49":1,"50":110,"51":1,"56":24,"58":44,"64":8,"66":10,"73":1,"74":9,"75":2,"76":52,"77":34,"78":83,"79":33,"80":17,"81":22,"82":41,"83":1,"84":3,"85":26,"86":17,"89":55,"90":14,"95":6,"96":1,"105":1,"114":1,"118":45,"119":6,"121":2,"126":35,"127":47,"132":6,"134":1,"135":3,"143":4,"144":2,"145":2,"146":9,"147":1
91,"153":3,"154":11,"155":1,"165":13,"167":23,"168":20,"170":3,"171":3,"172":3,"174":2,"185":2,"187":12,"188":45,"190":4,"191":5,"192":21,"193":4,"194":6,"195":1,"196":6,"197":1,"199":6,"200":11,"202":1,"203":1,"204":52,"206":1,"209":5,"210":8,"211":1,"212":2,"213":2,"214":1,"222":1,"223":12,"225":15,"226":7,"227":1,"229":57,"230":5,"231":3,"233":1,"234":6,"235":1,"236":1,"237":118,"238":5,"240":16,"241":6,"242":1,"243":1,"244":1,"245":7,"246":5,"248":1,"250":1,"251":9,"252":3,"253":6,"254":5,"255":1,"256":1,"257":3,"258":4,"259":1,"260":47,"261":1,"263":11,"264":90,"265":5,"266":1,"267":13,"268":20,"269":1,"270":1,"271":4,"272":1,"273":19,"274":6,"275":1,"276":70,"277":6,"278":67,"279":9,"280":1,"282":133,"283":37,"284":7,"285":130,"286":2,"287":130,"288":5,"289":1,"290":1,"291":10,"292":7,"293":8,"294":1,"295":2,"296":4,"297":1}}],["e^2",{"2":{"284":1}}],["err",{"2":{"254":3}}],["errs",{"2":{"254":3}}],["errored",{"2":{"125":1}}],["errors",{"2":{"24":3,"58":1,"128":1,"225":1}}],["error",{"2":{"2":3,"3":1,"22":1,"24":6,"41":1,"49":1,"50":3,"52":1,"54":2,"57":1,"58":2,"74":1,"76":1,"97":2,"107":2,"114":1,"126":3,"165":1,"196":1,"254":1,"276":1,"278":2,"282":1,"295":1}}],["e0124",{"2":{"253":70,"260":1025,"295":16}}],["ess",{"2":{"278":3}}],["essence",{"2":{"86":1}}],["essentially",{"2":{"34":1,"56":1,"154":1,"183":1,"268":1}}],["especially",{"2":{"123":1,"125":1}}],["estimated",{"2":{"169":1}}],["estimate",{"2":{"169":1,"172":1}}],["estimation",{"0":{"169":1,"290":1},"1":{"170":1,"171":1,"172":1,"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1},"2":{"86":1,"169":1,"225":4}}],["established",{"2":{"98":1,"191":1}}],["eccv",{"2":{"66":1}}],["ecosystem",{"2":{"45":1,"90":1,"120":1}}],["european",{"2":{"66":1}}],["edata=gph",{"2":{"257":1}}],["editors",{"2":{"58":2}}],["edges",{"2":{"164":1}}],["edge",{"2":{"21":1,"86":1,"100":1,"123":1,"124":1,"128":1,"257":2}}],["evolved",{"2":{"279":1}}],["evalpoly",{"2":{"264":2,"268":1}}],["eval",{"2":{"241":5}}],["evaluate",
{"2":{"97":1}}],["evaluation",{"2":{"58":1}}],["even",{"2":{"3":1,"7":1,"8":1,"52":1,"79":3,"86":2,"99":1,"100":1,"101":1,"125":1,"127":1,"159":1,"188":1,"192":1,"213":1}}],["everystep",{"2":{"237":13}}],["everystep=false",{"2":{"232":1}}],["everything",{"2":{"86":1,"100":2,"174":1}}],["every",{"2":{"36":1,"76":5,"80":1,"85":2,"95":1,"130":1,"131":1,"163":1,"188":1}}],["ever",{"2":{"3":1,"100":1}}],["epyc",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["epochs=50",{"2":{"274":1}}],["epochs",{"2":{"225":2,"260":2,"268":3,"274":2}}],["epoch",{"2":{"56":1,"204":53,"212":2,"234":2,"245":2,"260":34,"268":5,"274":106}}],["eps",{"2":{"50":1,"58":3,"66":8}}],["epsilon=eps",{"2":{"66":1}}],["epsilon=0",{"2":{"50":2}}],["epsilon=nothing",{"2":{"50":1}}],["epsilon=1f",{"2":{"41":4}}],["epsilon",{"2":{"41":4,"50":9,"66":8}}],["emp",{"2":{"165":2,"166":2}}],["empirical",{"2":{"58":1}}],["empting",{"2":{"35":1}}],["empty",{"2":{"7":1,"8":2,"35":2,"37":9,"39":4,"40":5,"41":5,"42":1,"49":2,"50":1,"153":1,"196":1,"274":5,"285":1}}],["embed=chain",{"2":{"271":1}}],["embed",{"2":{"56":8,"90":8,"271":1}}],["embeddings",{"2":{"39":2}}],["embedding",{"2":{"39":2,"116":1,"243":1}}],["either",{"2":{"35":4,"37":6,"50":1,"73":2,"75":3,"76":2,"78":1,"80":2,"81":1,"85":2,"86":3,"283":1}}],["earcut",{"2":{"276":1,"282":1}}],["early",{"2":{"260":2}}],["eagerly",{"2":{"163":1}}],["easily",{"2":{"99":1,"160":1,"165":1}}],["easier",{"2":{"40":1,"49":1,"279":1}}],["easy",{"2":{"34":1,"68":1,"100":1,"125":1,"158":1}}],["eachindex",{"2":{"278":1}}],["eachslice",{"2":{"51":2,"201":1,"202":1,"272":1,"273":1}}],["each",{"2":{"5":1,"15":1,"25":4,"34":15,"35":8,"37":12,"38":4,"39":2,"40":4,"41":2,"66":2,"74":1,"76":5,"80":3,"81":3,"82":1,"83":2,"85":2,"86":7,"166":3,"188":2,"224":1,"273":1,"278":1,"284":1}}],["effort",{"2":{"118":1}}],["efficient",{"2":{"50":1,"52":1,"71":1,"75":1,"77":1,"117":1,"156":1,"165":1,"166":1,"192":2,"249":1}}],["
efficiently",{"2":{"18":2,"50":1}}],["effectively",{"2":{"225":2}}],["effect",{"2":{"1":1,"42":1,"57":1}}],["eta=learning",{"2":{"274":1}}],["eta=lr",{"2":{"260":1}}],["et",{"2":{"15":2,"50":2,"64":2,"78":1,"83":1,"281":1}}],["etc",{"2":{"3":2,"7":1,"67":1,"80":2,"86":6,"100":1,"153":1,"156":1,"165":2,"173":1,"174":1,"176":1,"191":1,"221":1,"260":1,"274":1}}],["equilibrium",{"2":{"100":1}}],["equivalent",{"2":{"8":1,"40":1,"47":1,"50":2,"52":4,"56":1,"74":2,"76":1,"78":1,"80":7,"83":1,"84":1,"86":5,"108":1,"188":1}}],["equation",{"2":{"86":2}}],["equations",{"2":{"71":1,"286":1}}],["equally",{"2":{"51":1,"66":1}}],["equals",{"2":{"40":1,"86":1}}],["equal",{"2":{"15":1,"25":1,"35":2,"37":3,"50":1,"76":1,"78":3}}],["ellipticalslicesampling",{"2":{"276":1}}],["elem",{"2":{"252":2,"254":2}}],["elemet",{"2":{"86":2}}],["elementwise",{"2":{"40":2,"41":5,"62":1,"66":1}}],["element",{"0":{"54":1},"2":{"15":1,"25":1,"34":1,"38":6,"41":1,"50":1,"52":1,"54":10,"56":2,"64":2,"74":1,"75":1,"77":1,"81":5,"85":2,"86":1,"188":4,"193":1,"194":1,"196":1,"201":1,"205":1}}],["elements",{"2":{"8":1,"15":9,"23":1,"35":2,"37":3,"39":1,"73":1,"75":1,"81":2,"86":2,"98":1,"188":6,"201":1}}],["eliminate",{"2":{"86":1}}],["elu",{"2":{"58":6}}],["eltypes",{"2":{"158":1,"182":1}}],["eltype",{"0":{"182":1},"2":{"52":6,"53":4,"54":3,"58":1,"66":9,"76":6,"86":3,"132":1,"158":5,"182":2,"190":1,"271":1}}],["elman",{"2":{"38":1}}],["elseif",{"2":{"150":1,"155":1,"283":1}}],["elsewhere",{"2":{"80":2,"86":2}}],["else",{"2":{"8":1,"19":1,"34":1,"36":3,"38":1,"40":1,"45":1,"49":1,"51":1,"52":1,"64":1,"80":1,"84":2,"85":1,"86":1,"97":1,"150":1,"204":1,"209":1,"212":1,"230":1,"241":2,"260":1,"273":1,"283":3}}],["eg",{"2":{"8":1,"35":2,"37":3}}],["enumx",{"2":{"276":1,"282":1}}],["enumerate",{"2":{"5":2,"119":1,"274":1,"293":3,"295":1,"296":1}}],["enc",{"2":{"271":4}}],["encode",{"2":{"271":1}}],["encoder=st",{"2":{"271":2}}],["encoder",{"2":{"271":15}}],["encountered",{"2":{"127":1,"295":1}}],["encounter",{
"2":{"124":1,"164":1}}],["encouraged",{"2":{"52":5}}],["engine",{"2":{"190":1}}],["enforce",{"2":{"190":1}}],["enhance",{"2":{"123":1}}],["enhancing",{"2":{"74":1}}],["enough",{"2":{"86":1,"163":1,"173":1,"225":1}}],["energy",{"2":{"86":1,"278":2}}],["environment",{"2":{"119":1,"197":1,"206":1,"214":1,"227":1,"238":2,"246":2,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["env",{"2":{"80":1,"209":1,"230":1,"241":2,"272":1}}],["enable",{"2":{"96":2,"160":1}}],["enables",{"2":{"65":1,"100":1,"276":1}}],["enabled",{"2":{"7":1,"24":2,"114":1,"276":1}}],["ensuring",{"2":{"64":1}}],["ensures",{"2":{"137":1}}],["ensure",{"2":{"2":1,"8":1,"15":1,"100":1,"124":1,"137":1,"169":1,"201":1,"267":1}}],["enter",{"2":{"88":1}}],["entered",{"2":{"58":2}}],["entry",{"2":{"142":1}}],["entries",{"2":{"86":2,"166":1}}],["entropy",{"2":{"50":3,"74":1}}],["entire",{"2":{"36":2,"38":5,"41":1,"131":1,"225":1}}],["entirely",{"2":{"6":1,"35":2,"39":3,"235":1,"237":1}}],["enzymechainrulescoreext",{"2":{"282":1}}],["enzymecoreext",{"2":{"229":1,"240":1}}],["enzymecore",{"2":{"187":2,"276":2,"282":2}}],["enzymestaticarraysext",{"2":{"282":1}}],["enzymespecialfunctionsext",{"2":{"282":1}}],["enzymelogexpfunctionsext",{"2":{"282":1}}],["enzymegpuarrayscoreext",{"2":{"282":1}}],["enzymebfloat16sext",{"2":{"229":2}}],["enzymeext",{"2":{"187":1,"276":1,"282":1}}],["enzymejax",{"0":{"147":1},"2":{"147":2}}],["enzymemlir",{"2":{"118":1,"124":1}}],["enzyme",{"2":{"49":1,"84":1,"90":1,"96":5,"100":1,"105":2,"118":10,"119":1,"121":2,"122":6,"124":3,"147":1,"164":1,"190":1,"192":1,"229":1,"248":1,"251":6,"256":1,"268":1,"270":1,"282":7,"290":1}}],["endpoints",{"2":{"283":2}}],["end",{"2":{"5":2,"11":1,"15":2,"23":2,"34":4,"35":2,"37":3,"56":14,"81":2,"84":2,"86":5,"89":1,"90":7,"118":2,"119":4,"126":2,"127":5,"132":3,"133":1,"134":3,"143":1,"144":2,"145":2,"147":1,"150":1,"153":4,"154":2,"155":3,"163":2,"165":1,"166":1,"167":1,"168":1,"170":1,"171":1,"172":1,"174":3,"189":1,"191":2,"196":3,"197
":3,"198":1,"200":3,"201":4,"202":3,"203":1,"204":5,"206":3,"209":2,"211":2,"212":4,"214":3,"223":2,"224":1,"225":4,"226":1,"227":3,"230":2,"231":7,"232":1,"233":2,"234":3,"235":3,"238":3,"241":4,"242":3,"243":1,"244":2,"245":4,"246":3,"250":3,"251":3,"252":1,"253":3,"254":2,"255":3,"257":1,"258":8,"259":1,"260":4,"261":3,"264":2,"268":4,"269":3,"271":9,"272":3,"273":8,"274":6,"275":3,"277":1,"278":4,"279":3,"280":3,"283":26,"284":2,"285":2,"286":2,"288":2,"289":3,"291":3,"292":10,"293":8,"294":1,"295":3,"296":3,"297":3}}],["executor",{"2":{"260":11}}],["execution",{"2":{"191":1}}],["executables",{"2":{"118":1}}],["exhaustive",{"2":{"178":1}}],["existent",{"2":{"140":1}}],["exists",{"2":{"133":1,"139":1,"231":1}}],["exist",{"2":{"56":2,"77":1,"80":2,"86":3}}],["exactpredicates",{"2":{"276":1,"282":1}}],["exactly",{"2":{"42":1,"51":1,"63":1,"85":1,"108":1,"153":1,"154":1,"176":1}}],["exact",{"2":{"15":1,"83":1,"86":1,"154":1,"204":1}}],["examplex",{"2":{"149":1}}],["examples",{"0":{"186":1},"2":{"5":1,"15":2,"50":2,"56":4,"73":1,"74":1,"78":3,"79":1,"81":4,"82":1,"84":1,"85":1,"101":1,"186":1,"219":1}}],["example",{"0":{"131":1},"1":{"132":1,"133":1,"134":1,"135":1},"2":{"2":1,"3":2,"4":1,"8":1,"15":1,"19":1,"22":3,"23":2,"25":3,"34":3,"40":5,"41":3,"45":1,"47":1,"49":1,"50":14,"55":1,"66":1,"77":4,"78":1,"82":2,"83":1,"95":1,"96":1,"101":1,"108":1,"111":1,"125":1,"136":2,"143":1,"144":1,"147":1,"149":3,"155":1,"156":1,"158":1,"166":4,"167":1,"168":1,"173":1,"174":1,"183":1,"193":1,"194":1,"256":1,"285":1}}],["ext",{"2":{"133":2}}],["extents",{"2":{"276":1,"282":1}}],["extensively",{"2":{"100":1,"123":2,"124":1,"192":1}}],["extensive",{"2":{"99":1,"100":1}}],["extensible",{"2":{"99":1}}],["extensions",{"0":{"97":1}}],["extending",{"2":{"190":1,"201":1}}],["extended",{"2":{"22":1,"23":1,"24":1,"34":6,"35":2,"37":3,"38":2,"40":1,"41":4,"42":1,"50":2,"54":1,"56":1}}],["extend",{"2":{"7":1,"22":1,"100":1}}],["external",{"2":{"90":1,"204":1,"213":1,"253":8,"260":114,"274"
:1,"278":1,"295":3}}],["extrema",{"2":{"58":1,"82":1,"252":2}}],["extremely",{"2":{"24":1,"49":1,"148":1,"174":1}}],["extract",{"2":{"278":2,"279":1}}],["extracted",{"2":{"86":1}}],["extra",{"2":{"15":1,"86":3}}],["exciting",{"2":{"102":1}}],["excellent",{"2":{"207":1}}],["exceed",{"2":{"76":1}}],["except",{"2":{"7":1,"38":2,"40":1,"84":1,"159":1}}],["exceptionunwrapping",{"2":{"229":1}}],["exceptions",{"2":{"15":1}}],["exception",{"2":{"3":2,"97":1}}],["exclusively",{"2":{"23":1,"94":1}}],["excluding",{"2":{"22":1,"40":1,"41":1,"47":1}}],["excludes",{"2":{"76":4}}],["exclude",{"2":{"10":2,"104":1}}],["exclude=internal",{"2":{"10":1}}],["expat",{"2":{"276":1,"282":1}}],["exp⁡",{"2":{"86":1}}],["explore",{"2":{"164":2}}],["exploiting",{"2":{"163":1}}],["explanation",{"2":{"86":5}}],["explicit",{"2":{"71":1,"100":1,"185":2,"276":1}}],["explicitly",{"2":{"7":1,"56":2,"58":3,"86":2}}],["exposing",{"2":{"237":1}}],["exponent",{"2":{"86":1}}],["exponential",{"2":{"58":4}}],["exporting",{"0":{"147":1}}],["export",{"2":{"72":1,"147":1}}],["exported",{"2":{"3":1,"6":1,"21":1,"58":2,"147":6,"189":1}}],["exp",{"2":{"58":6,"74":3,"84":1,"86":1,"252":1,"271":1,"273":1,"292":2}}],["express",{"2":{"278":1}}],["expressing",{"2":{"86":1}}],["expression",{"2":{"56":1,"81":2,"95":1}}],["exprtools",{"2":{"276":1,"282":1}}],["expronicon",{"2":{"229":1,"276":1,"282":1}}],["expr",{"2":{"56":1,"97":4}}],["expected",{"2":{"34":1,"35":2,"39":1,"50":4,"100":1,"127":1,"140":1,"191":1,"253":70,"260":1025,"295":16}}],["expect",{"2":{"21":1,"100":1,"118":1,"140":1,"276":1}}],["expects",{"2":{"19":1,"35":1,"37":3,"42":1,"47":1,"74":1,"75":3,"78":1,"95":1,"155":1}}],["experimental",{"0":{"21":1},"1":{"22":1,"23":1,"24":1,"25":1},"2":{"2":1,"4":2,"7":1,"21":2,"22":10,"23":2,"24":6,"25":2,"114":5,"115":5,"125":1,"126":2,"127":2,"128":1,"141":1,"142":1,"143":3,"144":2,"145":2,"146":1,"148":1,"158":1}}],["e",{"2":{"2":1,"11":1,"15":1,"22":1,"35":2,"36":2,"37":9,"41":3,"45":1,"47":1,"50":3,"54":1,"55":1
,"58":3,"66":1,"76":6,"81":2,"82":1,"83":2,"84":2,"85":1,"86":3,"107":2,"118":1,"126":4,"127":4,"133":2,"137":1,"147":1,"156":1,"169":1,"253":7,"260":102,"283":4,"284":6,"285":3,"295":2}}],["gw",{"2":{"284":1}}],["gph",{"2":{"257":5}}],["gpucompiler",{"2":{"282":1}}],["gpuci",{"2":{"126":1,"133":3,"165":2,"204":2,"237":3,"260":2,"274":1}}],["gpubroadcastop",{"2":{"67":1}}],["gpuarray",{"2":{"51":1}}],["gpuarrayscore",{"2":{"159":2,"187":1,"276":1,"282":1}}],["gpuarrays",{"2":{"13":7,"51":1,"192":1,"282":1}}],["gpusintel",{"2":{"69":2}}],["gpusmetal",{"2":{"69":2}}],["gpusamd",{"2":{"69":2}}],["gpus",{"2":{"28":1,"60":1,"69":2,"122":2,"148":1,"158":1,"159":1,"163":1}}],["gpu",{"0":{"4":1,"69":1,"92":1,"93":1,"148":1,"159":1,"180":1,"181":1},"1":{"149":1,"150":1},"2":{"1":8,"2":7,"3":10,"5":2,"37":3,"63":1,"67":1,"69":4,"70":1,"89":2,"92":1,"93":1,"96":1,"115":2,"118":3,"121":1,"122":4,"132":1,"140":4,"148":5,"149":11,"150":5,"163":4,"174":1,"181":3,"185":1,"189":3,"222":1,"224":3,"234":3,"237":1,"245":1,"253":7,"260":102,"295":2}}],["gnngraph",{"2":{"257":3}}],["gnngraphs",{"2":{"256":1}}],["gnnlux",{"2":{"256":1}}],["gnu",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["gzip",{"2":{"229":1}}],["gcnlayer",{"2":{"258":3}}],["gcn",{"2":{"256":1,"258":5,"260":10}}],["gc",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["guess",{"2":{"287":1}}],["guide",{"0":{"137":1,"138":1},"1":{"139":1,"140":1},"2":{"138":1,"153":1}}],["guarantee",{"2":{"115":1,"183":1}}],["guaranteed",{"2":{"63":1,"65":1,"165":2}}],["guarantees",{"2":{"21":1}}],["gdev",{"2":{"112":1,"149":1,"150":4,"163":6,"222":1,"224":3,"225":3,"226":1}}],["glib",{"2":{"276":1,"282":1}}],["glu",{"2":{"84":1}}],["glob",{"2":{"229":1}}],["globals",{"2":{"285":1}}],["globally",{"2":{"276":1}}],["globallppool",{"2":{"37":1,"117":1}}],["globalmeanpool",{"2":{"37":1}}],["globalmaxpool",{"2":{
"37":1}}],["global",{"2":{"37":3,"95":1,"158":1,"276":1}}],["glorot",{"2":{"15":4,"153":2}}],["günter",{"2":{"64":1}}],["gs",{"2":{"56":2,"89":3}}],["giflib",{"2":{"276":1,"282":1}}],["gif",{"2":{"254":1,"279":2}}],["gib",{"2":{"238":2,"246":2}}],["gigantic",{"2":{"125":1}}],["giving",{"2":{"86":1}}],["give",{"2":{"84":1,"200":1,"287":2}}],["gives",{"2":{"51":1,"56":1,"153":1}}],["given",{"2":{"2":1,"4":2,"15":6,"16":24,"28":6,"29":2,"30":5,"34":2,"39":3,"41":5,"45":1,"50":3,"66":1,"67":1,"73":1,"76":5,"82":1,"84":2,"86":5,"99":1,"100":1,"225":1}}],["github",{"2":{"38":1,"68":2,"86":1,"101":2,"128":1,"164":1}}],["g=tanh",{"2":{"38":1}}],["grisu",{"2":{"276":1,"282":1}}],["gridlayoutbase",{"2":{"276":1,"282":1}}],["grid",{"2":{"82":27,"252":5,"254":8,"273":29}}],["green",{"2":{"268":1}}],["great",{"2":{"221":1}}],["ground",{"2":{"196":3}}],["group",{"2":{"41":1,"66":3,"147":4,"278":1}}],["groupnorm",{"2":{"41":7,"66":1,"176":1}}],["groups",{"2":{"35":8,"41":4,"66":4,"77":1}}],["groups=1",{"2":{"35":2}}],["gray",{"2":{"273":1}}],["graphite2",{"2":{"276":1,"282":1}}],["graphics",{"2":{"263":1,"276":1,"282":1}}],["graphs",{"2":{"257":1}}],["graph",{"0":{"256":1},"1":{"257":1,"258":1,"259":1,"260":1,"261":1},"2":{"277":1,"279":1}}],["gravitational",{"0":{"281":1},"1":{"282":1,"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1}}],["gravitate",{"2":{"52":1}}],["graves",{"2":{"83":2}}],["grad",{"2":{"84":2}}],["grads",{"2":{"49":11,"196":1}}],["gradient|jacobian",{"2":{"164":2}}],["gradient",{"0":{"96":1,"167":1},"2":{"18":2,"49":1,"78":4,"79":2,"82":1,"84":10,"86":7,"94":1,"96":1,"118":8,"127":4,"165":6,"166":3,"167":6,"168":6,"172":3,"173":2,"174":2,"193":9,"196":1,"221":1,"251":3,"260":1,"274":1}}],["gradients=val",{"2":{"260":1,"274":1,"295":1}}],["gradients=true",{"2":{"49":2}}],["gradients",{"0":{"193":1},"2":{"18":2,"31":1,"34":1,"49":34,"50":2,"51":1,"82":1,"89":6,"96":10,"139":1,"164":1,"165":1,"172":1,"193":1,"249":1}}],["grucell",{"2":{"38":6,"116":1}}],["gr
u",{"2":{"7":1,"38":1}}],["gaussadjoint",{"2":{"234":1}}],["gaussian",{"2":{"58":1,"278":1}}],["gathers",{"2":{"81":2}}],["gather",{"0":{"81":1},"2":{"81":12,"257":1}}],["gated",{"2":{"38":1,"58":3,"84":2}}],["gamma=0",{"2":{"50":1}}],["gamma",{"2":{"50":2}}],["ganguli",{"2":{"15":1}}],["gain=1",{"2":{"15":1,"185":2}}],["gain",{"2":{"15":15,"35":4,"39":2,"86":2,"116":2}}],["gt",{"2":{"5":1,"11":1,"15":1,"35":2,"37":3,"40":2,"41":2,"42":3,"50":1,"51":1,"56":2,"84":1,"91":1,"107":2,"112":1,"149":2,"154":2,"164":12,"201":1}}],["geometrybasics",{"2":{"276":1,"282":1}}],["geointerface",{"2":{"276":1,"282":1}}],["geoformattypes",{"2":{"276":1,"282":1}}],["geoffrey",{"2":{"66":1}}],["gen",{"2":{"274":4}}],["genericbroadcastop",{"2":{"67":1}}],["generic",{"2":{"60":1,"63":2,"65":2,"80":2,"90":1,"114":1,"118":1,"127":1,"147":1,"170":1,"171":1,"172":1,"200":1,"202":1,"203":1,"209":1,"211":1,"212":1,"222":1,"230":1,"231":2,"232":1,"233":1,"241":1,"242":1,"243":1,"244":1,"248":1,"251":2,"256":1,"257":1,"258":1,"259":1,"264":1,"270":1,"271":2,"272":1,"273":1,"278":2,"279":1,"283":5,"285":1,"286":2,"290":1,"291":2,"292":1,"293":1,"294":1}}],["genericlossfunction",{"2":{"50":2,"268":1}}],["generating",{"0":{"277":1},"2":{"42":1}}],["generation",{"2":{"38":3}}],["generator=lux",{"2":{"242":1}}],["generator",{"2":{"15":3,"36":3,"64":4,"85":1,"86":1,"153":1,"187":1,"191":2,"242":4,"243":2,"264":1}}],["generates",{"2":{"90":1,"277":1}}],["generated",{"2":{"15":1,"36":1,"56":1,"64":1,"164":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"262":1,"269":1,"275":1,"279":1,"280":1,"289":1,"297":1}}],["generate",{"0":{"223":1,"252":1},"2":{"3":1,"9":1,"10":1,"38":1,"56":1,"77":1,"147":3,"156":1,"185":2,"196":2,"200":1,"242":1,"252":1,"264":4,"271":1,"273":2,"274":1,"277":2,"291":1}}],["generalization",{"2":{"112":1}}],["generalized",{"2":{"5":1,"86":1}}],["generalises",{"2":{"86":1}}],["general",{"0":{"8":1},"2":{"34":1,"68":1,"77":2,"86":1,"91":1,"147":3,"157":1,"285":1}}]
,["generally",{"2":{"5":1}}],["geared",{"2":{"221":1}}],["gemm",{"2":{"80":3,"86":9,"253":7,"260":113,"295":2}}],["gelu",{"2":{"58":5,"118":12,"119":2,"158":1}}],["gettext",{"2":{"276":1,"282":1}}],["getting",{"0":{"87":1},"1":{"88":1,"89":1,"90":1,"91":1,"92":1,"93":1},"2":{"8":1,"77":1}}],["getaxes",{"2":{"242":1}}],["getindex",{"2":{"224":1,"272":1}}],["getproperty",{"2":{"51":3,"155":2,"237":3}}],["getfield",{"2":{"7":1,"132":3,"155":3}}],["get",{"0":{"101":1},"2":{"3":5,"27":2,"28":2,"29":2,"51":1,"60":1,"61":1,"67":1,"86":1,"89":1,"137":2,"155":1,"169":2,"173":1,"182":1,"186":1,"188":2,"191":2,"200":3,"201":1,"204":2,"209":1,"220":1,"230":1,"241":2,"272":1,"273":2,"278":2,"283":1}}],["g",{"2":{"2":1,"34":4,"36":2,"41":4,"58":3,"81":2,"83":1,"84":2,"86":2,"118":1,"283":1}}],["goal",{"2":{"277":1}}],["goes",{"2":{"84":1}}],["goodies",{"2":{"194":1}}],["good",{"2":{"84":5,"151":1,"165":1,"225":1,"278":1,"287":2}}],["goodfellow",{"2":{"40":1}}],["going",{"2":{"45":1,"167":1}}],["go",{"2":{"2":1,"58":1,"90":1,"101":2,"127":1,"131":1,"142":1,"147":1,"154":1,"155":1,"157":1,"198":1,"247":1}}],["nsteps",{"2":{"293":3}}],["nsamples",{"2":{"293":2}}],["nparameters",{"2":{"278":3}}],["npz",{"2":{"229":1}}],["nb",{"2":{"258":2,"260":2}}],["ndata=gph",{"2":{"257":1}}],["ndim",{"2":{"75":6}}],["ndims",{"2":{"11":1,"19":4,"35":4,"37":9,"40":4,"41":2,"42":2,"76":4,"78":1,"80":4,"86":2,"132":2,"170":2,"171":2,"273":1,"274":1,"292":1,"293":2,"295":1}}],["nvml",{"2":{"238":1,"246":1}}],["nvidia",{"2":{"3":1,"5":1,"69":2,"122":2,"206":2,"214":2,"238":4,"246":4,"255":2,"261":2,"269":2,"275":2,"297":2}}],["nlsolversbase",{"2":{"229":1,"276":1,"282":1}}],["nlayers",{"2":{"56":2,"90":2}}],["nicely",{"2":{"165":1}}],["nilarray",{"2":{"107":1}}],["nitish",{"2":{"64":1}}],["n=4",{"2":{"78":1}}],["n=tanh⁡",{"2":{"38":1}}],["nheads",{"2":{"73":7}}],["nnlibfftwext",{"2":{"263":2,"276":1,"282":2}}],["nnlibforwarddiffext",{"2":{"187":1,"276":1,"282":1}}],["nnlibcudacudnnext",{"2":{"229":2,"2
40":2}}],["nnlibcudaext",{"2":{"229":2,"240":2}}],["nnlibenzymecoreext",{"2":{"187":1,"276":1,"282":1}}],["nnlib",{"0":{"72":1,"86":1},"1":{"73":1,"74":1,"75":1,"76":1,"77":1,"78":1,"79":1,"80":1,"81":1,"82":1,"83":1,"84":1,"85":1,"86":1},"2":{"35":1,"42":6,"58":27,"60":2,"61":2,"72":3,"73":3,"74":2,"75":4,"76":6,"77":15,"78":9,"79":6,"80":6,"81":12,"82":2,"83":1,"84":8,"85":3,"86":50,"114":3,"161":3,"173":2,"174":2,"187":3,"229":2,"240":2,"263":1,"268":2,"276":4,"282":4}}],["nn",{"2":{"35":2,"39":1,"278":12,"279":11,"285":14,"286":1,"288":9}}],["ntuple",{"2":{"34":1,"35":2,"41":2,"75":5,"78":9,"86":1,"132":1,"237":13,"253":1,"292":1,"293":2}}],["n",{"2":{"34":23,"35":27,"37":54,"39":2,"40":11,"41":11,"42":8,"51":2,"56":5,"66":5,"75":3,"76":4,"78":3,"82":7,"86":52,"90":5,"119":1,"196":3,"204":2,"209":4,"212":1,"225":1,"230":4,"231":5,"234":1,"235":5,"241":10,"245":2,"253":1,"260":4,"268":1,"272":2,"274":3,"277":2,"278":3,"279":2,"286":1,"291":5,"292":6,"293":11,"295":8,"296":1}}],["nccl",{"2":{"27":1,"28":4,"139":1}}],["ncclbackend",{"2":{"27":2,"28":2,"137":4}}],["naturalsort",{"2":{"276":1}}],["native",{"2":{"177":1,"221":1}}],["nabla",{"2":{"193":1}}],["naive",{"2":{"86":2}}],["navab",{"2":{"50":1}}],["nassir",{"2":{"50":1}}],["naming",{"2":{"34":9,"40":2}}],["name>=",{"2":{"178":3}}],["namefreezing",{"2":{"144":1}}],["name=",{"2":{"34":1,"56":1}}],["name=nothing",{"2":{"34":11,"56":1}}],["names",{"2":{"22":1,"41":1,"56":1}}],["namedarrays",{"2":{"276":1}}],["nameddimsarray",{"2":{"86":1}}],["named",{"2":{"22":2,"50":1,"66":2,"155":1,"232":2,"234":1,"278":1}}],["namedtuples",{"2":{"155":1}}],["namedtuple",{"2":{"7":4,"8":1,"10":3,"22":5,"23":1,"34":10,"35":2,"37":9,"38":2,"39":4,"40":9,"41":5,"42":1,"49":1,"51":1,"52":1,"56":3,"89":24,"90":1,"107":1,"118":17,"126":3,"127":6,"130":3,"132":3,"133":7,"134":3,"143":3,"146":4,"147":11,"153":4,"154":3,"156":1,"201":2,"237":147,"267":2,"268":2,"278":1,"285":6,"293":2}}],["name",{"0":{"144":1},"2":{"7":2,"8":5,"22":1,"23
":1,"40":1,"56":3,"58":2,"91":1,"114":1,"132":5,"144":5,"145":6,"149":1,"155":1,"245":4}}],["nanmath",{"2":{"187":1,"276":1,"282":1}}],["nans",{"0":{"127":1},"2":{"24":3,"127":7,"128":1}}],["nan",{"2":{"24":5,"86":5,"127":31,"253":1,"295":1}}],["nₙ",{"2":{"19":2}}],["n₂",{"2":{"19":2}}],["n₁",{"2":{"19":2}}],["numer",{"2":{"284":3,"285":3}}],["numerically",{"2":{"58":3,"74":1,"84":1,"203":1}}],["numerical",{"2":{"41":4,"50":1,"66":4,"74":1,"158":1,"278":1}}],["numeric",{"2":{"15":2}}],["numpy",{"2":{"147":1}}],["num",{"2":{"73":1,"77":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"257":1,"261":1,"269":1,"271":32,"273":5,"274":9,"275":1,"279":2,"280":1,"289":1,"297":1}}],["number=0",{"2":{"15":2}}],["number=1",{"2":{"15":2}}],["numbers",{"2":{"15":6,"16":12,"74":1,"79":1,"147":2,"185":1,"188":3}}],["number",{"2":{"9":1,"10":1,"15":8,"29":1,"34":3,"35":10,"36":3,"37":3,"39":7,"40":2,"41":3,"42":3,"49":1,"50":4,"51":3,"52":1,"56":3,"58":1,"64":4,"66":1,"73":3,"74":1,"77":2,"78":1,"85":1,"86":7,"153":1,"187":1,"188":1,"191":2,"192":1,"264":1,"273":1,"277":1,"278":4}}],["nepochs",{"2":{"196":3,"212":3,"234":3,"245":3}}],["next",{"2":{"137":1,"153":1,"278":1,"279":1,"283":1,"285":1,"286":1}}],["neighboring",{"2":{"86":2}}],["neighbour",{"2":{"79":1,"86":1}}],["nest",{"2":{"165":1}}],["nested",{"0":{"20":1,"164":1,"179":1},"1":{"165":1,"166":1,"167":1,"168":1,"169":1,"170":1,"171":1,"172":1},"2":{"3":1,"20":2,"25":2,"52":5,"55":2,"86":2,"121":1,"164":1,"167":2,"179":3,"247":1,"249":1,"254":1}}],["ness",{"2":{"86":1}}],["negative",{"2":{"50":1,"283":1}}],["netpbm",{"2":{"276":1,"282":1}}],["net",{"2":{"50":1,"251":11,"278":1,"285":1,"288":2}}],["network",{"0":{"265":1,"276":1,"278":1,"285":1,"286":1,"287":1},"1":{"277":1,"278":1,"279":1,"280":1},"2":{"15":1,"58":1,"66":1,"72":1,"83":1,"90":1,"98":2,"100":1,"131":1,"165":1,"166":2,"187":1,"198":2,"242":5,"243":3,"265":1,"277":1,"278":6,"279":4,"285":6}}],["networks",{"0":{"250":1,"256":1},"1":{"257":1,"258":1,"25
9":1,"260":1,"261":1},"2":{"15":3,"40":1,"47":1,"50":2,"58":3,"64":3,"71":1,"75":1,"78":1,"84":1,"100":3,"172":1,"207":1,"221":2,"250":2,"278":1}}],["nearest",{"2":{"42":7,"78":9,"79":4,"86":1}}],["necessary",{"2":{"37":3,"153":1}}],["never",{"2":{"21":1,"40":1,"45":1,"80":1,"285":1}}],["neuralpde",{"2":{"247":2}}],["neuralode",{"2":{"231":7,"234":4,"237":5}}],["neuralodecompact",{"2":{"231":2,"234":1,"237":1}}],["neural",{"0":{"228":1,"231":1,"232":1,"236":1,"250":1,"265":1,"276":1,"278":1,"281":1,"285":1,"286":1,"287":1},"1":{"229":1,"230":1,"231":1,"232":1,"233":1,"234":1,"235":1,"236":1,"237":1,"238":1,"277":1,"278":1,"279":1,"280":1,"282":1,"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1},"2":{"15":4,"47":1,"50":1,"55":1,"58":3,"64":4,"71":2,"72":1,"75":1,"83":1,"90":1,"98":2,"100":4,"131":3,"165":3,"166":2,"172":1,"187":1,"198":2,"207":1,"221":5,"228":2,"231":1,"232":1,"265":1,"277":1,"278":7,"279":2,"285":7,"286":1,"288":2}}],["needed",{"2":{"18":2,"19":1,"49":1,"60":2,"83":1,"84":2,"114":1,"139":1,"164":1,"174":1,"185":2}}],["need",{"2":{"6":1,"8":1,"22":1,"25":1,"40":1,"45":1,"56":2,"73":1,"84":1,"86":5,"89":1,"102":1,"114":1,"118":3,"126":1,"130":1,"137":1,"138":2,"153":3,"154":2,"164":1,"174":1,"183":1,"186":1,"188":1,"200":1,"201":1,"210":2,"225":1,"231":1,"243":1,"251":1,"267":1,"268":1,"283":1,"285":1}}],["needs",{"2":{"3":1,"7":1,"10":1,"35":1,"36":1,"53":1,"60":2,"74":1,"132":1,"138":1,"168":1,"181":1}}],["newtonian",{"2":{"283":2,"285":1,"287":1}}],["new",{"0":{"105":1,"108":1,"112":1,"117":1,"236":1},"2":{"1":1,"23":9,"25":5,"36":1,"38":4,"40":5,"49":1,"56":1,"102":1,"104":1,"115":1,"130":1,"138":1,"151":1,"153":1,"154":1,"164":1,"242":2,"278":3}}],["nom",{"2":{"278":1}}],["noisy",{"2":{"200":2}}],["noise=0",{"2":{"291":1}}],["noise",{"2":{"196":1,"291":5,"295":2}}],["now",{"2":{"72":1,"105":1,"112":1,"115":1,"116":4,"117":2,"118":4,"119":1,"126":2,"127":2,"138":1,"140":3,"147":3,"153":1,"154":1,"158":1,"165":1,"166":2,"167":1,"168":1,"172"
:2,"174":1,"187":2,"192":1,"196":1,"201":2,"203":1,"210":1,"213":1,"225":1,"268":1,"276":1,"278":2,"279":1,"283":1,"285":2,"286":1,"288":1}}],["norm",{"2":{"41":1,"75":2,"165":7,"166":7,"167":6,"168":6,"172":12}}],["normally",{"2":{"118":1}}],["normalizing",{"0":{"290":1},"1":{"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1},"2":{"58":1,"64":1}}],["normalize",{"2":{"41":2,"86":1}}],["normalized",{"2":{"41":7,"50":3,"66":8,"82":1,"86":9,"254":2}}],["normalization",{"0":{"41":1,"66":1},"2":{"15":1,"19":1,"41":11,"66":12,"166":1}}],["normalises",{"2":{"41":2,"66":2}}],["normal",{"2":{"13":2,"15":12,"16":6,"77":1,"86":3,"147":11,"185":6,"271":2,"278":1,"285":3}}],["nooplayer",{"2":{"34":6,"36":3,"40":3}}],["nonunitary",{"2":{"86":2}}],["nonzero",{"2":{"86":2}}],["nonlinear",{"2":{"15":1}}],["non",{"0":{"135":1},"2":{"15":2,"22":1,"24":1,"28":2,"39":1,"49":2,"50":1,"56":9,"58":3,"63":1,"65":1,"78":2,"79":1,"84":1,"107":1,"110":1,"122":2,"125":1,"140":1,"174":2,"220":1,"231":2,"283":1}}],["nonetheless",{"2":{"2":1}}],["none",{"2":{"2":1,"21":1,"22":1,"24":1,"54":1,"73":1,"86":1,"127":1}}],["nodes",{"2":{"257":1}}],["node",{"2":{"3":1,"257":3}}],["notangent",{"2":{"96":1,"127":1}}],["notice",{"2":{"81":2,"86":1,"153":1,"154":1,"166":2,"234":1}}],["notion",{"2":{"52":1}}],["not",{"2":{"2":2,"3":4,"4":2,"6":1,"7":1,"8":1,"15":2,"19":1,"24":1,"25":1,"27":2,"34":1,"38":11,"40":1,"41":1,"47":3,"49":1,"51":1,"54":1,"55":1,"56":1,"62":1,"63":1,"64":1,"66":3,"74":1,"76":8,"77":3,"78":1,"79":1,"80":3,"83":2,"84":5,"85":1,"86":12,"89":1,"94":1,"97":1,"100":1,"107":2,"114":1,"120":1,"122":1,"123":1,"124":4,"126":1,"127":1,"137":1,"140":1,"149":1,"151":1,"163":1,"164":1,"165":5,"169":1,"172":1,"174":3,"184":1,"186":1,"213":2,"220":1,"225":1,"231":1,"234":2,"236":1,"237":2,"253":7,"260":102,"265":1,"278":1,"279":1,"283":1,"295":2}}],["notes",{"2":{"63":1,"65":1,"96":1,"194":1,"228":1}}],["note",{"2":{"2":1,"3":4,"15":1,"22":1,"26":1,"40":1,"49":1,"50":1,"52":2,"55":1,"56":1,"58
":1,"60":2,"71":1,"74":1,"77":1,"86":6,"118":1,"119":1,"124":2,"126":1,"137":1,"147":1,"153":1,"154":1,"159":1,"164":1,"167":1,"177":1,"190":1,"193":1,"205":1,"231":1,"237":1,"276":1}}],["nothing",{"2":{"2":4,"3":5,"4":1,"8":2,"22":5,"27":3,"28":1,"34":4,"35":4,"36":1,"38":11,"39":4,"40":10,"41":5,"49":2,"50":12,"51":1,"52":2,"55":1,"62":1,"63":1,"64":2,"65":1,"66":15,"73":4,"86":7,"89":27,"96":1,"118":24,"127":1,"133":4,"165":2,"167":2,"168":2,"208":1,"209":2,"223":1,"224":1,"225":2,"230":2,"237":117,"241":6,"268":12,"273":1,"274":3,"278":1,"285":1,"291":3}}],["no",{"2":{"2":2,"4":2,"5":1,"21":1,"22":3,"40":1,"47":1,"49":1,"50":2,"54":1,"57":1,"73":1,"86":3,"89":1,"95":1,"100":1,"109":1,"112":1,"114":1,"119":1,"126":1,"127":1,"133":2,"139":3,"149":2,"150":1,"153":1}}],["u=u",{"2":{"251":1}}],["uris",{"2":{"229":1}}],["url",{"2":{"71":1}}],["url=",{"2":{"68":1}}],["u0=res",{"2":{"225":1}}],["u0",{"2":{"223":2,"225":2,"226":1,"284":2,"285":2,"286":1,"288":2}}],["utc",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["utility",{"0":{"233":1,"244":1,"283":1}}],["utilities",{"0":{"48":1},"1":{"49":1,"50":1,"51":1,"52":1,"53":1,"54":1,"55":1,"56":1,"57":1},"2":{"94":1,"174":1}}],["utils",{"0":{"26":1},"1":{"27":1,"28":1,"29":1,"30":1,"31":1,"32":1},"2":{"22":2,"52":1,"84":1,"165":4,"260":2,"274":2}}],["uint8",{"2":{"86":2}}],["ulyanov",{"2":{"41":1,"66":1}}],["u",{"2":{"18":7,"223":4,"225":12,"226":6,"231":7,"235":2,"250":4,"251":11,"253":4,"254":7,"284":3,"285":4,"287":1,"288":2}}],["upchain",{"2":{"271":1}}],["upchain=chain",{"2":{"271":1}}],["upto",{"2":{"249":1}}],["up",{"0":{"286":1},"2":{"51":1,"78":1,"81":2,"86":1,"169":1,"172":1,"220":2,"228":1}}],["upsamples",{"2":{"78":4}}],["upsampled",{"2":{"42":1,"78":4}}],["upsample",{"2":{"42":5,"78":22,"116":1,"271":3}}],["upsampling",{"0":{"42":1,"78":1},"2":{"42":2}}],["upscaling",{"2":{"42":1,"78":2}}],["upscale",{"2":{"42":2}}],["updating",{"0":{"53":1,"1
02":1},"1":{"103":1,"104":1,"105":1,"106":1,"107":1,"108":1,"109":1,"110":1,"111":1,"112":1,"113":1,"114":1,"115":1,"116":1,"117":1},"2":{"31":1,"66":2,"88":1,"196":1}}],["updated",{"2":{"25":1,"34":6,"36":3,"38":7,"40":1,"41":1,"49":5,"50":1,"62":1,"64":2,"66":2,"90":1,"99":1,"100":1,"116":1,"140":1,"153":1,"201":1,"220":1,"267":1}}],["updates",{"2":{"25":1,"41":1,"49":3,"110":1,"137":1}}],["update",{"2":{"10":2,"36":3,"38":5,"41":3,"49":2,"53":1,"56":1,"62":1,"64":2,"88":2,"102":1,"190":1}}],["upon",{"2":{"5":1,"86":5}}],["uncertain",{"2":{"279":1}}],["unchanged",{"2":{"34":1}}],["unable",{"2":{"190":1}}],["unnecessary",{"2":{"166":1}}],["unnormalized",{"2":{"41":1}}],["untrained",{"2":{"285":2,"288":1}}],["unthunk",{"2":{"127":1}}],["until",{"2":{"36":1}}],["un",{"2":{"114":1}}],["unexpected",{"2":{"100":1,"253":7,"260":102,"295":2}}],["unpadded",{"2":{"86":1}}],["unpack",{"2":{"8":1,"276":1,"282":1}}],["unfold",{"2":{"77":14}}],["unfreezes",{"2":{"22":1}}],["unfreeze",{"2":{"22":2}}],["unwrapped",{"2":{"52":1}}],["unwrap",{"2":{"52":2}}],["unwraps",{"2":{"22":1}}],["unreleased",{"2":{"68":1}}],["unreasonably",{"2":{"34":1}}],["unroll",{"2":{"60":1}}],["unrolls",{"2":{"34":1}}],["undone",{"2":{"25":1}}],["undef",{"2":{"82":1,"191":1}}],["undefined",{"2":{"8":1,"38":1}}],["understood",{"2":{"80":2,"86":2}}],["understands",{"2":{"86":1}}],["understandable",{"2":{"86":2}}],["understand",{"2":{"56":1,"151":1,"201":1,"228":1,"231":1}}],["understanding",{"2":{"15":2,"242":1}}],["underlying",{"2":{"80":1,"231":1}}],["under",{"2":{"58":1,"174":1}}],["undesirable",{"2":{"11":1}}],["unlike",{"2":{"16":1,"56":1,"80":1}}],["unless",{"2":{"2":1,"24":1,"58":1,"86":1,"242":1}}],["unified",{"2":{"221":1}}],["uniformly",{"2":{"137":1}}],["uniform",{"2":{"15":6,"16":6,"35":5,"38":6,"39":4,"58":1,"89":1,"153":3}}],["uninitiated",{"0":{"187":1},"1":{"188":1,"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1}}],["unicodefun",{"2":{"276":1,"282":1}}],["unicodeplot
s",{"2":{"58":2}}],["unicode",{"2":{"58":2}}],["universal",{"2":{"40":1}}],["unity",{"2":{"283":1}}],["unitfulext",{"2":{"276":2,"282":2}}],["unitful",{"2":{"276":3,"282":3}}],["unitfulatomic",{"2":{"229":1}}],["unitrange",{"2":{"76":6,"190":1}}],["units",{"2":{"58":4,"86":1}}],["unit",{"2":{"38":1,"58":5,"84":1}}],["union",{"2":{"2":3,"8":1,"15":2,"24":2,"38":2,"41":1,"47":1,"50":4,"64":1,"75":1,"86":2,"241":2,"272":1,"291":2}}],["unsupported",{"2":{"8":1,"13":1}}],["unsafeatomicsllvm",{"2":{"192":1,"282":1}}],["unsafeatomics",{"2":{"187":1,"192":1,"276":1,"282":2}}],["unsafe",{"2":{"5":1}}],["unknown",{"2":{"3":2}}],["unknowndevice",{"2":{"3":2}}],["usr",{"2":{"206":2,"214":2,"238":2,"246":2,"255":2,"261":2,"269":2,"275":2,"297":2}}],["usacases",{"2":{"122":3}}],["usage",{"2":{"22":1,"86":1,"101":1,"124":1}}],["usually",{"2":{"42":1,"50":2,"56":1,"58":1,"188":1,"225":1}}],["usual",{"2":{"41":3,"168":1}}],["us",{"2":{"18":2,"21":1,"125":1,"126":1,"127":6,"154":2,"155":3,"164":1,"174":1,"187":1,"191":1,"194":1,"195":1,"196":3,"200":1,"236":1,"285":1,"286":1,"288":2}}],["usecases",{"2":{"52":1,"55":1,"247":1}}],["uses",{"2":{"41":1,"45":1,"52":4,"53":1,"55":1,"63":1,"65":2,"77":3,"80":1,"82":1,"86":3,"107":1,"158":1,"161":2,"164":1,"167":1,"178":1,"191":1,"221":1,"283":2,"284":1,"285":1,"287":1}}],["userbase",{"2":{"45":1}}],["user",{"2":{"7":2,"8":2,"34":1,"55":1,"60":2,"86":3,"98":1,"130":1,"167":1,"174":1,"176":2}}],["users",{"2":{"6":1,"7":2,"22":1,"27":2,"28":1,"49":1,"52":6,"62":1,"98":1,"100":1,"107":1,"110":1,"147":1,"158":1,"190":1,"228":2,"231":1,"237":1}}],["useful",{"2":{"3":1,"15":3,"24":2,"50":1,"51":1,"54":1,"56":2,"67":1,"71":1,"77":1,"84":1,"86":1,"119":2,"125":1,"126":1}}],["use",{"0":{"100":1,"177":1},"2":{"2":2,"4":2,"5":1,"15":1,"18":2,"19":1,"21":1,"22":2,"24":1,"27":4,"35":11,"38":18,"39":16,"41":3,"49":4,"52":5,"56":1,"60":3,"62":1,"64":1,"65":2,"67":1,"68":1,"71":1,"77":1,"82":4,"84":5,"85":1,"86":3,"89":2,"90":2,"94":2,"100":1,"101":2,"110":
1,"114":2,"115":4,"116":1,"117":1,"118":3,"119":2,"122":6,"124":2,"125":1,"126":1,"137":2,"141":1,"143":1,"147":2,"148":2,"149":1,"158":2,"161":1,"162":1,"164":3,"165":1,"166":1,"167":2,"169":2,"177":1,"180":1,"181":2,"184":2,"186":1,"187":1,"188":1,"191":1,"192":1,"193":1,"194":2,"196":3,"200":2,"201":3,"203":2,"207":2,"209":1,"228":1,"230":1,"231":1,"232":2,"234":2,"247":1,"251":1,"257":2,"260":2,"266":1,"267":1,"268":2,"276":1,"277":1,"278":5,"279":1,"283":1,"285":5,"290":1,"291":1}}],["used",{"2":{"2":4,"3":6,"8":2,"15":1,"24":2,"25":2,"32":1,"35":3,"36":3,"38":1,"39":1,"40":1,"41":7,"42":1,"45":1,"47":2,"49":4,"50":5,"52":1,"53":1,"55":1,"56":11,"64":3,"73":2,"74":2,"75":1,"77":1,"78":2,"80":1,"81":2,"83":1,"84":2,"85":1,"86":9,"107":1,"111":1,"114":1,"115":1,"117":1,"118":1,"140":1,"149":2,"158":1,"164":1,"165":2,"183":1,"184":1,"186":1,"188":1,"235":1,"260":1,"274":1,"281":1}}],["using",{"0":{"46":1,"95":1,"118":1,"119":1,"135":1,"166":1,"170":1,"171":1,"172":1,"202":1,"221":1,"228":1,"235":1,"262":1,"270":1},"1":{"47":1,"119":1,"222":1,"223":1,"224":1,"225":1,"226":1,"227":1,"229":1,"230":1,"231":1,"232":1,"233":1,"234":1,"235":1,"236":1,"237":1,"238":1,"263":1,"264":1,"265":1,"266":1,"267":1,"268":1,"269":1,"271":1,"272":1,"273":1,"274":1,"275":1},"2":{"2":1,"3":1,"4":2,"5":2,"7":2,"15":2,"18":2,"22":1,"23":1,"30":1,"31":1,"32":1,"34":11,"38":6,"39":1,"40":2,"41":1,"45":3,"47":4,"49":7,"50":4,"52":1,"53":1,"56":9,"58":2,"67":3,"68":2,"74":1,"77":1,"78":6,"80":2,"84":1,"86":6,"88":1,"89":2,"90":5,"92":1,"95":2,"96":1,"98":1,"99":1,"100":2,"107":1,"112":1,"117":1,"118":8,"119":2,"124":2,"126":1,"136":1,"140":1,"147":2,"148":2,"150":2,"151":2,"155":2,"158":1,"159":1,"164":4,"165":6,"166":1,"167":1,"168":2,"169":5,"172":14,"173":2,"176":1,"177":1,"180":1,"187":1,"188":2,"191":1,"194":2,"195":1,"196":1,"197":1,"198":2,"200":1,"202":1,"205":1,"206":1,"207":1,"208":2,"213":1,"214":1,"221":1,"224":1,"225":1,"227":1,"229":2,"231":1,"235":1,"237":2,"238":1,"246":1,"2
47":3,"249":1,"250":1,"251":1,"255":1,"256":1,"260":2,"261":1,"265":1,"268":1,"269":1,"270":1,"274":2,"275":1,"276":2,"278":1,"279":1,"280":1,"282":1,"284":1,"285":1,"289":1,"297":1}}],["wu",{"2":{"66":1}}],["w3",{"2":{"56":3,"90":3}}],["w3=dense",{"2":{"56":1,"90":1}}],["w2",{"2":{"56":3,"90":3}}],["w2=",{"2":{"56":1,"90":1}}],["w1",{"2":{"56":3,"90":3}}],["w1=dense",{"2":{"56":1,"90":1}}],["w=w",{"2":{"251":1}}],["w=",{"2":{"56":1}}],["w=rand",{"2":{"56":1}}],["w=ones",{"2":{"56":3}}],["w=gv∥v∥weight",{"2":{"41":1}}],["w",{"2":{"37":6,"39":1,"42":4,"56":21,"77":11,"78":4,"82":7,"86":22,"90":4,"147":1,"155":2,"161":1,"188":1,"196":5,"209":1,"230":1,"250":4,"251":10}}],["wrote",{"0":{"98":1},"1":{"99":1,"100":1},"2":{"220":1}}],["wrong",{"2":{"58":1,"84":2}}],["write",{"2":{"147":1,"188":2,"190":1,"196":1,"242":1}}],["writes",{"2":{"81":3}}],["writing",{"0":{"134":1},"2":{"114":1,"135":1,"151":1}}],["written",{"2":{"45":1}}],["wrapping",{"2":{"76":1,"107":1}}],["wrapper",{"2":{"18":2,"24":1,"38":1,"55":1,"61":1,"80":1,"86":1,"154":2,"168":1,"250":1,"268":1}}],["wrappers",{"0":{"18":1},"2":{"80":2,"86":5}}],["wrappedlayer",{"2":{"36":1}}],["wrappedfunction",{"2":{"35":1,"40":2,"127":3,"237":9,"285":1}}],["wrapped",{"2":{"7":3,"24":1,"34":8,"40":2,"154":1}}],["wrap",{"2":{"31":1,"125":1,"137":1,"154":1,"224":1}}],["wraps",{"2":{"28":2,"38":2,"40":1,"47":1}}],["wrt",{"2":{"18":2,"49":3,"50":2,"51":1,"96":1,"138":1,"168":2,"251":1}}],["wikipedia",{"2":{"86":3}}],["wihch",{"2":{"86":1}}],["wide",{"2":{"192":1}}],["wider",{"2":{"100":1,"105":1,"109":1,"194":1}}],["widest",{"2":{"84":1}}],["widely",{"2":{"80":2,"86":2}}],["width=30",{"2":{"86":2}}],["width",{"2":{"35":1,"273":6}}],["wise",{"2":{"58":1}}],["wio",{"2":{"38":1}}],["wio×x+who×hprev+bo",{"2":{"38":1}}],["wig",{"2":{"38":1}}],["wig×x+whg×hprev+bg",{"2":{"38":1}}],["wif",{"2":{"38":1}}],["wif×x+whf×hprev+bf",{"2":{"38":1}}],["wii",{"2":{"38":1}}],["wii×x+whi×hprev+bi",{"2":{"38":1}}],["wiz",{"2":{"38":1}}],["wiz×
x+biz+whz×hprev+bhz",{"2":{"38":1}}],["wir",{"2":{"38":1}}],["wir×x+bir+whr×hprev+bhr",{"2":{"38":1}}],["win",{"2":{"38":1}}],["win×x+bin+r⋅",{"2":{"38":1}}],["windows",{"2":{"77":5,"86":3}}],["window",{"2":{"37":18,"75":4,"77":3,"86":53,"147":10}}],["will",{"2":{"3":2,"5":1,"11":1,"15":3,"19":4,"21":1,"24":2,"25":3,"30":2,"34":5,"36":2,"38":3,"40":1,"41":3,"45":4,"47":1,"49":1,"52":4,"54":4,"55":1,"56":8,"66":1,"68":1,"73":1,"74":2,"77":1,"80":4,"81":5,"83":1,"84":3,"86":5,"89":1,"90":1,"95":2,"96":1,"97":4,"107":1,"118":5,"119":2,"125":1,"126":1,"127":2,"131":1,"142":1,"147":2,"151":1,"154":1,"158":2,"160":1,"164":2,"165":3,"167":1,"169":2,"170":1,"173":1,"174":2,"184":1,"186":1,"188":1,"189":1,"190":1,"191":2,"192":4,"193":1,"194":1,"196":2,"198":2,"200":3,"201":6,"203":1,"207":2,"213":2,"220":1,"221":2,"224":4,"225":1,"231":1,"234":1,"247":3,"249":1,"250":1,"251":2,"252":3,"262":1,"266":1,"267":1,"268":2,"271":1,"276":2,"278":4,"285":1}}],["withgradient",{"2":{"56":1,"84":2}}],["within",{"2":{"56":1,"64":2,"66":2,"84":6,"86":9,"165":2,"260":1,"274":1}}],["without",{"2":{"2":1,"6":1,"56":1,"84":1,"86":1,"100":2,"107":2,"118":1,"119":1,"122":2,"151":1,"165":2,"190":1,"201":1,"225":2}}],["with",{"0":{"207":1},"1":{"208":1,"209":1,"210":1,"211":1,"212":1,"213":1,"214":1},"2":{"1":1,"2":1,"3":1,"5":2,"6":1,"7":2,"8":1,"10":2,"11":1,"15":5,"19":2,"22":5,"23":2,"24":3,"25":1,"28":2,"31":1,"32":2,"34":14,"35":11,"36":6,"37":12,"38":10,"39":12,"40":5,"41":3,"42":5,"45":1,"47":1,"49":2,"50":6,"51":2,"52":2,"53":5,"54":1,"56":3,"58":2,"60":7,"62":1,"63":2,"65":1,"67":1,"72":1,"74":4,"75":6,"76":10,"77":3,"78":5,"79":3,"80":3,"83":1,"84":4,"85":3,"86":20,"90":4,"95":2,"96":1,"98":1,"100":7,"104":1,"105":1,"107":2,"114":3,"116":1,"117":1,"118":2,"119":3,"120":2,"122":1,"123":2,"124":2,"125":3,"126":1,"127":1,"135":2,"137":2,"139":2,"140":1,"141":1,"147":4,"149":1,"151":1,"153":1,"154":1,"155":6,"158":1,"160":1,"161":6,"164":3,"165":1,"166":4,"167":1,"170":1,"171":1,"172":1,"
173":1,"180":1,"184":2,"185":2,"186":2,"187":1,"188":5,"189":2,"190":1,"191":2,"192":1,"194":2,"196":4,"198":1,"200":1,"202":1,"203":1,"204":2,"209":1,"211":1,"212":1,"213":1,"220":1,"221":2,"222":1,"225":1,"230":1,"231":3,"232":1,"233":1,"241":1,"242":2,"243":1,"244":1,"248":1,"249":1,"251":2,"253":1,"256":1,"257":2,"258":1,"259":1,"260":1,"264":1,"270":1,"271":2,"272":1,"273":1,"274":1,"276":2,"277":1,"278":6,"279":2,"283":5,"285":2,"286":2,"290":1,"291":2,"292":1,"293":1,"294":1,"295":1}}],["why",{"0":{"98":1,"100":1},"1":{"99":1,"100":1},"2":{"153":1}}],["what",{"2":{"84":1,"86":2,"126":1,"127":1,"153":1,"155":1,"168":1,"176":1,"182":1,"188":2}}],["whatever",{"2":{"40":1,"49":1,"86":1}}],["whole",{"2":{"41":2,"74":1,"85":1}}],["whose",{"2":{"39":1,"41":1,"84":1}}],["who",{"2":{"38":1}}],["whg",{"2":{"38":1}}],["whf",{"2":{"38":1}}],["whn",{"2":{"38":1}}],["whn×hprev+bhn",{"2":{"38":1}}],["whz",{"2":{"38":1}}],["whr",{"2":{"38":1}}],["whcn",{"2":{"35":1,"36":2,"41":3}}],["whi",{"2":{"38":1}}],["while",{"2":{"3":1,"23":1,"34":1,"42":1,"47":1,"55":1,"86":1,"100":1,"119":1,"145":1,"147":1,"151":1,"153":1,"158":1,"194":1,"256":1}}],["which",{"0":{"130":1},"2":{"2":1,"3":1,"7":1,"15":1,"21":1,"22":14,"25":1,"34":3,"35":1,"37":3,"38":1,"40":3,"41":4,"49":1,"50":5,"55":1,"56":5,"58":3,"66":1,"74":1,"78":1,"79":1,"81":1,"82":1,"84":4,"85":1,"86":5,"100":2,"101":1,"116":1,"117":1,"125":2,"151":1,"153":1,"154":2,"155":1,"158":1,"169":1,"176":1,"187":1,"188":2,"190":1,"196":1,"202":1,"225":1,"231":1,"235":1,"268":1,"276":1,"278":1,"281":1,"284":1,"285":1}}],["whether",{"2":{"24":2,"36":1,"47":1,"86":6,"200":1}}],["whereas",{"2":{"35":1,"77":1,"86":1}}],["where",{"2":{"3":1,"15":7,"23":1,"24":1,"35":5,"37":6,"38":6,"39":4,"40":3,"41":3,"42":1,"50":4,"55":1,"58":2,"60":2,"63":1,"65":1,"66":1,"67":1,"75":1,"78":7,"80":1,"82":2,"83":1,"84":2,"86":15,"124":1,"127":2,"162":1,"174":1,"193":1,"201":1,"202":1,"241":1,"278":2,"279":2,"283":3,"291":1,"292":2,"293":3}}],["whenever",{"2
":{"80":1}}],["when",{"2":{"3":2,"15":2,"35":1,"39":1,"40":1,"42":1,"47":2,"50":2,"51":1,"56":2,"63":1,"74":1,"80":2,"84":5,"86":2,"95":1,"119":1,"122":1,"123":1,"124":2,"139":1,"144":1,"153":2,"159":1,"164":1,"182":1,"192":2,"201":1,"213":1,"231":1,"283":1,"286":1}}],["woodburymatrices",{"2":{"276":1,"282":1}}],["world",{"2":{"225":1}}],["worthwhile",{"2":{"100":1}}],["word",{"2":{"39":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["words",{"2":{"35":1}}],["workerutilities",{"2":{"229":1}}],["worker",{"2":{"163":1}}],["workers",{"2":{"29":3,"30":5}}],["worked",{"2":{"105":1,"167":1}}],["working",{"2":{"24":1,"120":1,"122":2,"124":1,"140":1,"141":1,"149":1,"186":1,"277":1}}],["work",{"2":{"2":1,"5":1,"22":1,"40":1,"42":1,"71":1,"84":2,"123":1,"124":1,"128":1,"130":1,"137":1,"140":1,"151":1,"164":1,"174":1,"184":1,"188":2,"193":1,"204":2,"260":1}}],["workspace",{"2":{"86":1}}],["works",{"2":{"2":1,"24":1,"42":2,"45":1,"52":1,"55":1,"81":1,"86":1,"100":1,"112":1,"124":1,"135":1,"139":1,"153":2,"155":1,"163":1,"164":1,"165":1,"188":1,"231":1}}],["would",{"2":{"7":2,"8":1,"35":2,"40":3,"79":2,"80":1,"86":4,"98":1,"100":1,"107":1,"118":2,"126":1,"144":1,"174":1}}],["wondered",{"2":{"100":1}}],["won",{"2":{"2":1,"8":1,"56":1,"110":1,"131":1,"201":1,"221":1}}],["wall",{"2":{"278":1}}],["walk",{"2":{"86":1}}],["wan",{"2":{"164":1}}],["wanted",{"2":{"100":1}}],["wants",{"2":{"86":2,"176":1}}],["want",{"0":{"69":1,"70":1},"2":{"8":2,"49":2,"68":1,"90":1,"119":1,"131":1,"144":1,"151":1,"165":1,"168":1,"188":1,"194":1,"221":1,"224":1,"252":1}}],["waveforms",{"0":{"281":1},"1":{"282":1,"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1}}],["waveform",{"2":{"86":3,"283":5,"284":7,"285":9,"286":4,"288":12}}],["warmup",{"2":{"286":1}}],["warntype",{"2":{"237":3}}],["warn",{"2":{"54":3,"57":1}}],["warning",{"2":{"2":3,"3":1,"4":2,"8":1,"18":2,"35":1,"41":1,"45":3,"47":2,"49":1,"50":1,"52":5,"53":1,"54":5,"55":1,"63"
:1,"84":1,"86":1,"94":1,"142":1,"149":1,"164":1,"165":2,"204":2,"219":1,"220":1,"260":13,"274":1}}],["warde",{"2":{"40":1}}],["way",{"2":{"34":1,"45":3,"47":2,"58":3,"64":2,"74":1,"84":1,"86":2,"90":1,"100":1,"125":1,"151":2,"170":1,"178":1,"202":2,"278":1}}],["wasteful",{"2":{"153":1}}],["was",{"2":{"2":1,"5":3,"7":1,"11":1,"15":1,"25":1,"35":1,"45":1,"78":1,"86":4,"107":3,"109":1,"110":1,"116":1,"118":1,"139":2,"158":2,"197":1,"206":1,"214":1,"225":1,"227":1,"231":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["webp",{"2":{"276":1,"282":1}}],["welcome",{"2":{"164":1}}],["well",{"2":{"47":1,"50":1,"98":1,"118":1,"120":1,"122":2,"165":1,"166":1,"187":1}}],["weird",{"2":{"153":1,"155":2}}],["weight6",{"2":{"147":12}}],["weight3",{"2":{"147":4}}],["weight1",{"2":{"147":4}}],["weightnorm",{"2":{"41":4,"100":1,"153":1}}],["weighting",{"2":{"50":1}}],["weightinitializerscudaext",{"2":{"229":2,"240":2}}],["weightinitializerschainrulescoreext",{"2":{"187":1,"276":1,"282":1}}],["weightinitializersgpuarraysext",{"2":{"192":1,"229":1,"282":1}}],["weightinitializers",{"0":{"12":1,"109":1},"1":{"13":1,"14":1,"15":1,"16":1},"2":{"3":1,"15":8,"16":24,"153":3,"185":2,"187":2,"192":1,"229":2,"240":1,"276":2,"282":3,"285":1}}],["weightih×x+biasih+weighthh×hprev+biashh",{"2":{"38":1}}],["weights",{"0":{"185":1},"1":{"186":1},"2":{"38":4,"50":1,"86":2,"147":1,"185":10,"186":5,"242":1,"278":2,"279":3}}],["weight=truncated",{"2":{"285":3}}],["weight=l",{"2":{"153":1}}],["weight=randn32",{"2":{"116":1}}],["weight=rand32",{"2":{"39":1,"116":1}}],["weight=ps",{"2":{"56":1}}],["weight=ones32",{"2":{"39":1}}],["weight=glorot",{"2":{"35":1,"153":1}}],["weight=nothing",{"2":{"35":1,"38":3,"39":3}}],["weight=zero",{"2":{"23":1}}],["weight",{"2":{"12":1,"15":3,"22":3,"23":4,"25":4,"35":6,"38":12,"39":24,"41":5,"63":4,"65":4,"84":1,"89":15,"118":15,"127":4,"133":2,"143":4,"144":2,"145":4,"147":5,"153":6,"155":4,"165":2,"167":4,"168":4,"185":1,"196":1,"237":30,"242":
5,"243":2,"260":2,"267":2,"274":1,"278":1,"285":3,"287":3}}],["weakrefstrings",{"2":{"229":1}}],["weak",{"2":{"94":1}}],["weren",{"2":{"114":1}}],["were",{"2":{"11":1,"102":1,"109":1,"114":2,"164":1}}],["we",{"0":{"98":1,"177":1},"1":{"99":1,"100":1},"2":{"2":4,"3":3,"7":2,"8":4,"21":2,"28":1,"34":3,"35":4,"38":6,"39":4,"45":1,"49":2,"50":2,"52":5,"54":1,"56":3,"60":2,"63":2,"65":3,"86":13,"89":4,"90":3,"94":1,"100":4,"102":2,"107":1,"109":1,"110":2,"114":1,"115":1,"116":1,"118":12,"119":1,"120":1,"122":2,"123":3,"124":4,"126":3,"127":6,"128":2,"130":1,"131":3,"135":1,"138":1,"140":3,"141":2,"142":1,"143":1,"145":1,"147":6,"151":1,"153":5,"154":4,"155":4,"156":1,"158":3,"162":1,"163":1,"164":3,"165":2,"166":3,"167":3,"168":3,"169":3,"172":2,"173":2,"174":2,"177":1,"187":1,"188":11,"190":4,"191":9,"192":6,"194":2,"196":3,"198":1,"200":3,"201":11,"202":2,"203":1,"204":1,"205":3,"207":2,"210":2,"213":3,"220":1,"221":4,"224":4,"225":5,"228":1,"231":3,"234":1,"236":1,"237":2,"242":2,"247":3,"249":2,"250":2,"251":3,"252":3,"256":2,"257":2,"262":1,"266":1,"267":2,"268":5,"271":2,"276":3,"277":1,"278":8,"279":7,"283":3,"285":9,"286":1,"291":2}}],[">randn32",{"2":{"56":2}}],[">",{"2":{"2":3,"3":8,"8":2,"15":8,"16":24,"38":1,"50":3,"56":2,"58":3,"80":2,"81":1,"84":2,"86":3,"118":1,"127":2,"132":1,"147":26,"165":2,"166":2,"167":2,"168":2,"172":1,"173":2,"174":4,"203":1,"253":2,"264":1,"268":1,"283":1,"287":1,"292":2}}],["cvae",{"2":{"270":2,"271":18,"274":9}}],["cvpr",{"2":{"50":1,"78":1}}],["cnt",{"2":{"260":4}}],["cnew",{"2":{"38":2}}],["cnew=f⋅cprev+i⋅ghnew=o⋅tanh",{"2":{"38":1}}],["csv",{"2":{"229":1}}],["cst",{"2":{"147":15}}],["cycle",{"2":{"165":2,"253":2,"295":1}}],["cdev",{"2":{"149":2,"150":1,"204":3,"222":1,"226":1,"248":1,"253":2,"256":1,"267":1,"268":1,"270":1,"290":1}}],["cdims",{"2":{"63":3,"86":19}}],["cc",{"2":{"90":1,"204":1,"213":1,"253":78,"260":1139,"274":1,"295":19}}],["ctc",{"2":{"83":1}}],["cimg",{"2":{"273":2}}],["circ",{"2":{"291":8}}],["circle",{"2":
{"284":1,"285":2,"288":4}}],["circbuff",{"2":{"253":1}}],["circumvented",{"2":{"150":1}}],["circularly",{"2":{"76":1}}],["circular",{"2":{"15":1,"35":1,"76":9}}],["ci",{"2":{"100":1,"209":1,"219":1,"230":1,"241":2,"272":2}}],["cite",{"2":{"71":2}}],["citation",{"0":{"71":1}}],["cifar",{"2":{"58":1}}],["c",{"2":{"23":3,"35":4,"37":21,"38":1,"42":5,"77":4,"78":2,"80":13,"82":4,"86":11,"201":2,"209":1,"230":1,"237":9,"279":2,"283":4}}],["cluster",{"2":{"279":1}}],["clamp",{"2":{"271":1}}],["classifying",{"2":{"279":1}}],["classify",{"2":{"198":1,"277":1}}],["classifier=st",{"2":{"201":1}}],["classifier",{"0":{"201":1},"2":{"201":11,"202":3,"278":1}}],["classifiers",{"2":{"50":2}}],["classified",{"2":{"50":1}}],["classification",{"0":{"207":1,"228":1},"1":{"208":1,"209":1,"210":1,"211":1,"212":1,"213":1,"214":1,"229":1,"230":1,"231":1,"232":1,"233":1,"234":1,"235":1,"236":1,"237":1,"238":1},"2":{"15":2,"50":2,"83":2,"276":1}}],["classic",{"2":{"58":2}}],["classes",{"2":{"50":1,"83":1,"257":1}}],["class",{"0":{"123":1},"2":{"50":1,"83":1,"121":1,"124":2,"211":5,"233":5,"244":5,"276":1}}],["clockwise",{"2":{"198":1,"200":4}}],["closeopenintervals",{"2":{"187":1,"276":1,"282":1}}],["closest",{"2":{"133":1}}],["closures",{"2":{"56":1}}],["cl",{"2":{"154":3,"185":4}}],["clear",{"2":{"25":1,"127":1}}],["client",{"2":{"2":3,"208":1}}],["client=missing",{"2":{"2":1}}],["cenum",{"2":{"187":1,"276":1,"282":1}}],["centered",{"2":{"86":2}}],["center=",{"2":{"79":1}}],["center=size",{"2":{"79":4}}],["center",{"2":{"79":7,"82":1,"86":12}}],["central",{"2":{"15":1,"86":1,"237":13}}],["celu",{"2":{"58":4}}],["cell=st",{"2":{"201":1}}],["cell`",{"2":{"201":1}}],["cells",{"2":{"38":1,"116":1}}],["cell",{"2":{"38":45,"201":10,"202":4}}],["certain",{"0":{"175":1},"1":{"176":1},"2":{"7":1,"8":2,"36":2,"51":1,"60":1,"86":1,"98":1,"107":1,"114":1,"137":1,"141":1,"142":1,"158":1,"184":1,"231":1}}],["cairo",{"2":{"263":1,"276":2,"282":2}}],["cairomakie",{"2":{"222":1,"223":1,"226":1,"248":1,"25
4":2,"263":3,"264":1,"268":1,"276":3,"277":1,"282":3,"284":1,"285":1,"288":2,"290":1,"291":1}}],["cassette",{"2":{"229":1,"282":1}}],["case",{"2":{"22":1,"38":19,"54":1,"76":4,"80":1,"84":1,"86":2,"139":1,"153":1,"165":1,"177":1,"196":1,"225":1,"242":1,"279":1,"285":1}}],["cases",{"2":{"8":3,"15":1,"21":1,"45":1,"49":2,"52":6,"54":1,"55":1,"56":1,"68":1,"77":1,"86":5,"100":1,"123":1,"124":1,"128":2,"162":1,"166":1,"173":1,"190":1}}],["ca",{"2":{"225":2,"242":3}}],["capabilities",{"2":{"189":1,"247":1}}],["capture",{"2":{"164":1}}],["capped",{"2":{"58":1}}],["cat",{"2":{"132":1,"200":1}}],["category",{"2":{"83":1}}],["catch",{"0":{"157":1},"1":{"158":1,"159":1,"160":1,"161":1,"162":1,"163":1},"2":{"49":1,"126":2,"127":2,"133":1,"160":2}}],["causing",{"2":{"158":1}}],["causal",{"2":{"73":4}}],["causes",{"2":{"141":1}}],["caused",{"2":{"80":1}}],["cause",{"2":{"8":1,"25":1,"60":1,"158":1}}],["caching",{"2":{"49":1}}],["cached",{"2":{"49":2}}],["cache",{"2":{"49":2,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"268":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["care",{"2":{"86":1,"191":1}}],["cartesianindex",{"2":{"81":2,"86":2}}],["cartesianindices",{"2":{"81":2}}],["cartesian",{"2":{"39":1}}],["carry",{"2":{"38":4,"201":3,"202":3}}],["caveats",{"2":{"15":1}}],["calc",{"2":{"86":1}}],["calculating",{"2":{"86":1}}],["calculation",{"2":{"77":1,"86":1,"165":1,"166":3}}],["calculations",{"2":{"47":1}}],["calculates",{"2":{"37":3,"84":1,"86":3}}],["calculate",{"2":{"35":4,"37":3,"39":1,"82":1,"86":5}}],["calculated",{"2":{"7":2,"50":7,"64":1,"86":2}}],["callback",{"2":{"225":4,"286":3,"287":1}}],["callable",{"2":{"34":2}}],["calls",{"2":{"7":1,"8":2,"58":1,"164":2}}],["call",{"2":{"7":1,"8":1,"23":1,"36":5,"41":3,"55":1,"58":1,"64":1,"80":2,"81":1,"84":1,"86":5,"89":1,"95":6,"100":1,"147":2,"164":1,"165":2,"185":4,"191":2,"201":1,"260":1,"274":1}}],["calling",{"2":{"5":1,"7":1,"8":1,"28":1,"80":1,"81":1,"95":1,"119":1,"137":1,"140":1,"278":1}}],["called",
{"2":{"3":1,"28":1,"34":2,"36":3,"64":2,"66":2,"74":1,"144":1,"147":1,"153":1,"174":2,"176":1,"186":1,"188":2,"201":1}}],["canvas",{"2":{"273":5}}],["candidates",{"2":{"133":1}}],["cannot",{"2":{"15":1,"56":1,"99":1,"153":1,"154":1,"174":1,"219":1}}],["can",{"0":{"177":1},"2":{"2":2,"3":4,"4":2,"7":3,"8":3,"11":1,"22":1,"24":1,"28":3,"34":6,"35":6,"37":3,"38":7,"39":3,"40":3,"49":1,"50":2,"52":1,"55":1,"56":5,"57":1,"58":6,"60":1,"62":1,"63":1,"64":2,"65":1,"66":14,"67":1,"68":2,"73":1,"75":1,"76":5,"78":4,"79":1,"80":1,"81":5,"86":8,"88":2,"89":3,"90":2,"91":1,"95":1,"96":1,"115":1,"118":2,"119":1,"125":2,"126":1,"127":2,"135":1,"137":1,"143":1,"145":1,"148":3,"150":1,"154":1,"155":3,"158":2,"159":1,"160":1,"163":1,"164":2,"165":2,"166":3,"168":1,"169":2,"174":1,"176":1,"177":1,"180":1,"183":1,"184":1,"185":1,"186":1,"188":11,"189":2,"191":1,"193":1,"196":1,"201":1,"202":1,"204":1,"205":1,"225":1,"231":2,"234":1,"235":1,"243":1,"249":1,"250":1,"268":1,"278":2,"279":3,"283":1,"285":1}}],["cover",{"2":{"173":1}}],["covered",{"2":{"128":1}}],["covariate",{"2":{"66":1}}],["cosθ",{"2":{"86":1}}],["cos",{"2":{"86":1,"167":1,"252":1,"283":2,"284":3,"285":8,"291":2}}],["cosh",{"2":{"58":1}}],["coordinates",{"2":{"82":1,"86":3,"279":1}}],["co",{"2":{"58":1}}],["course",{"2":{"135":1,"165":1}}],["courville",{"2":{"40":1}}],["coupled",{"2":{"98":1,"184":1}}],["could",{"2":{"76":1,"153":1,"154":1,"278":1}}],["counterpart",{"2":{"177":1}}],["count",{"2":{"56":1,"75":1,"147":4}}],["coefficient",{"2":{"50":2,"58":2,"86":2}}],["cora",{"0":{"256":1,"257":1},"1":{"257":1,"258":1,"259":1,"260":1,"261":1},"2":{"257":1}}],["cores",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["core",{"2":{"45":1,"95":3,"100":1,"173":1,"190":1,"197":1,"206":1,"214":1,"227":1,"237":5,"238":1,"242":5,"243":3,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["corner",{"2":{"42":1,"82":1}}],["corners",{"2":{"42":1,"78":9,"
82":1,"116":1}}],["corners=false",{"2":{"42":1}}],["correlation=true",{"2":{"35":2,"115":1,"117":1}}],["correlation=false",{"2":{"35":2}}],["correlation",{"2":{"35":7}}],["corresponding",{"2":{"4":3,"23":2,"25":2,"39":1,"42":2,"50":3,"111":1,"114":1,"119":1,"147":1,"149":1,"166":1}}],["corresponds",{"2":{"2":1,"225":1}}],["corrections",{"2":{"135":1}}],["correctness",{"0":{"96":1},"2":{"52":1,"94":1}}],["correct",{"0":{"134":1},"2":{"3":2,"96":1,"158":2,"211":3,"233":3,"244":3,"273":1}}],["correctly",{"2":{"2":1,"24":1,"86":1,"128":1,"137":1}}],["colon",{"2":{"292":1}}],["colormap=",{"2":{"279":3}}],["colorbrewer",{"2":{"276":1,"282":1}}],["colorbar",{"2":{"254":1}}],["colorschemes",{"2":{"276":1,"282":1}}],["colors",{"2":{"276":1,"282":1}}],["colorvectorspace",{"2":{"276":2,"282":2}}],["colorview",{"2":{"273":2}}],["colortypes",{"2":{"276":2,"282":2}}],["color=",{"2":{"223":2,"226":4,"264":2,"268":2,"277":2}}],["col",{"2":{"86":6,"273":5}}],["col=similar",{"2":{"86":6}}],["col2im",{"2":{"77":1,"86":1}}],["collect",{"2":{"56":1,"82":1,"90":1,"200":2,"209":2,"230":2,"252":4,"254":1,"264":1,"279":4,"292":1,"293":2}}],["collects",{"2":{"34":1}}],["columns",{"2":{"81":2,"188":1}}],["column",{"2":{"39":1,"74":2,"81":2,"83":1,"86":2,"147":1,"188":4}}],["cols",{"2":{"15":1,"273":8}}],["codeczlib",{"2":{"229":1}}],["codebases",{"2":{"55":1}}],["code",{"0":{"137":1},"2":{"21":2,"45":1,"47":1,"52":4,"56":4,"77":1,"90":2,"102":1,"118":1,"119":2,"122":1,"137":1,"147":8,"156":1,"158":1,"159":1,"160":1,"165":2,"176":1,"177":1,"178":1,"187":1,"202":1,"204":1,"237":3,"277":1,"281":2,"291":1}}],["combinatorics",{"2":{"276":1}}],["combination",{"2":{"133":1,"164":1}}],["combined",{"2":{"38":2,"89":1}}],["combines",{"2":{"34":1}}],["come",{"2":{"126":1,"158":1,"194":1}}],["comes",{"2":{"34":1,"98":1,"100":2,"165":1,"167":1}}],["coming",{"2":{"90":1,"153":1,"158":1}}],["com",{"2":{"38":1,"68":1,"86":1}}],["community",{"2":{"220":1}}],["communication",{"0":{"30":1},"2":{"139":1,"140":1}
}],["communications",{"2":{"28":2}}],["commit",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["command",{"2":{"68":2,"90":1,"178":2,"204":1,"213":1,"253":1,"260":1,"274":1,"295":1}}],["comm",{"2":{"27":2}}],["commonsolve",{"2":{"229":1,"276":1,"282":1}}],["commonsubexpressions",{"2":{"187":1,"276":1,"282":1}}],["commonworldinvalidations",{"2":{"187":1,"276":1,"282":1}}],["commonly",{"2":{"74":1}}],["common",{"2":{"12":1,"25":1,"52":4,"81":1,"154":1,"163":1,"183":1,"185":1}}],["comprises",{"2":{"154":1}}],["compiling",{"0":{"118":1},"1":{"119":1},"2":{"100":1,"124":1,"147":1}}],["compilation",{"2":{"92":1,"147":1,"260":11}}],["compiled=nothing",{"2":{"273":1}}],["compiled",{"2":{"118":7,"122":1,"204":2,"212":4,"260":6,"273":2,"274":4}}],["compilersupportlibraries",{"2":{"187":1,"276":1,"282":1}}],["compiler",{"2":{"98":2,"118":1}}],["compiles",{"2":{"90":1}}],["compile=true",{"2":{"49":1}}],["compile",{"2":{"3":1,"34":1,"49":1,"53":1,"75":1,"95":1,"118":8,"119":1,"204":1,"212":1,"260":3,"268":2,"274":2}}],["complicated",{"2":{"86":1}}],["completeness",{"2":{"172":1,"234":1}}],["completely",{"2":{"38":1,"84":1,"98":1,"114":1}}],["complete",{"2":{"37":3,"38":1,"185":1}}],["complexity",{"2":{"101":1,"192":1}}],["complexf64",{"2":{"16":8}}],["complexf32",{"2":{"16":8,"185":2}}],["complexf16",{"2":{"16":8}}],["complex",{"2":{"5":1,"49":1,"56":1,"67":1,"77":1,"80":1,"86":6,"185":1}}],["component",{"2":{"283":7}}],["componentvector",{"2":{"196":1,"237":6,"287":1}}],["components",{"2":{"86":1,"283":4}}],["componentarray",{"2":{"25":1,"165":2,"166":2,"167":2,"168":3,"172":4,"196":1,"225":2,"232":1,"234":1,"242":2,"285":1}}],["componentarraysreversediffext",{"2":{"282":2}}],["componentarraysrecursivearraytoolsext",{"2":{"229":2,"282":2}}],["componentarraystrackerext",{"2":{"282":2}}],["componentarraysscimlbaseext",{"2":{"229":2,"282":2}}],["componentarraysgpuarraysext",{"2":{"192":1,"240":2,"282":1}}],["compone
ntarrayszygoteext",{"2":{"192":2,"229":2,"282":2}}],["componentarrayskernelabstractionsext",{"2":{"192":1,"282":1}}],["componentarraysoptimisersext",{"2":{"192":1,"282":1}}],["componentarrays",{"2":{"25":2,"155":1,"164":1,"192":7,"222":1,"229":4,"237":12,"240":2,"278":1,"282":11}}],["composability",{"2":{"276":1}}],["compositionsbaseinversefunctionsext",{"2":{"276":1,"282":1}}],["compositionsbase",{"2":{"276":2,"282":2}}],["composition",{"2":{"66":1,"154":1}}],["composes",{"2":{"120":1}}],["compose",{"2":{"47":1,"278":1}}],["composedlinear",{"2":{"154":3}}],["composed",{"2":{"38":1,"49":1}}],["compatlinearalgebraext",{"2":{"187":1,"276":1,"282":1}}],["compat",{"2":{"187":2,"276":2,"282":2}}],["compatibility",{"2":{"45":1,"124":1,"151":1}}],["compatible",{"2":{"6":1,"24":2,"31":1,"32":2,"50":2,"63":1,"65":1,"151":1,"164":1,"234":1}}],["comparator",{"2":{"253":70,"260":1025,"295":16}}],["compare",{"2":{"138":1,"147":4,"172":1}}],["comparison",{"2":{"34":1}}],["compactmacroimpl",{"2":{"237":7}}],["compactluxlayer",{"2":{"56":2,"237":3,"242":2}}],["compact",{"0":{"56":1,"202":1},"2":{"55":1,"56":20,"90":3,"115":2,"151":2,"164":2,"202":2,"204":1,"231":4,"235":1,"237":7,"242":1,"258":2,"271":2}}],["computing",{"0":{"168":1,"170":1,"171":1,"172":1},"2":{"18":2,"19":1,"74":1,"166":1,"170":1,"193":1}}],["computed",{"2":{"34":4,"35":2,"39":1,"40":1,"41":2,"49":3,"50":3,"58":3,"64":1,"66":1,"193":4,"267":1}}],["compute",{"2":{"18":6,"19":1,"49":9,"50":1,"58":1,"60":2,"65":1,"82":1,"89":3,"107":1,"116":2,"118":1,"127":1,"168":3,"169":5,"172":1,"192":1,"193":1,"194":1,"195":1,"203":1,"225":1,"249":1,"251":2,"278":1,"283":4,"284":1,"285":1,"286":1,"288":1}}],["computes",{"2":{"15":1,"19":1,"41":4,"49":2,"61":1,"63":1,"66":3,"74":1,"75":1,"83":1,"84":1,"86":2,"169":1,"174":1}}],["computer",{"2":{"5":1,"15":2,"50":4,"66":1}}],["computationally",{"2":{"192":1}}],["computational",{"2":{"192":1}}],["computation",{"0":{"165":1,"167":1},"1":{"166":1},"2":{"3":1,"34":2,"41":2,"50":1,"60"
:1,"79":1,"82":4,"96":1,"107":1,"163":2}}],["copying",{"2":{"201":1}}],["copyto",{"2":{"52":3}}],["copy",{"2":{"8":1,"49":2,"52":1,"80":2,"174":2,"190":5}}],["copied",{"2":{"5":1,"38":4,"80":1,"81":2}}],["conjunction",{"2":{"100":1}}],["conjugate",{"2":{"80":1}}],["concurrentutilities",{"2":{"229":1}}],["concise",{"2":{"151":1,"202":1}}],["concatenate",{"2":{"132":1}}],["concatenated",{"2":{"38":7}}],["conclusion",{"0":{"128":1}}],["concretestructs",{"2":{"187":1,"256":1,"270":1,"276":1,"282":1,"290":1}}],["concreterarray",{"2":{"118":13,"147":11,"267":4}}],["concrete",{"0":{"131":1},"1":{"132":1,"133":1,"134":1,"135":1},"2":{"77":2,"102":1,"155":1,"174":1,"271":1,"272":1,"292":2,"293":1}}],["connectionist",{"2":{"83":1}}],["connection",{"2":{"34":26,"38":4}}],["connected",{"0":{"65":1},"2":{"15":1,"39":3}}],["confusion",{"2":{"107":1}}],["confusing",{"2":{"86":1,"225":1}}],["conform",{"2":{"34":1}}],["conference",{"2":{"15":6,"50":4,"66":2}}],["convolve",{"2":{"86":2}}],["convolutions",{"2":{"35":5,"77":2,"86":4}}],["convolution",{"0":{"77":1},"2":{"15":1,"35":9,"77":6,"86":27,"147":2}}],["convolutional",{"0":{"35":1,"63":1,"256":1,"270":1},"1":{"257":1,"258":1,"259":1,"260":1,"261":1,"271":1,"272":1,"273":1,"274":1,"275":1},"2":{"15":2,"35":4,"50":1,"58":1,"84":1,"86":1,"270":1}}],["conv3d",{"2":{"77":1}}],["conv2d",{"2":{"77":1,"86":2}}],["convdims",{"2":{"63":2,"77":3,"86":5}}],["convtranspose",{"2":{"35":1,"116":1,"117":2}}],["conv",{"2":{"35":4,"47":2,"63":3,"74":1,"77":7,"86":10,"115":1,"116":1,"147":2,"161":2,"173":1,"210":6,"271":6}}],["convention",{"2":{"190":1}}],["conveniently",{"2":{"188":1}}],["convenience",{"0":{"16":1},"2":{"7":1,"55":1,"120":1,"268":1}}],["conveys",{"2":{"86":1}}],["conversely",{"2":{"192":1}}],["converse",{"2":{"35":1}}],["conversions",{"2":{"54":2}}],["conversion",{"0":{"182":1},"2":{"5":1}}],["convert2image",{"2":{"272":1}}],["converted",{"2":{"47":2,"54":1}}],["converts",{"2":{"42":1,"53":4,"78":1,"86":2}}],["converting",{"2":{"
25":1,"47":3,"86":2}}],["convert",{"2":{"5":1,"8":1,"45":2,"47":4,"54":3,"81":1,"86":1,"174":1,"177":1,"210":1,"226":1,"231":1}}],["cond",{"2":{"8":3}}],["conditioners=lux",{"2":{"293":1}}],["conditioners",{"2":{"293":19}}],["conditioner",{"2":{"292":2,"293":8}}],["conditions",{"2":{"213":1,"251":1,"284":1,"285":1}}],["conditionals",{"2":{"86":1}}],["condition",{"2":{"8":2,"15":1}}],["contour",{"2":{"254":1,"276":1,"279":6,"282":1}}],["contents",{"2":{"84":1}}],["content",{"2":{"81":1}}],["context`",{"2":{"64":1}}],["context",{"2":{"51":1,"64":1,"66":2}}],["continua",{"2":{"283":1}}],["continue",{"2":{"140":1}}],["continuously",{"2":{"58":1}}],["contiguous",{"2":{"80":1}}],["contracting",{"2":{"147":3}}],["contrastive",{"2":{"50":1}}],["contrast",{"2":{"2":1,"76":1}}],["contrib",{"2":{"126":1}}],["contributions",{"2":{"77":1}}],["controlled",{"2":{"54":1,"99":1,"117":1}}],["controlling",{"2":{"54":1,"182":1,"183":1}}],["control",{"2":{"42":2,"80":2,"86":4,"118":2,"158":1,"187":1}}],["controls",{"2":{"15":1,"35":4,"38":3,"41":8,"50":1}}],["contained",{"2":{"34":1}}],["containerlayer",{"2":{"231":1}}],["containers",{"0":{"34":1},"2":{"107":2,"237":3}}],["container",{"0":{"154":1},"2":{"7":2,"8":1,"24":1,"32":1,"77":2,"86":2,"108":1,"137":1,"154":2,"201":2}}],["contains",{"0":{"167":1},"2":{"8":1,"19":1,"36":1,"40":1,"51":2,"77":1,"82":2,"84":1}}],["contain",{"2":{"7":1,"54":1,"56":1,"81":1,"100":1,"153":2,"201":1}}],["containing",{"0":{"165":1},"1":{"166":1},"2":{"3":1,"7":3,"15":4,"16":24,"38":14,"39":1,"49":2,"50":2,"66":2,"81":2,"86":1,"99":1}}],["consoleprogressmonitor",{"2":{"276":1,"282":1}}],["consensus",{"2":{"167":1}}],["consecutive",{"2":{"34":1}}],["consequence",{"2":{"23":1}}],["consult",{"2":{"20":1}}],["considering",{"2":{"192":1}}],["consider",{"2":{"131":1,"153":1,"155":1,"158":1,"193":1,"196":1}}],["considered",{"2":{"3":1,"4":2,"49":1,"82":1,"96":1,"142":1,"148":1}}],["consistent",{"2":{"38":1}}],["consistency",{"2":{"15":1,"86":1}}],["consists",{"2"
:{"34":1,"166":1}}],["constrained",{"2":{"156":1}}],["constructured",{"2":{"174":1}}],["constructed",{"2":{"56":1,"64":1,"153":1}}],["constructing",{"2":{"49":1,"137":1}}],["constructionbaseunitfulext",{"2":{"276":1,"282":1}}],["constructionbaseintervalsetsext",{"2":{"276":1,"282":1}}],["constructionbasestaticarraysext",{"2":{"187":1,"276":1,"282":1}}],["constructionbaselinearalgebraext",{"2":{"187":1,"276":1,"282":1}}],["constructionbase",{"2":{"187":3,"276":4,"282":4}}],["construction",{"2":{"7":1,"42":1,"114":1}}],["constructor",{"2":{"22":1,"49":1}}],["construct",{"2":{"15":1,"22":1,"24":2,"56":2,"89":2,"99":1,"119":1,"125":1,"154":2,"225":1,"232":1,"278":2}}],["constructs",{"2":{"7":1,"15":2,"22":1}}],["const",{"2":{"69":4,"70":3,"118":5,"127":1,"147":1,"222":2,"223":2,"237":5,"248":2,"256":2,"267":2,"270":2,"283":6,"284":1,"285":2,"290":2,"293":1}}],["constants",{"2":{"64":1,"284":1,"285":1}}],["constant",{"2":{"3":1,"76":14,"147":5}}],["cupti",{"2":{"238":1,"246":1}}],["cusparse",{"2":{"238":1,"246":1}}],["cusolver",{"2":{"238":1,"246":1}}],["customparamtype",{"2":{"22":2}}],["customabstractluxlayer",{"2":{"7":4}}],["customize",{"2":{"3":1}}],["custom",{"0":{"90":1,"129":1,"174":1},"1":{"130":1,"131":1,"132":1,"133":1,"134":1,"135":1},"2":{"3":1,"7":2,"8":1,"11":1,"22":1,"56":1,"67":1,"72":1,"90":1,"100":1,"127":2,"134":1,"151":1,"155":1,"164":1,"180":1,"190":1,"198":1,"201":1,"202":1,"231":1,"247":1,"251":1}}],["cufft",{"2":{"238":1,"246":1}}],["cublas",{"2":{"238":1,"246":1}}],["cublaslt",{"2":{"65":1}}],["cu",{"2":{"189":4}}],["curand",{"2":{"13":1,"238":1,"246":1}}],["currently",{"2":{"4":4,"24":1,"37":3,"42":1,"65":2,"67":1,"84":1,"86":1,"122":1,"123":2,"124":1,"141":1,"148":1,"164":1,"194":1}}],["current",{"2":{"2":1,"5":1,"7":1,"22":2,"36":1,"49":1,"90":1,"120":1,"131":1,"190":1,"278":1,"279":1}}],["cuarrays",{"2":{"86":1}}],["cuarray",{"2":{"5":3,"13":4,"86":3,"185":2,"237":18}}],["cuiterator",{"2":{"5":2,"112":1,"163":1}}],["cudevice",{"2":{"4":1}}],
["cudnn",{"2":{"2":2,"3":1,"63":1,"69":1,"229":1,"240":1}}],["cudadevice",{"2":{"2":2,"4":2,"5":1,"111":1,"140":1,"150":2,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["cuda",{"0":{"189":1},"2":{"2":6,"3":2,"4":1,"5":5,"13":2,"28":7,"63":1,"65":1,"69":1,"79":1,"86":1,"93":1,"105":2,"112":1,"118":1,"139":1,"140":1,"141":1,"148":1,"163":1,"180":3,"181":1,"185":7,"189":5,"197":3,"206":3,"214":3,"227":3,"229":5,"237":36,"238":10,"240":5,"246":10,"255":3,"260":11,"261":3,"269":3,"275":3,"280":3,"289":3,"297":3}}],["ch",{"2":{"278":2,"279":1}}],["christian",{"2":{"66":1}}],["chemfiles",{"2":{"229":2}}],["cheat",{"2":{"167":1}}],["cheaper",{"2":{"58":1}}],["checking",{"2":{"153":1,"237":1}}],["check=",{"2":{"127":2}}],["check",{"2":{"8":4,"24":11,"28":1,"36":3,"41":3,"56":1,"86":3,"96":1,"118":1,"127":3,"160":2,"172":1,"183":3,"273":1}}],["checked",{"2":{"3":1}}],["checks",{"2":{"3":2,"22":2,"24":1,"56":2,"86":1}}],["chunks",{"2":{"51":1}}],["chosen",{"2":{"186":1}}],["choice",{"2":{"85":1,"205":1}}],["chopra",{"2":{"50":1}}],["choose",{"2":{"2":1}}],["chs",{"2":{"35":17,"41":19}}],["chaotic",{"2":{"86":1}}],["characterization",{"2":{"77":1}}],["channel",{"2":{"37":3,"41":12,"42":3,"75":1,"76":4,"77":3,"78":5,"86":4}}],["channels",{"2":{"35":6,"36":2,"41":1,"42":1,"66":1,"77":1,"78":3,"86":2}}],["changed",{"2":{"104":1,"138":1,"176":1}}],["changesofvariablestestext",{"2":{"276":1}}],["changesofvariablesinversefunctionsext",{"2":{"276":1}}],["changesofvariables",{"2":{"276":3}}],["changes",{"0":{"104":1,"107":1,"111":1,"114":1,"115":1,"116":2},"2":{"7":1,"11":1,"34":9,"40":2,"57":1,"98":1,"102":1,"107":1,"109":1,"126":1,"138":1,"164":1,"173":1}}],["change",{"2":{"1":1,"2":1,"55":1,"86":2,"107":1,"110":1,"137":1,"153":1,"234":1,"283":1}}],["chains",{"0":{"47":1},"2":{"47":4,"210":1,"213":1,"278":2}}],["chainrulescoreext",{"2":{"229":1,"240":1}}],["chainrulescoresparsearraysext",{"2":{"192":1,"276":1,"282":2}}],["c
hainrulescore",{"2":{"79":1,"127":2,"187":1,"192":1,"276":2,"282":2}}],["chainrules",{"2":{"24":2,"50":1,"63":1,"65":1,"84":1,"121":1,"124":1,"164":1,"192":1,"229":1,"276":1,"282":1}}],["chain=chain",{"2":{"23":1}}],["chain",{"0":{"132":1,"146":1},"2":{"23":10,"25":3,"34":7,"38":2,"40":4,"41":6,"45":1,"47":2,"56":1,"74":1,"89":9,"114":1,"118":4,"119":1,"126":8,"127":6,"131":1,"132":2,"135":3,"143":2,"144":1,"146":2,"147":2,"165":1,"166":1,"167":1,"168":1,"172":1,"173":2,"210":6,"225":1,"232":2,"237":18,"243":2,"250":1,"265":2,"268":2,"271":6,"278":5,"279":2,"285":2,"293":1}}],["crude",{"2":{"283":1}}],["crlibm",{"2":{"276":1,"282":1}}],["crayons",{"2":{"276":1}}],["crc32c",{"2":{"276":1,"282":1}}],["crc",{"2":{"127":4}}],["critical",{"2":{"77":1,"123":1}}],["criteria",{"2":{"2":1,"24":1}}],["creating",{"0":{"201":1},"2":{"56":1,"73":1,"231":1}}],["created",{"2":{"56":1}}],["create",{"0":{"232":1,"243":1},"2":{"2":1,"5":1,"27":2,"34":2,"39":3,"45":1,"49":1,"52":2,"56":1,"76":1,"86":3,"118":2,"119":1,"127":1,"134":1,"188":1,"191":1,"198":1,"200":3,"201":2,"204":1,"232":2,"234":1,"237":3,"243":2,"245":1,"250":4,"268":1,"273":6,"278":2}}],["creates",{"2":{"1":1,"8":1,"15":1,"38":5,"56":1,"86":1,"191":1,"278":1}}],["crossentropyloss",{"2":{"50":6,"211":1,"233":1,"244":1,"259":1}}],["crosscor",{"2":{"35":1,"115":1}}],["cross",{"2":{"35":11,"50":3,"74":1,"115":1,"117":1,"151":1}}],["cpu=true",{"2":{"234":2}}],["cpuid",{"2":{"187":1,"276":1,"282":1}}],["cpu`",{"2":{"149":1}}],["cpusummary",{"2":{"187":1,"276":1,"282":1}}],["cpus",{"2":{"47":1,"60":1,"61":2,"158":1,"162":1}}],["cpudevice",{"2":{"2":4,"4":2,"149":2,"150":1,"222":1,"224":2,"248":1,"256":1,"270":1,"290":1}}],["cpu",{"0":{"92":1},"2":{"2":4,"5":1,"37":3,"65":1,"67":1,"70":2,"92":1,"96":1,"100":1,"115":2,"118":2,"121":1,"122":3,"124":1,"149":5,"150":7,"174":1,"184":1,"197":2,"204":2,"205":1,"206":2,"208":1,"213":1,"214":2,"222":1,"227":2,"234":4,"238":2,"246":2,"248":1,"255":2,"256":1,"261":2,"267":1,"269":2,"270
":1,"273":2,"275":2,"280":2,"289":2,"290":1,"297":2}}],["rk4",{"2":{"284":1,"285":1,"286":1,"288":1}}],["r₂",{"2":{"283":2}}],["r₁",{"2":{"283":2}}],["r2",{"2":{"283":1}}],["r=r1−r2",{"2":{"283":1}}],["r=σ",{"2":{"38":1}}],["rhat",{"2":{"278":1}}],["rhs",{"2":{"147":2}}],["rmath",{"2":{"276":2,"282":2}}],["risk",{"2":{"141":1}}],["right",{"2":{"40":1,"73":1,"76":5,"82":1}}],["rrules",{"2":{"124":1}}],["rrule",{"2":{"86":1,"127":1}}],["rrelu",{"2":{"58":5}}],["r^d",{"2":{"78":1}}],["r^2",{"2":{"78":1}}],["r1",{"2":{"76":5,"283":1}}],["ryan",{"2":{"66":1}}],["rᴰ",{"2":{"42":1}}],["r²",{"2":{"42":1}}],["r",{"2":{"42":9,"56":9,"76":14,"78":4,"86":1,"147":1,"283":3}}],["rn",{"2":{"76":5}}],["rnns",{"2":{"38":1}}],["rnn",{"2":{"38":3,"51":1}}],["rnncell",{"2":{"38":9,"116":1,"117":1}}],["rng=random",{"2":{"212":1,"271":2}}],["rngs",{"2":{"99":1}}],["rng",{"0":{"13":1},"2":{"3":3,"7":2,"8":6,"9":1,"10":1,"11":2,"13":11,"15":11,"16":24,"23":3,"34":4,"36":6,"38":3,"39":4,"40":18,"45":1,"47":1,"56":5,"64":6,"85":1,"86":4,"89":7,"90":4,"109":2,"114":1,"118":1,"119":1,"126":3,"127":2,"133":2,"135":1,"143":4,"146":6,"147":2,"153":9,"154":2,"155":5,"158":4,"173":7,"174":9,"185":13,"186":6,"187":2,"188":1,"191":5,"193":1,"194":1,"196":8,"204":1,"212":2,"225":1,"232":4,"242":3,"245":2,"253":4,"260":4,"264":4,"267":1,"271":7,"274":4,"277":11,"278":1,"285":1,"291":5,"293":4,"295":5,"296":1}}],["rgb",{"2":{"35":1,"273":1}}],["rough",{"2":{"164":1}}],["route",{"2":{"154":1}}],["routines",{"2":{"164":1}}],["routine",{"2":{"86":1}}],["roundingemulator",{"2":{"276":1,"282":1}}],["rounded",{"2":{"35":2,"37":3}}],["round",{"2":{"28":1,"86":2,"245":4,"277":1}}],["row",{"2":{"83":1,"85":2,"147":1,"188":3,"273":5,"274":5,"279":1}}],["rows",{"2":{"15":1,"74":1,"86":1,"188":1,"273":6}}],["rotated",{"2":{"79":2}}],["rotates",{"2":{"79":2,"86":1}}],["rotate",{"2":{"79":1,"86":2}}],["rotations",{"2":{"86":3}}],["rotation",{"0":{"79":1},"2":{"79":10,"86":7}}],["rootsforwarddiffext",{"2":{"276":1}}],
["rootschainrulescoreext",{"2":{"276":1}}],["roots",{"2":{"276":3}}],["root",{"2":{"30":7,"206":1,"214":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"297":1}}],["robin",{"2":{"28":1}}],["rocarray",{"2":{"13":4}}],["rocrand",{"2":{"13":1}}],["rocm",{"2":{"3":1,"69":2,"140":1,"141":1,"180":3,"181":1}}],["rademacher",{"2":{"172":1}}],["radians",{"2":{"79":2}}],["ra",{"2":{"118":18,"119":9,"212":2}}],["ratio≥0",{"2":{"283":1}}],["ratio≤1",{"2":{"283":1}}],["ratio",{"2":{"283":6,"284":2,"285":1,"286":1,"288":1}}],["ratiosfixedpointnumbersext",{"2":{"276":1,"282":1}}],["ratios",{"2":{"276":2,"282":2}}],["rationale",{"2":{"107":1}}],["rate=1e",{"2":{"274":1}}],["rate=16000",{"2":{"86":1}}],["rate",{"2":{"86":4,"196":1,"274":1,"278":1}}],["rather",{"2":{"3":1,"38":1,"63":1,"77":1,"85":1,"86":2,"99":1,"201":1}}],["raw",{"2":{"83":1,"86":1,"209":3,"230":3}}],["raia",{"2":{"50":1}}],["ran",{"2":{"126":1}}],["randc64",{"2":{"16":1}}],["randc32",{"2":{"16":1}}],["randc16",{"2":{"16":1}}],["rand64",{"2":{"16":1}}],["rand32",{"2":{"16":1}}],["rand16",{"2":{"16":1}}],["randnc64",{"2":{"16":1}}],["randnc32",{"2":{"16":1}}],["randnc16",{"2":{"16":1}}],["randn64",{"2":{"16":1}}],["randn32",{"2":{"16":1}}],["randn16",{"2":{"16":1}}],["randn",{"2":{"15":1,"34":2,"40":2,"45":1,"47":1,"56":1,"74":1,"80":13,"118":1,"119":1,"126":1,"133":1,"143":1,"146":1,"147":1,"149":1,"150":1,"153":1,"154":1,"155":1,"165":2,"166":2,"173":2,"174":6,"188":2,"191":1,"193":1,"194":1,"196":4,"264":1,"271":1,"273":1,"274":2,"291":1,"293":1}}],["random123",{"2":{"276":1,"282":1}}],["randomnumbers",{"2":{"276":1,"282":1}}],["randomness",{"0":{"191":1},"2":{"38":3,"89":1,"99":1,"169":1,"187":1,"191":1}}],["randomized",{"2":{"58":1}}],["randomly",{"2":{"36":2,"58":1}}],["random",{"2":{"13":1,"15":8,"16":12,"23":2,"34":2,"36":3,"40":8,"45":2,"47":2,"56":3,"64":4,"85":1,"86":2,"89":4,"90":3,"118":3,"119":1,"126":1,"132":1,"143":2,"146":4,"147":25,"153":4,"155":3,"158":2,"164":1,"169":1,"173":4,"174":4,"185":3,"
187":5,"188":1,"191":8,"196":4,"199":1,"204":1,"208":1,"222":1,"224":1,"225":1,"229":1,"232":2,"240":1,"242":1,"248":1,"252":1,"253":2,"256":1,"260":2,"263":1,"264":2,"270":1,"271":1,"274":1,"276":1,"277":2,"282":1,"285":1,"290":1,"291":1,"295":2,"296":1}}],["rand",{"2":{"5":1,"56":1,"73":3,"86":6,"89":3,"90":1,"158":2,"167":1,"168":1,"172":2,"188":5,"189":1,"191":3,"277":8,"291":2}}],["rank",{"2":{"4":3,"29":3,"137":1,"138":1}}],["rangearrays",{"2":{"276":1,"282":1}}],["ranges",{"2":{"34":1}}],["range",{"2":{"3":2,"82":1,"109":1,"194":1,"223":1,"252":4,"264":1,"279":23,"284":1}}],["rule",{"2":{"42":1,"84":1,"268":2}}],["rules",{"0":{"134":1},"2":{"0":1,"52":1,"123":1}}],["runtimegeneratedfunctions",{"2":{"229":1,"276":1,"282":1}}],["runtime",{"2":{"96":2,"238":2,"246":2}}],["runs",{"2":{"79":1}}],["running",{"0":{"133":1},"2":{"41":16,"64":2,"66":18,"86":1,"90":1,"107":1,"118":1,"126":5,"127":11,"143":2,"146":2,"159":1,"176":1,"201":1}}],["run",{"2":{"2":1,"3":1,"5":1,"36":1,"68":2,"69":1,"70":1,"81":2,"86":1,"89":1,"95":1,"114":1,"118":3,"127":3,"134":1,"141":1,"147":5,"148":1,"153":1,"158":1,"174":1,"178":3,"191":1,"201":1,"219":1,"279":1}}],["red",{"2":{"223":1,"226":2,"277":1}}],["reduction",{"2":{"37":3,"50":1,"58":1,"81":2}}],["reducing",{"2":{"34":1,"66":1}}],["reduces",{"2":{"86":1}}],["reduced",{"2":{"78":1}}],["reduce",{"2":{"30":3,"40":1,"77":1,"94":1,"147":2,"278":1}}],["reiterate",{"2":{"174":1}}],["renamed",{"2":{"107":2,"110":1,"111":2}}],["renormalize",{"2":{"41":2}}],["request",{"2":{"101":1,"107":1}}],["requisites",{"2":{"89":1}}],["required",{"2":{"86":2}}],["require",{"2":{"45":1,"47":1,"56":1,"66":1,"140":1,"225":1}}],["requirements",{"2":{"34":1,"268":1}}],["requires",{"2":{"8":1,"10":1,"28":4,"51":1,"93":1,"130":1,"187":1,"225":1,"242":1,"276":1,"282":1}}],["rev=true",{"2":{"253":1}}],["revising",{"2":{"100":1}}],["reversing",{"2":{"86":1}}],["reverses",{"2":{"40":1}}],["reversesequence",{"2":{"40":3}}],["reversed",{"2":{"40":1,"86":1}}],["re
versediffadjoint",{"2":{"231":1}}],["reversediff",{"2":{"8":2,"49":1,"52":1,"63":1,"65":1,"67":1,"96":1,"121":1,"122":1,"133":1,"234":1,"282":1}}],["reverse",{"2":{"18":2,"24":2,"40":2,"56":1,"81":2,"86":2,"96":3,"118":1,"121":7,"127":1,"147":2,"192":2,"193":2,"251":3,"293":1}}],["refresher",{"2":{"153":1}}],["ref",{"2":{"86":7}}],["referred",{"2":{"188":1}}],["referring",{"2":{"82":1}}],["refer",{"2":{"90":1,"220":1,"221":1,"247":1}}],["references",{"2":{"15":6,"40":1,"41":1,"50":4,"64":2,"66":4}}],["reference",{"0":{"14":1},"1":{"15":1,"16":1},"2":{"78":1,"119":1,"207":1,"253":7,"260":102,"295":2}}],["reflecting",{"2":{"76":2}}],["reflect",{"2":{"76":9,"86":2}}],["re",{"2":{"72":1,"86":1,"90":1,"204":1,"213":1,"253":1,"260":1,"274":1,"295":1}}],["reexport",{"2":{"72":1,"114":2,"187":1,"276":1,"282":1}}],["remake",{"2":{"225":1}}],["remark",{"2":{"166":1}}],["remains",{"2":{"64":1}}],["remember",{"2":{"119":1,"125":1,"128":1,"153":1,"164":1,"201":1,"234":1}}],["remove",{"2":{"191":1}}],["removes",{"2":{"86":1}}],["removed",{"0":{"114":1,"139":1},"2":{"35":1,"86":1,"104":1,"107":2,"111":1,"114":10,"115":4,"118":1,"139":1,"165":2,"166":1}}],["reuse",{"2":{"63":1,"65":1}}],["reusing",{"2":{"63":1,"65":1}}],["reg",{"2":{"165":2,"166":2}}],["regression",{"0":{"196":1},"2":{"158":1,"196":1}}],["regressions",{"2":{"8":1,"213":1}}],["region",{"2":{"86":2}}],["regions",{"2":{"86":3,"279":1}}],["registers",{"2":{"260":11}}],["registered",{"2":{"56":1,"68":1}}],["registry",{"2":{"68":1,"91":1}}],["regarding",{"2":{"174":1,"192":1}}],["regard",{"2":{"80":2}}],["regularization",{"2":{"165":1,"172":1,"278":1}}],["regularized",{"2":{"58":1}}],["regularisation",{"2":{"84":1}}],["regular",{"2":{"47":2,"56":2}}],["rewrite",{"2":{"45":1,"60":1,"249":1}}],["relocatablefolders",{"2":{"276":1,"282":1}}],["reltol",{"2":{"237":13}}],["reltol=1",{"2":{"232":1}}],["relies",{"2":{"174":2,"178":1,"191":1}}],["reliance",{"2":{"165":2}}],["reliable",{"2":{"122":4}}],["reliability",{"2":{"100":1
}}],["relevant",{"2":{"118":1,"276":1,"283":1}}],["release",{"2":{"102":2,"109":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["released",{"2":{"68":1}}],["relativisticorbitmodel",{"2":{"284":3}}],["relatively",{"2":{"164":1}}],["relation",{"2":{"38":1}}],["related",{"2":{"101":1}}],["rely",{"2":{"56":1,"62":1,"107":1,"191":1}}],["relu6",{"2":{"58":3}}],["relu",{"2":{"34":4,"38":1,"40":3,"41":8,"45":1,"47":4,"56":3,"58":6,"63":1,"84":1,"90":1,"126":5,"127":5,"147":4,"173":2,"210":12,"243":2,"258":1,"265":2,"268":4}}],["retcode",{"2":{"287":1}}],["retrieve",{"2":{"39":1}}],["retained",{"2":{"36":1}}],["retuened",{"2":{"3":2}}],["returned",{"2":{"13":1,"15":1,"25":1,"32":1,"36":4,"38":3,"39":1,"40":1,"45":1,"47":1,"49":10,"51":1,"64":1,"66":1,"80":2,"86":4}}],["returning",{"2":{"8":1,"153":1,"204":2,"260":1}}],["returns",{"2":{"3":3,"7":2,"8":3,"15":6,"18":2,"19":1,"25":1,"34":7,"35":2,"36":3,"37":9,"38":8,"39":5,"40":6,"41":5,"42":2,"49":3,"50":6,"51":2,"54":1,"60":2,"64":2,"66":4,"67":1,"73":1,"82":2,"84":3,"85":1,"86":16,"97":1,"112":1,"149":3,"166":1,"167":1,"168":1,"225":1,"237":6,"285":1,"292":1}}],["return",{"2":{"2":3,"3":5,"7":1,"9":1,"10":1,"11":2,"15":6,"16":24,"22":1,"23":2,"34":3,"38":4,"40":1,"49":8,"50":9,"51":2,"52":2,"56":19,"58":3,"73":2,"86":13,"90":3,"99":1,"107":1,"118":2,"119":1,"127":3,"132":1,"134":2,"143":1,"144":3,"145":4,"147":4,"153":5,"154":2,"155":3,"165":1,"166":1,"167":1,"168":1,"170":1,"171":1,"172":1,"174":1,"186":1,"196":1,"200":1,"201":2,"202":2,"203":1,"204":1,"209":1,"211":1,"212":1,"223":1,"225":3,"230":1,"231":5,"232":1,"233":1,"235":2,"237":3,"241":2,"242":3,"243":1,"244":1,"245":1,"250":2,"251":3,"253":1,"254":1,"257":1,"258":4,"259":1,"260":2,"264":1,"267":1,"268":1,"271":8,"272":2,"273":6,"274":2,"277":1,"278":2,"279":2,"283":11,"284":1,"285":1,"286":2,"291":2,"292":6,"293":5,"294":1,"295":2}}],["recipesbase",{"2":{"276":1,"282":1}}],["rec",{"2":{"271":4}}],["rec
tifier",{"2":{"58":1}}],["rectifiers",{"2":{"15":2}}],["rectified",{"2":{"58":6}}],["recon",{"2":{"273":4,"274":2}}],["reconstruct",{"2":{"273":2,"274":1}}],["reconstruction",{"2":{"153":1,"273":3}}],["record",{"2":{"97":1,"254":1,"279":1}}],["recorded",{"2":{"96":1,"97":4}}],["recomputing",{"2":{"64":1}}],["recommendations",{"0":{"122":1},"2":{"157":1,"158":1}}],["recommendation",{"2":{"100":1}}],["recommend",{"2":{"7":1,"100":1,"140":1,"151":1,"153":1,"190":1,"194":1,"202":1,"205":1,"228":1,"231":1,"237":1,"242":1,"256":1}}],["recommended",{"0":{"149":1},"2":{"3":1,"7":2,"8":1,"22":1,"45":1,"52":1,"56":2,"62":1,"72":1,"84":1,"90":1,"94":1,"100":1,"151":1,"153":2,"159":1,"164":1,"165":2,"170":1,"172":1,"174":1,"181":1,"203":1,"285":1}}],["recognition",{"2":{"50":1,"83":2}}],["receives",{"2":{"40":1}}],["recvbuf",{"2":{"30":6}}],["recur",{"2":{"38":3}}],["recurrence",{"2":{"38":4,"201":1}}],["recurrent",{"0":{"38":1},"2":{"15":1,"38":10,"75":1,"116":1,"198":2}}],["recurse",{"2":{"52":3}}],["recurses",{"2":{"24":1}}],["recursion",{"2":{"52":1,"53":4}}],["recursivefactorization",{"2":{"229":1,"282":1}}],["recursivearraytoolsreversediffext",{"2":{"229":1,"282":1}}],["recursivearraytoolszygoteext",{"2":{"229":1,"282":1}}],["recursivearraytoolsforwarddiffext",{"2":{"229":1,"276":1,"282":1}}],["recursivearraytoolsfastbroadcastext",{"2":{"229":1,"282":1}}],["recursivearraytoolstrackerext",{"2":{"229":1,"276":1,"282":1}}],["recursivearraytoolssparsearraysext",{"2":{"229":1,"276":1,"282":1}}],["recursivearraytoolsstructarraysext",{"2":{"229":1,"276":1,"282":1}}],["recursivearraytools",{"2":{"229":8,"276":5,"282":8}}],["recursive",{"0":{"52":1},"2":{"52":9,"158":1}}],["recursively",{"2":{"10":1,"52":5}}],["repository",{"2":{"164":1}}],["report",{"2":{"21":1,"101":1,"124":1}}],["reproducer",{"2":{"141":1}}],["represents",{"2":{"83":2,"278":1}}],["representation",{"2":{"56":1}}],["representing",{"2":{"42":1,"78":1}}],["represent",{"2":{"15":1,"80":1}}],["reparameterized",{"2":{
"41":2}}],["reparameterization",{"2":{"41":1,"271":1}}],["repack",{"2":{"8":1}}],["repeated",{"2":{"81":2}}],["repeatedly",{"2":{"34":1}}],["repeatedlayer",{"2":{"34":1}}],["repeating",{"2":{"76":1}}],["repeat",{"2":{"76":8,"200":2}}],["repeats",{"2":{"34":6,"38":5}}],["replacing",{"2":{"84":1,"173":1,"188":1}}],["replacement",{"2":{"86":1,"107":1}}],["replaced",{"2":{"60":1,"85":1}}],["replace",{"2":{"60":2,"135":1,"161":5}}],["replaces",{"2":{"24":1,"37":3,"86":1}}],["repl",{"2":{"68":1,"88":1,"95":1,"159":2,"178":1}}],["replicated",{"2":{"191":1}}],["replicate",{"2":{"7":1,"8":1,"34":1,"191":3,"271":1}}],["readme",{"2":{"118":1}}],["read",{"2":{"101":1,"147":1,"190":1}}],["reason",{"2":{"122":1,"158":1}}],["reasonable",{"2":{"34":1}}],["reasons",{"2":{"8":1,"86":1,"231":1}}],["realnvp",{"2":{"290":2,"293":5,"295":1}}],["realdot",{"2":{"192":1,"276":1,"282":1}}],["reallocations",{"2":{"63":1,"65":1}}],["really",{"2":{"47":1,"56":1,"86":2,"124":2,"153":1,"188":1}}],["real",{"2":{"15":1,"36":1,"50":4,"75":1,"77":1,"78":4,"79":2,"80":1,"86":11,"134":1,"254":2}}],["reactantbackend",{"2":{"268":1}}],["reactantdevice",{"2":{"2":2,"90":1,"147":1,"204":2,"212":2}}],["reactant",{"0":{"70":1,"118":1,"147":1,"270":1},"1":{"119":1,"271":1,"272":1,"273":1,"274":1,"275":1},"2":{"2":1,"49":2,"53":1,"70":11,"90":5,"92":1,"100":1,"118":26,"119":5,"121":1,"122":6,"124":4,"147":16,"148":6,"199":1,"204":1,"208":3,"213":1,"248":2,"256":2,"257":2,"263":1,"267":5,"268":2,"270":2,"290":2}}],["resolve",{"2":{"168":1}}],["resolution",{"2":{"42":1,"78":3}}],["resources",{"0":{"101":1}}],["research",{"2":{"64":1}}],["reserved",{"2":{"56":1}}],["resettablestacks",{"2":{"229":1,"282":1}}],["resets",{"2":{"3":1}}],["reset",{"2":{"3":1}}],["rescale",{"2":{"41":4}}],["reshape",{"2":{"50":4,"76":12,"77":3,"78":1,"80":1,"81":1,"82":1,"86":2,"90":1,"147":1,"170":2,"171":2,"190":2,"200":2,"209":1,"230":1,"231":4,"241":2,"252":2,"254":2,"264":1,"271":1,"278":1}}],["reshaped",{"2":{"40":1,"63":1}}],["r
eshapes",{"2":{"40":1}}],["reshapelayer",{"2":{"40":3}}],["reshaping",{"2":{"15":1,"80":1,"86":1}}],["res",{"2":{"34":4,"56":2,"225":5,"287":1,"288":1}}],["resnet",{"2":{"34":1}}],["respective",{"2":{"98":1,"137":2,"174":1,"190":1}}],["respectively",{"2":{"77":2}}],["respect",{"2":{"19":2,"79":1,"96":1,"166":2}}],["responsibility",{"2":{"8":1,"155":1}}],["results",{"0":{"226":1,"254":1,"288":1,"296":1},"2":{"86":1,"118":1,"172":2,"253":7,"260":103,"268":1,"274":1,"279":2,"284":1,"285":1,"287":1,"288":1,"295":2}}],["resulting",{"2":{"78":3}}],["result",{"2":{"15":1,"19":2,"30":4,"41":1,"62":1,"74":1,"80":3,"86":8,"97":4,"99":1,"155":4}}],["restoring",{"2":{"86":1}}],["restricted",{"2":{"52":1,"75":1,"80":1}}],["rest",{"2":{"15":1,"201":2,"202":2}}],["restarted",{"2":{"1":1,"57":1}}],["mtensor",{"2":{"283":4}}],["mtlarray",{"2":{"13":3}}],["mvnormal",{"2":{"278":1}}],["mcse",{"2":{"278":1}}],["mcmc",{"2":{"278":1,"279":1}}],["mcmcchains",{"2":{"276":1,"278":1}}],["mcmcdiagnostictools",{"2":{"276":1}}],["mcclelland",{"2":{"15":1}}],["mbedtls",{"2":{"229":1}}],["mnist\\ttraining",{"2":{"245":1}}],["mnist\\ttime",{"2":{"245":50}}],["mnist",{"0":{"207":1,"209":1,"228":1,"230":1,"239":1,"270":1,"272":1},"1":{"208":1,"209":1,"210":1,"211":1,"212":1,"213":1,"214":1,"229":1,"230":1,"231":1,"232":1,"233":1,"234":1,"235":1,"236":1,"237":1,"238":1,"240":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1,"271":1,"272":1,"273":1,"274":1,"275":1},"2":{"208":1,"209":2,"229":1,"230":2,"241":1,"245":2,"270":1,"272":2}}],["m×hop",{"2":{"86":1}}],["mkl",{"2":{"65":1,"263":1,"276":2,"282":2}}],["mse",{"2":{"251":3}}],["mseloss",{"2":{"50":4,"89":2,"90":1,"118":1,"119":1,"196":1,"225":1,"251":1,"267":1,"273":1,"286":3}}],["ms",{"2":{"187":94,"192":55,"199":7,"229":174,"240":20,"263":25,"276":474,"282":508}}],["msleloss",{"2":{"50":2}}],["m2",{"2":{"45":3}}],["my",{"2":{"213":1}}],["mybias",{"2":{"155":3}}],["myinputtype",{"2":{"130":3}}],["myfancychain",{"2":{"34":2}}],["myweight",{"2":{"2
2":1,"155":4}}],["m",{"2":{"34":2,"45":3,"53":8,"66":2,"69":2,"73":2,"75":2,"86":5,"231":2,"235":2,"260":2,"274":2,"277":15,"283":4,"284":5,"285":3,"293":2}}],["mpitrampoline",{"2":{"229":1}}],["mpich",{"2":{"229":1}}],["mpipreferences",{"2":{"229":1}}],["mpi",{"0":{"180":1},"2":{"27":2,"28":8,"140":2,"141":1,"180":5}}],["mpibackend",{"2":{"27":2,"28":2}}],["mₘ",{"2":{"19":2}}],["m₂",{"2":{"19":2,"283":7}}],["m₁",{"2":{"19":2,"283":6}}],["mlstyle",{"2":{"276":1,"282":1}}],["mljmodelinterface",{"2":{"276":1}}],["mlx",{"2":{"256":1,"270":2,"290":1}}],["mldatasets",{"2":{"208":1,"229":3,"240":1,"256":1,"270":1}}],["mldatadevicesreversediffext",{"2":{"282":2}}],["mldatadevicesrecursivearraytoolsext",{"2":{"229":2,"276":2,"282":2}}],["mldatadevicestrackerext",{"2":{"276":2,"282":2}}],["mldatadevicesmlutilsext",{"2":{"229":2}}],["mldatadevicesonehotarraysext",{"2":{"229":2}}],["mldatadevicesgpuarraysext",{"2":{"192":1,"282":1}}],["mldatadeviceszygoteext",{"2":{"192":2,"229":2,"282":2}}],["mldatadevicesfillarraysext",{"2":{"192":2,"276":2,"282":2}}],["mldatadevicessparsearraysext",{"2":{"192":2,"276":2,"282":2}}],["mldatadevicescudnnext",{"2":{"229":2,"240":2}}],["mldatadevicescudaext",{"2":{"229":2,"240":2}}],["mldatadeviceschainrulesext",{"2":{"192":2,"229":2,"276":2,"282":2}}],["mldatadeviceschainrulescoreext",{"2":{"187":1,"263":1,"276":1,"282":1}}],["mldatadevicescomponentarraysext",{"2":{"192":2,"282":2}}],["mldatadevices",{"0":{"0":1,"110":1},"1":{"1":1,"2":1,"3":1,"4":1,"5":1,"111":1,"112":1},"2":{"0":1,"1":1,"2":3,"3":12,"4":1,"5":2,"110":2,"150":2,"181":2,"187":2,"192":6,"197":3,"206":3,"214":3,"222":1,"224":5,"227":3,"229":7,"238":3,"240":2,"246":3,"248":1,"255":3,"256":1,"261":3,"263":1,"269":3,"270":1,"275":3,"276":7,"280":3,"282":11,"289":3,"290":1,"297":3}}],["ml",{"2":{"188":1,"225":2}}],["mlir",{"2":{"90":1,"118":1,"147":3}}],["mlp",{"0":{"262":1},"1":{"263":1,"264":1,"265":1,"266":1,"267":1,"268":1,"269":1},"2":{"56":1,"90":1,"118":1,"243":1,"250":4,"262"
:1,"293":2}}],["mlutils",{"0":{"32":1},"2":{"5":3,"32":4,"112":1,"137":1,"163":2,"199":1,"200":3,"208":1,"222":1,"224":1,"229":2,"240":1,"248":1,"270":1,"290":1}}],["moons",{"0":{"291":1},"2":{"291":10,"295":1}}],["mooncake",{"2":{"121":1}}],["motion",{"2":{"283":1,"284":1,"285":1}}],["motivating",{"2":{"8":1,"55":1}}],["mosaicviews",{"2":{"276":1,"282":1}}],["mostly",{"2":{"61":1,"100":1,"102":1,"114":1,"141":1,"231":1}}],["most",{"2":{"7":1,"8":1,"15":1,"24":1,"49":2,"52":1,"53":1,"67":1,"68":1,"86":1,"100":1,"110":1,"119":1,"120":1,"122":4,"151":1,"168":1,"170":1,"173":1,"220":1,"285":1}}],["moment",{"2":{"167":1,"283":1}}],["momentum",{"2":{"41":2,"66":6}}],["momentum=0",{"2":{"41":2}}],["monolithic",{"2":{"100":1}}],["monotonic",{"2":{"58":1}}],["month",{"2":{"71":1}}],["mobilenetv3",{"2":{"58":1}}],["mouthful",{"2":{"25":1}}],["moved",{"0":{"115":1},"2":{"107":1,"115":2,"118":1}}],["move",{"2":{"21":1,"118":1,"119":2,"189":1,"190":1,"224":3}}],["movement",{"2":{"3":2}}],["mod",{"2":{"273":1}}],["modivations",{"2":{"114":1}}],["modified",{"2":{"52":2,"132":1}}],["modify",{"2":{"41":1}}],["modulating",{"2":{"50":1}}],["modules=",{"2":{"95":2}}],["modules",{"2":{"95":8}}],["module",{"0":{"51":1},"2":{"21":1,"26":1,"51":1,"95":1,"114":1,"115":1,"147":1}}],["mode=",{"2":{"82":2}}],["modes",{"2":{"42":1,"57":1,"67":1}}],["mode",{"0":{"176":1},"2":{"18":2,"24":7,"36":6,"38":2,"41":3,"42":5,"57":2,"64":2,"66":2,"67":5,"82":6,"96":3,"115":1,"119":1,"121":1,"122":1,"125":3,"126":2,"127":2,"158":1,"159":1,"165":2,"176":4,"178":1,"192":1,"193":4,"194":2,"260":1,"274":1,"283":1}}],["model`",{"2":{"231":1}}],["modeling",{"2":{"84":1}}],["models",{"0":{"45":2,"47":1,"118":1,"125":1,"135":1,"147":1,"219":1,"221":1},"1":{"119":1,"126":1,"127":1,"128":1,"222":1,"223":1,"224":1,"225":1,"226":1,"227":1},"2":{"8":1,"12":1,"45":2,"47":2,"49":2,"56":2,"89":1,"90":1,"94":1,"100":6,"125":2,"128":1,"131":1,"147":3,"185":1,"188":1,"194":1,"198":1,"202":1,"219":2,"221":1}}],["model",{"0"
:{"126":1,"142":1,"204":1,"205":1,"210":1,"213":1,"225":1,"258":1,"260":1,"271":1,"274":1,"281":1,"284":1,"285":1,"293":1,"295":1},"1":{"143":1,"144":1,"145":1,"146":1,"282":1,"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1},"2":{"8":8,"23":1,"24":2,"25":2,"34":16,"38":3,"40":12,"41":4,"45":6,"47":11,"49":10,"50":26,"55":3,"56":14,"58":1,"74":2,"89":6,"90":11,"98":1,"118":18,"119":8,"125":3,"126":17,"127":25,"128":1,"131":1,"132":1,"133":2,"134":2,"135":2,"137":2,"142":1,"143":6,"146":7,"147":12,"153":1,"154":5,"158":6,"165":10,"166":7,"167":7,"168":7,"170":2,"171":2,"172":9,"173":9,"174":11,"176":1,"177":1,"196":11,"201":2,"202":1,"203":2,"204":13,"205":6,"207":1,"210":4,"211":2,"212":10,"213":4,"219":1,"225":9,"226":2,"231":11,"232":8,"233":2,"234":10,"235":8,"237":17,"243":4,"244":2,"245":9,"251":4,"252":1,"253":7,"259":2,"260":5,"267":2,"268":6,"273":7,"274":5,"278":7,"279":1,"283":7,"284":4,"285":11,"286":1,"287":1,"288":2,"293":6,"294":2,"295":5,"296":3}}],["more",{"2":{"5":1,"7":1,"8":1,"11":1,"15":2,"34":1,"42":1,"45":1,"47":2,"51":1,"52":1,"55":1,"56":3,"61":1,"73":1,"74":1,"80":3,"86":7,"90":1,"96":1,"100":1,"101":1,"104":1,"107":1,"109":1,"112":1,"114":1,"116":2,"140":1,"158":1,"160":1,"165":1,"166":1,"179":1,"182":1,"183":1,"188":3,"192":1,"201":1,"202":1,"203":1,"221":1,"279":1}}],["mechanics",{"2":{"283":1}}],["mechanism",{"2":{"84":1,"100":1}}],["message",{"2":{"164":1}}],["messages",{"2":{"80":1}}],["mersennetwister",{"2":{"133":1,"185":2,"264":1}}],["merged",{"2":{"38":1}}],["merge",{"2":{"22":1,"23":1,"38":2,"132":1,"201":1,"293":1}}],["mentioned",{"2":{"91":1,"153":1}}],["mel",{"2":{"86":3}}],["mels",{"2":{"86":6}}],["melscale",{"2":{"86":2}}],["medium",{"2":{"65":1}}],["medical",{"2":{"50":1}}],["measure",{"2":{"50":1}}],["meaningless",{"2":{"64":1}}],["meanpool",{"2":{"37":1,"75":3}}],["meant",{"2":{"24":1,"49":1,"53":1,"55":1,"186":1}}],["mean",{"2":{"15":2,"37":7,"41":13,"50":17,"64":2,"66":13,"75":1,"81":1,"85":2,"126":1,"127":2,"143"
:1,"146":1,"155":1,"196":1,"251":3,"253":12,"259":1,"278":1,"279":1,"294":1}}],["means",{"2":{"15":1,"21":1,"56":1,"79":1,"80":1,"135":1,"153":1,"155":1,"158":1,"188":1,"190":1,"225":2}}],["metadata",{"2":{"257":1}}],["metaldevice",{"2":{"4":2}}],["metal",{"2":{"2":1,"3":2,"13":1,"69":2,"89":1,"93":2,"148":4,"181":1}}],["met",{"2":{"15":1,"24":1}}],["methodinstance",{"2":{"237":3}}],["methoderror",{"2":{"133":1}}],["method=",{"2":{"79":7}}],["methods",{"2":{"42":1,"84":1,"86":1,"169":1,"186":2,"212":1,"221":2,"225":1,"231":1,"232":1,"241":1,"271":1,"278":1,"283":2,"291":1,"292":1,"293":1}}],["method",{"2":{"15":3,"79":1,"84":1,"86":4,"90":1,"118":1,"127":1,"130":1,"133":2,"147":1,"170":1,"171":1,"172":1,"200":1,"202":1,"203":1,"209":1,"211":1,"222":1,"230":1,"231":1,"233":1,"242":1,"243":1,"244":1,"248":1,"251":2,"256":1,"257":1,"258":1,"259":1,"264":1,"270":1,"271":1,"272":1,"273":1,"278":1,"279":2,"283":3,"285":1,"286":2,"290":1,"291":1,"294":1}}],["memory=false",{"2":{"38":1}}],["memory=zeros32",{"2":{"38":1}}],["memory",{"2":{"5":1,"38":20,"63":1,"65":1,"86":2,"197":1,"201":1,"206":1,"214":1,"227":1,"238":2,"246":2,"255":1,"260":11,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["mu",{"2":{"271":1}}],["mu=dense",{"2":{"271":1}}],["mutability",{"0":{"190":1}}],["mutables",{"2":{"174":1}}],["mutable",{"2":{"153":1,"174":1}}],["mutations",{"2":{"98":1,"100":2,"122":1}}],["mutation",{"2":{"63":1,"65":1,"86":1,"122":1,"190":2}}],["mutating",{"2":{"63":1,"65":1}}],["mutate",{"2":{"190":1}}],["mutated",{"2":{"52":1,"62":1,"81":1,"190":2,"191":1}}],["mutates",{"2":{"8":1}}],["muladdmacro",{"2":{"229":1,"282":1}}],["mult",{"2":{"77":1,"86":2}}],["multilayer",{"2":{"262":1}}],["multithreaded",{"2":{"86":1}}],["multihead",{"2":{"73":1}}],["multiply",{"2":{"86":2,"188":1}}],["multiplied",{"2":{"64":1,"85":1,"86":1}}],["multiplication",{"2":{"61":3,"80":4}}],["multiples",{"2":{"78":1}}],["multiple",{"0":{"166":1},"2":{"34":1,"38":1,"41":1,"63":1,"65":1,"81":2,"86":2,"148
":1,"279":1}}],["multigate",{"2":{"51":1}}],["multi",{"0":{"4":1},"2":{"39":1,"50":1,"147":1}}],["mul",{"2":{"61":2,"77":2,"80":18,"86":4,"161":1}}],["much",{"2":{"50":2,"58":1,"100":1,"138":1,"279":1}}],["must",{"2":{"1":1,"3":2,"4":1,"7":1,"15":4,"19":2,"23":2,"25":2,"28":1,"32":2,"34":3,"35":2,"36":3,"38":7,"39":4,"41":1,"42":1,"45":1,"47":2,"49":3,"57":1,"66":1,"73":2,"74":1,"76":2,"80":1,"83":3,"86":6,"99":4,"100":1,"137":1,"147":2,"153":1,"156":1,"267":1,"283":4}}],["major",{"0":{"105":1,"108":1,"112":1},"2":{"102":1,"109":1,"147":2}}],["madness",{"2":{"86":2}}],["made",{"2":{"49":1,"52":5,"80":1,"85":1,"102":1,"107":1,"110":1,"153":1,"164":1}}],["masses",{"2":{"283":1}}],["mass2",{"2":{"283":5}}],["mass1",{"2":{"283":6}}],["mass",{"2":{"283":14,"284":2,"285":1,"286":1,"288":1}}],["mass=1",{"2":{"283":3}}],["massachusetts",{"2":{"71":1}}],["master",{"2":{"137":1}}],["masked",{"2":{"292":2}}],["maskedcoupling",{"2":{"292":4,"293":2}}],["masks",{"2":{"73":1}}],["mask",{"2":{"36":10,"64":9,"73":8,"258":2,"259":3,"292":7,"293":8}}],["macrotools",{"2":{"187":1,"276":1,"282":1}}],["macro",{"2":{"56":4,"90":2,"95":1,"96":1,"118":1,"151":2,"202":1,"231":2}}],["machines",{"2":{"50":2}}],["machine",{"2":{"15":1,"58":1,"64":1,"66":1,"169":1,"188":1,"213":1,"283":1}}],["maeloss",{"2":{"50":2}}],["makiecore",{"2":{"276":1,"282":1}}],["makie",{"2":{"263":1,"276":1,"282":1}}],["making",{"2":{"49":1,"98":1,"100":1,"110":1,"118":1}}],["makes",{"2":{"23":1,"38":1,"50":1,"100":1,"117":1,"151":1}}],["make",{"2":{"10":2,"25":1,"38":1,"49":1,"52":6,"73":2,"80":1,"86":2,"98":2,"100":1,"126":1,"138":1,"153":1,"155":1,"165":1,"166":1,"173":1,"178":1,"188":1,"192":1,"200":1,"278":1,"285":2,"291":4}}],["magnitude",{"2":{"41":2,"86":2}}],["maxiters",{"2":{"253":4,"295":4}}],["maxiters=1000",{"2":{"287":1}}],["maxiters=500",{"2":{"225":1}}],["maxiters=epochs",{"2":{"225":1}}],["maximally",{"2":{"86":2}}],["maximum",{"2":{"40":1,"58":1,"63":1,"65":1,"86":5,"147":2,"252":2,"254":1}}],["maxo
ut",{"2":{"40":5}}],["maxpool",{"2":{"37":1,"47":2,"75":1,"147":2,"210":6}}],["max",{"2":{"37":4,"40":1,"50":2,"58":7,"75":1,"81":1,"86":1,"252":8,"254":1,"271":24,"274":2,"279":1,"291":3}}],["mathtexengine",{"2":{"276":1,"282":1}}],["mathematical",{"2":{"83":1}}],["mat",{"2":{"229":1}}],["matmul",{"2":{"61":2,"161":1,"170":1,"171":1}}],["matrices",{"2":{"39":1,"65":1,"81":1,"83":1}}],["matrix",{"2":{"5":3,"15":13,"39":6,"40":1,"50":6,"56":4,"61":3,"62":1,"65":2,"73":1,"74":3,"76":6,"78":2,"80":16,"81":5,"85":5,"86":12,"89":9,"90":1,"118":2,"126":5,"127":13,"133":3,"134":1,"135":1,"166":1,"167":1,"168":1,"169":5,"185":4,"188":13,"192":3,"224":2,"257":3,"296":1}}],["matches",{"2":{"203":2,"273":1}}],["matched",{"2":{"133":5}}],["matching",{"0":{"54":1},"2":{"86":4,"133":1}}],["match",{"2":{"23":1,"38":5,"54":4,"77":1,"86":1,"156":2,"158":1,"182":1,"253":7,"260":102,"295":2}}],["markersize=2",{"2":{"291":1,"296":1}}],["markersize=16",{"2":{"277":2}}],["markersize=12",{"2":{"264":1,"268":2,"284":1,"285":2,"288":4}}],["marker=",{"2":{"284":1,"285":2,"288":4}}],["marked",{"2":{"5":1,"56":1}}],["marks",{"2":{"95":2,"110":1}}],["mark",{"2":{"53":4,"56":2,"224":1,"285":1}}],["margin=2",{"2":{"50":1}}],["margin−y^",{"2":{"50":1}}],["margin",{"2":{"50":2}}],["martens",{"2":{"15":1}}],["mainly",{"2":{"84":1}}],["maintaining",{"2":{"66":1}}],["main",{"0":{"15":1},"2":{"94":1,"134":1,"135":1,"147":1,"153":1,"163":1,"201":1,"204":3,"224":2,"231":1,"237":18,"250":1,"260":2,"268":2,"274":3,"295":2}}],["mappedarrays",{"2":{"276":1,"282":1}}],["mapping",{"2":{"15":7,"50":1,"81":1}}],["maps",{"2":{"15":2,"37":3,"38":2,"50":1,"271":1}}],["map",{"0":{"23":1},"2":{"7":1,"23":4,"38":4,"52":2,"114":2,"115":1,"143":2,"144":1,"273":1,"279":1}}],["managing",{"0":{"191":1},"2":{"191":1}}],["management",{"0":{"148":1,"149":1,"150":1},"1":{"149":1,"150":1},"2":{"110":1,"149":4}}],["manager",{"2":{"88":1}}],["manage",{"2":{"56":1}}],["mandatorily",{"2":{"153":1}}],["many",{"2":{"58":2,"166":1,"18
8":1,"192":1}}],["manualmemory",{"2":{"187":1,"276":1,"282":1}}],["manual",{"0":{"150":1},"2":{"8":1,"20":1,"22":1,"55":1,"90":1,"128":1,"142":1,"147":1,"164":1,"179":1,"189":1}}],["manually",{"2":{"2":1,"45":1,"47":1,"137":1,"151":1,"189":1,"256":1}}],["manipulations",{"2":{"74":1}}],["manipulation",{"2":{"7":2,"100":1}}],["maybe",{"2":{"54":1,"192":1}}],["may",{"2":{"3":1,"42":2,"56":3,"58":1,"77":2,"80":3,"81":2,"86":2,"213":1,"220":1}}],["microcollections",{"2":{"276":1}}],["microsoftmpi",{"2":{"229":1}}],["mig",{"2":{"238":1,"246":1}}],["migrating",{"0":{"173":1},"1":{"174":1,"175":1,"176":1,"177":1}}],["migration",{"0":{"138":1},"1":{"139":1,"140":1},"2":{"153":1}}],["might",{"2":{"2":1,"8":1,"34":1,"40":1,"45":1,"47":1,"49":3,"55":2,"56":1,"60":1,"62":1,"86":1,"119":1,"124":1,"128":1,"153":2,"158":1,"162":1,"164":1,"165":2,"184":2,"188":1,"190":1,"213":1,"225":1,"236":1,"260":1,"274":1}}],["miopen",{"2":{"117":1}}],["midpoints",{"2":{"86":1}}],["middle",{"2":{"86":1}}],["minibatching",{"2":{"221":1,"225":1}}],["minibatch",{"2":{"200":1,"209":1,"230":1}}],["mini",{"2":{"194":1}}],["minimally",{"2":{"281":1}}],["minimal",{"2":{"141":1}}],["minimum",{"2":{"86":2,"252":4,"254":2}}],["minimized",{"2":{"286":1}}],["minimize",{"2":{"65":1}}],["minimizes",{"2":{"63":1,"196":1}}],["minor",{"2":{"164":1}}],["minded",{"2":{"86":1}}],["min",{"2":{"58":4,"81":1,"252":10,"254":2,"291":5}}],["milletari",{"2":{"50":1}}],["mirza",{"2":{"40":1}}],["mistakes",{"2":{"100":1}}],["mish",{"2":{"58":4}}],["mismatch",{"0":{"126":1},"2":{"54":3,"140":1,"158":1,"182":1}}],["misc",{"0":{"40":1}}],["miscellaneous",{"0":{"3":1,"57":1,"84":1},"2":{"34":1}}],["missings",{"2":{"276":1,"282":1}}],["missing",{"2":{"28":1,"41":1,"66":1,"90":2,"147":2}}],["mixing",{"2":{"19":1,"86":4}}],["mixed",{"2":{"18":2,"63":1}}],["s0025",{"2":{"283":2}}],["ssmproblems",{"2":{"276":1}}],["sgd",{"2":{"196":1}}],["sz",{"2":{"132":2}}],["szegedy",{"2":{"66":1}}],["sneak",{"2":{"86":1}}],["swap",{"2":{"86":1}}]
,["swapbatch",{"2":{"86":2}}],["swapping",{"2":{"86":1}}],["swish",{"2":{"58":9}}],["switching",{"0":{"44":1},"1":{"45":1},"2":{"164":4,"179":2}}],["switch",{"2":{"36":3,"110":1,"164":1,"192":1}}],["src",{"2":{"81":25,"126":1,"133":1,"165":2,"204":2,"237":3,"260":2,"274":1}}],["srivastava",{"2":{"64":1}}],["s5",{"2":{"78":2}}],["s4",{"2":{"78":4}}],["s3",{"2":{"78":6,"288":2}}],["s2",{"2":{"78":6,"268":2,"285":2,"288":2}}],["s1",{"2":{"78":6,"268":2,"285":2,"288":2}}],["slow",{"2":{"165":2}}],["slower",{"2":{"125":1}}],["sleefpirates",{"2":{"60":2,"282":1}}],["sliding",{"2":{"77":5,"86":3}}],["slight",{"2":{"60":1,"192":1}}],["slightly",{"2":{"34":1,"58":1,"86":1}}],["slighly",{"2":{"58":1}}],["slices",{"2":{"51":1,"80":2,"86":2}}],["slice",{"2":{"15":1,"38":2,"41":2,"66":2}}],["squares",{"2":{"86":1}}],["square",{"2":{"73":1,"188":2,"252":1}}],["squaredhingeloss",{"2":{"50":2}}],["squared",{"2":{"50":3,"86":1,"196":1}}],["sqrt",{"2":{"15":4,"35":2,"38":6,"39":3,"278":1,"284":2}}],["smarter",{"2":{"252":1}}],["smaller",{"2":{"58":1,"100":1}}],["small",{"2":{"47":1,"65":1,"100":1,"118":1,"158":1,"188":1,"207":1,"225":1}}],["sm",{"2":{"238":1,"246":1}}],["smodels",{"2":{"293":5}}],["smodel",{"2":{"165":3,"166":4,"167":2,"168":2,"170":2,"171":2,"172":2,"225":3,"293":2,"294":3}}],["smooth",{"2":{"50":2}}],["smoothing=0",{"2":{"50":3}}],["smoothing",{"2":{"50":16}}],["skipped",{"2":{"34":1,"283":1}}],["skip",{"2":{"34":1,"96":3,"118":1,"252":1}}],["skipconnection",{"2":{"34":4}}],["spill",{"2":{"260":22}}],["spilled",{"2":{"260":11}}],["spiralclassifiercompact",{"2":{"202":2,"204":1}}],["spiralclassifier",{"2":{"201":5,"204":1}}],["spiral",{"2":{"200":1}}],["spirals",{"2":{"198":1,"200":6}}],["spiritual",{"2":{"136":1}}],["spurious",{"0":{"158":1}}],["spectralnorm",{"2":{"100":1}}],["spectrum",{"2":{"86":1}}],["spectrograms",{"2":{"86":1}}],["spectrogram",{"2":{"86":5}}],["specialfunctionsext",{"2":{"229":1,"240":1,"276":1,"282":2}}],["specialfunctionschainrulescoreext",
{"2":{"187":1,"276":1,"282":1}}],["specialfunctions",{"2":{"187":2,"276":2,"282":2}}],["specialized",{"2":{"56":1,"60":1,"67":1,"190":1}}],["special",{"2":{"3":2,"30":2,"38":1,"50":1,"56":1,"65":2,"123":1,"237":3}}],["specifies",{"2":{"35":4,"37":3,"40":1,"76":5,"194":1}}],["specified",{"2":{"2":2,"15":2,"34":8,"39":1,"40":3,"42":1,"74":1,"78":4,"79":1,"96":1,"149":1,"186":1}}],["specifically",{"2":{"176":1,"188":1}}],["specifications",{"2":{"128":1}}],["specification",{"0":{"126":1},"2":{"278":1}}],["specific",{"2":{"4":2,"8":1,"39":1,"52":1,"123":1,"181":1}}],["specifying",{"2":{"15":1,"35":2,"37":3,"41":2,"56":1,"95":1}}],["specify",{"2":{"2":1,"36":2,"42":1,"50":1,"56":1,"58":3,"78":2,"82":1,"145":1,"278":1}}],["speech",{"2":{"83":1}}],["speedup",{"2":{"58":1,"213":2}}],["splatted",{"2":{"56":1}}],["splatting",{"2":{"56":1}}],["splittablesbase",{"2":{"276":1}}],["split=",{"2":{"209":1,"230":1,"272":1}}],["splitobs",{"2":{"200":1,"209":1,"229":1,"230":1}}],["split",{"2":{"51":1,"73":1,"84":1,"137":2,"200":1,"201":1,"209":2,"230":2}}],["spliced",{"2":{"38":2}}],["space",{"2":{"38":6,"86":1,"271":1}}],["spatial",{"2":{"35":5,"37":6,"77":2,"86":5,"252":1}}],["sparspak",{"2":{"229":1,"282":1}}],["sparsity=0",{"2":{"15":1}}],["sparsity",{"2":{"15":4}}],["sparseconnectivitytracerspecialfunctionsext",{"2":{"276":1,"282":2}}],["sparseconnectivitytracernnlibext",{"2":{"276":1,"282":2}}],["sparseconnectivitytracernanmathext",{"2":{"276":1,"282":2}}],["sparseconnectivitytracerlogexpfunctionsext",{"2":{"276":1,"282":1}}],["sparseconnectivitytracer",{"2":{"276":5,"282":5}}],["sparsematrixcoloringscolorsext",{"2":{"276":2,"282":2}}],["sparsematrixcolorings",{"2":{"276":2,"282":2}}],["sparseinversesubset",{"2":{"192":1,"276":1,"282":1}}],["sparsearraysext",{"2":{"192":2,"276":2,"282":3}}],["sparsearrays",{"2":{"192":1,"276":1,"282":1}}],["sparsevector",{"2":{"86":1}}],["sparsely",{"2":{"15":2,"39":1}}],["sparse",{"2":{"15":2,"58":1,"86":1}}],["scientific",{"2":{"283":1}}],["sci
entificmachinelearning",{"2":{"281":1}}],["scientifictypesbase",{"2":{"276":1}}],["scimljacobianoperators",{"2":{"229":1,"282":1}}],["scimlbasemakieext",{"2":{"276":2,"282":2}}],["scimlbasezygoteext",{"2":{"229":1,"282":1}}],["scimlbasechainrulescoreext",{"2":{"229":1,"276":1,"282":1}}],["scimlbase",{"2":{"229":3,"276":3,"282":4}}],["scimloperatorssparsearraysext",{"2":{"229":1,"276":1,"282":1}}],["scimloperatorsstaticarrayscoreext",{"2":{"229":1,"276":1,"282":1}}],["scimloperators",{"2":{"229":3,"276":3,"282":3}}],["scimlstructures",{"2":{"229":1,"276":1,"282":1}}],["scimlsensitivity",{"2":{"222":1,"229":3,"237":26,"282":3}}],["sciml",{"2":{"55":1,"56":1,"90":1,"100":2,"194":1}}],["scratch",{"2":{"196":1,"201":1,"228":1,"276":1,"282":1}}],["script",{"2":{"147":1}}],["scopedvalues",{"2":{"192":1,"282":1}}],["scores",{"2":{"73":10}}],["score",{"2":{"50":1}}],["scattered",{"2":{"86":1}}],["scatter",{"0":{"81":1},"2":{"81":17,"86":3,"264":1,"268":2,"277":2,"284":1,"285":2,"288":4,"291":1,"296":1}}],["scalable",{"2":{"170":1}}],["scalars",{"2":{"58":1,"86":1}}],["scalar",{"0":{"159":1},"2":{"3":2,"67":1,"84":2,"86":1,"159":1,"193":1,"250":1}}],["scalekeepaspect",{"2":{"272":1}}],["scales",{"2":{"78":3}}],["scale=ones32",{"2":{"41":4}}],["scale",{"2":{"39":2,"41":20,"42":6,"58":1,"66":15,"78":17,"86":2,"127":1,"146":1,"165":1,"292":9}}],["scaled",{"2":{"15":3,"58":2,"86":1}}],["scaling",{"2":{"15":2,"64":2,"77":1,"78":2,"283":6}}],["schwarzschild",{"2":{"284":1}}],["school",{"2":{"71":1}}],["schemes",{"2":{"12":1,"185":1}}],["s",{"2":{"10":1,"34":1,"36":2,"39":1,"41":3,"42":1,"54":3,"55":1,"56":1,"58":1,"67":1,"69":1,"78":4,"80":4,"82":1,"84":1,"85":1,"86":17,"89":1,"90":1,"100":1,"118":2,"119":1,"122":1,"126":1,"127":1,"138":2,"143":1,"153":1,"154":1,"158":1,"164":2,"165":3,"167":1,"168":1,"169":1,"172":3,"174":1,"187":4,"188":5,"193":1,"196":2,"201":5,"203":1,"205":1,"221":1,"225":2,"234":1,"264":3,"265":1,"268":1,"274":102,"278":2,"284":3,"291":1,"295":12}}],["symmetr
ically",{"2":{"76":1}}],["symmetric",{"2":{"76":9,"86":2}}],["symbolicindexinginterface",{"2":{"229":1,"276":1,"282":1}}],["symbol",{"2":{"10":1,"24":1,"41":1,"56":1,"118":1,"119":1,"155":1,"205":1,"237":29,"278":2,"293":2}}],["syntax",{"2":{"34":1,"56":1,"188":2,"189":1}}],["synchronized",{"2":{"137":2}}],["synchronize",{"2":{"30":2,"137":6,"138":2,"140":2}}],["systems",{"2":{"64":1,"89":1,"100":2,"184":1}}],["system",{"2":{"2":1,"3":1,"77":1,"118":1,"155":1,"192":1,"247":1,"284":1,"285":1}}],["saving",{"0":{"205":1},"2":{"205":1}}],["saveat=tsteps",{"2":{"284":1,"285":1,"286":1,"288":1}}],["saveat=t",{"2":{"223":1,"225":1,"226":1}}],["save",{"2":{"147":1,"205":4,"232":2,"237":26}}],["saves",{"2":{"86":1}}],["sake",{"2":{"201":1,"203":1,"252":1}}],["sanity",{"2":{"172":1}}],["saw",{"2":{"119":1}}],["say",{"2":{"100":1,"143":1}}],["safely",{"2":{"285":1}}],["safe",{"2":{"84":2,"86":1}}],["sampler",{"2":{"278":1}}],["samples=batchsize",{"2":{"274":1}}],["samples",{"2":{"196":15,"204":7,"272":2,"273":3,"274":6,"278":2,"279":2,"291":2,"295":17}}],["sample",{"2":{"82":6,"86":5,"169":3,"172":1,"271":1,"278":3,"279":1,"293":2,"296":1}}],["sampled",{"2":{"58":1,"82":1,"278":2}}],["sampling",{"0":{"82":1},"2":{"82":3,"252":1,"271":1,"276":1}}],["samepad",{"2":{"35":2,"37":3}}],["same",{"0":{"135":1},"2":{"3":1,"5":2,"7":3,"15":1,"18":2,"23":1,"24":1,"25":1,"34":1,"35":2,"37":3,"38":3,"40":2,"41":1,"49":2,"51":2,"54":2,"60":2,"62":1,"64":1,"66":3,"68":1,"73":1,"75":3,"76":1,"77":3,"80":1,"82":4,"85":1,"86":2,"90":1,"99":2,"118":1,"137":1,"154":1,"166":1,"172":2,"189":1,"191":1,"195":1,"204":3,"207":1,"231":1,"260":1,"273":1}}],["satisfactory",{"2":{"279":1}}],["satisfies",{"2":{"40":1,"47":2}}],["satisfied",{"2":{"8":1,"267":1}}],["satisfying",{"2":{"35":2,"37":6,"153":1}}],["said",{"2":{"21":1}}],["saxe",{"2":{"15":1}}],["sortingalgorithms",{"2":{"276":1,"282":1}}],["sorted",{"2":{"101":1}}],["soln",{"2":{"283":13,"284":2,"285":2,"288":2}}],["soln2orbit",{"2":{"283":3}}],["
solution",{"2":{"252":5,"254":1}}],["solutions",{"2":{"15":1}}],["sol",{"2":{"225":2,"226":2}}],["solver",{"2":{"231":6,"235":3,"237":7}}],["solver=tsit5",{"2":{"231":2,"235":1}}],["solvers",{"2":{"100":1}}],["solves",{"2":{"225":1}}],["solve",{"2":{"83":1,"223":1,"225":4,"226":1,"231":2,"235":1,"247":1,"284":1,"285":1,"286":1,"287":1,"288":1}}],["softfail",{"2":{"97":2}}],["soft",{"2":{"96":4}}],["software",{"2":{"71":2}}],["softsign",{"2":{"58":6}}],["softshrink",{"2":{"58":6}}],["softplus",{"2":{"58":6}}],["softmax",{"0":{"74":1},"2":{"50":5,"73":3,"74":13,"83":1}}],["so",{"2":{"58":1,"60":1,"74":1,"80":1,"83":2,"86":9,"89":1,"164":3,"165":1,"166":1,"174":1,"224":1,"231":2,"235":2,"249":1,"250":1,"267":1}}],["society",{"2":{"50":1}}],["sooner",{"2":{"21":1}}],["sometimes",{"2":{"188":2}}],["something",{"2":{"124":1}}],["somewhere",{"2":{"45":1,"47":1}}],["some",{"0":{"223":1,"283":1},"2":{"8":1,"24":1,"38":1,"40":1,"45":2,"56":1,"76":5,"77":1,"81":1,"84":1,"86":5,"100":1,"102":1,"114":1,"120":1,"122":1,"123":1,"124":1,"125":1,"127":1,"136":1,"140":1,"145":1,"154":1,"158":1,"161":1,"163":2,"164":7,"165":1,"174":1,"188":1,"234":1,"242":1,"252":1}}],["source=code",{"2":{"147":1}}],["source",{"2":{"1":1,"2":3,"3":8,"4":2,"5":1,"7":3,"8":7,"9":2,"10":5,"11":1,"15":8,"16":24,"18":2,"19":1,"22":5,"23":1,"24":2,"25":1,"27":2,"28":3,"29":2,"30":4,"31":1,"32":1,"34":6,"35":2,"36":3,"37":9,"38":6,"39":4,"40":7,"41":5,"42":2,"45":3,"47":3,"49":7,"50":15,"51":8,"52":6,"53":4,"54":1,"55":1,"56":3,"57":1,"58":27,"60":2,"61":1,"62":2,"63":1,"64":2,"65":1,"66":4,"67":1,"73":3,"74":2,"75":4,"76":6,"77":7,"78":9,"79":2,"80":6,"81":9,"82":2,"83":1,"84":4,"85":2,"86":50,"95":2,"96":2,"97":1,"107":1,"187":2}}],["sixel",{"2":{"276":1,"282":1}}],["situations",{"2":{"141":1}}],["sided",{"2":{"283":2}}],["sides",{"2":{"76":5,"86":4}}],["side",{"2":{"56":1,"63":1,"73":1,"76":2}}],["siamese",{"2":{"50":1}}],["siamesecontrastiveloss",{"2":{"50":3}}],["sig",{"2":{"278":2}}],["sigma",{"2":{"58
":2}}],["sigmoid",{"2":{"50":3,"58":20,"84":3,"201":1,"202":1,"271":1,"278":2}}],["signeddistancefields",{"2":{"276":1,"282":1}}],["signify",{"2":{"102":1,"109":1}}],["significantly",{"2":{"151":1,"234":1}}],["significant",{"2":{"8":1,"52":5,"213":1,"236":1}}],["signal",{"2":{"86":4}}],["signature",{"2":{"23":1,"64":1}}],["si+1",{"2":{"35":1,"37":3}}],["silently",{"2":{"25":1,"84":1}}],["size=1000",{"2":{"200":1}}],["size=",{"2":{"78":4,"274":1,"296":1}}],["sized",{"2":{"51":1,"65":1,"66":1,"76":1,"79":3}}],["sizes",{"2":{"15":2,"65":1,"86":1,"107":1,"258":4}}],["size",{"0":{"11":1},"2":{"11":2,"15":24,"16":72,"19":2,"34":1,"35":16,"37":48,"38":11,"39":16,"40":8,"41":7,"42":8,"45":1,"47":1,"56":6,"60":2,"66":3,"73":16,"74":1,"75":7,"76":6,"77":14,"78":27,"79":2,"80":11,"81":1,"83":1,"86":16,"117":1,"132":1,"170":2,"171":2,"188":1,"196":2,"197":1,"200":7,"206":1,"209":3,"214":1,"227":1,"230":3,"231":2,"234":1,"238":1,"241":2,"246":1,"255":1,"260":2,"261":1,"269":1,"272":2,"273":5,"274":6,"275":1,"278":4,"279":1,"280":1,"283":4,"289":1,"291":1,"292":1,"293":1,"295":1,"297":1}}],["singular",{"0":{"153":1},"2":{"153":1,"154":2}}],["singleton",{"2":{"7":3,"35":2,"37":3,"86":4,"107":2,"108":1}}],["single",{"2":{"7":2,"8":1,"35":6,"37":9,"38":13,"49":6,"50":1,"56":1,"62":1,"63":1,"78":1,"81":2,"86":3,"89":2,"90":1,"107":1,"119":3,"154":1,"166":1,"169":1,"174":1,"188":1,"196":1,"204":1,"212":1,"234":1,"245":1,"253":1,"260":1,"268":1,"273":1,"274":1,"278":1,"295":1}}],["sinθ",{"2":{"86":1}}],["sin",{"2":{"86":1,"283":1,"291":2}}],["since",{"2":{"8":1,"15":1,"23":1,"56":1,"68":1,"80":1,"100":1,"107":1,"130":1,"137":1,"155":1,"194":1,"201":1,"203":1,"225":2,"249":1,"257":1,"268":1}}],["simultaneously",{"2":{"155":1}}],["simulate",{"2":{"127":1,"283":1,"284":1,"285":2}}],["simulating",{"0":{"284":1},"2":{"15":1}}],["simdtypes",{"2":{"187":1,"276":1,"282":1}}],["simd",{"2":{"67":1,"86":1,"276":1,"282":1}}],["simplicity",{"2":{"169":1,"203":1,"252":1}}],["simplified",{"2":{"153":
1}}],["simplifies",{"2":{"81":2}}],["simpletraits",{"2":{"276":1,"282":1}}],["simplebufferstream",{"2":{"229":1}}],["simpleunpack",{"2":{"229":1,"276":1,"282":1}}],["simplechainslayer",{"2":{"47":2,"210":1}}],["simplechains",{"0":{"207":1},"1":{"208":1,"209":1,"210":1,"211":1,"212":1,"213":1,"214":1},"2":{"47":14,"100":1,"207":3,"208":2,"210":1,"213":2}}],["simple",{"0":{"47":1,"198":1},"1":{"199":1,"200":1,"201":1,"202":1,"203":1,"204":1,"205":1,"206":1},"2":{"38":1,"47":4,"49":1,"56":1,"64":2,"67":1,"118":1,"149":2,"174":1,"176":1,"187":1,"193":1,"210":1,"213":1,"221":1}}],["simplest",{"2":{"34":1}}],["simply",{"2":{"8":1,"49":1,"53":1,"54":1,"56":1,"60":1,"68":1,"86":2,"119":1,"125":1,"135":1,"145":1,"147":1,"155":1,"242":1,"278":1}}],["similarly",{"2":{"80":1,"154":1,"271":1}}],["similarity",{"2":{"5":1}}],["similar",{"2":{"2":1,"3":1,"7":1,"38":1,"39":1,"50":1,"51":1,"52":1,"66":1,"118":1,"119":1,"155":1,"163":1,"173":1,"192":1,"273":1,"274":1,"285":1}}],["ship",{"2":{"125":1}}],["shi",{"2":{"78":1}}],["shifted",{"2":{"15":1}}],["shift",{"2":{"15":4,"41":4,"66":1,"292":7}}],["shuffle=true",{"2":{"200":2,"209":1,"230":1,"241":1,"253":2,"272":1,"291":1}}],["shuffle=false",{"2":{"5":1,"200":1,"209":1,"230":1,"241":1}}],["shuffle",{"2":{"42":1,"78":4,"200":2,"209":2,"230":2}}],["shuffling",{"2":{"42":1,"78":1}}],["shaderabstractions",{"2":{"276":1,"282":1}}],["shape=",{"2":{"274":1}}],["shapedaxis",{"2":{"237":30}}],["shaped",{"2":{"37":9}}],["shape",{"2":{"35":1,"38":16,"40":1,"41":17,"82":8,"86":7,"126":4,"147":2,"188":1,"196":4,"271":11}}],["shate",{"2":{"25":1}}],["sharing",{"2":{"25":9}}],["sharedarrays",{"2":{"276":1,"282":1}}],["shared",{"2":{"25":1,"66":2,"110":1}}],["share",{"2":{"25":4}}],["shooting",{"2":{"225":1}}],["shortcomings",{"0":{"141":1}}],["shortcut",{"2":{"8":1,"34":1}}],["shorter",{"2":{"86":2}}],["short",{"2":{"38":1,"86":3,"192":1}}],["shorthand",{"2":{"8":1}}],["shown",{"2":{"278":1}}],["showoff",{"2":{"276":1,"282":1}}],["showcasing",{"2"
:{"220":1}}],["showing",{"2":{"172":1}}],["shows",{"2":{"155":1,"168":1,"169":1,"172":1,"279":1}}],["showerror",{"2":{"133":1}}],["show",{"2":{"5":2,"84":2,"89":1,"165":1,"189":1,"221":1,"276":1}}],["shouldn",{"2":{"49":1,"107":1,"155":1,"167":1}}],["should",{"0":{"130":1},"2":{"4":2,"22":1,"25":1,"27":2,"35":5,"37":6,"38":1,"41":1,"42":1,"47":1,"49":2,"50":1,"52":1,"56":1,"62":1,"67":1,"77":1,"82":2,"84":2,"86":4,"110":1,"111":1,"142":1,"148":1,"153":2,"154":2,"155":1,"158":1,"173":1,"174":2,"190":1,"191":1,"228":1,"265":1,"285":1}}],["stubs",{"2":{"276":1,"282":1}}],["stime",{"2":{"212":2,"234":2,"245":2}}],["stick",{"2":{"192":1}}],["still",{"0":{"177":1},"2":{"2":1,"8":1,"47":1,"84":1,"126":1,"135":1,"153":1,"154":1,"201":1,"237":1,"265":1,"279":1}}],["stop=6",{"2":{"279":4}}],["stopping",{"2":{"260":2}}],["stochastic",{"2":{"99":1,"100":1,"196":1}}],["storage",{"2":{"86":6}}],["storing",{"2":{"66":1,"86":2,"134":1,"174":1}}],["stores",{"2":{"36":1,"39":1,"55":1,"86":2,"140":1,"174":1,"188":1,"231":1,"260":11}}],["stored",{"2":{"35":1,"49":3,"78":2,"147":2}}],["store",{"2":{"30":2,"39":1,"86":2,"99":2,"174":1,"225":1,"277":1,"285":1,"286":1}}],["stencils",{"2":{"283":2}}],["stencil",{"2":{"86":1}}],["stepnorm=0",{"2":{"287":1}}],["steprangelen",{"2":{"224":2}}],["steps",{"2":{"49":1,"89":1,"119":1,"278":1}}],["step",{"2":{"49":11,"83":1,"89":1,"90":1,"119":3,"196":1,"204":1,"212":1,"234":1,"245":1,"253":1,"260":1,"268":3,"274":1,"278":3,"295":1}}],["stft",{"2":{"86":10}}],["styledstringsext",{"2":{"276":1,"282":1}}],["style",{"2":{"84":1}}],["stylization",{"2":{"41":1,"66":1}}],["std=1e",{"2":{"285":3}}],["std=0",{"2":{"15":1}}],["stdlib",{"2":{"191":1}}],["stdout",{"2":{"133":1}}],["std",{"2":{"15":5,"66":1,"278":1}}],["stages",{"2":{"296":3}}],["stage",{"2":{"164":1}}],["start=false",{"2":{"232":1}}],["started",{"0":{"87":1,"101":1},"1":{"88":1,"89":1,"90":1,"91":1,"92":1,"93":1},"2":{"173":1}}],["start",{"2":{"86":1,"118":1,"187":1,"237":13,"273":6,"274":3,"2
76":1,"295":2}}],["starting",{"2":{"52":5,"105":1,"119":1,"188":1,"235":1}}],["stackviews",{"2":{"276":1,"282":1}}],["stacktraces",{"2":{"125":1}}],["stacktrace",{"2":{"76":1,"114":2,"126":1}}],["stack",{"2":{"38":2,"225":1,"252":6,"254":2,"272":1}}],["stackedrnncells",{"2":{"38":1}}],["statsbase",{"2":{"276":1,"282":1}}],["statsapi",{"2":{"276":1,"282":1}}],["statsfunschainrulescoreext",{"2":{"263":1,"276":1,"282":1}}],["statsfunsinversefunctionsext",{"2":{"263":1,"276":1,"282":1}}],["statsfuns",{"2":{"263":3,"276":3,"282":3}}],["stats",{"2":{"49":2,"89":2,"253":7}}],["stats=false",{"2":{"41":5}}],["stats=true",{"2":{"34":2,"41":7,"126":1,"127":2}}],["statisticaltraits",{"2":{"276":1}}],["statistics",{"2":{"15":2,"41":7,"49":2,"66":2,"117":1,"187":1,"192":1,"208":1,"229":1,"248":1,"256":1,"263":1,"267":1,"270":1,"276":2,"278":1,"282":2,"290":1}}],["staticint",{"2":{"210":2}}],["staticarraysext",{"2":{"276":1,"282":2}}],["staticarrayschainrulescoreext",{"2":{"187":1,"276":1,"282":1}}],["staticarrayscore",{"2":{"187":1,"276":1,"282":1}}],["staticarraysstatisticsext",{"2":{"187":1,"276":1,"282":1}}],["staticarrays",{"2":{"187":3,"276":3,"282":3}}],["staticarrayinterfaceoffsetarraysext",{"2":{"276":2,"282":1}}],["staticarrayinterfacestaticarraysext",{"2":{"187":1,"276":1,"282":1}}],["staticarrayinterface",{"2":{"187":2,"276":3,"282":3}}],["static",{"2":{"47":1,"51":1,"55":1,"67":1,"89":8,"118":9,"133":2,"187":1,"210":4,"237":58,"268":6,"276":1,"282":1}}],["staticbool",{"2":{"24":1,"51":1,"64":1}}],["staticsymbol",{"2":{"24":1,"51":2}}],["stated",{"2":{"58":1}}],["statements",{"2":{"56":1}}],["statefulrealnvp",{"2":{"293":3}}],["statefulrecurrentcell",{"2":{"38":2}}],["statefulneuralode",{"2":{"235":4,"236":1,"237":5}}],["statefulluxlayers",{"2":{"164":1}}],["statefulluxlayer",{"2":{"55":1,"115":4,"164":7,"165":1,"166":1,"167":1,"168":1,"170":1,"171":1,"172":1,"225":2,"235":2,"251":7,"253":2,"278":2,"285":2,"293":3,"294":1,"295":1}}],["stateful",{"0":{"55":1,"235":1,"23
6":1},"2":{"38":2,"165":1,"166":1,"168":2,"237":6}}],["state=false",{"2":{"38":4}}],["state=zeros32",{"2":{"38":3}}],["stateless",{"2":{"8":1,"40":1}}],["statelength",{"2":{"7":1,"10":1,"153":4,"154":1}}],["state",{"0":{"156":1},"2":{"7":1,"8":3,"10":5,"22":1,"34":6,"36":4,"38":64,"40":1,"41":4,"45":1,"49":7,"55":4,"56":7,"64":2,"66":1,"89":8,"90":4,"99":3,"100":3,"119":4,"137":5,"153":5,"154":3,"156":4,"174":1,"201":3,"204":7,"212":7,"225":6,"245":11,"253":6,"260":13,"267":1,"274":7,"278":2,"285":1,"295":5}}],["states",{"0":{"10":1},"2":{"3":1,"7":4,"8":1,"10":2,"22":4,"23":6,"24":1,"25":1,"34":16,"36":3,"38":5,"40":2,"41":7,"45":1,"47":1,"49":3,"50":2,"53":2,"54":4,"55":1,"56":5,"89":2,"90":2,"107":2,"119":1,"126":2,"127":1,"132":1,"137":2,"143":3,"146":1,"151":1,"153":5,"154":3,"156":1,"174":3,"196":2,"201":2,"204":2,"205":1,"210":2,"212":2,"231":1,"234":2,"245":4,"253":1,"260":5,"265":1,"267":1,"268":6,"274":2,"278":1,"285":1,"295":1}}],["standard",{"2":{"15":3,"16":6,"35":2,"41":1,"89":1,"165":1,"285":1}}],["stabilities",{"2":{"236":1}}],["stability",{"0":{"237":1},"2":{"8":1,"41":4,"50":1,"56":1,"66":4,"74":1,"102":1,"109":1,"183":1}}],["stablerng",{"2":{"165":3,"166":3,"167":2,"168":2,"172":3}}],["stablerngs",{"2":{"13":1,"164":1,"276":1,"282":1}}],["stablehlo",{"2":{"147":44}}],["stable",{"2":{"49":1,"52":5,"56":2,"58":3,"74":1,"84":1,"183":1,"203":1,"231":1,"237":1}}],["st",{"2":{"7":2,"8":4,"10":6,"22":2,"23":6,"25":1,"34":6,"38":1,"40":14,"45":4,"47":2,"49":3,"50":2,"51":2,"54":1,"55":4,"56":19,"89":4,"90":6,"115":2,"118":14,"119":6,"126":5,"127":7,"130":3,"132":7,"133":2,"134":10,"135":2,"137":2,"143":9,"144":7,"145":6,"146":2,"147":2,"153":7,"154":11,"155":3,"156":2,"158":5,"165":7,"166":7,"167":6,"168":6,"170":2,"171":2,"172":9,"173":3,"174":6,"196":6,"201":10,"203":4,"204":8,"205":3,"211":4,"212":3,"225":5,"231":4,"232":4,"233":4,"234":2,"235":7,"237":16,"244":4,"245":2,"251":7,"253":3,"259":4,"260":5,"267":1,"268":1,"271":13,"273":10,"274":4,"278":2,
"285":2,"293":7,"294":3,"295":2}}],["strain",{"2":{"283":5}}],["strange",{"2":{"86":4}}],["strokewidth=2",{"2":{"264":1,"268":2,"277":2,"285":2,"288":4}}],["strokecolor=",{"2":{"264":1,"268":2,"277":2}}],["strongly",{"2":{"190":1}}],["stream",{"2":{"260":11}}],["strength",{"2":{"50":1}}],["structio",{"2":{"282":1}}],["structtypes",{"2":{"229":1}}],["structarraysext",{"2":{"276":1,"282":1}}],["structarrayssparsearraysext",{"2":{"192":1,"276":1,"282":1}}],["structarraysstaticarraysext",{"2":{"192":1,"276":1,"282":1}}],["structarraysgpuarrayscoreext",{"2":{"192":1,"229":1,"263":2,"276":1,"282":1}}],["structarrayslinearalgebraext",{"2":{"192":1,"276":1,"282":1}}],["structarraysadaptext",{"2":{"192":1,"276":1,"282":1}}],["structarrays",{"2":{"192":6,"229":1,"263":1,"276":6,"282":6}}],["struct",{"2":{"132":1,"151":1,"153":1,"174":3,"205":1,"224":1,"271":1,"272":1,"292":2,"293":1}}],["structs",{"2":{"53":4}}],["structures",{"2":{"52":2,"100":1,"158":1}}],["structured",{"2":{"38":3,"155":1,"242":1}}],["structure",{"2":{"3":5,"7":2,"8":3,"18":2,"23":4,"25":1,"30":1,"39":1,"52":3,"126":5,"127":13,"153":2,"154":2,"156":1,"174":1,"234":1}}],["stridedviews",{"2":{"229":1}}],["stridedarray",{"2":{"84":1,"86":2}}],["strided",{"2":{"80":1,"86":5}}],["strides",{"2":{"52":5,"147":2}}],["stridearray",{"2":{"47":1}}],["stridearrayscore",{"2":{"47":1,"187":1,"276":1,"282":1}}],["stride=2",{"2":{"77":1,"271":3}}],["stride=k",{"2":{"75":4}}],["stride=window",{"2":{"37":3}}],["stride=1",{"2":{"35":2,"77":1,"271":3}}],["stride",{"2":{"35":5,"37":6,"75":4,"77":3,"80":5,"86":1,"147":2}}],["stringmanipulation",{"2":{"276":1}}],["stringencodings",{"2":{"229":1}}],["strings",{"2":{"115":1}}],["string=",{"2":{"57":2}}],["string",{"2":{"1":2,"3":1,"57":1,"95":1,"147":1}}],["suitesparse",{"2":{"192":2,"276":2,"282":2}}],["surprise",{"2":{"158":1}}],["surpassing",{"2":{"15":2}}],["sure",{"2":{"155":1,"285":1}}],["supertype",{"2":{"154":1}}],["super",{"2":{"78":2,"283":1}}],["suppose",{"2":{"279":1}}
],["supposed",{"2":{"100":1,"114":1,"183":1}}],["supporting",{"2":{"122":1}}],["support",{"0":{"4":1,"69":1,"70":1,"92":1,"93":1,"123":1,"180":1},"2":{"3":4,"4":4,"7":2,"22":1,"37":3,"50":2,"52":1,"53":1,"55":1,"63":2,"65":2,"86":2,"89":1,"93":5,"100":3,"105":2,"109":1,"120":1,"121":1,"122":2,"124":2,"141":1,"148":3,"155":1,"176":1,"257":1}}],["supports",{"2":{"3":1,"56":1,"92":1,"117":1,"122":1,"176":1,"180":1,"188":1,"189":1,"249":1}}],["supported",{"0":{"13":1},"2":{"2":1,"3":2,"18":2,"19":1,"37":3,"42":1,"45":1,"49":6,"54":1,"67":1,"79":1,"80":2,"86":3,"89":1,"99":2,"109":1,"122":2,"123":2,"124":1,"148":2,"163":1,"185":2}}],["supplied",{"2":{"28":1,"34":1,"76":1}}],["supply",{"2":{"28":1}}],["subprocess",{"2":{"260":11}}],["subtracts",{"2":{"188":1}}],["subtract",{"2":{"188":1}}],["subtyping",{"2":{"154":1}}],["subtype",{"2":{"153":1,"154":2}}],["subtlety",{"2":{"86":1}}],["subject",{"2":{"86":1}}],["subsequent",{"2":{"78":1}}],["subclass",{"2":{"77":2}}],["subarray",{"2":{"51":1}}],["suboptimal",{"2":{"49":1}}],["sum",{"2":{"56":5,"74":3,"77":1,"84":1,"95":3,"96":2,"127":2,"165":1,"166":1,"167":2,"168":1,"170":1,"171":1,"173":2,"174":2,"203":1,"211":1,"233":1,"244":1,"251":3,"273":1,"283":1,"294":1}}],["sumit",{"2":{"50":1}}],["summary",{"2":{"5":8,"174":1,"278":1}}],["suggests",{"2":{"40":1}}],["success",{"2":{"287":1}}],["successor",{"2":{"136":1}}],["successfully",{"2":{"1":1,"187":1,"192":12,"199":6,"229":38,"240":14,"263":9,"276":20,"282":62}}],["such",{"2":{"4":2,"6":1,"8":1,"35":2,"37":6,"56":1,"73":1,"79":1,"80":2,"83":1,"84":1,"86":7,"100":1,"118":1,"128":1,"186":1,"202":1,"242":1}}],["seq",{"2":{"200":1}}],["sequentially",{"2":{"34":2,"38":5}}],["sequences",{"2":{"200":1}}],["sequence=true",{"2":{"38":2}}],["sequence",{"2":{"34":1,"38":9,"73":2,"83":2,"86":2,"200":7,"201":3}}],["several",{"2":{"178":1,"192":1}}],["seven",{"2":{"86":1}}],["separation",{"2":{"100":1}}],["separating",{"2":{"100":1}}],["separately",{"2":{"189":1}}],["separate",{"2":{"77":
1,"86":2}}],["server",{"2":{"197":1,"206":1,"213":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["serves",{"2":{"45":1}}],["service",{"2":{"90":1,"204":1,"213":1,"253":8,"260":103,"274":1,"295":3}}],["serious",{"2":{"247":1}}],["seriously",{"2":{"89":1}}],["serializes",{"2":{"137":1}}],["serializationext",{"2":{"276":1}}],["serialization",{"2":{"137":1,"192":1,"205":1,"276":1,"282":1}}],["series",{"2":{"69":2}}],["sergey",{"2":{"66":1}}],["seaborn",{"2":{"279":3}}],["searching",{"2":{"58":1}}],["seamlessly",{"2":{"49":1}}],["selu",{"2":{"58":4,"64":1}}],["self",{"2":{"58":4,"64":1}}],["selecting",{"2":{"157":1}}],["selection",{"0":{"181":1},"2":{"2":1,"3":1,"149":1,"150":1,"181":1}}],["selectdim",{"2":{"40":1}}],["select",{"2":{"2":4,"147":4,"148":1}}],["selected",{"0":{"220":1},"2":{"2":3,"3":1,"114":1}}],["selects",{"2":{"2":1,"70":1}}],["segment",{"2":{"58":1}}],["segmentation",{"2":{"50":2}}],["sensealg=reversediffadjoint",{"2":{"234":1}}],["sensealg=gaussadjoint",{"2":{"234":1}}],["sensealg=interpolatingadjoint",{"2":{"225":1,"232":1,"234":1}}],["sensealg",{"2":{"232":1,"234":1,"237":13}}],["sensitivities",{"2":{"231":1,"234":1}}],["sensible",{"2":{"100":1}}],["sensibly",{"2":{"51":1}}],["sensical",{"2":{"56":1}}],["send",{"2":{"101":1}}],["sendbuf",{"2":{"30":6}}],["sendrecvbuf",{"2":{"30":6}}],["seyed",{"2":{"50":1}}],["sec",{"2":{"278":1}}],["seconds",{"2":{"187":1,"192":12,"199":6,"229":38,"240":14,"263":9,"276":20,"278":2,"282":62}}],["second",{"2":{"34":1,"64":1,"86":1,"188":1,"194":1,"283":2}}],["section",{"2":{"34":2,"47":1,"88":1,"128":1,"166":1,"179":1,"189":1,"283":1}}],["semvar",{"2":{"115":1}}],["semver",{"2":{"21":1}}],["semantic",{"2":{"165":2}}],["semantically",{"2":{"34":1,"74":2}}],["semantics",{"2":{"5":2,"158":1}}],["semi",{"2":{"15":2}}],["seems",{"2":{"287":2}}],["seen",{"2":{"80":1}}],["seed=0",{"2":{"274":1}}],["seeding",{"2":{"89":1}}],["seed",{"2":{"34":1,"40":4,"89":1,"90":1,"100":1,"146"
:1,"153":1,"155":1,"187":1,"191":2,"196":1,"232":1,"253":3,"260":1,"264":1,"274":2,"277":1,"295":1}}],["see",{"2":{"3":1,"7":3,"8":3,"11":1,"22":1,"24":2,"25":1,"34":4,"36":3,"38":5,"40":2,"41":4,"42":1,"45":1,"47":2,"49":1,"50":1,"52":2,"54":1,"55":2,"56":2,"57":1,"58":17,"61":1,"62":2,"64":2,"66":4,"73":3,"74":3,"75":3,"76":5,"77":3,"78":1,"80":1,"81":4,"83":1,"84":1,"86":7,"88":1,"90":2,"96":1,"100":2,"104":1,"107":3,"109":1,"114":2,"116":2,"118":1,"119":1,"125":1,"126":3,"127":2,"136":1,"140":1,"153":1,"154":1,"158":2,"162":1,"179":1,"182":1,"183":1,"185":1,"188":2,"189":1,"201":1,"213":1,"236":1,"279":2,"283":2}}],["session",{"2":{"1":1,"57":1,"119":1}}],["setprogress",{"2":{"276":1}}],["sets",{"2":{"95":1,"118":2,"183":2}}],["setindexing",{"2":{"63":1,"65":1}}],["setups",{"2":{"168":1}}],["setup",{"2":{"8":1,"23":1,"25":1,"34":1,"40":4,"45":1,"47":1,"56":6,"89":1,"90":1,"118":1,"119":1,"126":2,"127":2,"133":1,"135":1,"137":1,"143":1,"146":1,"147":1,"153":2,"154":1,"155":1,"158":1,"165":1,"166":1,"167":1,"168":1,"172":1,"173":1,"174":1,"196":1,"204":1,"212":1,"225":1,"232":1,"245":1,"253":1,"260":1,"267":1,"274":1,"278":1,"285":1,"295":1}}],["setfield",{"2":{"7":1,"107":1,"187":1,"276":1,"282":1}}],["setting",{"0":{"286":1},"2":{"4":2,"35":2,"38":4,"39":3,"80":1,"86":4,"127":1,"160":1,"164":1,"183":2,"184":1}}],["set",{"2":{"1":2,"2":1,"4":6,"15":1,"22":2,"25":1,"28":2,"35":2,"36":1,"38":18,"42":2,"45":2,"47":1,"50":2,"52":1,"55":1,"56":3,"57":5,"64":4,"66":4,"67":1,"70":3,"73":1,"86":2,"95":1,"96":1,"102":1,"118":2,"119":2,"127":2,"140":1,"148":1,"149":1,"160":2,"164":1,"165":4,"178":2,"179":1,"180":3,"181":3,"183":2,"196":3,"208":1,"260":2,"274":2,"276":1}}],["ogg",{"2":{"276":1,"282":1}}],["os",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["ok",{"2":{"167":1}}],["oops",{"2":{"158":1}}],["oh",{"2":{"127":1}}],["own",{"2":{"86":1,"141":1,"155":1,"188":1}}],["our",{"2":{"86":2,"89":1,"90":1,"10
0":1,"123":1,"127":3,"147":2,"158":2,"165":1,"169":1,"176":1,"187":1,"188":1,"190":1,"191":1,"193":1,"196":2,"200":1,"201":1,"221":1,"225":3,"247":1,"267":1,"268":1,"277":1,"278":3,"279":4}}],["ouput",{"2":{"78":2}}],["outer",{"2":{"291":7}}],["outermost",{"2":{"80":1}}],["outside",{"2":{"52":1,"86":1,"114":2}}],["outperforms",{"2":{"124":1}}],["outpad",{"2":{"35":3,"117":1}}],["outpad=0",{"2":{"35":1}}],["outputs",{"2":{"11":1,"24":1,"34":2,"38":6,"40":2,"47":1,"55":1,"78":4,"99":1,"166":1,"167":1,"191":1,"193":1,"285":1}}],["outputsize",{"2":{"11":3,"107":1}}],["output",{"2":{"11":1,"15":6,"24":1,"34":7,"35":8,"37":33,"38":13,"39":7,"40":4,"41":1,"42":4,"47":2,"50":6,"55":1,"56":2,"60":2,"63":1,"64":3,"65":1,"73":1,"75":1,"77":3,"78":5,"82":3,"83":1,"84":2,"86":11,"90":1,"117":1,"118":2,"126":4,"127":14,"153":1,"156":2,"158":1,"191":1,"201":1,"231":1,"250":1,"276":1,"278":2,"282":1}}],["out",{"2":{"15":2,"21":1,"34":1,"35":8,"36":2,"37":3,"38":15,"39":19,"52":1,"56":10,"64":2,"77":1,"82":18,"85":1,"86":2,"90":6,"102":2,"125":1,"127":1,"151":1,"153":8,"156":1,"160":1,"201":3,"202":2,"234":1,"258":4,"276":1,"293":2}}],["occurs",{"2":{"127":1}}],["occurred",{"2":{"127":3}}],["occurrences",{"2":{"10":3}}],["octavian",{"0":{"184":1},"2":{"65":2,"162":1,"184":1}}],["odefunction",{"2":{"231":2,"235":1}}],["odeproblem",{"2":{"223":1,"225":1,"226":1,"231":2,"235":1,"284":1,"285":1,"288":1}}],["odesolution",{"2":{"231":1}}],["odes",{"0":{"228":1},"1":{"229":1,"230":1,"231":1,"232":1,"233":1,"234":1,"235":1,"236":1,"237":1,"238":1},"2":{"100":1,"131":1,"228":2,"284":1,"285":1}}],["ode",{"0":{"231":1,"232":1,"236":1,"281":1},"1":{"282":1,"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1},"2":{"55":1,"131":1,"221":3,"223":3,"224":1,"225":2,"226":2,"231":2,"232":1,"284":3,"285":6,"286":1,"288":2}}],["odd",{"2":{"15":1,"79":3,"86":1}}],["old",{"2":{"49":1,"104":1,"108":2}}],["older",{"0":{"44":1},"1":{"45":1},"2":{"114":1,"140":1}}],["o=σ",{"2":{"38":1}}],["oi=",{"2":{"35"
:1,"37":3}}],["o",{"2":{"35":4,"37":6,"147":2}}],["observe",{"2":{"278":1}}],["observables",{"2":{"276":1,"282":1}}],["observations",{"2":{"35":1}}],["obtain",{"2":{"137":1}}],["obtained",{"2":{"67":1,"77":1}}],["obviously",{"2":{"125":1}}],["obj",{"2":{"49":2,"154":2,"155":1}}],["objectfile",{"2":{"282":1}}],["objects",{"2":{"50":1,"86":1,"111":3,"118":1,"150":1,"158":1,"224":1}}],["objective",{"2":{"49":8,"268":1,"286":1}}],["object",{"2":{"2":3,"3":4,"18":2,"49":7,"50":2,"63":1,"137":1,"149":3}}],["other",{"0":{"16":1,"43":1},"1":{"44":1,"45":1,"46":1,"47":1},"2":{"5":1,"8":1,"15":1,"16":1,"30":2,"34":1,"35":1,"40":2,"42":1,"47":1,"50":1,"52":2,"58":1,"63":1,"80":1,"84":1,"86":4,"96":1,"105":1,"114":1,"119":1,"124":1,"151":1,"153":1,"154":1,"188":1,"192":1,"205":1,"221":1}}],["otherwisewhere",{"2":{"50":1}}],["otherwise",{"2":{"1":1,"2":2,"3":1,"39":1,"57":1,"58":1,"81":2,"86":1}}],["opus",{"2":{"276":1,"282":1}}],["openexr",{"2":{"276":2,"282":2}}],["openmpi",{"2":{"229":1}}],["openssl",{"2":{"229":1,"276":1,"282":1}}],["openspecfun",{"2":{"187":1,"276":1,"282":1}}],["openlibm",{"2":{"187":1,"276":1,"282":1}}],["open",{"2":{"122":1,"128":1,"141":1,"147":1,"219":1,"220":2}}],["opening",{"2":{"119":1,"164":1}}],["operands",{"2":{"86":3}}],["operator",{"2":{"52":1,"75":1,"77":2,"81":2}}],["operators",{"2":{"40":1,"81":1}}],["operating",{"2":{"38":1}}],["operation",{"2":{"30":4,"37":3,"42":1,"63":2,"65":1,"67":4,"75":4,"77":1,"78":2,"81":4,"84":1,"86":1,"192":1}}],["operations",{"0":{"52":1,"61":1,"80":1},"2":{"2":1,"19":1,"47":1,"51":2,"52":1,"53":1,"63":3,"65":1,"81":1,"86":2,"105":1,"164":2,"184":1,"189":1}}],["operates",{"2":{"80":2,"86":2,"118":1}}],["operate",{"2":{"38":1,"58":1,"174":1}}],["optprob",{"2":{"287":2}}],["optf",{"2":{"287":2}}],["opt",{"2":{"95":6,"119":1,"137":5,"196":2,"225":10,"260":2,"268":1,"274":2,"295":2}}],["optimal",{"2":{"225":1}}],["optim",{"2":{"56":2,"229":1,"276":1,"282":1}}],["optimiser",{"0":{"203":1}}],["optimisersadaptext",{"2":
{"187":1,"276":1,"282":1}}],["optimisersenzymecoreext",{"2":{"187":1,"276":1,"282":1}}],["optimisers",{"2":{"31":1,"45":2,"49":4,"56":3,"89":10,"90":1,"118":1,"137":1,"174":2,"177":1,"187":3,"196":2,"198":1,"199":1,"208":1,"221":1,"225":2,"229":1,"240":1,"248":1,"253":1,"256":1,"263":1,"266":1,"270":1,"276":3,"282":3,"290":1}}],["optimize",{"2":{"86":2,"119":2}}],["optimized",{"2":{"53":1,"67":1,"86":1,"162":1,"184":1,"225":1}}],["optimizer",{"0":{"266":1},"2":{"31":4,"49":7,"137":3,"268":3}}],["optimizers",{"0":{"31":1},"2":{"287":1}}],["optimizationzygoteext",{"2":{"282":2}}],["optimizationreversediffext",{"2":{"282":2}}],["optimizationenzymeext",{"2":{"282":2}}],["optimizationmldatadevicesext",{"2":{"276":2,"282":2}}],["optimizationforwarddiffext",{"2":{"276":1,"282":2}}],["optimizationfinitediffext",{"2":{"276":1,"282":2}}],["optimizationfunction",{"2":{"225":1,"287":1}}],["optimizationbase",{"2":{"276":4,"282":7}}],["optimizationproblem",{"2":{"225":2,"287":1}}],["optimizationoptimjl",{"2":{"222":1,"276":1,"282":3}}],["optimizationoptimisers",{"2":{"222":1}}],["optimizations",{"2":{"90":1,"114":1,"118":1}}],["optimization",{"0":{"221":1},"1":{"222":1,"223":1,"224":1,"225":1,"226":1,"227":1},"2":{"15":1,"89":1,"90":3,"118":1,"221":4,"222":1,"225":8,"276":1,"282":3,"287":4}}],["options",{"2":{"54":1,"90":2,"181":1,"204":2,"213":2,"253":2,"260":2,"274":2,"295":2}}],["option",{"2":{"42":2,"54":2,"67":1,"122":5}}],["optional",{"0":{"162":1},"2":{"15":1,"22":3,"38":1,"50":1,"55":1,"56":2,"63":1,"65":1,"85":1,"86":11,"89":1,"107":1,"184":1,"186":1}}],["optionally",{"2":{"7":1,"15":1,"41":1,"153":1,"186":1}}],["opposite",{"2":{"76":1}}],["op",{"2":{"4":2,"5":1,"30":6,"51":3,"54":1,"66":1,"81":8,"86":5,"95":1,"112":1}}],["orbit₂",{"2":{"283":2}}],["orbit₁",{"2":{"283":2}}],["orbits",{"2":{"283":1}}],["orbit2",{"2":{"283":4}}],["orbit2tensor",{"2":{"283":2}}],["orbit1",{"2":{"283":4}}],["orbit",{"2":{"283":17}}],["orange",{"2":{"264":1,"268":1}}],["ordinarydiffeqloworder
rk",{"2":{"282":3}}],["ordinarydiffeq",{"2":{"231":1,"284":1}}],["ordinarydiffeqcoreenzymecoreext",{"2":{"229":1,"282":1}}],["ordinarydiffeqcore",{"2":{"229":2,"237":26,"282":2}}],["ordinarydiffeqtsit5",{"2":{"222":1,"229":3,"237":13}}],["orderedcollections",{"2":{"192":1,"276":1,"282":1}}],["ordering",{"2":{"38":6}}],["order",{"0":{"20":1},"2":{"2":1,"22":1,"35":1,"86":1,"121":1,"147":2,"165":1,"167":1,"194":1,"249":3,"251":1,"283":2}}],["orcjit",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["originates",{"2":{"24":1}}],["originally",{"2":{"281":1}}],["original",{"2":{"15":1,"22":1,"25":1,"45":1,"86":3,"118":1,"126":1,"190":2}}],["org",{"2":{"15":1,"40":1,"71":1,"78":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"283":2,"289":1,"297":1}}],["orthogonal",{"2":{"13":4,"15":6}}],["or",{"2":{"2":3,"3":2,"4":2,"8":1,"15":2,"22":1,"24":3,"30":3,"34":7,"35":6,"37":9,"38":3,"39":7,"42":4,"45":1,"47":2,"49":1,"50":5,"51":2,"56":3,"57":1,"63":2,"64":3,"65":1,"66":2,"67":2,"69":1,"73":4,"75":3,"76":1,"77":1,"78":1,"79":2,"80":7,"81":5,"83":1,"84":1,"85":3,"86":19,"88":1,"89":1,"100":2,"101":1,"119":2,"122":1,"127":2,"140":2,"141":1,"153":2,"154":1,"158":1,"161":2,"164":1,"167":1,"174":1,"176":1,"180":1,"186":1,"188":4,"189":1,"200":1,"219":1,"220":2,"283":1}}],["overview",{"0":{"121":1}}],["overlapping",{"2":{"86":1}}],["overloaded",{"2":{"114":1}}],["overload",{"2":{"84":1}}],["overloads",{"2":{"84":1}}],["overloading",{"2":{"52":1}}],["overwrite",{"2":{"84":2}}],["overridden",{"2":{"65":1}}],["overrides",{"2":{"96":1}}],["override",{"2":{"11":1,"45":1,"86":1}}],["overfitting",{"2":{"64":2}}],["overcome",{"2":{"45":1}}],["over",{"0":{"23":1,"100":1},"2":{"2":1,"5":2,"23":1,"24":1,"38":1,"40":1,"41":3,"55":1,"74":1,"81":2,"86":1,"89":1,"142":1,"147":1,"198":1,"224":1,"247":1,"250":1,"268":1,"286":1,"288":1}}],["onlinestats",{"2":{"248":1,"253":5}}],["online",{"2":{"2
20":1}}],["only",{"2":{"2":1,"8":1,"24":2,"35":1,"36":1,"37":3,"38":9,"39":1,"42":1,"49":3,"50":3,"54":2,"56":1,"73":1,"79":1,"80":1,"84":4,"86":1,"96":3,"118":1,"122":2,"137":1,"145":1,"154":1,"155":1,"164":2,"167":1,"168":1,"193":2,"205":1,"230":1,"272":1}}],["onto",{"2":{"75":1,"86":4}}],["once",{"2":{"11":1,"38":2,"54":1,"77":1,"86":2,"127":1,"153":1,"189":1,"190":1,"192":1}}],["onwards",{"2":{"7":1,"114":1}}],["on",{"0":{"129":1,"159":1,"239":1,"247":1,"256":1},"1":{"130":1,"131":1,"132":1,"133":1,"134":1,"135":1,"240":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1,"248":1,"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1,"257":1,"258":1,"259":1,"260":1,"261":1},"2":{"2":2,"3":6,"5":1,"6":2,"7":1,"8":2,"15":11,"20":2,"21":1,"22":1,"23":2,"30":2,"34":2,"35":2,"36":2,"37":6,"38":2,"39":1,"47":1,"50":5,"52":3,"55":2,"56":2,"57":1,"58":2,"60":2,"61":3,"62":1,"63":3,"65":3,"66":4,"68":1,"71":1,"73":1,"74":1,"75":3,"76":12,"77":1,"80":2,"81":4,"82":1,"86":9,"88":1,"89":1,"90":1,"95":1,"100":1,"101":2,"105":1,"107":2,"115":1,"116":2,"117":1,"118":2,"122":3,"124":1,"128":1,"130":2,"134":1,"137":1,"141":1,"151":2,"153":2,"156":1,"158":2,"159":1,"160":1,"162":1,"163":2,"164":4,"165":4,"166":1,"167":2,"174":4,"178":2,"179":1,"182":1,"183":1,"185":1,"187":1,"188":1,"189":1,"191":3,"194":2,"197":1,"206":1,"213":2,"214":1,"219":1,"221":1,"224":1,"225":2,"227":1,"234":2,"238":1,"242":1,"246":1,"252":2,"255":1,"256":1,"261":1,"262":1,"269":1,"270":1,"272":1,"273":1,"275":1,"278":1,"280":1,"289":1,"290":1,"297":1}}],["one2two",{"2":{"283":3}}],["onetbb",{"2":{"276":1,"282":1}}],["onecold",{"2":{"211":2,"233":2,"244":2,"259":2}}],["onehotbatch",{"2":{"209":1,"230":1,"241":2,"257":1}}],["onehotarrays",{"2":{"208":1,"229":3,"240":1,"256":1,"270":1}}],["onearray",{"2":{"13":3}}],["oneapidevice",{"2":{"4":2}}],["oneapi",{"2":{"2":1,"3":2,"13":1,"69":2,"89":1,"93":2,"148":2,"181":1}}],["onesc64",{"2":{"16":1}}],["onesc32",{"2":{"16":1}}],["onesc16",{"2":{"16":1}}],["ones64",
{"2":{"16":1}}],["ones32",{"2":{"16":1}}],["ones16",{"2":{"16":1}}],["ones",{"2":{"8":1,"16":7,"56":1,"77":1,"81":1,"85":4,"86":1,"132":1,"188":2,"194":1,"237":1,"252":2,"277":1,"278":1}}],["one",{"2":{"1":1,"24":1,"34":2,"38":1,"41":2,"42":2,"50":1,"53":1,"58":1,"75":3,"80":1,"81":2,"86":3,"137":1,"161":2,"164":1,"186":1,"188":1,"219":1,"220":1,"278":1,"283":4}}],["offsetarraysadaptext",{"2":{"276":1,"282":1}}],["offsetarrays",{"2":{"276":2,"282":2}}],["off",{"2":{"169":1}}],["official",{"2":{"157":1,"197":1,"206":1,"214":1,"220":1,"221":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"276":2,"280":1,"289":1,"297":1}}],["offending",{"2":{"127":9}}],["ofcourse",{"2":{"126":1}}],["often",{"2":{"3":1,"39":1,"123":2,"131":1,"156":1,"158":1,"161":2,"169":1,"172":1,"188":2,"221":1,"278":1}}],["of",{"0":{"143":1,"145":1,"146":1,"168":1},"2":{"1":1,"3":7,"4":3,"6":1,"7":2,"8":17,"9":3,"10":6,"11":4,"15":31,"16":36,"18":6,"19":6,"21":2,"22":6,"23":11,"24":2,"25":9,"28":2,"29":1,"34":32,"35":38,"36":3,"37":42,"38":37,"39":32,"40":23,"41":30,"42":13,"45":7,"47":3,"49":13,"50":13,"51":1,"52":20,"53":5,"54":10,"55":8,"56":17,"58":9,"61":2,"62":2,"63":1,"64":4,"65":1,"66":8,"68":1,"71":3,"72":2,"73":10,"74":2,"75":2,"76":20,"77":18,"78":18,"79":4,"80":7,"81":8,"82":12,"83":2,"84":7,"85":6,"86":90,"88":1,"90":2,"94":1,"95":1,"96":7,"100":5,"102":3,"107":2,"108":1,"109":2,"110":2,"112":2,"114":6,"116":2,"117":1,"118":6,"119":1,"120":2,"124":3,"126":1,"127":6,"131":2,"133":1,"135":1,"137":4,"140":2,"143":2,"144":1,"145":3,"147":4,"151":3,"153":3,"154":4,"155":1,"156":3,"158":1,"161":4,"162":1,"165":1,"166":6,"167":1,"168":1,"169":6,"174":1,"178":1,"179":1,"180":1,"183":4,"184":1,"185":1,"188":9,"189":1,"190":1,"192":4,"194":1,"196":3,"198":1,"200":1,"201":3,"202":2,"203":1,"205":1,"213":1,"219":2,"220":2,"221":1,"228":1,"231":5,"235":1,"236":1,"237":1,"242":2,"247":3,"249":1,"252":1,"253":7,"260":102,"268":4,"271":1,"273":3,"276":2,"277":2,"278":6,"279":3,"283":5,"284":2,
"285":3,"295":2}}],["bj",{"2":{"292":19,"293":2}}],["bzip2",{"2":{"276":1,"282":1}}],["bs",{"2":{"209":1,"230":1}}],["b=f",{"2":{"174":1}}],["b=layer",{"2":{"174":1}}],["b=",{"2":{"56":1}}],["b=zeros",{"2":{"56":1}}],["bfgs",{"2":{"221":1,"225":1,"276":1,"282":1,"287":2}}],["bfloat16s",{"2":{"53":1}}],["bfloat16",{"2":{"53":3}}],["bf16",{"2":{"53":1}}],["bc",{"2":{"251":7,"252":21,"253":70}}],["bce",{"2":{"50":5}}],["bcast",{"2":{"30":2}}],["bh2",{"2":{"283":1}}],["bh",{"2":{"283":1}}],["bhn",{"2":{"38":2}}],["bhz",{"2":{"38":2}}],["bhr",{"2":{"38":2}}],["bijector",{"2":{"292":5}}],["bijectorsenzymecoreext",{"2":{"276":2}}],["bijectorstrackerext",{"2":{"276":1}}],["bijectorsdistributionsadext",{"2":{"276":1}}],["bijectorsforwarddiffext",{"2":{"276":1}}],["bijectors",{"0":{"292":1},"2":{"276":5}}],["bigger",{"2":{"188":1}}],["bigfloat",{"2":{"188":3}}],["bitflags",{"2":{"229":1}}],["bittwiddlingconveniencefunctions",{"2":{"187":1,"276":1,"282":1}}],["bit",{"2":{"86":1,"167":1,"225":1}}],["bibtex",{"2":{"71":2}}],["bilinear",{"2":{"39":3,"42":6,"78":10,"79":5,"82":1,"86":3,"116":1}}],["bidirectional",{"2":{"38":1}}],["bidirectionalrnn",{"2":{"38":1}}],["bindings",{"2":{"72":1}}],["binarycrossentropy",{"2":{"203":2}}],["binarycrossentropyloss",{"2":{"50":7,"203":1}}],["binaryfocalloss",{"2":{"50":3}}],["binary",{"2":{"50":3,"81":1,"200":1}}],["bin",{"2":{"38":1,"50":7}}],["biz",{"2":{"38":1}}],["bir",{"2":{"38":1}}],["bias6",{"2":{"147":12}}],["bias3",{"2":{"147":4}}],["bias1",{"2":{"147":4}}],["bias=l",{"2":{"153":1}}],["bias=ps",{"2":{"56":1}}],["bias=false",{"2":{"38":6,"39":3}}],["bias=true",{"2":{"35":4,"38":2,"39":6}}],["bias=nothing",{"2":{"35":1,"38":3,"39":3}}],["bias=zeros32",{"2":{"35":1,"39":1,"41":4,"153":1,"271":2,"285":3}}],["bias=zero",{"2":{"23":1}}],["bias",{"0":{"62":1},"2":{"22":1,"23":4,"25":4,"35":12,"38":25,"39":22,"41":20,"56":1,"58":1,"62":12,"63":3,"65":3,"66":15,"73":3,"84":6,"89":15,"117":4,"118":15,"127":3,"133":2,"143":4,"144":2,"145":2,"1
46":1,"147":5,"153":6,"155":4,"161":5,"165":3,"167":4,"168":4,"183":1,"196":1,"237":30,"260":2,"267":2,"278":1,"285":3,"287":3}}],["biases",{"2":{"15":1,"147":1,"278":1}}],["blue",{"2":{"223":1,"226":2,"264":1,"277":1}}],["black",{"2":{"264":1,"268":2,"277":2}}],["blanks",{"2":{"86":3}}],["blank",{"2":{"83":1,"273":1}}],["blasfloat",{"2":{"86":1}}],["blas",{"2":{"8":1,"65":1,"80":3,"86":1}}],["blisblas",{"2":{"65":1}}],["blog",{"2":{"47":1}}],["blocks",{"2":{"38":1,"49":1,"201":1}}],["block",{"2":{"34":1,"37":3,"56":3,"201":1}}],["bright",{"2":{"279":3}}],["brackpropagate",{"2":{"271":1}}],["bradbury",{"2":{"86":1}}],["branched",{"2":{"34":1}}],["branchlayer",{"2":{"34":5}}],["breaking",{"0":{"104":1,"107":1,"111":1,"114":1,"115":1,"116":1},"2":{"109":1}}],["break",{"2":{"86":1,"253":1,"260":1,"293":1,"295":1}}],["broadcastfunction",{"2":{"118":1}}],["broadcasting",{"2":{"60":1,"67":1,"86":1,"188":1}}],["broadcastable",{"2":{"41":1,"73":2}}],["broadcasted",{"2":{"30":2,"50":1,"62":1,"63":2,"74":1}}],["broadcast",{"2":{"30":1,"36":1,"40":1,"86":2,"118":1,"147":5,"278":1}}],["broken=true",{"2":{"95":2}}],["broken=false",{"2":{"95":2}}],["broken",{"2":{"21":1,"95":5,"96":4,"97":1}}],["bn=batchnorm",{"2":{"23":1}}],["b",{"2":{"19":3,"37":6,"39":4,"56":6,"63":4,"64":2,"65":3,"80":35,"84":4,"85":2,"86":12,"126":2,"147":4,"149":1,"155":2,"161":2,"166":8,"174":9,"196":8,"276":1,"282":1,"283":4}}],["bulk",{"2":{"278":1}}],["bunch",{"2":{"86":1}}],["builds",{"2":{"126":1,"133":3,"165":2,"204":2,"237":3,"260":2,"274":1}}],["buildkite",{"2":{"126":1,"133":3,"165":2,"197":1,"204":2,"206":1,"214":1,"227":1,"237":3,"238":1,"246":1,"255":1,"260":2,"261":1,"269":1,"274":1,"275":1,"280":1,"289":1,"297":1}}],["build",{"2":{"89":1,"180":3,"187":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["building",{"0":{"278":1},"2":{"49":1}}],["built",{"0":{"33":1},"1":{"34":1,"35":1,"36":1,"37":1,"38":1,"39":1,"40":1,"41":1,"42":1},"2
":{"98":1,"151":2,"190":1,"213":1}}],["bufferedstreams",{"2":{"229":1}}],["buffer",{"2":{"30":3,"63":1,"65":1,"86":2,"253":70,"260":1025,"295":16}}],["bugs",{"2":{"101":1,"110":1,"155":1}}],["bug",{"2":{"11":1,"56":1,"219":1,"253":7,"260":102,"295":2}}],["but",{"2":{"3":2,"7":2,"8":1,"15":1,"18":2,"40":1,"42":2,"45":1,"47":3,"51":3,"52":5,"54":2,"56":4,"58":1,"60":1,"61":1,"62":1,"66":1,"74":2,"80":2,"84":1,"86":3,"90":1,"99":1,"100":2,"107":1,"120":1,"122":1,"123":3,"124":3,"126":3,"127":2,"133":1,"136":1,"153":3,"154":1,"155":1,"158":2,"163":1,"165":3,"166":1,"167":1,"172":1,"184":1,"196":1,"201":2,"203":1,"224":1,"234":1,"236":1,"252":1,"260":1,"265":1,"274":1,"278":1,"285":1}}],["book",{"2":{"194":1}}],["boom",{"2":{"167":1}}],["bool=true",{"2":{"38":1,"260":1}}],["bool=false",{"2":{"2":2,"38":4,"45":2,"47":1,"95":1,"232":1,"234":1}}],["boolean",{"2":{"8":1,"47":1,"73":2,"97":1}}],["bool",{"2":{"2":1,"3":5,"8":3,"24":1,"47":1,"50":3,"51":1,"52":1,"78":9,"84":1,"86":19,"209":1,"230":1,"237":26,"241":2,"272":1,"293":1}}],["boilerplate",{"2":{"118":1,"119":1,"202":1}}],["boils",{"2":{"86":1}}],["borrow",{"2":{"276":1}}],["borrowed",{"2":{"86":1}}],["bore",{"2":{"86":1}}],["border",{"2":{"76":5,"82":5}}],["borders",{"2":{"35":2,"37":3}}],["box",{"2":{"277":1}}],["boxtimes",{"2":{"77":1,"80":1}}],["boxing",{"2":{"55":1,"235":1,"237":1}}],["bottom",{"2":{"56":1,"82":1,"118":1}}],["both",{"2":{"2":1,"24":2,"50":2,"76":5,"80":2,"81":1,"86":4,"89":1,"98":1,"117":1,"127":3,"174":1,"180":1,"183":1,"220":1,"278":1}}],["body",{"2":{"56":2,"237":3,"283":9}}],["boundaries",{"2":{"279":1}}],["boundary",{"2":{"251":1}}],["bound",{"2":{"35":6,"38":18,"39":9,"58":1,"82":8}}],["bounds",{"2":{"35":2,"38":6,"39":3}}],["bytes",{"2":{"260":22}}],["bypasses",{"2":{"56":2}}],["bypass",{"2":{"11":1,"154":1,"181":1}}],["by",{"0":{"144":1},"2":{"2":1,"7":1,"8":2,"11":1,"15":5,"34":2,"35":5,"37":3,"38":5,"39":5,"40":1,"41":1,"42":1,"47":1,"49":3,"50":2,"51":1,"53":1,"56":2,"58":3,"60":2,"63"
:1,"65":1,"66":1,"70":1,"73":1,"74":2,"76":1,"77":4,"78":7,"80":10,"81":1,"82":2,"83":1,"84":1,"85":2,"86":18,"88":1,"89":1,"100":2,"107":1,"114":1,"118":2,"127":1,"137":3,"144":3,"145":2,"149":1,"150":1,"151":1,"158":1,"159":1,"160":1,"164":3,"165":2,"168":1,"169":1,"180":1,"184":1,"188":2,"189":2,"190":1,"192":1,"198":1,"210":1,"220":1,"224":1,"237":1,"267":2,"276":1,"278":1,"279":1}}],["bayes",{"2":{"278":3}}],["bayesian",{"0":{"276":1},"1":{"277":1,"278":1,"279":1,"280":1},"2":{"277":1,"279":1}}],["bangbangtablesext",{"2":{"276":1}}],["bangbangstaticarraysext",{"2":{"276":1}}],["bangbangstructarraysext",{"2":{"276":1}}],["bangbangchainrulescoreext",{"2":{"276":1}}],["bangbangdataframesext",{"2":{"229":1}}],["bangbang",{"2":{"229":1,"276":5}}],["banks",{"2":{"86":2}}],["battery",{"2":{"100":1}}],["batches",{"2":{"209":1,"230":1}}],["batchedtranspose",{"2":{"80":2,"86":3}}],["batchedadjoint",{"2":{"80":3,"86":3}}],["batched",{"0":{"19":1,"61":1,"80":1,"166":1},"2":{"11":1,"19":1,"61":6,"77":2,"80":40,"86":10,"161":2,"164":1,"166":8,"170":1,"171":1}}],["batchsize=32",{"2":{"253":2}}],["batchsize=256",{"2":{"241":1}}],["batchsize=min",{"2":{"241":2}}],["batchsize=8",{"2":{"224":2}}],["batchsize=128",{"2":{"200":2,"274":1}}],["batchsize=12",{"2":{"163":2}}],["batchsize=13",{"2":{"5":1}}],["batchsize",{"2":{"77":1,"209":3,"230":3,"241":4,"272":2,"274":2,"291":2,"295":2}}],["batchlastindex",{"2":{"38":2,"51":1}}],["batchnorm",{"2":{"7":1,"34":4,"41":8,"66":1,"126":4,"127":4,"143":1,"146":1,"153":1,"156":1,"165":2,"166":1,"173":2,"176":1,"271":5}}],["batch",{"2":{"5":1,"19":3,"35":3,"37":3,"38":13,"40":1,"41":6,"42":3,"47":1,"66":6,"73":9,"74":1,"76":4,"77":2,"78":3,"80":6,"86":6,"104":1,"147":2,"166":1,"200":1,"224":1,"225":9,"253":8}}],["ba",{"2":{"66":1}}],["basically",{"2":{"154":1}}],["basic",{"2":{"49":2,"86":1,"187":1}}],["basis",{"2":{"35":2,"39":1}}],["baselet",{"2":{"276":1}}],["baseline",{"2":{"50":1}}],["base",{"2":{"36":1,"40":1,"51":3,"86":2,"95":3,"114":1
,"133":1,"155":1,"164":3,"167":1,"168":2,"221":1,"224":2,"232":1,"237":25,"272":3,"285":2,"293":2}}],["based",{"2":{"2":1,"3":2,"34":1,"52":1,"65":1,"66":1,"84":1,"86":1,"101":1,"116":2,"117":1,"165":1,"169":1,"187":1,"221":1,"256":1,"270":1,"290":1}}],["bad",{"2":{"24":1,"84":4,"158":1,"279":1}}],["backtracking",{"2":{"287":1}}],["background",{"2":{"153":1,"158":1,"284":1}}],["backpropagated",{"2":{"78":4}}],["backward",{"2":{"24":2,"38":11,"51":1,"86":5,"127":3,"128":1,"190":1}}],["back",{"2":{"2":1,"3":1,"60":1,"86":1,"155":1}}],["backendtpu",{"2":{"70":1}}],["backendgpu",{"2":{"70":1}}],["backends=",{"2":{"96":2}}],["backends",{"0":{"27":1},"2":{"1":1,"2":4,"3":5,"5":1,"18":6,"19":2,"49":5,"63":3,"65":4,"67":1,"72":1,"79":1,"96":7,"99":2,"112":1,"123":3,"148":3,"163":1,"179":1,"189":1,"194":1,"234":1}}],["backend",{"0":{"46":1,"149":1,"150":1,"181":1},"1":{"47":1},"2":{"1":17,"2":3,"3":1,"18":6,"19":3,"27":5,"28":20,"29":4,"30":11,"31":1,"32":1,"49":6,"59":1,"70":5,"77":2,"96":1,"105":1,"118":4,"137":14,"138":2,"140":1,"148":2,"149":5,"164":1,"181":4,"192":1,"208":1,"221":1}}],["bernoulli",{"2":{"278":1}}],["became",{"2":{"158":1}}],["because",{"2":{"86":3,"147":1,"155":1,"287":1}}],["become",{"2":{"153":1,"198":1}}],["behind",{"2":{"107":1}}],["behaving",{"2":{"186":1}}],["behaviour",{"2":{"107":1}}],["behavior",{"2":{"3":2,"7":2,"8":1,"11":2,"15":1,"38":1,"54":3,"86":2,"158":1,"165":2,"201":1}}],["behave",{"2":{"51":1,"84":1}}],["behaves",{"2":{"34":1,"80":2,"86":2,"108":1,"163":1}}],["beautiful",{"2":{"86":1}}],["benchmarking",{"2":{"213":1}}],["benefits",{"2":{"58":1,"118":1}}],["bengio",{"2":{"15":2,"40":1}}],["beta",{"2":{"86":10}}],["beta=0",{"2":{"86":13}}],["better",{"2":{"58":1,"100":1,"101":1}}],["between",{"0":{"43":1},"1":{"44":1,"45":1,"46":1,"47":1},"2":{"15":1,"25":1,"39":1,"45":1,"50":1,"58":1,"83":1,"86":4,"100":1,"118":1,"140":1,"174":3,"279":3}}],["belief",{"2":{"58":1}}],["below",{"2":{"23":1,"34":1,"54":2,"71":1,"77":1,"82":1,"86":1,"231":1
,"277":1,"278":2,"279":1}}],["best",{"2":{"45":1,"56":2,"60":2,"63":1,"65":1,"70":1,"122":4,"123":1,"260":3}}],["been",{"2":{"45":2,"52":5,"86":1,"100":1,"104":2,"107":6,"111":3,"114":10,"115":5,"117":1,"123":1,"139":1,"168":1,"187":1,"281":1}}],["being",{"2":{"41":4,"52":1,"62":1,"64":2,"66":2,"81":2,"83":1,"105":1,"112":1,"122":2,"141":1,"158":1,"164":1,"165":2,"213":1,"225":1,"260":1,"274":1}}],["beginner",{"0":{"216":1}}],["begin",{"2":{"35":2,"37":3,"127":1,"223":1,"226":1,"252":1,"254":1,"284":1,"285":1,"288":1,"296":1}}],["beyond",{"2":{"34":1}}],["beware",{"2":{"16":1,"155":1}}],["before",{"2":{"8":2,"15":2,"28":1,"31":1,"32":1,"34":1,"53":1,"73":2,"90":1,"119":1,"137":1,"153":1,"191":1,"205":1,"268":1}}],["be",{"2":{"1":2,"2":1,"3":10,"4":5,"6":1,"7":4,"8":2,"10":1,"15":8,"19":5,"22":2,"24":3,"25":5,"28":7,"30":2,"32":2,"34":13,"35":16,"36":4,"37":9,"38":17,"39":8,"40":8,"41":4,"42":3,"45":5,"47":5,"49":9,"50":5,"52":3,"53":2,"55":2,"56":16,"57":2,"58":3,"60":2,"61":1,"62":3,"63":1,"64":4,"65":1,"66":20,"67":1,"68":1,"71":1,"73":3,"74":1,"76":2,"77":3,"78":4,"79":3,"80":6,"81":8,"82":1,"83":5,"84":2,"86":24,"89":1,"95":3,"96":3,"97":4,"98":1,"99":3,"100":2,"101":1,"102":1,"110":1,"111":1,"114":1,"115":1,"118":2,"119":1,"125":1,"128":1,"132":1,"137":2,"138":1,"142":1,"144":1,"147":2,"148":1,"150":1,"151":1,"153":1,"155":2,"156":1,"158":1,"162":1,"164":2,"165":5,"166":2,"168":1,"170":1,"173":1,"174":6,"176":1,"181":1,"183":3,"184":2,"186":3,"188":1,"190":1,"191":1,"192":1,"193":1,"198":1,"201":1,"213":2,"219":1,"220":1,"225":1,"235":1,"243":1,"247":2,"252":1,"265":1,"277":1,"278":1,"283":4,"286":1}}],["ixy",{"2":{"283":2}}],["ixx",{"2":{"283":3}}],["iyy",{"2":{"283":3}}],["i∈",{"2":{"196":1}}],["io",{"2":{"147":2}}],["ioffe",{"2":{"66":1}}],["i64",{"2":{"147":10}}],["iid",{"2":{"278":1}}],["iii",{"2":{"121":2,"123":1}}],["ii",{"2":{"121":2,"123":1}}],["ii+pi+p",{"2":{"35":1,"37":3}}],["irtools",{"2":{"192":1,"282":1}}],["irrationalconstants",{"2":{"187":1,"27
6":1,"282":1}}],["ir",{"2":{"90":1,"204":1,"213":1,"253":1,"260":1,"274":1,"295":1}}],["ijk",{"2":{"81":3}}],["ignore",{"2":{"242":1,"285":1}}],["ignores",{"2":{"45":1,"84":1}}],["ignored",{"2":{"2":1,"15":1,"39":3,"84":1}}],["i=σ",{"2":{"38":1}}],["ih",{"2":{"38":6,"117":1}}],["i+n",{"2":{"35":1,"37":3}}],["iclr",{"2":{"15":1}}],["ieee",{"2":{"15":2,"50":5}}],["imath",{"2":{"276":1,"282":1}}],["imageio",{"2":{"276":1,"282":1}}],["imagemetadata",{"2":{"276":1,"282":1}}],["imageaxes",{"2":{"276":1,"282":1}}],["imagebase",{"2":{"276":1,"282":1}}],["imagecore",{"2":{"276":1,"282":1}}],["imagetotensor",{"2":{"272":1}}],["imageshow",{"2":{"229":1,"270":1}}],["images",{"2":{"41":3,"42":2,"78":1,"79":1,"86":1,"209":1,"230":1,"270":1,"273":21,"274":8}}],["image",{"2":{"35":2,"50":1,"58":1,"78":9,"86":4,"271":11,"272":3,"273":8,"274":6}}],["imagenet",{"2":{"15":2,"136":1}}],["img",{"2":{"271":2,"272":2,"273":12,"274":5}}],["imgs",{"2":{"209":6,"230":6,"273":4}}],["im",{"0":{"190":1},"2":{"274":102}}],["immutability",{"2":{"191":1}}],["immutable",{"2":{"52":1,"60":1,"63":1,"65":1,"99":1,"100":1,"190":1}}],["immediately",{"2":{"118":1}}],["imrotate",{"2":{"79":6,"86":2}}],["im2col",{"2":{"77":2,"86":35}}],["imbalanced",{"2":{"50":1}}],["imply",{"2":{"156":1}}],["implements",{"2":{"94":1}}],["implemented",{"2":{"45":1,"80":1,"86":1,"100":1,"164":1,"174":1}}],["implementations",{"2":{"49":1,"52":1,"60":1,"61":1,"65":1,"67":1,"75":1,"86":2,"140":1,"184":1}}],["implementation",{"0":{"132":1,"175":1,"235":1,"292":1},"1":{"176":1},"2":{"11":4,"22":1,"34":1,"38":1,"49":1,"52":5,"55":2,"60":3,"63":4,"65":6,"82":1,"85":1,"86":7,"107":1,"140":1,"169":1,"231":1,"270":2,"290":1}}],["implement",{"0":{"242":1},"2":{"7":1,"22":1,"153":2,"174":1,"276":1}}],["implementing",{"0":{"174":1},"2":{"7":2,"100":1,"131":1,"228":1,"276":1}}],["imposed",{"2":{"86":5}}],["imports",{"0":{"199":1,"208":1,"222":1,"229":1,"240":1,"248":1,"263":1,"282":1}}],["importing",{"2":{"153":1,"187":1,"276":1}}],["impo
rtant",{"0":{"175":1},"1":{"176":1},"2":{"137":2,"148":1,"153":1,"165":1,"166":1,"174":1}}],["importantly",{"2":{"119":1,"188":1,"279":1}}],["imported",{"2":{"56":1}}],["import",{"2":{"45":1,"47":1,"91":1,"147":3,"276":1}}],["improving",{"2":{"52":5,"124":1}}],["i",{"2":{"5":10,"11":1,"15":1,"22":1,"34":9,"35":6,"37":21,"39":4,"40":8,"42":2,"45":1,"47":1,"49":2,"50":3,"54":1,"55":1,"56":1,"73":2,"76":6,"78":3,"82":1,"83":1,"85":1,"86":5,"90":2,"107":2,"119":2,"121":5,"123":1,"132":3,"137":1,"147":3,"156":1,"167":1,"191":4,"194":1,"196":4,"204":1,"213":1,"224":2,"253":4,"254":6,"260":12,"274":5,"277":12,"278":7,"279":10,"293":10,"295":1,"296":4}}],["ith",{"2":{"34":1}}],["its",{"2":{"8":1,"37":3,"39":1,"40":1,"41":1,"68":1,"73":1,"76":2,"77":1,"84":1,"86":2,"100":1,"279":1}}],["itself",{"2":{"3":1,"40":1,"126":1}}],["itertools",{"2":{"276":1,"282":1}}],["iter",{"2":{"90":4,"119":12,"225":2,"253":7,"274":51,"286":1,"295":20}}],["iteratively",{"2":{"34":1}}],["iterations",{"2":{"196":12,"278":2}}],["iteration",{"0":{"5":1},"2":{"5":2,"90":12,"119":4,"131":1,"163":1,"191":8,"224":1,"225":26,"253":51,"279":1,"286":1,"288":1}}],["iterate",{"2":{"5":1,"21":1}}],["iterates",{"2":{"5":1}}],["iteratorinterfaceextensions",{"2":{"192":1,"276":1,"282":1}}],["iterators",{"2":{"5":1,"201":1,"202":1,"252":1,"253":2,"254":1,"293":1,"295":1}}],["iterator",{"2":{"5":3,"119":1}}],["itemdata",{"2":{"272":1}}],["items",{"2":{"86":1}}],["item",{"2":{"5":1}}],["it",{"2":{"2":1,"3":2,"4":5,"7":3,"8":6,"11":2,"15":3,"21":1,"22":1,"23":1,"25":1,"34":2,"35":4,"37":3,"38":8,"40":1,"41":7,"42":3,"45":3,"49":1,"50":5,"51":1,"52":3,"54":3,"55":1,"56":5,"58":1,"60":3,"62":2,"63":2,"64":1,"71":1,"73":1,"74":5,"76":13,"77":1,"78":1,"79":2,"80":2,"81":7,"83":1,"84":5,"85":1,"86":10,"88":1,"90":3,"94":2,"97":2,"98":1,"100":4,"107":1,"115":1,"117":1,"118":4,"119":1,"120":1,"124":3,"125":2,"126":1,"127":3,"130":1,"137":2,"138":1,"149":1,"151":2,"153":2,"154":5,"155":4,"158":1,"159":2,"160":1,"163":1,"164
":3,"165":6,"166":2,"172":1,"181":1,"184":1,"187":1,"188":2,"189":1,"190":2,"191":2,"194":1,"196":1,"201":4,"203":2,"219":1,"220":1,"221":1,"224":3,"225":2,"231":1,"242":2,"260":1,"268":1,"271":1,"274":1,"278":2,"283":1,"285":2}}],["inbounds",{"2":{"86":1}}],["inplaceops",{"2":{"276":1}}],["inplace",{"2":{"49":2}}],["input",{"0":{"129":1,"135":1},"1":{"130":1,"131":1,"132":1,"133":1,"134":1,"135":1},"2":{"8":2,"15":4,"18":2,"19":1,"24":2,"34":30,"35":7,"36":2,"37":6,"38":20,"39":12,"40":7,"41":14,"42":6,"47":3,"50":4,"55":1,"56":1,"60":2,"62":1,"63":1,"64":3,"65":2,"66":9,"73":3,"74":2,"75":4,"76":1,"77":3,"78":1,"79":2,"82":16,"84":2,"86":15,"89":1,"118":1,"126":6,"127":14,"131":1,"137":1,"138":1,"140":1,"147":4,"153":1,"156":2,"158":2,"166":3,"168":1,"182":1,"183":1,"186":1,"201":1,"210":1,"231":1,"250":1,"271":1,"285":1}}],["inputsize",{"2":{"38":1,"107":1}}],["inputs",{"0":{"166":1},"2":{"8":2,"11":1,"19":1,"23":1,"24":1,"34":9,"35":2,"36":3,"37":9,"38":11,"39":1,"40":6,"41":5,"42":2,"49":1,"50":4,"52":1,"55":1,"63":3,"65":1,"99":1,"115":1,"118":1,"140":1,"147":1,"164":1,"166":2,"167":1,"168":1,"195":1,"231":2,"267":1,"278":1,"285":1}}],["ingredient",{"2":{"41":1,"66":1}}],["in2",{"2":{"39":6}}],["in12",{"2":{"39":3}}],["in1",{"2":{"39":8}}],["invertedindices",{"2":{"276":1}}],["inversefunctionsunitfulext",{"2":{"276":1,"282":1}}],["inversefunctionstestext",{"2":{"276":1,"282":1}}],["inversefunctionsdatesext",{"2":{"276":1,"282":1}}],["inversefunctions",{"2":{"276":3,"282":3}}],["inverse",{"2":{"64":1,"77":3,"86":3,"292":4,"293":1}}],["inversability",{"2":{"35":1}}],["investigate",{"2":{"236":1}}],["investigated",{"2":{"141":1}}],["invp",{"2":{"64":4}}],["involving",{"2":{"55":1}}],["invokes",{"2":{"23":1}}],["invoked",{"2":{"2":1,"201":1}}],["invariant",{"2":{"50":1}}],["inv",{"2":{"35":2,"38":6,"39":3,"283":1}}],["injection",{"2":{"34":5}}],["independently",{"2":{"74":1,"85":1,"86":1}}],["independent",{"2":{"74":1,"166":1}}],["indexing",{"0":{"159":1},"2":{"34
":1,"40":1,"67":1,"86":1,"159":1}}],["index",{"2":{"2":1,"34":1,"39":1,"40":2,"80":2,"81":4,"85":1,"86":2,"257":1,"279":1}}],["indexed",{"2":{"2":2,"4":2,"40":1,"101":1}}],["indirectarrays",{"2":{"276":1,"282":1}}],["individual",{"2":{"40":1,"74":1,"225":1}}],["individually",{"2":{"34":1,"42":2,"166":1}}],["indices",{"2":{"39":2,"80":1,"86":5}}],["inflate",{"2":{"276":1,"282":1}}],["informed",{"2":{"251":3}}],["informs",{"2":{"154":1}}],["information",{"2":{"8":1,"11":1,"20":1,"24":1,"54":1,"64":1,"77":1,"86":1,"96":1,"99":1,"100":1,"119":1,"140":1,"160":1,"220":1,"221":1}}],["info",{"2":{"126":8,"127":20,"150":1,"197":2,"206":2,"214":2,"227":2,"238":2,"246":2,"255":2,"261":2,"269":2,"275":2,"276":2,"280":2,"289":2,"297":2}}],["inferred",{"2":{"66":1,"81":2}}],["inference",{"0":{"176":1},"2":{"24":1,"36":3,"41":6,"71":1,"84":1,"124":1,"165":2,"176":1,"278":2}}],["infinity",{"2":{"64":1}}],["inf",{"2":{"50":1,"75":1,"165":2,"166":2,"167":2,"168":2,"172":4,"260":1}}],["inner",{"2":{"22":2,"24":1,"144":1,"291":7}}],["incase",{"2":{"285":1}}],["including",{"2":{"86":1,"110":1,"118":1,"124":1}}],["included",{"2":{"15":1,"100":1}}],["include",{"2":{"7":1,"8":1,"15":1}}],["incompatibility",{"2":{"257":1}}],["incoming",{"2":{"78":4}}],["inconvenient",{"2":{"130":1}}],["inconsistent",{"2":{"107":1}}],["incorrectly",{"2":{"158":1}}],["incorrect",{"0":{"126":1},"2":{"19":2,"56":1,"84":1,"125":1,"128":1,"260":1,"274":1}}],["increase",{"2":{"35":1,"117":1}}],["initilly",{"2":{"36":1}}],["initialvalues",{"2":{"276":1}}],["initializing",{"0":{"185":1},"1":{"186":1},"2":{"56":1,"90":1,"137":1,"204":1,"213":1,"253":1,"260":1,"274":1,"295":1}}],["initialize",{"0":{"232":1,"243":1},"2":{"28":5,"56":3,"81":1,"89":1,"90":1,"137":3,"138":1,"140":1,"196":1,"264":1,"278":1}}],["initialized",{"2":{"15":4,"28":2,"41":8}}],["initializers",{"2":{"15":1}}],["initializer",{"2":{"15":1,"38":10,"39":7,"56":3}}],["initializations",{"2":{"116":2}}],["initialization",{"0":{"28":1},"2":{"12":1,"15":2,
"28":1,"35":4,"47":1,"116":1,"174":1,"185":1,"201":1,"242":1,"285":1}}],["initial",{"2":{"9":1,"10":1,"34":1,"38":11,"56":2,"100":1,"107":1,"174":1,"201":1,"284":1,"285":1,"287":2}}],["initialstates",{"2":{"7":3,"8":1,"10":1,"153":2,"174":2,"201":1,"293":2}}],["initialparameters",{"2":{"7":3,"8":1,"9":1,"153":2,"174":2,"201":1,"242":3}}],["init",{"2":{"15":7,"35":10,"38":26,"39":21,"41":16,"51":6,"56":6,"81":5,"116":2,"117":1,"138":1,"153":10,"174":4,"186":8,"201":2,"202":2,"271":2,"285":6}}],["int=4",{"2":{"295":1}}],["int=6",{"2":{"295":1}}],["int=64",{"2":{"260":1}}],["int=length",{"2":{"293":1}}],["int=16",{"2":{"295":1}}],["int=10",{"2":{"295":1}}],["int=100",{"2":{"291":1,"295":1}}],["int=128",{"2":{"273":1,"295":1}}],["int=200",{"2":{"260":1}}],["int=20",{"2":{"260":1}}],["int=2",{"2":{"260":1}}],["int=50000",{"2":{"253":1}}],["int=32",{"2":{"250":1,"253":1}}],["int=0",{"2":{"30":5,"253":1}}],["intro",{"2":{"187":1}}],["introduction",{"2":{"108":1}}],["introductory",{"2":{"101":1}}],["introducing",{"2":{"45":1}}],["introduces",{"2":{"187":1}}],["introduced",{"2":{"15":1}}],["introduce",{"2":{"11":1,"55":1,"127":2}}],["int64",{"2":{"50":3,"76":19,"77":5,"78":2,"81":7,"86":1,"89":16,"118":18,"133":4,"188":3,"190":2,"224":2,"237":99,"268":8}}],["int",{"2":{"35":2,"38":2,"40":1,"41":1,"42":1,"66":1,"75":2,"76":6,"78":2,"86":27,"153":4,"196":1,"231":2,"241":3,"271":6,"272":1,"273":5,"277":1,"291":1,"293":11}}],["into",{"0":{"137":1},"2":{"8":1,"15":2,"24":1,"30":1,"34":2,"35":2,"37":3,"38":4,"40":1,"42":1,"47":1,"50":1,"51":1,"53":5,"56":2,"63":1,"73":1,"74":1,"77":2,"81":3,"86":6,"89":1,"107":1,"118":1,"147":1,"151":1,"153":1,"188":2,"201":1,"209":1,"230":1}}],["intentionally",{"2":{"173":1}}],["intended",{"2":{"84":1,"86":3}}],["internedstrings",{"2":{"229":1}}],["internals",{"2":{"242":1,"278":1}}],["internally",{"2":{"45":1,"55":1,"86":2,"107":1,"164":1,"237":1}}],["internal",{"0":{"86":1},"2":{"22":1,"34":1,"40":2,"47":1,"49":1,"55":2,"66":2,"67":3,"98":1,"10
0":1,"114":1,"137":1}}],["international",{"2":{"15":6,"50":3,"66":1}}],["intermediate",{"0":{"217":1}}],["interactiveutils",{"2":{"197":2,"206":2,"214":2,"227":2,"229":1,"238":2,"246":2,"255":2,"261":2,"269":2,"275":2,"280":2,"289":2,"297":2}}],["interactive",{"2":{"178":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["interested",{"2":{"166":1,"279":1}}],["interest",{"2":{"84":1}}],["interpolatingadjoint",{"2":{"237":13}}],["interpolationsunitfulext",{"2":{"276":1,"282":1}}],["interpolations",{"2":{"276":2,"282":2}}],["interpolation",{"2":{"78":3,"79":1,"82":1,"86":1}}],["interpreted",{"2":{"79":1}}],["interoperability",{"0":{"43":1},"1":{"44":1,"45":1,"46":1,"47":1}}],["inter",{"2":{"19":1}}],["intervalsetsext",{"2":{"276":1,"282":2}}],["intervalsetsrecipesbaseext",{"2":{"276":1,"282":2}}],["intervalsetsrandomext",{"2":{"276":1,"282":1}}],["intervalsetsstatisticsext",{"2":{"276":1,"282":1}}],["intervalsets",{"2":{"276":4,"282":4}}],["intervalarithmeticrecipesbaseext",{"2":{"276":2,"282":2}}],["intervalarithmeticdiffrulesext",{"2":{"276":1,"282":1}}],["intervalarithmeticintervalsetsext",{"2":{"276":1,"282":1}}],["intervalarithmetic",{"2":{"263":1,"276":5,"282":5}}],["intervalarithmeticforwarddiffext",{"2":{"263":2,"276":2,"282":2}}],["interval",{"2":{"15":2}}],["interfacem",{"2":{"47":1}}],["interface",{"0":{"151":1,"152":1,"155":1,"156":1},"1":{"152":1,"153":2,"154":2,"155":1,"156":1},"2":{"7":1,"32":2,"47":2,"151":2,"153":1,"221":1,"231":1,"278":1}}],["integral",{"2":{"50":1}}],["integrate",{"2":{"147":1}}],["integrated",{"2":{"8":1,"160":1}}],["integrating",{"0":{"137":1}}],["integration",{"0":{"31":1,"32":1},"2":{"138":1}}],["integers",{"2":{"28":1,"35":11,"37":15,"39":5,"76":5,"81":5,"86":1}}],["integer",{"2":{"2":3,"4":3,"15":11,"35":12,"37":9,"39":1,"40":1,"41":5,"54":3,"75":6,"76":5,"78":11,"79":2,"81":3,"86":1,"272":1}}],["intelopenmp",{"2":{"263":1,"276":2,"282":2}}],["intelligence",{"2":{"15":
2}}],["intel",{"2":{"3":1}}],["insert",{"2":{"86":2}}],["instructions",{"2":{"88":1}}],["institute",{"2":{"71":1}}],["installation",{"0":{"88":1},"2":{"238":1,"246":1}}],["install",{"0":{"68":1},"2":{"68":1,"69":1,"70":1,"88":1,"89":1,"91":1}}],["installed",{"2":{"28":4,"32":1,"137":1,"189":1}}],["instability",{"2":{"55":1,"160":1,"183":1}}],["instabilities",{"0":{"160":1},"2":{"45":1,"118":1,"160":2}}],["instancenorm",{"2":{"41":7,"66":2,"116":1,"117":1}}],["instance",{"2":{"38":1,"41":3,"66":3,"80":3,"278":1}}],["instead",{"2":{"3":2,"4":2,"8":1,"22":2,"27":2,"35":3,"45":1,"49":1,"52":5,"53":1,"54":1,"56":1,"58":1,"74":1,"76":1,"78":2,"80":3,"84":1,"85":1,"86":2,"95":1,"110":1,"111":1,"112":1,"114":1,"115":4,"116":2,"130":1,"131":1,"137":1,"145":1,"147":1,"151":1,"154":1,"155":1,"158":1,"161":1,"163":1,"167":1,"174":2,"184":1,"188":1,"201":1,"228":1,"276":1}}],["inside",{"2":{"23":1,"54":1,"56":1,"84":1,"144":1,"164":3,"174":3}}],["inspiration",{"2":{"5":1}}],["in",{"0":{"33":1,"116":1,"130":1},"1":{"34":1,"35":1,"36":1,"37":1,"38":1,"39":1,"40":1,"41":1,"42":1},"2":{"2":2,"3":1,"4":2,"5":2,"7":4,"8":5,"10":3,"15":15,"18":2,"21":1,"22":2,"23":3,"24":3,"25":6,"28":1,"30":2,"31":1,"34":20,"35":22,"36":2,"37":3,"38":23,"39":13,"40":6,"41":10,"45":5,"47":1,"49":7,"50":3,"51":4,"52":12,"54":1,"55":4,"56":18,"58":5,"60":3,"62":2,"64":3,"66":2,"68":3,"71":2,"72":1,"73":2,"74":2,"75":2,"76":18,"77":8,"78":14,"79":5,"80":4,"81":8,"82":12,"83":1,"84":5,"85":1,"86":41,"88":1,"90":8,"91":1,"94":1,"95":1,"98":2,"99":1,"100":3,"101":2,"102":1,"104":1,"107":1,"116":1,"117":1,"119":4,"120":2,"122":1,"124":1,"125":1,"126":1,"127":11,"128":3,"131":1,"132":1,"137":4,"140":2,"141":1,"142":2,"143":1,"144":1,"147":12,"149":1,"153":10,"156":1,"158":2,"159":2,"160":1,"163":5,"164":3,"165":3,"166":8,"167":1,"168":1,"169":2,"172":1,"173":2,"174":1,"177":2,"178":2,"184":1,"185":1,"186":1,"187":3,"188":2,"190":1,"191":3,"192":13,"196":2,"198":1,"199":6,"200":3,"201":6,"202":3,"204":3,"207":1
,"211":1,"212":2,"219":1,"221":2,"224":1,"225":4,"229":38,"233":1,"234":2,"236":1,"240":14,"242":1,"244":1,"245":4,"247":1,"252":1,"253":1,"254":2,"258":5,"260":12,"262":1,"263":9,"268":1,"270":2,"271":1,"273":2,"274":2,"276":21,"277":6,"278":2,"279":13,"282":62,"284":1,"285":3,"290":1,"293":10,"295":2,"296":2}}],["idxs",{"2":{"272":2,"292":3}}],["idx",{"2":{"81":27,"86":5,"244":2,"245":12,"260":15,"273":4,"296":4}}],["ideal",{"2":{"213":1}}],["ideally",{"2":{"67":1,"220":1}}],["identity",{"2":{"15":13,"34":1,"63":1,"66":5,"73":1,"86":1,"89":3,"118":3,"133":2,"237":9,"268":2}}],["id",{"2":{"2":7,"4":3}}],["ifelse",{"2":{"187":1,"276":1,"282":1,"292":1}}],["if",{"2":{"1":2,"2":13,"3":9,"4":3,"6":1,"7":1,"8":5,"11":2,"19":2,"22":3,"23":1,"24":5,"25":2,"28":2,"34":17,"35":8,"36":10,"38":28,"39":12,"40":5,"41":14,"42":3,"45":2,"47":1,"49":6,"50":8,"51":4,"52":3,"54":1,"55":3,"56":6,"57":1,"60":4,"62":2,"63":2,"64":7,"65":2,"66":6,"68":1,"71":2,"73":1,"76":11,"77":1,"79":1,"80":3,"81":4,"84":4,"86":14,"88":1,"89":1,"90":2,"95":1,"96":3,"97":3,"100":1,"107":2,"114":1,"118":2,"119":2,"122":2,"123":1,"124":1,"126":1,"127":1,"128":1,"140":1,"141":1,"143":1,"144":1,"147":1,"148":2,"149":4,"150":1,"151":1,"153":4,"154":3,"155":3,"162":1,"164":3,"165":2,"168":1,"176":1,"177":1,"180":3,"184":1,"185":2,"186":2,"189":2,"191":1,"192":1,"196":1,"197":3,"204":1,"206":3,"209":1,"212":1,"214":3,"219":1,"220":2,"225":2,"227":3,"230":1,"238":3,"241":2,"246":3,"253":1,"255":3,"260":3,"261":3,"268":1,"269":3,"273":2,"274":4,"275":3,"280":3,"283":3,"289":3,"295":1,"297":3}}],["isoband",{"2":{"276":2,"282":2}}],["isdefined",{"2":{"197":3,"206":3,"214":3,"227":3,"238":3,"246":3,"255":3,"261":3,"269":3,"274":1,"275":3,"280":3,"289":3,"297":3}}],["isnan",{"2":{"253":1,"295":1}}],["isn",{"2":{"107":1,"177":1,"283":1}}],["istft",{"2":{"86":1}}],["istraining",{"2":{"51":4}}],["issuing",{"2":{"54":1}}],["issue",{"2":{"54":1,"86":1,"119":1,"122":1,"128":1,"141":1,"162":1,"164":1,"165":2,"168":1,"219
":1,"220":2}}],["issues",{"2":{"38":1,"49":1,"56":1,"77":1,"86":1,"100":1,"101":1,"123":2,"124":1,"164":1}}],["isbitstype",{"2":{"52":1}}],["iszero",{"2":{"23":1,"260":1}}],["isa",{"2":{"15":1,"23":1,"56":1,"84":1,"86":1,"204":2,"212":2}}],["isleaf",{"2":{"3":5,"10":1,"224":2}}],["is",{"2":{"0":1,"1":4,"2":19,"3":18,"4":10,"5":3,"6":1,"7":6,"8":12,"10":2,"11":4,"12":1,"15":9,"18":4,"22":5,"23":4,"24":6,"28":2,"32":2,"34":23,"35":3,"36":7,"37":3,"38":38,"39":9,"40":10,"41":18,"42":3,"45":3,"47":7,"49":6,"50":36,"51":5,"52":14,"53":1,"54":5,"55":3,"56":9,"57":2,"58":9,"60":2,"61":1,"62":3,"63":4,"64":6,"65":2,"66":7,"67":4,"68":1,"72":1,"73":1,"74":3,"75":4,"76":16,"77":2,"78":3,"79":7,"80":13,"81":8,"82":5,"83":3,"84":13,"85":5,"86":50,"88":1,"90":2,"94":2,"97":2,"100":6,"102":1,"105":1,"107":1,"108":1,"110":1,"114":3,"116":1,"117":1,"118":2,"119":2,"120":1,"122":5,"124":3,"125":2,"126":4,"127":5,"128":1,"130":1,"133":1,"136":1,"137":4,"138":1,"139":1,"140":2,"141":2,"144":2,"147":1,"148":1,"149":9,"150":1,"151":4,"153":3,"154":6,"155":2,"158":2,"159":2,"160":1,"161":2,"163":3,"164":3,"165":12,"166":2,"169":1,"170":1,"172":1,"174":6,"176":3,"178":1,"180":2,"181":1,"186":3,"187":1,"188":4,"189":3,"190":4,"192":3,"196":1,"200":1,"201":1,"202":1,"203":2,"207":1,"213":2,"221":2,"225":4,"231":1,"234":2,"237":1,"242":1,"247":1,"253":7,"256":1,"260":104,"267":1,"268":1,"270":1,"274":5,"276":3,"277":1,"278":8,"279":2,"281":1,"283":1,"285":2,"290":1,"295":2}}],["fθ",{"2":{"279":1}}],["ffmpeg",{"2":{"276":1,"282":1}}],["fftw",{"2":{"263":1,"276":2,"282":2}}],["fft−0",{"2":{"86":1}}],["fft",{"2":{"86":14}}],["fw",{"2":{"196":1}}],["fwiw",{"2":{"154":1}}],["fd",{"2":{"165":8,"166":8,"167":8,"168":8}}],["fdrop",{"2":{"73":3}}],["f2",{"2":{"153":2}}],["fb",{"2":{"86":3}}],["fmax",{"2":{"86":2}}],["fmaps",{"2":{"8":1}}],["fmap",{"2":{"7":3,"8":2,"10":1,"23":1,"52":22,"278":1}}],["fmin",{"2":{"86":2}}],["f=identity",{"2":{"66":1}}],["f=σ",{"2":{"38":1}}],["fp32",{"2":{"60":1}}],["ft
",{"2":{"55":3}}],["f64",{"2":{"53":1,"285":1}}],["f32>",{"2":{"147":11}}],["f32",{"2":{"53":1}}],["f16",{"2":{"53":1}}],["f1",{"2":{"50":1,"153":2}}],["fn=neuralode",{"2":{"232":1}}],["fn",{"2":{"49":2,"50":2,"56":8,"232":1,"292":2}}],["f",{"2":{"18":5,"19":7,"23":3,"34":4,"40":6,"49":2,"52":4,"56":1,"58":34,"60":4,"63":2,"65":2,"84":9,"86":9,"95":2,"96":6,"147":4,"174":1,"186":1,"193":3,"194":1,"195":1,"196":1,"225":1,"292":2}}],["func",{"2":{"147":2,"225":3}}],["functor",{"2":{"52":2,"174":2}}],["functors",{"2":{"3":1,"5":1,"7":2,"10":3,"23":2,"52":17,"53":4,"107":1,"112":1,"115":1,"118":1,"143":1,"174":1,"187":1,"276":2,"282":1}}],["functionwrappers",{"2":{"276":1,"282":1}}],["functionwrapperswrappers",{"2":{"229":1,"276":1,"282":1}}],["functionproperties",{"2":{"229":1,"282":1}}],["function3",{"2":{"168":4}}],["function2",{"2":{"167":4}}],["function1",{"2":{"165":5}}],["functions",{"0":{"15":1,"16":1,"29":1,"50":1,"58":1,"67":1,"86":1,"211":1,"233":1,"244":1,"251":1,"259":1,"273":1,"283":1,"294":1},"2":{"7":1,"13":1,"16":1,"34":1,"38":4,"45":1,"47":1,"49":1,"50":2,"51":1,"53":1,"56":1,"58":1,"60":1,"72":1,"86":1,"98":1,"99":2,"100":1,"107":1,"114":2,"118":1,"119":1,"120":1,"124":1,"140":1,"149":1,"153":3,"164":1,"168":1,"174":1,"183":1,"184":1,"188":1,"190":1,"194":2,"242":1,"267":1,"283":1}}],["function",{"0":{"130":1,"165":1,"167":1,"168":1,"267":1},"1":{"166":1},"2":{"3":1,"4":4,"8":4,"10":2,"11":3,"15":2,"18":4,"19":3,"22":1,"23":4,"27":2,"28":1,"34":2,"35":4,"38":3,"39":4,"40":5,"42":1,"47":1,"49":15,"50":2,"52":11,"53":1,"54":3,"55":1,"56":11,"58":19,"60":4,"61":1,"62":2,"63":1,"64":2,"65":1,"66":7,"73":1,"74":1,"83":2,"84":3,"86":15,"90":3,"95":1,"96":1,"100":1,"107":1,"114":2,"116":3,"118":9,"119":1,"127":2,"132":1,"133":1,"134":1,"137":1,"143":1,"144":3,"145":2,"147":3,"148":1,"149":2,"153":3,"154":1,"155":1,"164":5,"165":1,"166":6,"168":2,"170":1,"171":1,"172":1,"174":2,"177":1,"186":1,"189":2,"191":1,"193":1,"195":1,"196":1,"200":1,"202":1,"203":2,"2
09":1,"211":2,"212":1,"222":1,"225":4,"230":1,"231":4,"232":1,"233":2,"234":2,"235":2,"241":2,"242":1,"243":1,"244":2,"247":1,"248":1,"250":2,"251":14,"253":1,"256":1,"257":1,"258":2,"259":2,"260":14,"264":2,"267":3,"268":3,"270":1,"271":6,"272":3,"273":6,"274":1,"277":1,"278":5,"279":3,"283":16,"285":2,"286":7,"290":1,"291":3,"292":7,"293":5,"294":4,"295":1}}],["functionalities",{"2":{"24":1,"26":1}}],["functionality",{"0":{"24":1,"114":1,"115":1,"139":1},"2":{"6":1,"38":1,"49":1,"91":1,"107":2,"114":1,"151":2,"189":1}}],["functional",{"2":{"2":5,"3":5,"28":1,"136":1,"150":2,"183":1,"189":1,"197":2,"206":2,"214":2,"220":1,"227":2,"238":2,"246":2,"255":2,"261":2,"269":2,"275":2,"280":2,"289":2,"297":2}}],["future",{"2":{"86":1,"164":1,"187":1,"276":1,"282":1}}],["full",{"0":{"172":1},"2":{"81":1,"86":1,"166":1,"172":16,"274":4}}],["fully",{"0":{"65":1},"2":{"15":1,"39":2,"50":1,"52":1,"123":1,"136":1}}],["fusion",{"2":{"63":1,"65":1,"253":7,"260":113,"295":2}}],["fuse",{"2":{"63":1}}],["fuses",{"2":{"63":1}}],["fused",{"2":{"50":1,"63":2,"65":2,"161":3,"183":1}}],["further",{"2":{"49":1}}],["fetch",{"2":{"155":1,"264":1}}],["feel",{"2":{"153":1}}],["feedforward",{"2":{"15":2,"75":1,"278":1}}],["few",{"2":{"98":1,"188":1,"192":1}}],["fed",{"2":{"38":4}}],["features",{"0":{"21":1,"105":1,"108":1,"112":1,"117":1},"1":{"22":1,"23":1,"24":1,"25":1},"2":{"21":3,"58":2,"73":2,"78":1,"94":1,"99":1,"102":1,"114":1,"141":2,"164":1,"209":2,"230":2,"241":4,"257":1,"260":11}}],["feature",{"2":{"2":1,"15":2,"35":1,"37":6,"41":3,"55":1,"124":1,"128":1,"147":2,"164":2,"188":1}}],["flexibility",{"2":{"155":1}}],["flexible",{"2":{"7":1}}],["flows",{"0":{"290":1},"1":{"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1}}],["flow",{"2":{"118":2}}],["floor",{"2":{"86":2}}],["floating",{"0":{"53":1},"2":{"53":4,"60":1}}],["float16",{"2":{"16":8,"53":1,"54":6,"86":2}}],["float32",{"2":{"5":3,"15":6,"16":8,"34":2,"40":2,"45":1,"47":1,"50":6,"53":2,"54":5,"58":2,"74":1,"78":4,"86":14,"8
9":57,"90":1,"118":41,"119":1,"126":9,"127":27,"133":6,"134":3,"135":3,"143":10,"146":6,"147":22,"149":1,"150":1,"153":2,"154":2,"155":3,"158":5,"165":10,"166":2,"167":11,"168":11,"172":2,"173":2,"185":5,"188":8,"193":5,"194":4,"195":1,"196":8,"200":1,"209":1,"224":4,"230":1,"237":77,"252":5,"253":5,"264":3,"267":8,"273":1,"274":2,"277":8,"285":7,"291":1,"295":1,"296":2}}],["float64=0",{"2":{"260":3,"295":2}}],["float64",{"2":{"5":4,"16":8,"50":1,"53":1,"54":4,"56":4,"58":3,"74":3,"77":2,"78":4,"79":6,"81":2,"82":4,"84":2,"85":4,"86":3,"158":3,"188":5,"191":1,"224":4,"278":13,"284":1,"285":1,"286":1,"287":2}}],["flipkernel=true",{"2":{"117":1}}],["flips",{"2":{"77":1}}],["flipped=true",{"2":{"77":2}}],["flipped=false",{"2":{"77":2}}],["flipped",{"2":{"77":3}}],["flat",{"2":{"234":1}}],["flattening",{"2":{"114":1}}],["flattened",{"2":{"40":3,"45":1,"271":5,"276":1}}],["flatten",{"2":{"40":1,"74":1}}],["flattens",{"2":{"40":1}}],["flattenlayer",{"2":{"40":3,"47":1,"147":1,"210":3,"232":1,"237":9,"243":1,"271":1}}],["flaky",{"2":{"139":1}}],["flag",{"2":{"47":1}}],["fluxlinear",{"2":{"174":5}}],["fluxlayer",{"2":{"45":2}}],["fluxmpifluxmodel",{"2":{"139":1}}],["fluxmpi",{"0":{"138":1},"1":{"139":1,"140":1},"2":{"136":1,"138":3,"139":1,"140":2}}],["flux",{"0":{"45":1,"100":1,"173":1,"177":1},"1":{"174":1,"175":1,"176":1,"177":1},"2":{"38":3,"45":14,"74":3,"84":2,"98":1,"100":4,"139":1,"153":2,"173":3,"174":5,"176":4,"177":1,"187":1,"276":1,"281":1}}],["fribidi",{"2":{"276":1,"282":1}}],["friendly",{"2":{"98":2}}],["frequently",{"2":{"219":1}}],["frequencies",{"2":{"86":1}}],["frequency",{"2":{"86":6}}],["freqs=200",{"2":{"86":1}}],["freqs",{"2":{"86":7}}],["freq",{"2":{"86":2}}],["freetypeabstraction",{"2":{"276":1,"282":1}}],["freetype",{"2":{"276":1,"282":1}}],["freetype2",{"2":{"276":1,"282":1}}],["frees",{"2":{"163":1}}],["freeze",{"2":{"22":5,"142":1,"143":6,"144":6,"145":5,"146":1}}],["freezing",{"0":{"22":1,"142":1,"143":1,"144":1,"145":1,"146":1},"1":{"143":1,"1
44":1,"145":1,"146":1},"2":{"142":1,"144":1,"145":2}}],["free",{"2":{"5":1,"15":1,"49":1,"283":1}}],["freeable",{"2":{"5":1}}],["framerate=10",{"2":{"254":1}}],["framework",{"2":{"98":1,"100":1,"151":1,"187":1,"190":1,"207":1}}],["frameworks",{"0":{"44":1},"1":{"45":1},"2":{"35":1,"38":1,"98":3,"151":1,"188":1,"285":1}}],["frame",{"2":{"86":2}}],["frames",{"2":{"86":4}}],["frontend",{"2":{"90":1}}],["frozen",{"2":{"22":8,"143":9,"146":4}}],["frozenlayer",{"2":{"22":8}}],["fromfluxadaptor",{"2":{"45":5,"177":1}}],["from",{"0":{"44":1,"138":1,"173":1},"1":{"45":1,"139":1,"140":1,"174":1,"175":1,"176":1,"177":1},"2":{"4":1,"5":3,"7":2,"11":1,"15":6,"16":12,"23":1,"34":1,"35":2,"36":1,"38":10,"39":1,"41":2,"45":1,"47":1,"49":6,"50":2,"51":1,"56":1,"58":6,"60":1,"64":1,"67":1,"75":1,"76":1,"77":2,"78":4,"79":1,"81":4,"82":4,"83":1,"84":1,"86":5,"89":1,"90":1,"100":1,"114":2,"115":1,"116":2,"147":1,"153":1,"154":1,"155":1,"158":1,"161":1,"163":3,"165":2,"166":2,"167":1,"168":1,"188":1,"196":2,"201":1,"207":1,"221":1,"225":1,"228":1,"237":3,"247":1,"262":1,"264":1,"266":1,"271":2,"273":1,"276":1,"278":3,"279":4,"281":2,"283":3,"285":1,"291":2}}],["far",{"2":{"153":1}}],["farley",{"2":{"40":1}}],["familiar",{"2":{"147":1,"198":1,"242":1}}],["fake",{"2":{"127":1}}],["facusapienza",{"2":{"165":1}}],["fact",{"2":{"86":1,"140":1}}],["factors",{"2":{"78":5,"86":1}}],["factor",{"2":{"15":2,"42":2,"50":2,"64":2,"66":8,"78":1,"86":2}}],["facilitates",{"2":{"55":1}}],["fausto",{"2":{"50":1}}],["failed",{"2":{"126":1}}],["failing",{"2":{"96":1}}],["failures",{"2":{"45":1,"119":1}}],["fail",{"2":{"45":1,"96":4,"123":1}}],["fails",{"2":{"24":1,"95":1,"119":1,"122":2,"124":1}}],["favor",{"2":{"35":1,"52":5}}],["fashionmnist",{"0":{"239":1},"1":{"240":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1},"2":{"241":1,"245":2}}],["fashion",{"2":{"28":1}}],["fastbroadcast",{"2":{"229":1,"282":1}}],["fastlapackinterface",{"2":{"229":1,"282":1}}],["fastpowerenzymeext",{"2":{"229":1,"282":1}}],[
"fastpowerreversediffext",{"2":{"229":1,"282":1}}],["fastpowerforwarddiffext",{"2":{"229":1,"282":1}}],["fastpowertrackerext",{"2":{"229":1,"282":1}}],["fastpower",{"2":{"229":5,"282":5}}],["fastclosures",{"2":{"187":1,"276":1,"282":1}}],["fastest",{"2":{"122":5,"170":1}}],["faster",{"0":{"161":1},"2":{"3":1,"58":4,"60":2,"61":3,"80":1,"122":1,"161":2,"162":1,"164":1}}],["fast",{"2":{"21":1,"41":1,"47":1,"58":12,"60":4,"62":2,"66":1,"67":1,"84":2,"86":5,"114":1,"161":2,"163":1,"169":1,"194":1,"285":3}}],["fancy",{"2":{"90":1,"118":1}}],["fan",{"2":{"15":6,"35":2}}],["fallback",{"2":{"7":1,"11":3,"63":2,"65":2,"67":1,"107":1,"177":1}}],["fall",{"2":{"3":1}}],["falls",{"2":{"2":1,"60":1}}],["false",{"2":{"2":2,"10":1,"24":1,"34":2,"35":2,"38":10,"39":3,"45":1,"47":2,"49":2,"50":6,"51":1,"52":1,"55":1,"64":1,"67":1,"77":1,"84":1,"86":11,"97":1,"116":1,"119":1,"159":1,"164":1,"179":1,"180":1,"209":1,"229":1,"230":1,"231":3,"235":2,"237":13,"240":1,"241":2,"260":2,"272":1,"274":2,"286":1,"295":1}}],["fitting",{"0":{"262":1},"1":{"263":1,"264":1,"265":1,"266":1,"267":1,"268":1,"269":1}}],["fit",{"2":{"225":1,"253":4,"262":1,"279":1}}],["figure",{"2":{"223":1,"226":1,"254":1,"264":1,"268":1,"277":1,"284":1,"285":1,"288":2,"291":1,"296":1}}],["figured",{"2":{"127":1}}],["fig",{"2":{"223":3,"226":3,"254":6,"264":3,"268":3,"277":3,"279":6,"284":3,"285":3,"288":6,"291":3,"296":3}}],["finite",{"2":{"165":4,"166":2,"169":1}}],["finitediffsparsearraysext",{"2":{"276":1,"282":2}}],["finitediffstaticarraysext",{"2":{"276":1,"282":2}}],["finitediff",{"2":{"96":1,"164":1,"165":2,"166":2,"276":3,"282":3}}],["finish",{"2":{"125":1}}],["fingerprint",{"2":{"90":1,"204":1,"213":1,"253":1,"260":1,"274":1,"295":1}}],["finetune",{"2":{"225":1}}],["fine",{"2":{"80":1,"154":1}}],["findmax",{"2":{"279":1}}],["find",{"2":{"56":1,"128":1,"196":1,"219":1,"279":1}}],["final",{"2":{"56":2,"78":1,"245":3}}],["finally",{"0":{"213":1},"2":{"28":1,"56":1,"137":1,"153":1,"154":1,"188":1,"196":1,"201":1,"
237":1,"268":1,"288":1}}],["fix",{"2":{"123":1,"126":1,"127":2,"158":2,"165":2,"169":1,"219":1}}],["fixing",{"2":{"110":1,"168":1,"185":1}}],["fixedpointnumbers",{"2":{"276":1,"282":1}}],["fixed=",{"2":{"56":1}}],["fixed",{"2":{"55":1,"56":4,"64":1,"80":1,"86":1,"115":2,"126":4,"190":1}}],["fix1",{"2":{"36":1,"40":1,"164":3,"167":1,"168":2,"232":1,"237":9,"272":1,"285":2,"293":2}}],["fillarrayspdmatsext",{"2":{"276":1,"282":1}}],["fillarrayssparsearraysext",{"2":{"192":1,"276":1,"282":1}}],["fillarraysstatisticsext",{"2":{"192":1,"276":1,"282":1}}],["fillarrays",{"2":{"192":3,"276":4,"282":4}}],["fill",{"2":{"50":1,"58":1,"81":1,"188":1,"274":1}}],["filterbank",{"2":{"86":3}}],["filterbanks",{"2":{"86":6}}],["filters=64",{"2":{"274":1}}],["filters",{"2":{"86":2,"271":24,"274":1}}],["filter",{"2":{"15":1,"77":2,"86":12}}],["filepaths",{"2":{"276":1,"282":1}}],["filepathsbasetestext",{"2":{"276":1,"282":1}}],["filepathsbasemmapext",{"2":{"276":1,"282":1}}],["filepathsbase",{"2":{"276":3,"282":3}}],["fileio",{"2":{"229":1,"276":1,"282":1}}],["filename",{"2":{"119":1}}],["file",{"2":{"1":1,"119":1,"147":3,"149":3}}],["fields",{"2":{"34":10,"40":2,"49":1,"52":1,"155":1}}],["fieldnames",{"2":{"7":1,"201":1}}],["field",{"2":{"7":3,"8":1,"34":1,"45":2,"51":1,"201":1}}],["first",{"2":{"2":1,"8":1,"23":1,"25":1,"34":2,"37":3,"40":2,"45":1,"47":1,"56":6,"73":1,"76":8,"78":8,"79":2,"80":1,"85":1,"86":2,"89":1,"118":2,"124":2,"127":4,"138":1,"149":1,"153":2,"155":2,"158":2,"164":1,"165":1,"173":1,"174":1,"178":1,"190":1,"191":2,"193":1,"196":1,"201":3,"204":1,"211":1,"212":1,"213":1,"231":1,"233":1,"244":1,"249":1,"260":1,"268":2,"271":1,"274":1,"276":1,"277":2,"279":3,"284":1,"285":2,"286":1,"288":1}}],["fontconfig",{"2":{"276":1,"282":1}}],["footnotes",{"0":{"124":1}}],["fold",{"2":{"77":5}}],["foldl",{"2":{"51":3}}],["follow",{"2":{"119":1}}],["follows",{"2":{"3":1,"5":1,"34":1}}],["following",{"2":{"2":1,"5":1,"19":1,"23":1,"52":1,"53":1,"54":2,"67":1,"68":2,"69":2,"70":2,"7
4":2,"86":2,"119":1,"143":1,"148":1,"151":1,"157":1,"158":1,"162":1,"164":1,"169":1,"173":1,"178":2,"180":1,"201":1,"251":1,"279":1}}],["focuses",{"2":{"50":1}}],["focalloss",{"2":{"50":2}}],["focal",{"2":{"50":4}}],["four",{"2":{"188":1}}],["fourier",{"2":{"86":5}}],["fourth",{"2":{"50":1}}],["found",{"2":{"2":1,"71":1,"220":1}}],["forum",{"2":{"164":1}}],["forget",{"2":{"125":1}}],["fore",{"2":{"73":1}}],["formulas",{"2":{"283":1}}],["format",{"2":{"188":1,"276":1,"282":1}}],["formats",{"2":{"34":4,"40":1}}],["forms",{"2":{"50":1}}],["form",{"2":{"39":1,"56":1,"84":1,"184":1,"190":2}}],["forwarded",{"2":{"95":1}}],["forwarddiffext",{"2":{"282":1}}],["forwarddiffstaticarraysext",{"2":{"187":1,"276":1,"282":1}}],["forwarddiff",{"2":{"52":1,"67":1,"84":6,"96":1,"121":1,"164":3,"165":2,"167":4,"168":2,"172":1,"187":2,"192":2,"193":4,"194":1,"276":2,"282":2}}],["forward",{"2":{"24":1,"38":1,"39":2,"51":1,"56":5,"90":1,"121":2,"127":1,"128":1,"165":1,"167":1,"168":1,"192":1,"193":2,"194":2,"268":1,"278":1,"279":5,"292":3,"293":1}}],["forbidden",{"2":{"15":1}}],["force=true",{"2":{"248":1,"256":1,"270":1,"290":1}}],["forces",{"2":{"100":1}}],["force",{"2":{"2":6,"45":4,"95":1,"148":1}}],["for",{"0":{"46":1,"135":1,"162":1,"166":1,"178":1,"187":1,"270":1,"286":1,"290":1},"1":{"47":1,"179":1,"180":1,"181":1,"182":1,"183":1,"184":1,"188":1,"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1,"271":1,"272":1,"273":1,"274":1,"275":1,"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1},"2":{"0":1,"1":1,"2":3,"3":14,"4":9,"5":3,"6":1,"7":11,"8":7,"11":1,"12":1,"15":11,"18":2,"19":3,"20":1,"22":5,"23":3,"24":8,"25":2,"27":2,"28":8,"29":2,"34":11,"35":10,"37":15,"38":24,"39":12,"40":6,"41":18,"42":8,"45":3,"47":5,"49":8,"50":9,"51":5,"52":15,"53":3,"54":4,"55":2,"56":16,"57":3,"58":5,"59":1,"60":1,"61":1,"63":5,"64":8,"65":5,"66":15,"67":3,"72":3,"73":2,"74":1,"75":9,"76":6,"77":6,"78":5,"79":7,"80":12,"81":11,"82":6,"83":3,"84":7,"85":1,"86":40,"88":1,"89"
:1,"90":6,"91":1,"92":1,"93":5,"94":2,"95":1,"96":6,"98":1,"99":1,"100":9,"101":1,"104":2,"105":2,"107":2,"108":1,"109":1,"110":1,"111":1,"112":1,"114":1,"116":2,"117":1,"118":5,"119":4,"120":2,"122":14,"123":2,"124":6,"125":1,"126":1,"127":1,"130":1,"132":2,"133":1,"136":1,"137":1,"140":1,"141":2,"142":1,"144":2,"148":1,"151":2,"153":6,"154":3,"155":3,"156":1,"157":2,"158":3,"160":1,"162":1,"163":3,"164":6,"166":1,"167":2,"168":1,"169":1,"170":1,"172":2,"173":1,"176":1,"179":2,"181":1,"182":1,"183":5,"184":1,"185":2,"186":1,"187":1,"188":3,"189":1,"190":2,"191":2,"192":5,"193":2,"194":2,"196":3,"200":3,"201":4,"202":2,"203":1,"204":5,"207":1,"211":1,"212":2,"213":1,"220":1,"221":2,"225":3,"230":1,"231":2,"233":1,"234":5,"237":3,"244":1,"245":3,"247":1,"252":2,"253":1,"254":2,"258":3,"260":2,"265":1,"268":2,"272":1,"273":1,"274":2,"277":5,"278":2,"279":5,"284":1,"293":8,"295":1,"296":2}}],["tval",{"2":{"260":1}}],["t∈",{"2":{"252":1}}],["tmp",{"2":{"241":6,"283":4}}],["tmatch",{"2":{"8":3}}],["ttrain",{"2":{"260":2}}],["ttraining",{"2":{"234":1,"245":2,"295":1}}],["ttest",{"2":{"234":1,"245":2,"260":1}}],["ttime",{"2":{"212":2,"234":3,"245":3}}],["td",{"0":{"135":1}}],["tdchain",{"0":{"133":1},"2":{"132":4,"134":1,"135":1}}],["tl",{"2":{"125":1}}],["tloss",{"2":{"119":1,"204":1}}],["t=rand",{"2":{"96":1}}],["t=float32",{"2":{"15":8}}],["t×hop",{"2":{"86":2}}],["tpu",{"0":{"92":1},"2":{"70":1,"92":1,"118":3,"121":1,"122":1}}],["tsteps",{"2":{"284":5,"285":4,"288":6}}],["tstate",{"2":{"196":6,"234":7,"268":12}}],["tsit5",{"2":{"223":1,"225":1,"226":1,"237":13}}],["tspan=",{"2":{"231":2,"235":1}}],["tspan",{"2":{"223":5,"226":2,"231":5,"235":3,"237":7,"284":4,"285":1,"288":1}}],["tsung",{"2":{"50":2}}],["ts",{"2":{"49":16,"254":6,"277":1,"278":4}}],["tail",{"2":{"278":1}}],["taccuracy",{"2":{"204":1}}],["tall",{"2":{"192":1}}],["tasklocalrng",{"2":{"89":1,"146":1,"187":1,"196":1,"224":1}}],["tasks",{"2":{"50":3}}],["tag",{"2":{"84":2}}],["tab>`",{"2":{"193":1}}],["tab"
,{"2":{"58":2}}],["tables",{"2":{"192":1,"276":1,"282":1}}],["tabletraits",{"2":{"192":1,"276":1,"282":1}}],["table",{"2":{"39":1,"188":1}}],["tangent",{"2":{"58":1}}],["tanhshrink",{"2":{"58":7}}],["tanh",{"2":{"25":2,"38":1,"58":18,"74":1,"84":2,"86":2,"89":11,"165":1,"166":1,"167":3,"168":3,"172":3,"225":2,"232":4,"237":36,"250":3,"278":3}}],["targets",{"2":{"50":1,"209":2,"230":2,"241":4,"257":1,"260":9}}],["target",{"2":{"50":2,"95":10,"211":3,"233":3,"244":3,"251":6,"252":12,"253":10}}],["taking",{"2":{"40":1,"50":1,"51":1,"74":1,"86":1}}],["takeaway",{"2":{"155":1}}],["takes",{"2":{"8":1,"34":4,"50":2,"56":1,"86":1,"90":1,"118":1,"225":1,"267":1,"279":1,"285":2}}],["taken",{"2":{"5":1,"15":1,"35":2,"39":1,"57":1,"116":1,"118":1,"168":1}}],["take",{"2":{"1":1,"16":1,"23":1,"35":1,"49":1,"50":2,"57":1,"89":1,"123":1,"131":1,"165":1,"186":1,"250":1,"252":1,"278":1}}],["turingoptimext",{"2":{"276":1}}],["turing",{"2":{"276":9}}],["turns",{"2":{"74":1}}],["tutorials",{"0":{"215":1,"216":1,"217":1,"218":1,"220":1},"1":{"216":1,"217":1,"218":1,"219":1,"220":1},"2":{"56":1,"101":1,"220":1,"247":1}}],["tutorial",{"2":{"55":1,"90":1,"187":3,"192":1,"198":2,"207":2,"220":1,"221":6,"247":1,"256":1,"262":1,"276":2,"290":1,"291":1}}],["tu",{"2":{"18":1}}],["tuple=true",{"2":{"234":1}}],["tuples",{"2":{"7":2,"81":3}}],["tuple",{"2":{"3":2,"7":2,"15":3,"22":4,"23":1,"34":7,"35":11,"37":15,"38":22,"39":5,"41":1,"42":3,"47":1,"49":1,"50":1,"52":1,"66":2,"67":1,"75":3,"76":12,"81":2,"86":5,"89":12,"99":1,"118":1,"132":1,"167":1,"168":1,"224":2,"232":2,"237":36,"278":1,"293":2}}],["two",{"2":{"15":3,"34":4,"39":1,"41":1,"42":3,"52":2,"73":1,"78":1,"79":2,"82":1,"86":1,"149":1,"153":1,"154":1,"155":1,"169":1,"186":1,"278":1,"283":4,"285":1}}],["te",{"2":{"212":4,"213":2,"234":2}}],["tell",{"2":{"86":1}}],["temporal",{"2":{"83":1,"252":1}}],["technology",{"2":{"71":1}}],["tends",{"2":{"64":1,"165":1,"166":1}}],["tensordataset",{"2":{"272":4}}],["tensorcore",{"2":{"80":1,"276":1,"2
82":1}}],["tensors",{"2":{"42":1,"77":2}}],["tensorflow",{"2":{"38":1}}],["tensor",{"2":{"15":1,"41":1,"63":3,"75":3,"77":2,"147":96,"188":1,"271":1,"283":1}}],["terrible",{"2":{"49":1}}],["terminalloggers",{"2":{"276":1,"282":1}}],["terminate",{"2":{"225":1}}],["terminology",{"2":{"225":1}}],["terms",{"2":{"86":2,"124":1}}],["term",{"2":{"38":1,"165":1,"172":1,"278":2}}],["testext",{"2":{"276":1,"282":1}}],["tested",{"2":{"99":1,"100":1,"123":2,"192":1,"219":1}}],["testing",{"0":{"95":1},"2":{"86":2,"94":3,"99":1}}],["tests",{"2":{"86":2,"95":2,"96":2,"123":1,"141":2,"169":1}}],["test",{"0":{"97":1},"2":{"36":3,"41":2,"95":7,"96":10,"97":8,"100":1,"118":1,"124":1,"165":2,"209":5,"212":4,"213":20,"230":5,"234":3,"241":7,"245":13,"260":10,"276":1,"282":1,"284":1}}],["testmode`",{"2":{"165":2}}],["testmode",{"2":{"10":1,"36":3,"41":3,"118":3,"165":1,"176":1,"204":2,"211":1,"212":1,"233":1,"244":1,"253":1,"260":8,"268":2,"273":3,"274":2,"295":1}}],["tiffimages",{"2":{"276":1,"282":1}}],["tile",{"2":{"188":2}}],["tiles",{"2":{"188":1}}],["tilings",{"2":{"76":1}}],["tightly",{"2":{"184":1}}],["tier",{"2":{"121":9,"123":3}}],["tied",{"0":{"25":1},"2":{"100":1}}],["title=",{"2":{"296":1}}],["title",{"2":{"71":2,"254":1,"279":1}}],["tips",{"2":{"157":2}}],["tip",{"2":{"8":1,"38":1,"45":1,"47":1,"56":1,"65":1,"84":2,"85":1,"95":1,"136":1,"153":1,"164":2,"220":1,"283":1}}],["timeroutputs",{"2":{"282":1}}],["timewrapper",{"2":{"224":8,"225":1}}],["timelastindex",{"2":{"38":2,"51":1}}],["timestep",{"2":{"284":1}}],["timespace",{"2":{"284":1}}],["times",{"2":{"34":3,"58":2,"77":1,"94":1,"188":1,"191":1,"221":1}}],["time",{"0":{"132":1},"2":{"3":1,"38":2,"45":1,"75":1,"78":1,"83":3,"86":7,"118":1,"122":2,"131":3,"134":5,"212":3,"213":20,"224":1,"234":3,"236":1,"245":2,"254":1,"274":60,"284":2,"285":3,"286":1,"288":2,"295":4}}],["typing",{"2":{"88":1}}],["typical",{"2":{"56":1,"86":1}}],["typically",{"2":{"7":1,"38":1,"86":1,"100":1,"163":1,"167":1,"203":1,"252":1,"285":1}}],["typ
ed",{"2":{"193":1}}],["typejoin",{"2":{"86":5}}],["typeof",{"2":{"55":1,"84":1,"89":8,"118":9,"127":1,"133":2,"153":4,"237":80,"268":4,"285":2}}],["types",{"0":{"7":1,"13":1,"129":1},"1":{"130":1,"131":1,"132":1,"133":1,"134":1,"135":1},"2":{"3":1,"8":2,"22":1,"52":6,"58":1,"67":2,"77":1,"80":1,"84":1,"86":1,"109":2,"112":1,"133":1,"155":1,"156":1,"183":1,"185":2}}],["type",{"0":{"54":1,"158":1,"160":1,"237":1},"2":{"3":13,"4":8,"7":6,"8":5,"13":2,"15":7,"16":1,"28":5,"34":2,"41":1,"45":1,"47":1,"50":1,"52":6,"54":12,"55":3,"56":4,"63":1,"67":1,"73":1,"77":1,"86":14,"118":1,"126":5,"127":13,"130":1,"137":3,"143":1,"144":1,"154":1,"155":3,"156":1,"158":2,"160":2,"183":2,"186":1,"201":2,"204":2,"236":1,"237":1,"241":1,"291":1,"292":1,"293":1}}],["tr",{"2":{"169":2,"172":11,"212":4,"234":2}}],["trying",{"2":{"153":1,"225":1}}],["try",{"2":{"118":1,"122":1,"124":1,"126":2,"127":1,"133":1,"155":2,"172":1,"188":1,"196":1,"205":1,"220":1}}],["trelu",{"2":{"58":3}}],["treated",{"2":{"56":1,"62":1,"85":1}}],["treat",{"2":{"56":1,"74":2,"86":1,"96":1,"154":1}}],["triplotbase",{"2":{"276":1,"282":1}}],["triangularsolve",{"2":{"229":1,"282":1}}],["triangular",{"2":{"86":4}}],["trick",{"2":{"86":1,"271":1}}],["trivial",{"2":{"86":3,"100":1,"151":1,"190":1,"225":1,"237":26}}],["tries",{"2":{"63":1,"86":1}}],["trilinear",{"2":{"42":6,"78":9}}],["trigger",{"2":{"3":4,"4":2,"137":1,"148":1,"149":2}}],["truth",{"2":{"167":1,"196":1}}],["truncation",{"2":{"114":2}}],["truncatedstacktraces",{"2":{"229":1,"282":1}}],["truncated",{"2":{"13":2,"15":2}}],["truly",{"2":{"100":1}}],["true",{"0":{"284":1},"2":{"2":2,"3":2,"10":1,"15":2,"23":1,"24":2,"25":1,"34":3,"35":2,"36":4,"38":11,"42":1,"45":2,"47":1,"49":4,"50":40,"51":3,"52":1,"55":1,"56":4,"58":2,"64":6,"66":4,"77":2,"78":12,"80":1,"82":1,"84":1,"86":11,"89":8,"96":3,"99":1,"116":1,"118":10,"119":1,"126":1,"127":2,"133":2,"143":2,"146":2,"150":1,"165":3,"166":1,"167":1,"168":1,"170":1,"171":1,"172":1,"180":2,"184":1,"196":2,"203":4,"2
11":1,"223":2,"224":1,"225":2,"233":1,"235":1,"237":58,"244":1,"251":3,"253":2,"259":1,"264":1,"268":7,"276":2,"278":1,"284":1,"285":2,"293":2,"294":1,"295":1}}],["trace",{"0":{"169":1},"1":{"170":1,"171":1,"172":1},"2":{"169":7,"170":3,"171":2,"172":10,"283":4}}],["tracing",{"2":{"52":1,"118":1}}],["tracking",{"0":{"127":1},"2":{"117":1,"128":2}}],["trackerpdmatsext",{"2":{"229":1,"276":1,"282":1}}],["tracker",{"2":{"49":1,"52":1,"63":1,"65":1,"67":1,"84":1,"96":1,"121":1,"133":1,"229":1,"253":12,"276":3,"282":2}}],["tracked",{"2":{"126":1}}],["trackedarray",{"2":{"8":1,"84":1,"133":2}}],["trackedreals",{"2":{"8":1}}],["track",{"2":{"24":1,"34":2,"41":12,"54":1,"126":1,"127":5,"158":1}}],["traditional",{"2":{"39":1,"278":1}}],["transcodingstreams",{"2":{"276":1,"282":1}}],["transducersadaptext",{"2":{"276":1}}],["transducerslazyarraysext",{"2":{"229":2}}],["transducersdataframesext",{"2":{"229":1}}],["transducers",{"2":{"229":2,"276":2}}],["transb",{"2":{"86":1}}],["transa",{"2":{"86":1}}],["transformer",{"2":{"73":1}}],["transformed",{"2":{"62":1}}],["transform",{"2":{"45":1,"86":5,"272":4}}],["transformations",{"2":{"45":1}}],["transformation",{"2":{"41":1}}],["transforms",{"2":{"37":3,"293":7,"295":2,"296":2}}],["transferred",{"2":{"3":1,"174":1}}],["transferring",{"2":{"0":1,"5":1,"163":1}}],["transfer",{"0":{"2":1,"163":1},"2":{"2":1,"163":2,"174":1,"205":1}}],["transposed",{"2":{"35":2,"86":6,"147":1}}],["transpose",{"2":{"35":3,"77":2,"80":17,"86":14,"147":4,"188":1}}],["trainloader",{"2":{"272":2}}],["trainset",{"2":{"272":2}}],["trainstate",{"0":{"119":1},"2":{"49":14,"89":3,"90":1,"114":1,"118":2,"119":4,"196":1,"204":1,"212":1,"221":1,"234":1,"245":1,"250":1,"253":1,"260":1,"268":5,"274":1,"295":1}}],["trained2",{"2":{"204":2}}],["trained",{"2":{"47":1,"204":2,"205":8,"225":2,"226":2,"253":5,"254":1,"288":3,"295":1,"296":2}}],["train",{"0":{"236":1},"2":{"38":24,"47":1,"49":7,"56":1,"89":8,"90":9,"118":1,"119":10,"196":4,"200":4,"204":13,"207":1,"209":6,
"212":14,"213":4,"221":2,"225":3,"230":6,"234":8,"241":12,"245":24,"250":1,"252":1,"253":9,"260":23,"268":1,"272":6,"274":66,"290":1,"295":8}}],["trainmode`",{"2":{"260":1,"274":1}}],["trainmode",{"2":{"10":1}}],["trainable",{"2":{"7":1,"22":1,"35":2,"38":4,"39":3,"41":1,"49":2,"56":9,"174":9,"260":2,"274":2,"295":2}}],["trainingbackendcache",{"2":{"268":1}}],["training=val",{"2":{"36":3}}],["training",{"0":{"49":1,"136":1,"176":1,"198":1,"204":1,"212":1,"213":1,"221":1,"223":1,"225":1,"234":1,"239":1,"245":1,"247":1,"253":1,"260":1,"268":1,"274":1,"281":1,"286":1,"287":1,"295":1},"1":{"137":1,"138":1,"139":1,"140":1,"141":1,"199":1,"200":1,"201":1,"202":1,"203":1,"204":1,"205":1,"206":1,"222":1,"223":1,"224":1,"225":1,"226":1,"227":1,"240":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1,"248":1,"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1,"282":1,"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1},"2":{"4":3,"10":2,"15":2,"24":1,"27":2,"28":4,"36":6,"41":9,"49":15,"50":2,"51":3,"64":9,"66":8,"71":1,"84":1,"89":8,"90":3,"100":2,"114":1,"115":2,"118":1,"119":5,"124":1,"126":1,"127":3,"136":2,"143":2,"145":1,"146":2,"176":2,"196":5,"198":1,"204":2,"207":1,"212":3,"213":20,"221":1,"234":5,"236":1,"245":2,"250":1,"253":2,"260":3,"267":1,"268":6,"274":3,"286":2,"287":1,"295":2}}],["t",{"2":{"2":1,"3":5,"4":6,"8":1,"15":20,"16":1,"23":1,"25":1,"38":1,"40":1,"45":1,"49":1,"51":1,"53":1,"56":6,"60":2,"63":1,"65":1,"72":1,"78":14,"79":2,"80":14,"82":6,"84":1,"86":28,"89":1,"90":1,"94":1,"96":1,"100":1,"107":2,"110":1,"114":1,"123":2,"124":1,"125":1,"128":1,"130":1,"131":1,"132":4,"134":2,"140":1,"141":1,"153":4,"155":3,"156":1,"164":2,"165":1,"166":1,"167":10,"169":1,"174":1,"176":1,"177":1,"188":2,"190":2,"194":1,"196":1,"200":1,"201":4,"202":2,"204":2,"205":1,"209":1,"212":3,"219":1,"221":1,"223":6,"224":11,"225":8,"226":9,"230":1,"231":5,"235":3,"242":1,"243":1,"245":2,"252":6,"253":4,"257":1,"260":2,"268":1,"271":4,"283":16,"284":1,"285":1,"286":2,"291"
:24,"292":2,"293":9,"295":1}}],["th",{"2":{"86":2}}],["thousands",{"2":{"188":1}}],["though",{"2":{"84":1,"153":1,"162":1,"192":1}}],["those",{"2":{"7":1,"8":1,"42":1,"77":1,"86":2,"91":1,"96":1,"127":1,"141":1,"154":1,"188":1,"231":1,"279":1}}],["thunk",{"2":{"118":1,"127":1}}],["thumb",{"2":{"42":1}}],["thus",{"2":{"34":1,"42":1,"66":1,"74":1,"75":1,"80":1,"84":2,"86":1}}],["threadingutilities",{"2":{"187":1,"276":1,"282":1}}],["threads",{"2":{"86":2,"197":3,"206":3,"214":3,"227":3,"238":3,"246":3,"255":3,"261":3,"269":3,"275":3,"280":3,"289":3,"297":3}}],["thread",{"2":{"86":1}}],["threshold",{"2":{"58":1,"86":2}}],["three",{"2":{"40":1,"77":1,"188":1}}],["throughput",{"2":{"274":106,"295":3}}],["through",{"2":{"5":1,"8":1,"34":4,"41":4,"83":1,"86":1,"88":1,"101":2,"131":1,"155":1,"157":1,"174":1,"190":1,"201":3,"224":1,"271":1,"279":1}}],["throws",{"2":{"97":1}}],["throw",{"2":{"2":1,"3":1,"24":1,"49":1,"54":1,"107":1,"253":1}}],["thrown",{"2":{"2":2,"22":1,"24":2}}],["than",{"2":{"15":2,"58":1,"74":1,"77":1,"80":1,"86":1,"192":1,"234":1}}],["that",{"2":{"2":3,"3":8,"5":1,"7":4,"8":5,"10":1,"11":1,"15":3,"21":1,"22":1,"24":1,"25":2,"30":2,"34":5,"35":2,"36":2,"37":6,"39":1,"40":4,"41":6,"47":2,"49":1,"50":1,"51":1,"52":3,"55":1,"56":5,"58":2,"60":2,"63":1,"64":1,"65":2,"67":1,"73":1,"74":2,"75":1,"76":6,"77":2,"79":1,"80":2,"81":2,"82":1,"84":3,"86":33,"89":1,"90":1,"91":1,"94":1,"100":2,"102":2,"107":2,"110":2,"118":1,"119":2,"124":4,"125":2,"126":3,"127":1,"128":2,"137":5,"138":1,"140":1,"147":2,"153":3,"154":4,"155":1,"156":2,"158":3,"159":1,"164":3,"165":2,"166":3,"167":2,"169":2,"172":3,"173":1,"174":2,"177":1,"178":1,"180":1,"184":1,"188":1,"190":3,"191":1,"193":1,"194":1,"201":3,"205":2,"219":1,"221":2,"225":4,"234":1,"237":1,"249":1,"250":1,"252":1,"265":1,"267":1,"271":1,"278":1,"279":3,"285":3}}],["third",{"2":{"188":2}}],["thirteenth",{"2":{"15":2}}],["think",{"2":{"56":1,"153":1}}],["things",{"2":{"56":1,"77":1,"86":1,"153":1,"155":1,"188":1}}],["thi
ng",{"2":{"40":2,"158":1}}],["this",{"2":{"2":3,"3":8,"4":10,"5":1,"7":5,"8":6,"10":1,"11":6,"12":1,"15":5,"18":2,"19":2,"21":2,"23":2,"24":5,"27":2,"28":3,"31":1,"32":1,"34":3,"35":5,"37":3,"38":6,"39":5,"40":1,"41":5,"42":3,"45":6,"47":3,"49":8,"50":4,"51":1,"52":22,"53":2,"54":7,"55":6,"56":11,"58":4,"60":3,"61":1,"63":3,"65":3,"66":2,"67":4,"71":2,"75":1,"76":4,"79":3,"80":8,"81":1,"82":1,"84":6,"85":1,"86":23,"91":1,"94":2,"95":1,"96":2,"99":1,"100":2,"102":2,"107":6,"108":1,"109":1,"110":3,"115":1,"116":1,"117":1,"118":5,"119":3,"120":1,"122":5,"124":1,"125":2,"126":3,"128":2,"133":1,"137":3,"139":2,"141":2,"142":2,"144":1,"147":3,"148":2,"149":2,"151":1,"153":8,"155":6,"156":1,"158":7,"159":1,"160":1,"162":1,"163":4,"164":6,"165":9,"166":5,"167":1,"168":1,"169":3,"170":1,"172":2,"174":2,"176":3,"179":1,"180":2,"181":3,"183":1,"184":1,"187":1,"188":5,"190":4,"192":1,"196":1,"197":1,"198":2,"200":1,"202":2,"206":1,"207":1,"210":1,"213":1,"214":1,"221":3,"225":5,"227":1,"234":1,"235":1,"237":2,"238":1,"242":1,"246":1,"247":2,"253":7,"255":1,"256":2,"257":1,"260":103,"261":1,"262":1,"265":1,"267":1,"269":1,"270":1,"274":1,"275":1,"276":1,"278":4,"280":1,"281":1,"283":3,"285":2,"287":1,"289":1,"290":2,"291":1,"295":2,"297":1}}],["theoretical",{"2":{"283":1}}],["theoretically",{"2":{"192":1}}],["theorem",{"2":{"40":1}}],["theglobal",{"2":{"119":1}}],["thesis",{"2":{"71":1}}],["these",{"2":{"7":5,"8":1,"16":1,"21":1,"26":1,"28":1,"41":1,"45":3,"47":3,"49":1,"56":1,"72":1,"80":2,"86":4,"89":1,"98":2,"100":1,"107":2,"114":3,"123":7,"131":1,"141":1,"153":2,"154":2,"162":1,"164":1,"168":1,"180":1,"183":1,"184":3,"187":1,"190":1,"194":1,"219":3,"220":1,"228":1,"278":1}}],["theta",{"2":{"58":1}}],["theta=1",{"2":{"58":1}}],["they",{"2":{"49":1,"56":2,"86":1,"127":1,"143":1,"147":1,"174":1,"188":1,"192":2}}],["therefore",{"2":{"82":1,"86":1}}],["there",{"2":{"22":1,"47":1,"49":1,"79":2,"84":1,"86":2,"89":1,"109":1,"122":1,"124":1,"127":1,"128":1,"137":1,"158":1,"162":1,"16
4":1,"167":2,"186":1,"188":1,"213":1}}],["them",{"0":{"157":1},"1":{"158":1,"159":1,"160":1,"161":1,"162":1,"163":1},"2":{"21":1,"23":1,"34":1,"39":1,"42":1,"50":1,"53":4,"56":1,"58":1,"80":1,"84":1,"89":1,"98":1,"99":1,"107":1,"114":1,"123":1,"158":1,"183":1,"188":2,"191":1,"194":1,"220":2,"250":1,"278":1,"285":2}}],["their",{"2":{"7":2,"35":1,"101":1,"161":1,"188":1,"190":2}}],["then",{"2":{"1":2,"2":7,"3":1,"8":1,"19":2,"22":3,"24":2,"34":5,"35":4,"36":4,"38":9,"39":7,"40":1,"50":6,"51":1,"52":2,"55":2,"57":1,"58":2,"64":2,"66":1,"71":1,"73":1,"80":5,"86":5,"88":1,"90":1,"95":1,"96":3,"97":1,"100":1,"118":1,"147":1,"149":3,"153":1,"154":3,"169":2,"178":1}}],["the",{"0":{"119":1,"133":1,"134":1,"135":1,"145":1,"168":2,"170":1,"171":1,"172":1,"187":1,"202":1,"204":1,"205":1,"210":1,"212":1,"213":1,"224":1,"225":1,"226":1,"231":1,"232":1,"236":1,"243":1,"250":1,"251":1,"252":1,"254":1,"260":1,"274":1,"278":1,"284":1,"286":1,"287":1,"288":1,"291":1,"295":1,"296":1},"1":{"188":1,"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1},"2":{"1":6,"2":14,"3":31,"4":14,"5":13,"6":3,"7":19,"8":22,"9":4,"10":7,"11":6,"15":60,"16":25,"18":20,"19":13,"20":1,"21":1,"22":14,"23":22,"24":15,"25":14,"26":1,"28":10,"29":4,"30":11,"31":5,"32":3,"34":46,"35":38,"36":6,"37":45,"38":69,"39":19,"40":33,"41":58,"42":14,"45":12,"47":9,"49":41,"50":55,"51":4,"52":18,"53":7,"54":19,"55":18,"56":50,"57":6,"58":6,"60":10,"61":2,"62":3,"63":11,"64":16,"65":6,"66":21,"67":5,"68":7,"69":2,"70":3,"72":1,"73":19,"74":7,"75":5,"76":41,"77":11,"78":32,"79":6,"80":13,"81":14,"82":11,"83":11,"84":9,"85":4,"86":120,"88":3,"89":6,"90":11,"91":1,"95":6,"96":9,"97":4,"98":1,"99":5,"100":6,"101":4,"102":2,"104":2,"107":2,"108":2,"109":2,"110":4,"111":1,"114":11,"116":9,"117":4,"118":25,"119":12,"120":3,"122":12,"123":2,"124":3,"125":3,"126":6,"127":19,"130":2,"131":2,"132":1,"133":1,"134":3,"135":2,"136":1,"137":22,"138":3,"139":1,"140":6,"143":1,"144":7,"145":3,"147":9,"148":2,"149":5,"1
51":6,"153":21,"154":10,"155":4,"156":5,"157":1,"158":9,"160":1,"161":3,"162":2,"163":8,"164":9,"165":5,"166":16,"167":5,"168":13,"169":11,"170":2,"172":7,"173":4,"174":6,"176":4,"177":1,"178":4,"179":1,"180":1,"181":1,"182":1,"183":10,"184":4,"186":11,"187":2,"188":12,"189":7,"190":2,"191":5,"192":6,"193":1,"194":4,"195":2,"196":6,"198":2,"200":6,"201":24,"202":3,"203":2,"204":8,"205":5,"207":2,"209":2,"210":3,"212":1,"213":5,"219":1,"220":6,"221":5,"224":8,"225":11,"230":2,"231":13,"232":1,"234":3,"235":2,"236":3,"237":5,"242":6,"245":1,"247":2,"249":3,"250":2,"251":4,"252":3,"253":7,"260":103,"264":4,"267":3,"268":3,"270":1,"271":3,"273":6,"276":5,"277":3,"278":16,"279":22,"281":1,"283":7,"284":2,"285":10,"286":4,"287":2,"288":2,"290":1,"291":3,"295":2}}],["toolchain",{"2":{"238":1,"246":1}}],["tool",{"2":{"124":1}}],["tools",{"2":{"24":2,"56":1,"125":1,"187":1,"192":1}}],["too",{"2":{"98":1,"192":1,"279":1}}],["top",{"2":{"82":1,"86":4,"118":1,"151":1}}],["topic",{"2":{"38":1}}],["towards",{"2":{"52":6,"78":1,"190":1,"221":1}}],["toarray",{"2":{"47":1}}],["tosimplechainsadaptor",{"2":{"47":4,"100":1,"210":2}}],["together",{"2":{"23":1,"53":1}}],["total",{"2":{"9":1,"10":1,"25":1,"29":2,"34":4,"41":3,"56":4,"89":1,"90":1,"126":1,"127":1,"132":1,"204":16,"210":2,"211":6,"233":6,"244":6,"253":3,"260":2,"265":1,"272":2,"273":5,"274":12,"285":1,"295":5}}],["to",{"0":{"45":1,"47":1,"68":1,"97":1,"101":1,"102":1,"137":1,"147":1,"157":1,"173":1,"281":1},"1":{"103":1,"104":1,"105":1,"106":1,"107":1,"108":1,"109":1,"110":1,"111":1,"112":1,"113":1,"114":1,"115":1,"116":1,"117":1,"158":1,"159":1,"160":1,"161":1,"162":1,"163":1,"174":1,"175":1,"176":1,"177":1,"282":1,"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1},"2":{"1":3,"2":9,"3":10,"4":5,"5":5,"6":2,"7":4,"8":10,"10":2,"11":2,"15":14,"18":8,"19":5,"21":1,"22":7,"23":2,"24":10,"25":5,"28":9,"30":6,"32":1,"34":18,"35":19,"36":13,"37":9,"38":59,"39":7,"40":9,"41":22,"42":8,"45":14,"47":17,"49":10,"50":14,"51":3,"
52":14,"53":11,"54":4,"55":4,"56":21,"57":1,"58":3,"60":7,"61":2,"62":4,"63":5,"64":13,"65":3,"66":16,"67":2,"68":2,"69":1,"70":1,"71":1,"73":7,"74":6,"75":2,"76":13,"77":8,"78":9,"79":1,"80":16,"81":16,"82":7,"83":5,"84":6,"86":71,"88":3,"89":2,"90":4,"94":2,"95":3,"96":10,"98":2,"99":1,"100":9,"101":2,"102":5,"104":1,"107":5,"108":1,"109":1,"110":5,"111":2,"114":6,"115":3,"116":6,"117":3,"118":16,"119":15,"123":3,"124":1,"125":4,"126":1,"127":7,"130":2,"131":2,"132":1,"136":1,"137":5,"138":3,"140":4,"142":1,"143":1,"144":1,"145":1,"147":7,"148":4,"149":2,"151":5,"153":7,"154":5,"155":4,"156":2,"158":7,"159":1,"160":2,"161":1,"163":3,"164":7,"165":10,"166":6,"167":1,"168":4,"169":6,"172":1,"173":3,"174":8,"176":1,"177":3,"178":4,"179":2,"180":3,"181":4,"183":4,"184":3,"185":1,"186":3,"187":4,"188":7,"189":1,"190":3,"192":1,"193":1,"194":3,"196":2,"198":2,"200":3,"201":10,"203":1,"205":1,"207":2,"209":1,"210":3,"219":1,"220":6,"221":4,"224":6,"225":7,"228":2,"230":1,"231":6,"232":1,"234":1,"235":1,"237":11,"242":1,"243":1,"247":2,"251":2,"252":2,"257":1,"260":14,"267":1,"268":1,"271":2,"274":3,"276":1,"277":3,"278":12,"279":5,"283":5,"285":6,"286":2,"287":2,"290":2,"291":1}}],["toml",{"2":{"1":1,"94":1}}],["l3",{"2":{"288":2}}],["lb",{"2":{"285":1,"288":1}}],["lbfgsb",{"2":{"276":1,"282":1}}],["lbfgs",{"2":{"221":1,"225":3}}],["lzo",{"2":{"276":1,"282":1}}],["lrucache",{"2":{"276":2}}],["lr",{"2":{"253":2,"260":2,"295":2}}],["ld",{"2":{"206":1,"214":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"297":1}}],["ll",{"2":{"187":1,"277":1,"278":1,"285":2}}],["llvmopenmp",{"2":{"276":1,"282":1}}],["llvmextra",{"2":{"192":1,"282":1}}],["llvm",{"2":{"90":3,"192":1,"197":1,"204":3,"206":1,"213":3,"214":1,"227":1,"238":2,"246":2,"253":3,"255":1,"260":3,"261":1,"269":1,"274":3,"275":1,"280":1,"282":1,"289":1,"295":3,"297":1}}],["ln",{"2":{"76":5}}],["l=",{"2":{"50":1}}],["ls",{"2":{"50":4}}],["lstmcell",{"2":{"38":6,"116":1,"117":1,"201":3,"202":1}}],["lstm",{"0":{"198":1},
"1":{"199":1,"200":1,"201":1,"202":1,"203":1,"204":1,"205":1,"206":1},"2":{"7":1,"38":1,"201":17,"202":4}}],["lprob",{"2":{"293":4,"294":2}}],["lpnorm",{"2":{"117":1}}],["lpnormpool",{"2":{"75":4}}],["lppool",{"2":{"37":1,"75":1,"117":1}}],["lp",{"2":{"37":4,"75":2,"278":1,"279":1}}],["l2",{"2":{"25":5,"86":1,"154":4,"285":2,"288":2}}],["l2=dense",{"2":{"25":1}}],["l1",{"2":{"25":5,"76":5,"154":4,"285":2,"288":2}}],["l1=dense",{"2":{"25":1}}],["lerc",{"2":{"276":1,"282":1}}],["lecture",{"2":{"228":1}}],["lecun",{"2":{"50":1}}],["leftchildrightsiblingtrees",{"2":{"276":1,"282":1}}],["left",{"2":{"76":5,"82":1}}],["len",{"2":{"73":10,"96":2,"200":1,"252":11}}],["length=25",{"2":{"279":4}}],["length=bc",{"2":{"252":3}}],["length=grid",{"2":{"252":1}}],["length=datasize",{"2":{"223":1,"284":1}}],["length=50",{"2":{"200":1}}],["length+k",{"2":{"86":1}}],["length",{"2":{"15":7,"16":24,"25":2,"34":2,"35":2,"37":6,"73":2,"75":6,"76":10,"77":1,"81":2,"86":21,"153":8,"154":2,"200":5,"203":1,"204":5,"211":1,"224":2,"233":1,"241":2,"244":1,"254":8,"272":2,"273":1,"274":2,"278":3,"283":2,"288":1,"293":1}}],["lei",{"2":{"66":1}}],["leibler",{"2":{"50":1}}],["lets",{"2":{"155":1,"212":1,"234":1,"245":1,"276":1}}],["letters",{"2":{"83":1}}],["let",{"2":{"56":1,"90":1,"118":2,"119":1,"125":1,"126":2,"127":7,"138":1,"143":1,"153":1,"154":2,"155":2,"158":1,"164":3,"165":2,"167":1,"168":1,"172":3,"174":2,"187":2,"188":1,"191":1,"193":1,"194":1,"195":1,"196":4,"201":1,"203":1,"205":1,"225":2,"234":1,"236":1,"264":1,"265":1,"268":1,"278":1,"284":1,"285":1,"286":1,"288":2,"291":1}}],["lempitsky",{"2":{"41":1,"66":1}}],["less",{"2":{"40":1,"58":3,"98":1,"123":1,"192":1}}],["levels=10",{"2":{"254":1}}],["level",{"2":{"15":2,"77":1,"86":1,"119":1,"120":1,"183":2}}],["least",{"2":{"86":3}}],["leaky",{"2":{"58":2}}],["leakyrelu",{"2":{"58":5,"116":1,"271":5}}],["learned",{"2":{"75":1,"100":1}}],["learn",{"2":{"58":1,"188":1}}],["learnable",{"2":{"39":1,"41":4}}],["learning",{"2":{"2":1,"12":1,
"15":3,"50":1,"58":2,"64":1,"66":1,"158":1,"161":1,"169":1,"185":1,"186":1,"188":1,"196":1,"221":1,"274":1,"283":1}}],["leads",{"2":{"155":1}}],["leading",{"2":{"42":1}}],["lead",{"2":{"24":1,"45":1,"52":1,"56":2,"60":1,"225":1,"260":1,"274":1}}],["leaf",{"2":{"3":1,"8":1,"52":2,"53":4,"89":12,"224":1}}],["leaves",{"2":{"3":1,"52":4,"56":1}}],["l",{"2":{"8":5,"9":2,"10":2,"22":13,"23":5,"34":2,"36":1,"45":2,"47":1,"75":2,"86":2,"132":5,"143":2,"144":2,"153":16,"174":4,"196":1,"201":2,"225":4,"231":4,"264":2,"268":2,"276":1,"282":1,"284":2,"286":3,"293":12}}],["lame",{"2":{"276":1,"282":1}}],["lambda=weight",{"2":{"260":1,"274":1}}],["layoutpointers",{"2":{"187":1,"276":1,"282":1}}],["layerfreezing",{"2":{"145":1}}],["layernorm",{"2":{"41":4,"66":1,"104":1}}],["layer2",{"2":{"34":1}}],["layer1",{"2":{"34":1}}],["layer",{"0":{"11":1,"23":1,"55":1,"56":1,"144":1,"152":1,"153":1,"154":1,"231":1,"232":1,"235":1,"242":1},"1":{"153":1,"154":1},"2":{"7":13,"8":9,"9":4,"10":4,"11":4,"22":7,"23":6,"24":13,"34":67,"35":5,"36":3,"37":12,"38":2,"39":5,"40":8,"41":24,"42":3,"45":10,"47":11,"54":5,"55":7,"56":11,"66":3,"73":1,"84":1,"89":47,"100":2,"114":2,"115":1,"116":1,"118":39,"125":2,"126":30,"127":69,"130":2,"132":3,"134":5,"143":9,"144":9,"145":5,"146":9,"147":18,"153":8,"154":12,"158":1,"164":1,"165":4,"166":1,"167":4,"168":6,"174":5,"183":1,"196":1,"201":1,"210":18,"231":2,"237":211,"242":1,"243":1,"258":9,"265":2,"267":4,"268":8,"278":1,"285":12,"287":4}}],["layers=2",{"2":{"258":1}}],["layers",{"0":{"33":1,"35":1,"36":1,"37":1,"38":1,"39":1,"40":1,"41":1,"63":1,"65":1,"90":1,"143":1,"174":1,"177":1},"1":{"34":1,"35":1,"36":1,"37":1,"38":1,"39":1,"40":1,"41":1,"42":1},"2":{"6":1,"7":9,"8":2,"15":8,"23":3,"24":2,"34":39,"35":1,"40":12,"45":2,"54":1,"55":2,"58":1,"74":1,"78":4,"84":2,"90":1,"99":4,"100":4,"105":1,"114":2,"117":1,"126":9,"127":19,"132":4,"143":1,"151":2,"153":1,"154":3,"158":1,"162":1,"164":1,"173":1,"174":3,"182":1,"183":1,"201":1,"202":1,"231":1,"236":1,"
237":6,"242":1,"258":4,"260":2,"278":2,"293":4,"295":2}}],["laid",{"2":{"86":1}}],["lazymodules",{"2":{"276":1,"282":1}}],["lazyarraysstaticarraysext",{"2":{"229":1,"282":1}}],["lazyarrays",{"2":{"229":2,"282":2}}],["lazyartifacts",{"2":{"192":1,"276":1,"282":1}}],["lazy",{"2":{"80":2,"86":2,"174":1}}],["lattice",{"2":{"86":1}}],["latter",{"2":{"80":1,"161":2}}],["later",{"2":{"277":1}}],["lateral",{"2":{"78":2}}],["latexstrings",{"2":{"276":1,"282":1}}],["latent",{"2":{"271":10,"273":2,"274":4}}],["latentsize",{"2":{"38":3}}],["latest",{"2":{"68":1,"220":1}}],["lack",{"2":{"80":1}}],["language",{"2":{"52":2,"84":1}}],["label=l",{"2":{"223":2,"226":4}}],["labels",{"2":{"50":2,"83":1,"200":3,"209":3,"230":3}}],["label",{"2":{"50":18,"83":2}}],["larger",{"0":{"219":1},"2":{"100":1,"188":1,"219":1}}],["large",{"2":{"45":1,"58":1,"86":1,"90":1,"100":3,"192":1}}],["last",{"2":{"2":1,"15":1,"38":1,"40":2,"42":2,"55":1,"56":1,"62":1,"80":1,"83":1,"86":2,"132":1,"137":1,"166":1,"231":1,"258":3,"277":2}}],["luxreversediffext",{"2":{"229":2,"282":2}}],["luxreactantext",{"2":{"199":2,"263":2}}],["luxtrackerext",{"2":{"229":2,"276":2,"282":2}}],["luxtestutils",{"0":{"94":1},"1":{"95":1,"96":1,"97":1},"2":{"95":3,"96":2,"97":1}}],["luxenzymeext",{"2":{"199":2,"229":2,"263":2,"282":2}}],["luxmlutilsext",{"2":{"199":2,"229":2,"240":2}}],["luxzygoteext",{"2":{"192":2,"229":2,"240":2,"282":2}}],["luxlinear",{"2":{"174":7}}],["luxlibcudnnext",{"2":{"229":2,"240":2}}],["luxlibcudaext",{"2":{"229":2,"240":2}}],["luxlibreversediffext",{"2":{"229":2,"282":2}}],["luxlibreactantext",{"2":{"199":2,"263":2}}],["luxlibtrackerext",{"2":{"229":2,"276":2,"282":2}}],["luxlibloopvectorizationext",{"2":{"229":2,"282":2}}],["luxlibsleefpiratesext",{"2":{"229":2,"282":2}}],["luxlibenzymeext",{"2":{"199":2,"229":2,"263":2,"282":2}}],["luxlib",{"0":{"59":1,"103":1},"1":{"60":1,"61":1,"62":1,"63":1,"64":1,"65":1,"66":1,"67":1,"104":1,"105":1},"2":{"57":2,"60":2,"61":1,"62":2,"63":1,"64":2,"65":1,"66":4,
"67":1,"84":3,"85":1,"95":2,"104":1,"161":8,"165":4,"178":1,"183":2,"184":2,"187":1,"199":3,"229":8,"240":2,"260":2,"263":3,"274":2,"276":2,"282":6,"285":2}}],["luxflux",{"2":{"173":1,"174":2}}],["luxdeviceutils",{"2":{"110":3}}],["luxdl",{"2":{"38":1,"68":1,"91":1}}],["luxops",{"0":{"51":1},"2":{"51":8,"201":1,"202":1}}],["luxcomponentarraysext",{"2":{"192":2,"229":2,"240":2,"282":2}}],["luxcorechainrulescoreext",{"2":{"187":1,"276":1,"282":1}}],["luxcoremldatadevicesext",{"2":{"187":1,"276":1,"282":1}}],["luxcoresetfieldext",{"2":{"187":1,"276":1,"282":1}}],["luxcorefunctorsext",{"2":{"187":1,"276":1,"282":1}}],["luxcoreenzymecoreext",{"2":{"187":1,"276":1,"282":1}}],["luxcorearrayinterfacereversediffext",{"2":{"133":2,"282":1}}],["luxcorearrayinterfacetrackerext",{"2":{"133":2,"229":1,"276":1,"282":1}}],["luxcore",{"0":{"6":1,"106":1},"1":{"7":1,"8":1,"9":1,"10":1,"11":1,"107":1,"108":1},"2":{"6":2,"7":3,"8":7,"9":2,"10":5,"11":1,"57":2,"107":2,"133":5,"151":3,"153":14,"154":8,"155":1,"178":1,"183":3,"187":6,"204":6,"229":1,"260":3,"276":7,"282":8}}],["luxcudadevice",{"2":{"111":1}}],["luxcuda",{"2":{"3":1,"69":2,"89":1,"93":1,"148":1,"185":1,"189":3,"229":3,"240":3}}],["lux",{"0":{"43":1,"45":1,"46":1,"47":1,"68":1,"98":1,"100":1,"102":1,"113":1,"118":1,"125":1,"147":1,"151":1,"173":1,"178":1,"187":1,"221":1},"1":{"44":1,"45":1,"46":1,"47":2,"99":1,"100":1,"103":1,"104":1,"105":1,"106":1,"107":1,"108":1,"109":1,"110":1,"111":1,"112":1,"113":1,"114":2,"115":2,"116":2,"117":2,"119":1,"126":1,"127":1,"128":1,"152":1,"153":1,"154":1,"155":1,"156":1,"174":1,"175":1,"176":1,"177":1,"179":1,"180":1,"181":1,"182":1,"183":1,"184":1,"188":1,"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1,"222":1,"223":1,"224":1,"225":1,"226":1,"227":1},"2":{"4":4,"6":4,"7":3,"8":4,"11":1,"18":2,"19":1,"22":13,"23":4,"24":8,"25":3,"26":1,"27":2,"28":3,"29":2,"30":4,"31":1,"32":1,"34":11,"35":3,"36":7,"37":9,"38":9,"39":4,"40":12,"41":8,"42":2,"45":11,"47":15,"49":12
,"50":16,"51":9,"52":14,"53":6,"54":2,"55":5,"56":19,"57":1,"59":1,"68":5,"69":4,"70":3,"71":1,"72":1,"88":4,"89":8,"90":5,"91":1,"92":1,"93":1,"94":1,"95":2,"98":1,"100":9,"101":1,"102":2,"107":3,"110":1,"111":2,"114":6,"115":8,"118":7,"119":3,"120":2,"123":1,"124":2,"125":3,"126":6,"127":4,"128":1,"130":1,"132":3,"133":4,"134":4,"135":1,"136":1,"140":1,"143":6,"144":2,"145":2,"146":3,"147":13,"148":2,"149":1,"151":7,"153":3,"154":2,"155":1,"158":6,"160":2,"164":4,"165":4,"166":1,"167":1,"168":1,"172":1,"173":3,"174":13,"176":2,"177":2,"178":3,"183":3,"186":1,"187":5,"189":1,"190":1,"191":2,"192":3,"194":1,"196":2,"198":2,"199":6,"201":6,"202":1,"204":5,"207":1,"208":1,"210":4,"211":1,"212":2,"213":2,"219":1,"220":3,"221":2,"222":1,"225":1,"229":9,"231":5,"232":1,"233":1,"235":4,"237":103,"240":4,"242":4,"243":2,"244":1,"245":1,"247":2,"248":1,"249":1,"250":1,"253":2,"256":1,"260":13,"263":5,"267":2,"268":10,"270":1,"271":4,"273":3,"274":5,"276":7,"278":7,"282":8,"285":2,"290":2,"293":1,"295":3}}],["lt",{"2":{"4":2,"5":1,"15":3,"50":1,"56":2,"75":2,"84":1,"86":2,"91":1,"107":2,"147":4,"149":1,"154":2,"164":12,"223":1,"226":1}}],["libwebp",{"2":{"276":1,"282":1}}],["libglvnd",{"2":{"276":1,"282":1}}],["libgcrypt",{"2":{"276":1,"282":1}}],["libgpg",{"2":{"276":1,"282":1}}],["libtiff",{"2":{"276":1,"282":1}}],["libtask",{"2":{"276":1}}],["libvorbis",{"2":{"276":1,"282":1}}],["libsixel",{"2":{"276":1,"282":1}}],["libuuid",{"2":{"276":1,"282":1}}],["libffi",{"2":{"276":1,"282":1}}],["libfdk",{"2":{"276":1,"282":1}}],["libpthread",{"2":{"276":1,"282":1}}],["libpng",{"2":{"276":1,"282":1}}],["libiconv",{"2":{"276":1,"282":1}}],["libass",{"2":{"276":1,"282":1}}],["libaom",{"2":{"276":1,"282":1}}],["libaec",{"2":{"229":1}}],["libxrender",{"2":{"276":1,"282":1}}],["libxext",{"2":{"276":1,"282":1}}],["libx11",{"2":{"276":1,"282":1}}],["libxcb",{"2":{"276":1,"282":1}}],["libxdmcp",{"2":{"276":1,"282":1}}],["libxau",{"2":{"276":1,"282":1}}],["libmount",{"2":{"276":1,"282":1}}],
["lib64",{"2":{"206":1,"214":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"297":1}}],["libllvm",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["libraries",{"2":{"164":1,"186":1,"238":1,"246":1,"276":2,"278":1}}],["library",{"2":{"71":1,"173":1,"190":1,"205":1,"206":1,"214":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"297":1}}],["lib",{"2":{"126":1,"133":6,"165":4,"204":4,"206":1,"214":1,"237":3,"238":1,"246":1,"255":1,"260":4,"261":1,"269":1,"274":2,"275":1,"297":1}}],["literate",{"2":{"197":2,"206":2,"214":2,"227":2,"238":2,"246":2,"255":2,"261":2,"269":2,"275":2,"280":2,"289":2,"297":2}}],["literature",{"2":{"169":1}}],["literally",{"2":{"86":1}}],["little",{"2":{"86":1,"118":1,"127":1}}],["lisht",{"2":{"58":5}}],["lists",{"2":{"25":1,"120":1}}],["listed",{"2":{"21":1,"54":1}}],["list",{"2":{"3":1,"25":5,"28":1,"34":4,"40":1,"90":1,"95":1,"96":3,"102":2,"178":1,"185":1,"188":1,"220":1,"245":4,"273":1,"293":4}}],["limits=extrema",{"2":{"254":1}}],["limiter",{"2":{"237":26}}],["limit",{"2":{"51":1,"60":1,"64":1,"197":1,"206":1,"214":1,"227":1,"238":2,"246":2,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["limitations",{"2":{"45":1,"49":1,"100":1}}],["linux",{"2":{"197":2,"206":2,"214":2,"227":2,"238":2,"246":2,"255":2,"261":2,"269":2,"275":2,"280":2,"289":2,"297":2}}],["linked",{"2":{"220":1}}],["link",{"2":{"187":1}}],["linking",{"2":{"34":1}}],["liner",{"2":{"278":1}}],["linewidth=3",{"2":{"264":1,"268":1,"279":3}}],["linewidth=2",{"2":{"254":1,"284":1,"285":2,"288":3}}],["linewidth=4",{"2":{"223":2,"226":4,"288":1}}],["linesearch=linesearches",{"2":{"287":1}}],["linesearches",{"2":{"229":1,"276":1,"282":3}}],["linestyle=",{"2":{"223":2,"226":2}}],["lines",{"2":{"223":2,"226":4,"264":1,"268":1,"284":1,"285":2,"288":4}}],["line",{"2":{"90":1,"96":1,"204":1,"213":1,"253":1,"260":1,"274":1,"295":1}}],["lineplot",{"2":{"58":36,"86":4}}],["linear=dense",{"2":{"271":1}}],["linears
olvecudaext",{"2":{"229":2}}],["linearsolvekernelabstractionsext",{"2":{"229":1,"282":1}}],["linearsolveenzymeext",{"2":{"229":1,"282":1}}],["linearsolverecursivearraytoolsext",{"2":{"229":1,"282":1}}],["linearsolve",{"2":{"229":5,"282":4}}],["linearalgebraext",{"2":{"187":1,"276":2,"282":2}}],["linearalgebra",{"2":{"86":2,"164":1,"276":1}}],["linearly",{"2":{"58":1}}],["linearities",{"2":{"58":1}}],["linear",{"0":{"39":1,"196":1},"2":{"15":1,"40":1,"42":1,"56":4,"58":13,"78":6,"84":1,"153":12,"154":23,"196":1,"271":1}}],["lin",{"2":{"50":2}}],["likely",{"2":{"253":7,"260":102,"295":2}}],["like",{"2":{"15":1,"19":1,"35":2,"38":3,"49":1,"50":3,"52":1,"56":3,"62":1,"63":2,"65":1,"67":1,"74":1,"77":1,"86":3,"94":1,"100":2,"108":1,"151":1,"153":2,"155":1,"163":1,"173":1,"174":2,"186":1,"188":3,"194":1,"221":1,"231":1,"234":1,"271":1,"277":1,"284":1,"285":1,"293":1}}],["lighter",{"2":{"151":1}}],["light",{"2":{"12":1}}],["lightweight",{"2":{"0":1}}],["lost",{"2":{"187":1}}],["lossfn",{"2":{"196":3,"203":2,"204":2,"211":1,"212":1}}],["lossfunctions",{"2":{"50":1}}],["losses",{"0":{"83":1},"2":{"50":1,"286":2,"288":3}}],["loss",{"0":{"50":1,"165":1,"167":1,"168":1,"203":1,"251":1,"267":1},"1":{"166":1},"2":{"49":5,"50":35,"56":5,"74":1,"83":3,"89":2,"90":14,"118":5,"119":2,"164":1,"165":10,"166":9,"167":4,"168":6,"196":19,"203":4,"204":59,"225":30,"244":1,"245":1,"247":1,"249":1,"251":23,"253":243,"259":3,"260":178,"267":4,"268":4,"273":9,"274":111,"286":6,"287":1,"288":2,"294":2,"295":19}}],["love",{"2":{"123":1}}],["lotka",{"2":{"223":2}}],["lot",{"2":{"100":1,"191":1}}],["lower",{"2":{"234":1}}],["low",{"2":{"86":1,"119":1}}],["lo=1",{"2":{"58":1}}],["logdensityproblemsadtrackerext",{"2":{"276":1}}],["logdensityproblemsaddifferentiationinterfaceext",{"2":{"276":1}}],["logdensityproblemsadforwarddiffext",{"2":{"276":1}}],["logdensityproblemsadadtypesext",{"2":{"276":1}}],["logdensityproblemsad",{"2":{"276":5}}],["logdensityproblems",{"2":{"276":1}}],["logexpfunctionsinve
rsefunctionsext",{"2":{"276":1,"282":1}}],["logexpfunctionschangesofvariablesext",{"2":{"276":1}}],["logexpfunctionschainrulescoreext",{"2":{"187":1,"276":1,"282":1}}],["logexpfunctions",{"2":{"187":2,"276":4,"282":3}}],["loggingextras",{"2":{"276":1,"282":1}}],["logging",{"2":{"137":1,"276":1}}],["logaddexp",{"2":{"86":1}}],["logarithm",{"2":{"50":1}}],["logarithmic",{"2":{"50":1}}],["log10",{"2":{"86":2}}],["logs",{"2":{"137":1}}],["logsumexp",{"2":{"84":1}}],["logsoftmax",{"2":{"74":3,"83":1,"84":1}}],["logsigmoid",{"2":{"58":3}}],["logσ²",{"2":{"271":8,"273":4}}],["logσ",{"2":{"58":3}}],["logcosh",{"2":{"58":5}}],["log⁡",{"2":{"50":1}}],["log",{"2":{"50":2,"51":2,"58":4,"74":3,"84":1,"86":2,"271":2,"278":1,"279":1,"292":20,"293":8,"294":1}}],["logitcrossentropy",{"2":{"233":1,"234":1}}],["logitbinarycrossentropy",{"2":{"203":1}}],["logitbce",{"2":{"50":5}}],["logit",{"2":{"50":1}}],["logits=val",{"2":{"50":2,"211":1,"233":1,"244":1,"259":1}}],["logits",{"2":{"50":5}}],["longer",{"2":{"114":2,"139":3}}],["long",{"2":{"38":1,"45":1,"155":1}}],["locations",{"2":{"81":1,"82":7}}],["location",{"2":{"24":3,"126":4,"127":10}}],["local",{"2":{"4":1,"29":2,"56":1,"137":1,"138":1,"206":2,"213":1,"214":2,"238":2,"246":2,"255":2,"260":11,"261":2,"269":2,"275":2,"297":2}}],["localpreferences",{"2":{"1":1,"149":2}}],["loosely",{"2":{"187":1}}],["loopedarrayop",{"2":{"67":1}}],["loopvectorization",{"2":{"60":1,"61":2,"65":1,"67":1,"162":1,"184":1,"282":3}}],["loop",{"0":{"184":1,"212":1},"2":{"60":1,"67":1,"86":2,"162":1,"184":1,"268":1}}],["loops",{"2":{"52":1,"67":1}}],["looping",{"2":{"5":1}}],["looks",{"2":{"163":1}}],["lookup",{"2":{"39":1}}],["look",{"2":{"22":1,"42":1,"194":1,"201":1,"228":1}}],["lo",{"2":{"15":2,"58":1}}],["loads",{"2":{"260":11}}],["loadcora",{"2":{"257":2,"260":1}}],["loadmnist",{"2":{"209":2,"212":1,"230":2,"234":1,"272":2,"274":1}}],["loader",{"2":{"119":1,"204":5}}],["loaded",{"2":{"2":2,"3":7,"4":2,"10":1,"11":1,"32":1,"53":1,"65":1,"107":2,"137"
:1,"149":2,"177":1,"184":1,"189":1}}],["load",{"0":{"291":1},"2":{"32":1,"60":1,"61":1,"65":1,"94":1,"114":1,"119":1,"148":1,"162":1,"205":1,"209":1,"230":1,"241":4,"245":1,"272":1,"291":2,"295":1}}],["loading",{"0":{"163":1,"209":1,"230":1,"241":1,"257":1,"272":1},"2":{"7":1,"45":1,"47":1,"61":1,"65":1,"72":1,"93":1,"163":1,"205":1}}],["=ϕwhere",{"2":{"284":1,"285":1}}],["=χu",{"2":{"284":1,"285":1}}],["=∫θp",{"2":{"279":1}}],["=wx+b",{"2":{"196":1}}],["=x",{"2":{"193":1}}],["=1v∑i=1vvitjvinote",{"2":{"169":1}}],["=1v∑i=1vvitaviwe",{"2":{"169":1}}],["=12xtx",{"2":{"193":1}}],["=12",{"2":{"86":1}}],["=e",{"2":{"169":1}}],["=i",{"2":{"169":1}}],["=α−βcos⁡",{"2":{"86":1}}],["=∑k=0n−1window",{"2":{"86":1}}],["=val",{"2":{"47":1,"50":2}}],["=vcat",{"2":{"38":1}}],["=true",{"2":{"24":1}}],["=static",{"2":{"24":1}}],["=>",{"2":{"22":2,"23":3,"25":8,"34":6,"35":2,"38":4,"39":4,"41":12,"45":2,"47":5,"56":8,"74":3,"89":6,"90":8,"118":9,"119":3,"126":17,"127":14,"132":4,"147":5,"158":1,"165":2,"166":2,"167":4,"168":4,"172":4,"173":4,"196":1,"201":2,"202":2,"210":15,"232":5,"243":1,"250":4,"258":2,"265":4,"268":4,"271":6,"278":3,"285":6,"293":3}}],["=0",{"2":{"15":2}}],["=nothing",{"2":{"2":1,"38":1,"41":1,"50":2,"291":2}}],["==3",{"2":{"80":2,"86":2,"283":1}}],["==1",{"2":{"80":3}}],["===",{"2":{"25":4,"58":2,"241":2,"273":1,"274":1}}],["==",{"2":{"1":1,"15":4,"19":1,"35":10,"37":15,"39":3,"42":2,"51":2,"55":1,"56":1,"64":1,"73":1,"75":3,"78":5,"80":13,"86":3,"90":2,"119":3,"132":1,"137":1,"144":1,"145":2,"155":2,"156":1,"196":2,"203":1,"209":1,"211":1,"225":1,"230":1,"233":1,"244":1,"245":2,"253":2,"259":1,"260":1,"268":2,"273":2,"274":3,"278":1,"283":5,"291":1,"293":1,"295":3}}],["=",{"2":{"1":2,"2":2,"3":1,"5":8,"15":14,"19":1,"22":2,"23":5,"25":8,"27":3,"34":40,"35":7,"36":6,"37":9,"38":6,"39":14,"40":25,"41":15,"42":3,"45":4,"47":5,"50":59,"52":1,"55":2,"56":46,"58":24,"62":1,"64":3,"66":6,"69":4,"70":3,"71":13,"73":2,"74":12,"76":18,"77":30,"78":25,"79":10,"80":4,"81":1
0,"82":17,"84":12,"85":1,"86":42,"89":52,"90":27,"95":3,"96":2,"107":2,"118":64,"119":9,"126":25,"127":49,"132":10,"133":3,"134":3,"135":2,"137":7,"138":1,"143":29,"144":1,"146":23,"147":124,"149":3,"150":7,"153":8,"154":9,"155":5,"158":5,"163":6,"165":21,"166":12,"167":20,"168":21,"170":2,"171":2,"172":12,"173":7,"174":12,"185":9,"186":6,"187":1,"188":8,"189":1,"190":3,"191":3,"193":3,"194":4,"195":1,"196":17,"197":7,"200":6,"201":6,"202":5,"203":5,"204":19,"206":8,"209":9,"210":21,"211":5,"212":14,"213":2,"214":8,"222":2,"223":12,"224":4,"225":20,"226":6,"227":7,"230":9,"231":6,"232":5,"233":5,"234":10,"235":3,"237":139,"238":8,"241":8,"242":2,"243":3,"244":5,"245":21,"246":8,"248":2,"251":13,"252":19,"253":16,"254":11,"255":8,"256":2,"257":3,"258":6,"259":3,"260":19,"261":8,"264":8,"265":3,"266":1,"267":12,"268":16,"269":8,"270":2,"271":19,"272":7,"273":23,"274":25,"275":8,"277":21,"278":19,"279":19,"280":7,"283":55,"284":20,"285":38,"286":4,"287":14,"288":13,"289":7,"290":2,"291":13,"292":9,"293":14,"294":3,"295":11,"296":5,"297":8}}],["jpegturbo",{"2":{"276":2,"282":2}}],["json3",{"2":{"229":1}}],["jointly",{"2":{"201":1}}],["journal",{"2":{"64":1}}],["jvi",{"2":{"169":1}}],["jvp",{"0":{"18":1},"2":{"18":1,"171":6,"172":10,"192":4,"194":4}}],["j∈rd×d",{"2":{"169":1}}],["jnp",{"2":{"147":1}}],["jet`",{"2":{"95":1}}],["jet",{"0":{"95":1},"2":{"95":14}}],["jerk",{"2":{"86":2}}],["jit",{"2":{"90":1,"147":1,"260":2,"296":1}}],["jimmy",{"2":{"66":1}}],["jax",{"0":{"147":1},"2":{"147":28,"187":1}}],["jamie",{"2":{"66":1}}],["jacobian`",{"2":{"165":1}}],["jacobian",{"0":{"165":1,"166":1,"168":1,"170":1,"171":1,"172":1,"194":1,"195":1},"1":{"166":1},"2":{"18":11,"19":6,"164":3,"165":3,"166":9,"168":5,"169":4,"170":1,"171":1,"172":17,"192":6,"194":2,"195":1,"260":1,"274":1}}],["jmlr",{"2":{"58":1}}],["j",{"2":{"15":1,"19":2,"73":2,"78":2,"86":1,"165":2,"166":3,"168":2,"169":1,"172":8,"296":2}}],["just",{"2":{"3":1,"34":1,"36":3,"41":3,"56":1,"58":1,"73":1,"86":3,"89":1,"
140":1,"147":1,"149":1,"151":2,"186":1,"188":1,"194":1,"234":1}}],["julia∂t",{"2":{"167":1}}],["julia∂x",{"2":{"165":1,"166":1,"168":1}}],["juliaz",{"2":{"296":1}}],["juliazygote",{"2":{"127":1}}],["juliazeros",{"2":{"188":2}}],["juliazerosc64",{"2":{"16":1}}],["juliazerosc32",{"2":{"16":1}}],["juliazerosc16",{"2":{"16":1}}],["juliazeros64",{"2":{"16":1}}],["juliazeros32",{"2":{"16":1}}],["juliazeros16",{"2":{"16":1}}],["juliaxt",{"2":{"134":1}}],["juliax",{"2":{"118":1,"147":1,"158":2,"188":11,"190":2,"196":1}}],["juliaxlogy",{"2":{"51":1}}],["juliaxlogx",{"2":{"51":1}}],["julia∇filter",{"2":{"86":1}}],["julia∇depthwiseconv",{"2":{"86":4}}],["julia∇conv",{"2":{"86":4}}],["julia∇grid",{"2":{"82":1}}],["julia∇imrotate",{"2":{"79":1}}],["julia∇upsample",{"2":{"78":4}}],["juliaq",{"2":{"73":1}}],["juliaσ",{"2":{"58":2}}],["juliaelu",{"2":{"58":1}}],["juliaeachslice",{"2":{"51":1}}],["juliaembedding",{"2":{"39":1}}],["juliakldivergenceloss",{"2":{"50":1}}],["juliakaiming",{"2":{"15":2}}],["juliahlo",{"2":{"147":1}}],["juliahamming",{"2":{"86":1}}],["juliahann",{"2":{"86":1}}],["juliahardswish",{"2":{"58":1}}],["juliahardtanh",{"2":{"58":1}}],["juliahardσ",{"2":{"58":2}}],["juliahuberloss",{"2":{"50":1}}],["juliahingeloss",{"2":{"50":1}}],["juliapkg>",{"2":{"178":1}}],["juliaprob",{"2":{"284":1,"285":1,"288":1}}],["juliaprintln",{"2":{"172":1,"193":1}}],["juliapred",{"2":{"118":2}}],["juliapredilate",{"2":{"86":1}}],["juliapredilated",{"2":{"86":1}}],["juliaps",{"2":{"127":1,"137":1,"196":1,"204":1}}],["juliapixel",{"2":{"78":1}}],["juliapixelshuffle",{"2":{"42":1}}],["juliapower",{"2":{"86":1}}],["juliapooldims",{"2":{"75":1}}],["juliapoissonloss",{"2":{"50":1}}],["juliapad",{"2":{"76":6}}],["juliaparallel",{"2":{"34":1}}],["juliaparameterlength",{"2":{"9":1}}],["juliapairwisefusion",{"2":{"34":1}}],["juliaw",{"2":{"188":1,"196":1}}],["juliaweights",{"2":{"185":1,"186":3}}],["juliaweightnorm",{"2":{"41":1}}],["juliawithin",{"2":{"84":1}}],["juliawrappedfunction",{"2":{"
40":1}}],["julian",{"2":{"90":1,"196":1}}],["juliannlib",{"2":{"81":4,"86":1}}],["julianooplayer",{"2":{"40":1}}],["juliancclbackend",{"2":{"27":1}}],["juliamodel",{"2":{"118":2,"119":1,"126":2,"127":4,"135":1,"147":1,"154":1,"166":1,"172":1,"196":1,"237":2,"265":1}}],["juliamelscale",{"2":{"86":1}}],["juliameanpool",{"2":{"37":1,"75":1}}],["juliamish",{"2":{"58":1}}],["juliamultigate",{"2":{"51":1}}],["juliamsleloss",{"2":{"50":1}}],["juliamseloss",{"2":{"50":1}}],["juliamake",{"2":{"73":1}}],["juliamatch",{"2":{"54":1}}],["juliamaeloss",{"2":{"50":1}}],["juliamaximum",{"2":{"86":1}}],["juliamaxout",{"2":{"40":1}}],["juliamaxpool",{"2":{"37":1,"75":1}}],["juliampibackend",{"2":{"27":1}}],["juliavjp",{"2":{"195":1,"268":1}}],["juliavariationalhiddendropout",{"2":{"36":1}}],["juliavector",{"2":{"18":1}}],["juliay",{"2":{"34":2}}],["juliabegin",{"2":{"264":1,"268":1,"288":1,"291":1}}],["juliabackend",{"2":{"137":1}}],["juliabatchnorm",{"2":{"41":1,"66":1}}],["juliabatched",{"2":{"19":1,"61":1,"80":6,"86":2}}],["juliabf16",{"2":{"53":1}}],["juliabias",{"2":{"62":2,"84":1}}],["juliabinaryfocalloss",{"2":{"50":1}}],["juliabinarycrossentropyloss",{"2":{"50":1}}],["juliabilinear",{"2":{"39":1}}],["juliabidirectionalrnn",{"2":{"38":1}}],["juliabranchlayer",{"2":{"34":1}}],["juliabcast",{"2":{"30":1}}],["julialux",{"2":{"210":1}}],["julialuxops",{"2":{"51":1}}],["julialength",{"2":{"188":1}}],["julialeakyrelu",{"2":{"58":1}}],["julialang",{"2":{"86":1,"126":1,"133":3,"165":2,"197":1,"204":2,"206":1,"214":1,"227":1,"237":3,"238":1,"246":1,"255":1,"260":2,"261":1,"269":1,"274":1,"275":1,"280":1,"289":1,"297":1}}],["julialayernorm",{"2":{"41":1,"66":1}}],["julialayer",{"2":{"23":1}}],["julialpnormpool",{"2":{"75":1}}],["julialppool",{"2":{"37":1}}],["julialisht",{"2":{"58":1}}],["julialstmcell",{"2":{"38":1}}],["julialossfn",{"2":{"196":1}}],["julialoss",{"2":{"118":1,"286":1}}],["julialogaddexp",{"2":{"86":1}}],["julialogsumexp",{"2":{"84":1}}],["julialogsoftmax",{"2":{"74":1}
}],["julialogσ",{"2":{"58":2}}],["julialogcosh",{"2":{"58":1}}],["julialocal",{"2":{"29":1}}],["julialoaded",{"2":{"3":1}}],["juliaunfold",{"2":{"77":1}}],["juliaunfreeze",{"2":{"22":2}}],["juliausing",{"2":{"69":8,"70":4,"89":1,"90":1,"95":1,"118":1,"126":1,"127":1,"132":1,"143":1,"146":1,"147":1,"153":1,"155":1,"158":1,"159":1,"164":1,"173":2,"174":2,"178":1,"185":2,"187":1,"189":1,"192":1,"196":1,"197":1,"199":1,"206":1,"208":1,"214":1,"222":1,"227":1,"229":1,"238":1,"240":1,"246":1,"248":1,"255":1,"256":1,"261":1,"263":1,"269":1,"270":1,"275":1,"280":1,"282":1,"289":1,"290":1,"297":1}}],["juliaupsample",{"2":{"42":1,"78":5}}],["juliaupdate",{"2":{"10":1}}],["juliafig",{"2":{"279":2}}],["juliaf",{"2":{"193":1,"194":1}}],["juliafmap",{"2":{"118":1}}],["juliafunction",{"2":{"118":2,"127":1,"134":1,"153":2,"165":1,"167":1,"168":1,"170":1,"171":1,"172":1,"196":1,"200":1,"201":2,"202":1,"204":1,"209":1,"212":1,"223":1,"225":1,"230":1,"231":2,"232":1,"234":1,"241":1,"242":2,"243":1,"245":1,"251":1,"253":1,"257":1,"258":1,"259":1,"260":1,"264":1,"268":1,"271":2,"273":1,"274":1,"278":1,"283":4,"284":1,"285":1,"291":2,"293":1,"295":1}}],["juliafunctional",{"2":{"3":1}}],["juliafused",{"2":{"63":1,"65":1}}],["juliafast",{"2":{"60":2}}],["juliaf64",{"2":{"53":1}}],["juliaf32",{"2":{"53":1}}],["juliaf16",{"2":{"53":1}}],["juliaforward",{"2":{"268":1}}],["juliafor",{"2":{"191":1}}],["juliafold",{"2":{"77":1}}],["juliafoldl",{"2":{"51":1}}],["juliafocalloss",{"2":{"50":1}}],["juliafluxlayer",{"2":{"45":1}}],["juliaflattenlayer",{"2":{"40":1}}],["juliafromfluxadaptor",{"2":{"45":1}}],["juliafrozenlayer",{"2":{"22":1}}],["juliafreeze",{"2":{"22":2}}],["juliajvp",{"2":{"194":1}}],["juliajet",{"2":{"95":1}}],["juliajacobian",{"2":{"18":1}}],["juliajulia>",{"2":{"5":1,"15":2,"22":1,"23":1,"25":1,"34":3,"40":4,"41":3,"45":1,"47":1,"50":15,"56":6,"58":27,"68":2,"74":2,"76":5,"77":2,"78":3,"79":1,"80":3,"81":4,"82":1,"84":1,"85":1,"86":5,"95":1,"96":1}}],["juliarng",{"2":{"90":1,"126"
:1,"133":1,"153":1,"174":2,"187":1,"191":1,"264":1}}],["juliarnncell",{"2":{"38":1}}],["juliarrelu",{"2":{"58":1}}],["juliarandom",{"2":{"191":1}}],["juliarandc64",{"2":{"16":1}}],["juliarandc32",{"2":{"16":1}}],["juliarandc16",{"2":{"16":1}}],["juliarand64",{"2":{"16":1}}],["juliarandnc64",{"2":{"16":1}}],["juliarandnc32",{"2":{"16":1}}],["juliarandnc16",{"2":{"16":1}}],["juliarandn64",{"2":{"16":1}}],["juliarandn32",{"2":{"16":1}}],["juliarandn16",{"2":{"16":1}}],["juliarand32",{"2":{"16":1}}],["juliarand16",{"2":{"16":1}}],["juliareverse",{"2":{"86":1}}],["juliareversesequence",{"2":{"40":1}}],["juliarelu6",{"2":{"58":1}}],["juliarelu",{"2":{"58":1}}],["juliarecursive",{"2":{"52":6}}],["juliarecurrence",{"2":{"38":1}}],["juliareshapelayer",{"2":{"40":1}}],["juliares",{"2":{"34":2}}],["juliareset",{"2":{"3":1}}],["juliarepeatedlayer",{"2":{"34":1}}],["juliareplicate",{"2":{"8":1}}],["juliareduce",{"2":{"30":1}}],["juliareactant",{"2":{"2":1}}],["juliaopen",{"2":{"147":1}}],["juliaopt",{"2":{"137":1,"266":1}}],["juliaonesc64",{"2":{"16":1}}],["juliaonesc32",{"2":{"16":1}}],["juliaonesc16",{"2":{"16":1}}],["juliaones64",{"2":{"16":1}}],["juliaones32",{"2":{"16":1}}],["juliaones16",{"2":{"16":1}}],["juliaorthogonal",{"2":{"15":1}}],["juliaoutputsize",{"2":{"11":1}}],["juliatstate",{"2":{"268":1}}],["juliats",{"2":{"254":1}}],["juliatest",{"2":{"96":1}}],["juliatestmode",{"2":{"10":1}}],["juliatanhshrink",{"2":{"58":1}}],["juliatanh",{"2":{"58":1}}],["juliatosimplechainsadaptor",{"2":{"47":1}}],["juliatotal",{"2":{"29":1}}],["juliatr",{"2":{"172":1,"213":2}}],["juliatry",{"2":{"126":1,"127":1}}],["juliatranspose",{"2":{"86":2}}],["juliatrain",{"2":{"234":4,"236":1}}],["juliatrainstate",{"2":{"49":2}}],["juliatrainmode",{"2":{"10":1}}],["juliatrelu",{"2":{"58":1}}],["juliatruncated",{"2":{"15":1}}],["juliaimport",{"2":{"88":1}}],["juliaim2col",{"2":{"86":2}}],["juliaimrotate",{"2":{"79":1}}],["juliaistft",{"2":{"86":1}}],["juliaistraining",{"2":{"51":1}}],["juliais",{"
2":{"86":1}}],["juliaisleaf",{"2":{"3":1}}],["juliainsert",{"2":{"86":1}}],["juliainstancenorm",{"2":{"41":1,"66":1}}],["juliainternal",{"2":{"67":1}}],["juliainitialized",{"2":{"28":1}}],["juliainitialize",{"2":{"28":1}}],["juliainitialstates",{"2":{"10":1}}],["juliainitialparameters",{"2":{"9":1}}],["juliaidentity",{"2":{"15":1}}],["juliadsum",{"2":{"294":1}}],["juliadudt",{"2":{"226":1}}],["juliadataloader",{"2":{"163":2}}],["juliadata",{"2":{"137":1}}],["juliadb",{"2":{"86":1}}],["juliadot",{"2":{"73":2}}],["juliadicecoeffloss",{"2":{"50":1}}],["juliadistributedutils",{"2":{"137":1}}],["juliadistributeddatacontainer",{"2":{"32":1}}],["juliadistributedoptimizer",{"2":{"31":1}}],["juliadisplay",{"2":{"8":1}}],["juliadropout",{"2":{"36":1,"64":1,"85":2}}],["juliadepthwiseconvdims",{"2":{"77":1}}],["juliadepthwiseconv",{"2":{"77":1,"86":2}}],["juliadenseconvdims",{"2":{"77":1}}],["juliadense",{"2":{"39":1}}],["juliadebuglayer",{"2":{"24":1}}],["juliadeviceiterator",{"2":{"5":1}}],["juliadefault",{"2":{"3":1}}],["juliacdev",{"2":{"150":1}}],["juliacalc",{"2":{"86":1}}],["juliactc",{"2":{"83":1}}],["juliacelu",{"2":{"58":1}}],["juliacrossentropyloss",{"2":{"50":1}}],["juliacol2im",{"2":{"86":1}}],["juliacompute",{"2":{"49":1}}],["juliaconst",{"2":{"118":1,"203":1,"211":1,"233":1,"244":1,"267":1,"278":1,"285":2,"286":2}}],["juliaconvdims",{"2":{"77":1}}],["juliaconvtranspose",{"2":{"35":1}}],["juliaconv",{"2":{"35":1,"77":1,"86":2}}],["juliacontains",{"2":{"8":1}}],["juliachain",{"2":{"34":1}}],["juliacheck",{"2":{"8":1}}],["juliacpu",{"2":{"2":1}}],["juliaanalytical",{"2":{"252":1}}],["juliaadtype",{"2":{"287":1}}],["juliaadd",{"2":{"86":1}}],["juliaadaptor",{"2":{"210":1}}],["juliaadapt",{"2":{"45":1,"47":1}}],["juliaadaptivemeanpool",{"2":{"37":1}}],["juliaadaptivemaxpool",{"2":{"37":1}}],["juliaadaptivelppool",{"2":{"37":1}}],["juliaalpha",{"2":{"64":1}}],["juliaalphadropout",{"2":{"36":1}}],["juliaallreduce",{"2":{"30":1}}],["juliaapply",{"2":{"8":1,"49":2}}],["ju
liaabstract",{"2":{"7":3,"292":1}}],["julia>",{"2":{"5":4,"15":3,"23":6,"25":3,"34":3,"40":8,"45":6,"47":7,"50":45,"56":34,"58":32,"68":2,"74":6,"76":9,"77":9,"78":9,"79":7,"80":9,"81":5,"82":9,"84":6,"85":3,"86":12,"95":1,"96":2}}],["juliasafe",{"2":{"86":1}}],["juliastruct",{"2":{"134":1,"154":1,"155":1,"201":1,"224":1,"231":1,"235":1,"250":1}}],["juliastft",{"2":{"86":1}}],["juliastorage",{"2":{"86":2}}],["juliastatefulluxlayer",{"2":{"55":1}}],["juliastatefulrecurrentcell",{"2":{"38":1}}],["juliastatelength",{"2":{"10":1}}],["juliastateless",{"2":{"8":1}}],["juliaspectrogram",{"2":{"86":1}}],["juliasparse",{"2":{"15":1}}],["juliaswish",{"2":{"58":1}}],["juliasoftmax",{"2":{"74":1}}],["juliasoftsign",{"2":{"58":1}}],["juliasoftshrink",{"2":{"58":1}}],["juliasoftplus",{"2":{"58":1}}],["juliasquaredhingeloss",{"2":{"50":1}}],["juliasize",{"2":{"188":1}}],["juliasigmoid",{"2":{"58":1}}],["juliasiamesecontrastiveloss",{"2":{"50":1}}],["juliasingle",{"2":{"49":2}}],["juliasimplechainslayer",{"2":{"47":1}}],["juliaselu",{"2":{"58":1}}],["juliaselectdim",{"2":{"40":1}}],["juliasetup",{"2":{"8":1}}],["juliaset",{"2":{"4":2,"57":1}}],["juliascale",{"2":{"39":1}}],["juliaskipconnection",{"2":{"34":1}}],["juliasynchronize",{"2":{"30":1}}],["juliashare",{"2":{"25":1}}],["juliasupported",{"2":{"3":1}}],["juliaglu",{"2":{"84":1}}],["juliaglobalmeanpool",{"2":{"37":1}}],["juliaglobalmaxpool",{"2":{"37":1}}],["juliagloballppool",{"2":{"37":1}}],["juliaglorot",{"2":{"15":2}}],["juliagather",{"2":{"81":1}}],["juliagemm",{"2":{"86":1}}],["juliagelu",{"2":{"58":1}}],["juliagenericlossfunction",{"2":{"50":1}}],["juliagetproperty",{"2":{"51":1}}],["juliaget",{"2":{"3":2,"28":1}}],["juliagrid",{"2":{"82":1}}],["juliagroupnorm",{"2":{"41":1,"66":1}}],["juliagrucell",{"2":{"38":1}}],["juliagpu",{"2":{"1":1,"2":1}}],["julia",{"0":{"187":1},"1":{"188":1,"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1},"2":{"1":1,"24":1,"45":1,"56":3,"57":1,"58":1,"63":1,"67":1,"68":
2,"71":1,"72":1,"80":1,"86":9,"88":2,"89":3,"90":1,"91":1,"95":1,"96":1,"97":1,"98":1,"100":1,"101":1,"114":2,"118":1,"119":1,"120":1,"144":2,"145":2,"147":2,"148":1,"153":1,"157":1,"158":1,"164":1,"165":1,"167":1,"168":1,"172":1,"184":2,"185":4,"187":2,"188":7,"191":1,"192":1,"197":9,"205":2,"206":9,"214":9,"227":9,"237":2,"238":12,"246":12,"251":1,"255":9,"261":9,"269":9,"272":1,"275":9,"276":1,"277":1,"278":4,"279":2,"280":9,"283":1,"289":9,"297":9}}],["jld2",{"2":{"199":1,"205":3}}],["jllwrappers",{"2":{"187":1,"276":1,"282":1}}],["jll",{"2":{"187":4,"192":3,"229":7,"238":2,"246":2,"263":2,"276":69,"282":71}}],["jl`",{"2":{"65":1,"165":2,"260":1,"274":1}}],["jl",{"0":{"31":1,"32":1,"68":1,"95":1,"103":1,"106":1,"109":1,"110":1,"113":1,"118":1,"138":1,"178":1,"221":1},"1":{"104":1,"105":1,"107":1,"108":1,"111":1,"112":1,"114":1,"115":1,"116":1,"117":1,"119":1,"139":1,"140":1,"179":1,"180":1,"181":1,"182":1,"183":1,"184":1,"222":1,"223":1,"224":1,"225":1,"226":1,"227":1},"2":{"0":1,"2":3,"3":8,"6":5,"7":2,"8":1,"10":2,"11":1,"13":2,"18":1,"19":1,"28":8,"31":1,"32":2,"38":1,"45":1,"47":4,"49":8,"50":2,"51":1,"53":1,"57":1,"59":1,"60":3,"61":1,"65":2,"67":2,"68":4,"74":1,"79":2,"80":1,"84":2,"86":1,"88":2,"90":3,"91":1,"92":2,"93":5,"94":1,"95":1,"96":6,"100":2,"107":3,"110":5,"112":1,"114":1,"118":3,"119":1,"121":10,"122":15,"124":5,"126":2,"133":6,"136":1,"137":1,"138":1,"139":1,"140":1,"148":1,"151":5,"153":1,"160":2,"161":2,"162":2,"163":3,"165":4,"178":3,"180":1,"184":2,"185":1,"187":1,"189":1,"190":1,"192":4,"194":1,"196":1,"197":1,"198":2,"204":4,"206":1,"207":4,"210":1,"213":1,"214":1,"220":3,"221":3,"225":3,"227":1,"228":1,"231":1,"237":6,"238":1,"246":1,"247":3,"255":1,"256":1,"260":4,"261":1,"266":1,"269":1,"274":2,"275":1,"276":2,"280":1,"281":1,"284":1,"285":1,"289":1,"297":1}}],["p=res",{"2":{"288":1}}],["p=θ",{"2":{"286":1}}],["p=params",{"2":{"285":1}}],["p=2",{"2":{"37":3}}],["p^",{"2":{"284":1,"285":1}}],["p^2",{"2":{"284":1}}],["pngfiles",{"2":{"2
76":1,"282":1}}],["pcre2",{"2":{"276":1,"282":1}}],["pcie",{"2":{"238":1,"246":1}}],["pdmats",{"2":{"276":1,"282":1}}],["pdes",{"2":{"247":1}}],["pde",{"0":{"247":1},"1":{"248":1,"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1},"2":{"252":8,"253":2,"254":3}}],["ptxas",{"2":{"260":11}}],["ptrarrays",{"2":{"276":1,"282":1}}],["ptr",{"2":{"208":1}}],["physics",{"2":{"251":6,"253":58,"285":1}}],["philosophies",{"2":{"174":1}}],["phase",{"2":{"41":4}}],["pythonfrom",{"2":{"147":1}}],["python",{"2":{"147":3}}],["pytorch",{"2":{"35":3,"39":1,"75":1,"100":1,"116":3,"117":1,"187":1}}],["pkgversion",{"2":{"276":1,"282":1}}],["pkg>",{"2":{"178":2}}],["pkg",{"2":{"68":4,"69":9,"70":2,"88":2,"89":1,"91":2,"178":1,"197":2,"206":2,"214":2,"227":2,"238":2,"246":2,"255":2,"261":2,"269":2,"275":2,"280":2,"289":2,"297":2}}],["pmlr",{"2":{"66":1}}],["p2",{"2":{"50":6}}],["p1",{"2":{"50":4}}],["pixman",{"2":{"276":1,"282":1}}],["pixel",{"2":{"42":2,"78":4,"79":4,"82":3,"86":2}}],["pixelshuffle",{"2":{"42":1}}],["pixels",{"2":{"37":3,"42":2,"82":1}}],["pickle",{"2":{"229":1}}],["pipe",{"2":{"224":1}}],["pipeline",{"2":{"135":1}}],["pipelines",{"2":{"49":1}}],["pitfalls",{"0":{"157":1},"1":{"158":1,"159":1,"160":1,"161":1,"162":1,"163":1}}],["pinn",{"0":{"247":1},"1":{"248":1,"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1},"2":{"247":1,"250":4,"253":5,"254":1}}],["pinns",{"2":{"167":1,"247":1}}],["pin",{"2":{"125":2}}],["piecewise",{"2":{"58":2}}],["pi",{"2":{"50":1,"78":1}}],["peel",{"2":{"201":1,"202":1}}],["peel`",{"2":{"201":1}}],["people",{"2":{"153":1}}],["pesky",{"2":{"127":1}}],["pessimistic",{"2":{"86":1}}],["peak",{"2":{"86":1}}],["pended",{"2":{"114":1}}],["penalties",{"2":{"56":1}}],["penultimate",{"2":{"38":2,"62":1,"63":1}}],["perspective",{"2":{"283":1}}],["perceptron",{"2":{"262":1}}],["permutedims",{"2":{"273":1,"291":1}}],["permuteddimsarrays",{"2":{"86":1}}],["permuteddimsarray",{"2":{"80":8,"86":3}}],["permutations",{"2":{"80":1}}],["perhaps",{"2":{"58
":1,"86":1}}],["per",{"2":{"41":4,"86":1,"166":1,"278":2}}],["periodictable",{"2":{"229":1}}],["periodic=false",{"2":{"86":2}}],["periodic=true",{"2":{"86":2}}],["periodic",{"2":{"35":1,"86":6}}],["period",{"2":{"21":1}}],["perfect",{"2":{"15":1,"84":1}}],["performs",{"2":{"86":2,"149":1,"225":1}}],["performant",{"2":{"86":2}}],["performance",{"0":{"157":1,"162":1},"1":{"158":1,"159":1,"160":1,"161":1,"162":1,"163":1},"2":{"15":2,"24":1,"49":1,"52":5,"56":3,"60":1,"86":3,"122":4,"123":2,"124":1,"157":2,"158":2,"162":1}}],["performing",{"2":{"37":3}}],["performed",{"2":{"35":1,"47":2,"278":1}}],["perform",{"2":{"8":1,"30":2,"35":3,"49":2,"75":3,"86":2,"174":1,"278":1,"283":1}}],["p",{"2":{"36":12,"45":1,"56":2,"64":7,"75":5,"85":5,"223":2,"225":4,"226":2,"231":6,"235":2,"279":2,"283":3,"284":6,"285":2,"287":1}}],["plugin",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["plus",{"2":{"22":1,"25":1,"34":6,"41":7,"56":3,"89":1,"90":1,"126":2,"127":2,"132":1,"210":2,"265":1,"285":1}}],["platform",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["plan",{"2":{"164":1}}],["places",{"2":{"56":2,"77":1,"86":1}}],["place",{"2":{"52":5,"60":1,"62":2,"78":4,"80":2,"81":1,"85":1,"140":1,"273":1}}],["plotutils",{"2":{"276":1,"282":1}}],["plotting",{"0":{"226":1}}],["plot",{"2":{"86":3,"268":1,"277":3,"279":9,"284":1,"285":1,"288":1}}],["please",{"2":{"4":2,"71":2,"88":1,"90":1,"101":1,"124":1,"128":1,"141":1,"164":1,"201":1,"219":1,"220":3,"221":1,"247":1}}],["push",{"2":{"286":1,"296":1}}],["pullback",{"2":{"127":2}}],["pull",{"2":{"101":1,"107":1}}],["publisher",{"2":{"71":1}}],["public",{"2":{"3":1,"50":1,"84":1,"86":1}}],["purpose",{"2":{"192":1}}],["purposes",{"2":{"54":1,"155":1,"225":1,"230":1,"272":1}}],["pure",{"2":{"8":1,"98":1,"99":1,"100":1,"190":1}}],["pseudorandom",{"2":{"191":1}}],["pseudo",{"2":{"36":3,"187":1}}],["ps",{"2":{"7":2,"8":5,
"22":2,"23":18,"25":17,"30":2,"34":4,"40":9,"45":4,"47":2,"49":2,"50":2,"54":1,"55":5,"56":22,"89":3,"90":6,"118":16,"119":6,"126":5,"127":12,"130":3,"132":2,"133":2,"134":6,"135":2,"137":2,"143":9,"144":7,"145":6,"146":2,"147":2,"153":7,"154":7,"155":10,"158":5,"164":3,"165":9,"166":9,"167":8,"168":11,"170":2,"171":2,"172":9,"173":5,"174":7,"196":7,"201":4,"203":2,"204":4,"205":3,"211":2,"212":3,"225":4,"226":1,"231":2,"232":5,"233":2,"234":2,"235":3,"237":15,"242":2,"244":2,"245":2,"251":4,"253":3,"259":2,"260":6,"267":1,"268":1,"271":7,"273":8,"274":5,"278":10,"279":1,"285":2,"293":2,"294":2,"295":3}}],["powerful",{"2":{"188":1}}],["power",{"2":{"86":9,"279":1}}],["potential",{"2":{"77":1}}],["potentially",{"2":{"7":1,"11":1,"47":1}}],["pool",{"2":{"75":3}}],["pooldims",{"2":{"75":1}}],["pooling",{"0":{"37":1,"75":1},"2":{"37":21,"75":3,"86":1,"117":1}}],["polygonops",{"2":{"276":1,"282":1}}],["polynomial",{"0":{"262":1},"1":{"263":1,"264":1,"265":1,"266":1,"267":1,"268":1,"269":1},"2":{"262":1,"264":1}}],["polynomials",{"2":{"58":1}}],["polyesterweave",{"2":{"187":1,"276":1,"282":1}}],["polyester",{"2":{"67":1,"187":1,"276":1,"282":1}}],["polyalgorithm",{"2":{"65":1}}],["poissonrandom",{"2":{"229":1,"282":1}}],["poisson",{"2":{"50":1}}],["poissonloss",{"2":{"50":2}}],["pointers",{"2":{"86":1}}],["points",{"2":{"82":1,"86":3,"174":1,"196":1,"264":1,"277":4,"278":1,"279":1}}],["point",{"0":{"53":1},"2":{"4":2,"53":4,"60":1,"125":2,"137":1,"142":1,"148":1,"188":1,"284":1,"285":1}}],["populated",{"2":{"201":2}}],["populate",{"2":{"24":1}}],["positivefactorizations",{"2":{"229":1,"276":1,"282":1}}],["position=",{"2":{"223":1,"226":1,"285":1,"288":1}}],["position",{"2":{"40":2,"273":1,"283":1}}],["possibly",{"2":{"35":2,"37":3,"122":1}}],["possible",{"2":{"1":1,"2":1,"7":1,"11":1,"28":2,"45":2,"60":3,"62":1,"63":2,"65":1,"80":1,"123":1,"165":1}}],["posterior",{"2":{"279":2}}],["posted",{"2":{"164":1}}],["post",{"2":{"7":1,"47":1}}],["packing",{"2":{"276":1,"282":1}}],
["packageextensioncompat",{"2":{"229":1}}],["packages",{"0":{"43":1,"91":1,"222":1},"1":{"44":1,"45":1,"46":1,"47":1},"2":{"3":3,"18":4,"19":2,"47":1,"49":2,"57":1,"89":1,"91":3,"93":1,"107":1,"110":1,"120":2,"123":3,"124":2,"148":1,"157":1,"162":1,"183":1,"184":3,"238":1,"246":1,"276":1}}],["package",{"0":{"199":1,"208":1,"229":1,"240":1,"248":1,"263":1,"282":1},"2":{"0":1,"3":1,"4":3,"12":1,"13":1,"31":1,"69":1,"70":1,"84":1,"88":1,"91":1,"94":3,"110":3,"114":1,"120":1,"121":1,"124":2,"149":2,"181":1,"186":1,"189":1,"220":1,"247":1}}],["pango",{"2":{"276":1,"282":1}}],["painful",{"2":{"125":1}}],["pairs",{"2":{"50":1,"237":13}}],["pair",{"2":{"35":2,"38":1,"50":1}}],["pairwisefusion",{"2":{"34":3}}],["past",{"2":{"100":1}}],["passes",{"2":{"34":2,"86":1,"128":1,"190":1}}],["passed",{"2":{"10":1,"25":2,"34":6,"38":5,"40":6,"56":1,"66":1,"74":1,"77":1,"83":1,"95":1,"96":1,"99":1,"116":1,"154":1}}],["passing",{"2":{"8":1,"34":1,"41":1,"56":2,"86":1,"137":2,"185":2}}],["pass",{"2":{"8":1,"18":2,"34":1,"35":1,"39":2,"41":1,"51":2,"53":1,"56":3,"84":1,"86":4,"90":1,"96":1,"97":1,"100":1,"114":1,"127":3,"137":1,"153":1,"185":1,"201":3,"268":2}}],["paying",{"2":{"86":1}}],["pay",{"2":{"84":1}}],["paper",{"2":{"84":1}}],["pal2023efficient",{"2":{"71":1}}],["pal2023lux",{"2":{"71":1}}],["pal",{"2":{"71":2}}],["patience",{"2":{"260":2}}],["patches",{"2":{"86":1}}],["patterns",{"2":{"86":1}}],["pattern",{"2":{"50":1,"118":1,"163":1,"277":1}}],["path",{"2":{"10":1,"34":1,"115":2,"143":2,"197":1,"206":2,"214":2,"227":1,"238":2,"246":2,"255":2,"261":2,"269":2,"275":2,"280":1,"283":4,"289":1,"297":2}}],["page",{"2":{"20":1,"21":1,"22":1,"55":1,"102":1,"118":1,"125":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["paddedviews",{"2":{"276":1,"282":1}}],["padded",{"2":{"86":2}}],["padding=0",{"2":{"75":1}}],["padding",{"0":{"76":1},"2":{"15":1,"35":9,"37":12,"76":10,"82":10,"86":11,"147":2}}],["pad=1",{"2":{"77":1,"271":6
}}],["pad=0",{"2":{"35":2,"37":3,"75":3,"77":1}}],["pad",{"2":{"35":6,"37":3,"75":6,"76":73,"77":3,"86":7,"147":2}}],["pads",{"2":{"15":1}}],["parent",{"2":{"86":3,"272":1}}],["parse",{"2":{"76":1,"166":1,"209":1,"230":1,"241":2,"272":1}}],["parallelism",{"2":{"163":1}}],["parallel=true",{"2":{"163":2}}],["parallel",{"0":{"136":1},"1":{"137":1,"138":1,"139":1,"140":1,"141":1},"2":{"23":1,"34":12,"40":1}}],["parametric",{"2":{"58":1}}],["parameterized",{"2":{"279":1}}],["parameterization",{"2":{"71":1,"276":1}}],["parameter",{"0":{"22":1,"155":1},"2":{"15":3,"22":2,"25":2,"35":4,"40":1,"41":2,"50":1,"56":6,"89":1,"99":1,"100":4,"127":1,"137":1,"145":2,"153":5,"154":2,"155":5,"174":1,"225":2,"234":1,"276":1,"278":2}}],["parameterlength",{"2":{"7":1,"9":1,"153":4,"154":1,"243":1,"260":1,"274":1,"278":3,"295":1}}],["parameters",{"0":{"9":1,"25":1,"142":1,"145":1,"168":1},"1":{"143":1,"144":1,"145":1,"146":1},"2":{"3":1,"7":5,"8":1,"9":2,"15":1,"22":13,"23":7,"24":1,"25":23,"31":1,"34":24,"35":2,"38":9,"39":4,"40":2,"41":31,"45":4,"47":1,"49":10,"50":1,"53":2,"54":4,"55":5,"56":17,"86":6,"89":5,"90":7,"98":1,"100":2,"107":2,"117":1,"119":1,"126":6,"127":7,"132":5,"137":1,"142":2,"145":3,"151":1,"153":5,"154":3,"155":7,"164":1,"168":1,"174":6,"192":1,"196":6,"201":2,"204":2,"205":1,"210":12,"212":2,"225":5,"229":1,"231":1,"234":2,"242":1,"245":4,"253":1,"260":7,"265":3,"267":1,"268":5,"274":4,"276":1,"278":79,"279":2,"282":1,"285":6,"295":3}}],["params=nothing",{"2":{"283":2}}],["params",{"2":{"22":14,"23":6,"41":3,"49":2,"143":3,"146":1,"283":4,"284":3,"285":6,"286":2,"287":1,"288":1,"292":6}}],["parts",{"2":{"225":1}}],["party",{"0":{"220":1}}],["partial=false",{"2":{"200":2,"209":2,"253":2,"272":1,"291":1}}],["particle",{"2":{"284":2,"285":1}}],["particles",{"2":{"283":1}}],["participate",{"0":{"130":1}}],["particularly",{"2":{"86":1}}],["particular",{"0":{"143":1},"2":{"2":1,"86":2,"125":1,"143":1}}],["partition",{"2":{"32":1}}],["part",{"0":{"145":1,"146":1},"2":{"3"
:1,"51":1,"84":1,"86":1,"102":1,"165":1,"219":1,"221":1,"231":1}}],["practitioners",{"2":{"225":1}}],["prngs",{"2":{"191":1}}],["prng",{"2":{"187":1,"191":2}}],["prngkey",{"2":{"147":11}}],["prs",{"2":{"164":1}}],["principles",{"0":{"99":1}}],["principle",{"2":{"79":1}}],["printf",{"2":{"90":2,"118":1,"119":1,"196":2,"199":1,"204":2,"208":1,"212":1,"222":1,"225":1,"229":1,"234":1,"240":1,"245":2,"248":1,"253":1,"256":1,"260":4,"263":1,"268":1,"270":1,"274":3,"282":1,"286":1,"290":1,"295":2}}],["printouts",{"2":{"100":1}}],["printout",{"2":{"56":1}}],["print",{"2":{"54":1,"56":1,"147":1}}],["printing",{"2":{"40":1,"47":1,"90":1,"100":1}}],["println",{"2":{"23":1,"126":2,"127":2,"153":2,"154":3,"155":2,"165":2,"166":2,"167":2,"168":2,"172":6,"190":2,"191":2,"193":2,"194":1,"195":1,"196":3,"197":2,"206":2,"214":2,"227":2,"238":2,"245":1,"246":2,"255":2,"261":2,"269":2,"275":2,"280":2,"289":2,"297":2}}],["prints",{"2":{"4":2,"127":1}}],["printed",{"2":{"2":1,"8":1,"45":1,"56":1}}],["primal",{"2":{"79":1,"82":4}}],["primarily",{"2":{"77":2,"278":1}}],["primitives",{"0":{"30":1,"161":1},"2":{"72":1,"161":1,"278":1}}],["priority",{"2":{"123":2}}],["prior",{"2":{"35":1,"278":1}}],["pr",{"2":{"11":1,"219":1,"220":1}}],["preallocationtoolsreversediffext",{"2":{"229":1,"282":1}}],["preallocationtools",{"2":{"229":2,"282":2}}],["preallocated",{"2":{"86":1}}],["precompilation",{"2":{"276":1,"282":1}}],["precompile",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["precompiled",{"2":{"187":2,"192":24,"199":12,"229":76,"240":28,"263":18,"276":40,"282":124}}],["precompiling",{"2":{"187":1,"192":12,"199":6,"229":38,"240":14,"263":9,"276":20,"282":62}}],["precisely",{"2":{"84":1}}],["precision",{"0":{"53":1},"2":{"53":1,"63":2,"158":3,"188":1,"253":7,"260":102,"295":2}}],["press",{"2":{"178":1}}],["pressing",{"2":{"88":1}}],["presence",{"2":{"56":1}}],["presented",{"2":{"194":1}}],["presently",{"2":{"118":1,"122":1}}],[
"present",{"2":{"35":2,"38":10,"39":2,"45":1,"47":1,"51":1,"76":1,"95":1,"114":1,"126":1,"127":1,"149":2}}],["preservation",{"2":{"45":1}}],["preserved",{"2":{"47":1}}],["preserve",{"2":{"45":7,"47":1}}],["preserving",{"2":{"42":1,"47":1}}],["prepate",{"2":{"86":1}}],["prepare",{"2":{"86":2}}],["preprint",{"2":{"41":1,"66":2}}],["preprocessing",{"2":{"8":1}}],["preds",{"2":{"278":2}}],["pred=ŷ",{"2":{"203":1}}],["predilation",{"2":{"86":1}}],["predilate",{"2":{"86":2}}],["predilated",{"2":{"86":2}}],["predilates",{"2":{"86":1}}],["predictive",{"2":{"279":1}}],["predictions",{"2":{"268":1,"278":2,"279":2}}],["prediction",{"0":{"279":1},"2":{"50":2,"278":1,"279":1}}],["predictor",{"2":{"254":1}}],["predict",{"2":{"200":1,"279":6}}],["predicted",{"2":{"50":3,"211":2,"233":2,"244":2,"279":2}}],["pred",{"2":{"50":8,"118":4,"203":5,"225":2,"226":3,"254":4,"259":5,"268":2,"286":4}}],["prettytables",{"2":{"276":1}}],["pretty",{"2":{"40":1,"90":1,"100":1,"138":1,"165":1}}],["prevents",{"2":{"86":1,"114":1}}],["prevent",{"2":{"28":1,"49":1,"50":1,"64":2,"114":1,"153":1}}],["previously",{"2":{"7":1,"11":1,"100":1,"107":1,"116":1,"139":1}}],["previous",{"2":{"5":1,"7":1,"81":2,"84":1,"100":1,"166":1}}],["pre",{"2":{"7":1,"11":1,"35":1,"86":2,"88":1,"89":1,"114":1}}],["prefer",{"2":{"84":1,"161":1}}],["preferably",{"2":{"67":1}}],["preferred",{"2":{"5":1,"67":1}}],["preferencetools",{"2":{"178":2}}],["preference",{"2":{"1":2,"2":1,"54":1,"57":2,"158":1,"160":1,"164":1,"178":6,"181":2,"182":1,"183":3,"184":1}}],["preferences",{"0":{"1":1,"178":1},"1":{"179":1,"180":1,"181":1,"182":1,"183":1,"184":1},"2":{"54":1,"57":4,"114":1,"160":2,"178":4,"180":3,"183":2}}],["progressmeter",{"2":{"276":1,"282":1}}],["progresslogging",{"2":{"276":1,"282":1}}],["progress",{"2":{"276":3}}],["programming",{"2":{"187":1}}],["proj",{"2":{"271":4}}],["project",{"2":{"94":1}}],["probabilistic",{"2":{"278":2}}],["probability",{"2":{"36":3,"50":1,"64":3,"74":1,"85":1}}],["prob",{"2":{"223":2,"225":9,"22
6":2,"231":4,"235":2,"284":1,"285":1,"286":1,"288":1,"293":3,"294":1}}],["problematicδ",{"2":{"127":2}}],["problematic",{"2":{"125":1,"126":1}}],["problem",{"0":{"249":1},"2":{"83":1,"126":1,"127":1,"165":1,"168":1,"190":1,"196":2,"225":1,"235":1,"237":1,"249":1,"265":1}}],["problems",{"0":{"126":1},"2":{"83":1,"98":1}}],["promotion",{"0":{"158":1},"2":{"86":1,"158":1}}],["promotions",{"2":{"54":1,"158":1}}],["promote",{"2":{"63":1,"86":1,"158":1}}],["produce",{"2":{"51":1,"80":1,"107":1}}],["products",{"2":{"18":2}}],["product",{"0":{"170":1,"171":1,"194":1,"195":1},"2":{"18":8,"73":10,"164":2,"169":4,"170":1,"171":1,"192":5,"194":2,"195":1,"252":1,"254":1}}],["prod",{"2":{"15":1,"75":2,"77":1,"271":2}}],["propagating",{"2":{"55":1}}],["propagated",{"2":{"34":1}}],["propagate",{"2":{"8":1}}],["proper",{"2":{"135":1,"187":1}}],["properly",{"2":{"24":1}}],["properties",{"2":{"19":1,"34":1}}],["proportion",{"2":{"15":3}}],["proceeding",{"2":{"153":1}}],["proceedings",{"2":{"15":5,"50":2,"66":1}}],["proceeds",{"2":{"38":8}}],["processor",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["processing",{"2":{"38":1,"64":1}}],["processes",{"2":{"31":1,"32":1,"137":3,"163":1}}],["process",{"2":{"4":1,"137":1,"163":1,"209":1,"230":1,"252":1}}],["providing",{"2":{"12":1,"210":1}}],["provides",{"2":{"47":1,"55":1,"89":1,"96":1,"112":1,"151":1,"178":1,"185":1,"194":1,"221":1,"235":1}}],["provide",{"2":{"11":1,"15":1,"45":1,"49":2,"56":1,"86":1,"91":1,"119":1,"120":1,"169":1,"189":1}}],["provided",{"2":{"4":2,"5":1,"21":1,"23":2,"38":3,"51":1,"64":1,"78":1,"80":1,"81":2,"137":1,"151":1,"189":1,"267":1,"279":1}}],["ddot",{"2":{"283":2}}],["ddp",{"2":{"136":1}}],["dt^2",{"2":{"283":1}}],["dt2",{"2":{"283":3}}],["dt",{"2":{"283":21,"284":4,"285":2,"286":2,"288":2}}],["dsum",{"2":{"292":1,"293":1}}],["ds",{"2":{"272":5}}],["dset",{"2":{"241":6}}],["dstsize",{"2":{"81":3}}],["dst",{"2":{"81":23}}],["d63adeda50d",{"2":{"1
97":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["dnns",{"2":{"125":1}}],["db",{"2":{"86":13}}],["dw",{"2":{"86":7}}],["dgrid",{"2":{"82":1}}],["dynamicpplenzymecoreext",{"2":{"276":2}}],["dynamicpplzygoterulesext",{"2":{"276":1}}],["dynamicpplmcmcchainsext",{"2":{"276":1}}],["dynamicpplchainrulescoreext",{"2":{"276":1}}],["dynamicpplforwarddiffext",{"2":{"276":1}}],["dynamicppl",{"2":{"276":6}}],["dynamic",{"2":{"94":1}}],["dynamics",{"2":{"15":1}}],["dy",{"2":{"79":2,"86":8}}],["d+2",{"2":{"78":1}}],["dx",{"2":{"78":4,"86":7}}],["duration",{"2":{"278":2}}],["during",{"2":{"3":1,"41":4,"81":1,"84":1,"86":2,"276":1,"282":1}}],["dudt",{"2":{"225":2,"226":1,"231":4,"235":2}}],["du",{"2":{"223":3}}],["dump",{"2":{"119":4}}],["dummy",{"2":{"89":1}}],["due",{"2":{"77":1,"86":1,"166":1,"237":1}}],["dual",{"2":{"67":1,"84":3}}],["dl",{"2":{"51":1,"285":1}}],["dmitry",{"2":{"41":1,"66":1}}],["d2",{"2":{"25":5,"132":1,"133":3,"283":3}}],["d2=dense",{"2":{"25":1,"132":1,"135":1}}],["d3",{"2":{"25":9}}],["d3=chain",{"2":{"25":1}}],["d1×",{"2":{"41":2,"66":2}}],["d1",{"2":{"25":5,"132":1,"133":3}}],["d1=dense",{"2":{"25":1,"132":1,"135":1}}],["driver",{"2":{"238":3,"246":3}}],["dr",{"2":{"125":1}}],["dropdims",{"2":{"294":1}}],["drop",{"2":{"50":1,"164":1}}],["dropout=0",{"2":{"258":1}}],["dropout",{"0":{"36":1,"64":1,"85":1},"2":{"36":17,"64":14,"73":2,"85":10,"100":1,"143":1,"146":1,"156":1,"176":1,"258":5,"260":2}}],["dropped",{"2":{"18":2,"64":2}}],["drawn",{"2":{"15":5,"279":1}}],["d",{"2":{"8":1,"35":4,"37":6,"40":2,"42":4,"56":11,"78":2,"86":1,"123":1,"143":2,"144":5,"145":6,"155":4,"200":4,"234":2,"260":1,"274":3,"283":2,"293":8,"295":1}}],["domain",{"2":{"252":1}}],["domainerror",{"2":{"24":1,"127":2}}],["doubt",{"2":{"153":1}}],["double",{"2":{"76":1}}],["dot",{"2":{"73":8,"126":1,"133":3,"147":3,"165":2,"204":2,"223":2,"226":2,"237":3,"260":13,"274":1}}],["doi",{"2":{"71":2,"283":2}}],["doing",{"2":{"52":4,"80"
:1,"86":1,"119":1,"242":1,"256":1,"278":1}}],["document",{"2":{"120":1}}],["documented",{"2":{"114":1}}],["documentations",{"2":{"157":1}}],["documentation",{"2":{"7":1,"8":1,"42":1,"57":1,"61":1,"91":1,"96":1,"101":1,"116":2,"153":1,"182":1,"183":1,"213":1,"220":1}}],["doctor",{"0":{"183":1},"2":{"57":4,"160":1,"183":3}}],["docstringextensions",{"2":{"187":1,"276":1,"282":1}}],["docstrings",{"2":{"153":1}}],["docstring",{"2":{"86":1,"114":1}}],["docs",{"2":{"56":2,"104":1,"221":1,"276":2}}],["downloading\\u001b",{"2":{"276":1,"282":1}}],["downright",{"2":{"100":1,"151":1}}],["downsampled",{"2":{"78":4}}],["downstream",{"2":{"78":4,"100":1,"124":1,"220":1}}],["down",{"0":{"127":1},"2":{"50":2,"54":1,"86":1,"126":1,"127":1,"128":1,"154":1,"158":1,"188":2}}],["does",{"2":{"40":1,"54":1,"60":1,"66":2,"77":2,"84":1,"85":1,"86":7}}],["doesn",{"2":{"23":1,"25":1,"38":1,"51":1,"56":3,"60":2,"72":1,"84":1,"130":1,"153":2,"155":1,"156":1,"165":1,"176":1,"204":2,"243":1,"257":1,"260":1}}],["do",{"2":{"6":1,"34":1,"40":2,"56":14,"86":1,"88":1,"90":2,"109":1,"118":2,"147":1,"153":2,"154":1,"155":1,"156":1,"158":1,"163":3,"164":1,"169":2,"176":1,"188":2,"190":1,"201":1,"202":1,"210":1,"225":2,"231":1,"242":1,"253":7,"254":1,"258":2,"260":102,"265":1,"271":2,"273":1,"278":1,"279":1,"283":1,"285":1,"295":2}}],["done",{"2":{"23":1,"54":1,"60":2,"80":1,"86":3,"89":1,"137":1,"149":1,"166":1,"184":1}}],["don",{"2":{"4":2,"16":1,"56":2,"63":1,"65":1,"86":1,"89":1,"94":1,"100":1,"123":2,"124":1,"125":1,"128":1,"140":1,"141":1,"153":2,"155":1,"164":1,"166":1,"174":1,"188":2,"190":2,"194":1,"200":1,"201":1,"205":1,"209":1,"224":1,"230":1,"242":1}}],["dilate",{"2":{"147":2}}],["dilations",{"2":{"147":2}}],["dilation",{"2":{"35":2,"37":3,"75":1,"77":3,"86":3}}],["dilation=1",{"2":{"35":2,"37":3,"75":1,"77":1}}],["direactly",{"2":{"114":1}}],["direct",{"2":{"86":15,"114":1}}],["directions",{"2":{"75":3}}],["direction",{"2":{"41":2}}],["directly",{"2":{"3":1,"5":1,"6":1,"8":1,"22":1,"27":2,"3
4":2,"40":1,"42":1,"49":1,"52":2,"64":1,"72":1,"74":1,"78":3,"111":1,"115":1,"137":1,"150":1,"164":1,"185":1,"228":1,"256":1}}],["dinput",{"2":{"82":1}}],["digits=2",{"2":{"245":4}}],["digits",{"2":{"188":2}}],["digit",{"2":{"58":1}}],["div",{"2":{"86":2,"273":1}}],["diverges",{"2":{"50":1}}],["divergence",{"2":{"50":2}}],["divisor",{"2":{"77":4}}],["divisible",{"2":{"35":2}}],["divides",{"2":{"42":1,"78":1}}],["divide",{"2":{"35":2}}],["dice",{"2":{"50":2}}],["dicecoeffloss",{"2":{"50":4}}],["diagonal",{"2":{"39":1,"278":1}}],["diffeqsol",{"2":{"231":3,"232":1,"237":9}}],["diffeqnoiseprocessreversediffext",{"2":{"229":1,"282":1}}],["diffeqnoiseprocess",{"2":{"229":2,"282":2}}],["diffeqcallbacks",{"2":{"229":1,"282":1}}],["diffeqbaseunitfulext",{"2":{"282":2}}],["diffeqbasecudaext",{"2":{"229":2}}],["diffeqbasechainrulescoreext",{"2":{"229":1,"282":2}}],["diffeqbaseenzymeext",{"2":{"229":1,"282":1}}],["diffeqbasesparsearraysext",{"2":{"229":1,"282":2}}],["diffeqbasereversediffext",{"2":{"229":1,"282":1}}],["diffeqbasedistributionsext",{"2":{"229":1,"282":1}}],["diffeqbasetrackerext",{"2":{"229":1,"282":1}}],["diffeqbase",{"2":{"229":8,"282":8}}],["diffeqflux",{"2":{"228":1}}],["differ",{"2":{"192":1}}],["differs",{"2":{"77":1}}],["differences",{"0":{"140":1},"2":{"77":1,"136":1,"165":2,"174":1}}],["difference",{"2":{"36":1,"107":1,"118":3,"154":1,"165":2,"166":2,"169":1,"236":1,"253":70,"260":1025,"283":2,"295":16}}],["differentiate",{"2":{"118":1,"172":1,"190":1}}],["differentiationinterfacezygoteext",{"2":{"229":1,"282":1}}],["differentiationinterfaceenzymeext",{"2":{"229":1,"282":1}}],["differentiationinterfaceforwarddiffext",{"2":{"229":1,"276":1,"282":2}}],["differentiationinterfacefinitediffext",{"2":{"229":1,"276":1,"282":2}}],["differentiationinterfacesparsematrixcoloringsext",{"2":{"276":1,"282":1}}],["differentiationinterfacesparsearraysext",{"2":{"229":1,"276":1,"282":1}}],["differentiationinterfacestaticarraysext",{"2":{"229":1,"276":1,"282":2}}],["diffe
rentiationinterfacechainrulescoreext",{"2":{"229":1,"276":1,"282":2}}],["differentiationinterfacetrackerext",{"2":{"229":1,"276":1,"282":1}}],["differentiationinterfacereversediffext",{"2":{"229":1,"282":1}}],["differentiationinterface",{"2":{"164":1,"194":2,"229":10,"276":8,"282":11}}],["differentiation",{"0":{"17":1,"120":1,"164":1,"179":1,"192":1},"1":{"18":1,"19":1,"20":1,"121":1,"122":1,"123":1,"124":1,"165":1,"166":1,"167":1,"168":1,"169":1,"170":1,"171":1,"172":1,"193":1,"194":1,"195":1},"2":{"20":1,"84":1,"118":1,"124":1,"155":1,"164":1,"167":1,"179":2,"187":1,"194":1}}],["differential",{"2":{"71":1,"286":1}}],["differentiable",{"2":{"58":1,"84":1}}],["differently",{"2":{"34":1,"82":1,"84":1}}],["different",{"0":{"46":1},"1":{"47":1},"2":{"23":1,"34":1,"50":1,"57":1,"65":1,"67":1,"72":1,"86":1,"116":1,"166":2,"174":1,"182":1,"191":1,"225":1}}],["diffrules",{"2":{"187":1,"276":1,"282":1}}],["diffresults",{"2":{"187":1,"276":1,"282":1}}],["diffractor",{"2":{"121":1}}],["diff",{"2":{"165":1,"167":1,"168":1,"192":1}}],["difficulty",{"2":{"15":2}}],["dimwhere",{"2":{"50":1}}],["dim",{"2":{"40":6,"73":4,"84":2,"147":7,"196":7,"258":10,"260":2,"271":5}}],["dimensionmismatch",{"2":{"126":2}}],["dimensionality",{"2":{"50":1,"86":1}}],["dimensional",{"2":{"39":3,"42":2,"66":2,"78":1,"86":4,"147":1,"194":1}}],["dimension",{"0":{"126":1},"2":{"35":4,"36":2,"37":6,"38":8,"39":1,"40":9,"41":14,"47":1,"51":1,"62":2,"63":1,"66":1,"76":11,"78":1,"80":2,"84":1,"86":6,"104":1,"125":1,"132":1,"135":1,"186":1}}],["dimensions",{"2":{"15":11,"19":2,"35":3,"37":9,"38":2,"39":10,"40":5,"41":4,"42":8,"47":1,"66":2,"73":3,"74":1,"75":1,"76":14,"77":2,"78":7,"79":2,"80":1,"86":4,"117":1,"147":2,"196":1,"210":1}}],["dims=tuple",{"2":{"292":1,"293":1}}],["dims=8",{"2":{"274":1}}],["dims=4",{"2":{"273":1}}],["dims=3",{"2":{"200":1,"272":1}}],["dims=ndims",{"2":{"132":1}}],["dims=1",{"2":{"50":1,"66":1,"74":1,"85":5,"252":5}}],["dims=colon",{"2":{"41":1}}],["dims=2",{"2":{"38":1,"73":1,"74
":1,"295":1}}],["dims=",{"2":{"36":2}}],["dims=pad",{"2":{"35":1}}],["dims",{"2":{"15":11,"35":1,"36":4,"38":31,"39":55,"40":7,"41":3,"47":2,"50":2,"51":1,"64":2,"66":3,"73":1,"74":11,"76":37,"77":1,"84":3,"85":3,"86":16,"104":1,"147":14,"153":12,"186":4,"201":9,"202":7,"250":11,"253":2,"271":11,"272":1,"273":2,"274":3,"293":18,"294":3,"295":2}}],["discrete",{"2":{"231":1,"234":1}}],["discouraged",{"2":{"165":2}}],["discourse",{"2":{"164":1,"165":1,"167":1}}],["discuss",{"2":{"221":1}}],["discussed",{"2":{"128":1}}],["discussions",{"2":{"101":1}}],["discussion",{"2":{"38":1,"167":1,"194":1}}],["disabling",{"0":{"184":1},"2":{"162":1}}],["disabled",{"2":{"8":1,"35":2,"39":3,"159":1}}],["disable",{"2":{"8":1,"24":1,"57":3,"86":1,"114":2,"127":1,"159":1,"164":1,"179":1,"183":1,"184":2}}],["disallow",{"2":{"159":1}}],["disaster",{"2":{"158":1}}],["disruptive",{"2":{"98":1}}],["dissimilar",{"2":{"50":1}}],["dist",{"2":{"293":7,"295":1}}],["distinguish",{"2":{"174":2}}],["distinction",{"2":{"100":1}}],["distinct",{"2":{"38":1}}],["distance",{"2":{"50":1,"86":2}}],["distributionsadtrackerext",{"2":{"276":1}}],["distributionsadforwarddiffext",{"2":{"276":1}}],["distributionsad",{"2":{"276":3}}],["distributionsdensityinterfaceext",{"2":{"276":1}}],["distributionstestext",{"2":{"263":1,"276":1,"282":1}}],["distributionschainrulescoreext",{"2":{"263":1,"276":1,"282":1}}],["distributions",{"2":{"50":1,"74":1,"263":3,"276":4,"278":2,"282":3}}],["distribution",{"2":{"15":8,"16":12,"35":2,"38":6,"39":3,"50":5,"58":1,"271":3,"279":1}}],["distributeddatacontainer",{"2":{"32":1,"137":2,"138":1}}],["distributedoptimizer",{"2":{"31":2,"137":2,"138":1}}],["distributedutils",{"0":{"137":1},"2":{"4":2,"26":1,"27":2,"28":3,"29":2,"30":6,"31":1,"32":1,"136":1,"137":11,"138":5,"139":1,"140":3}}],["distributed",{"0":{"26":1,"136":1},"1":{"27":1,"28":1,"29":1,"30":1,"31":1,"32":1,"137":1,"138":1,"139":1,"140":1,"141":1},"2":{"4":3,"15":2,"27":4,"28":7,"100":1,"137":2,"192":1,"276":1,"282":1}}]
,["disjoint",{"2":{"25":1}}],["displays",{"2":{"277":1,"279":1}}],["displayed",{"2":{"119":1,"149":1}}],["display",{"2":{"8":1,"80":1,"100":1,"154":1,"274":1}}],["dispatch=",{"2":{"242":1}}],["dispatch=nothing",{"2":{"56":1}}],["dispatch₋₋₋",{"2":{"237":3}}],["dispatching",{"0":{"129":1},"1":{"130":1,"131":1,"132":1,"133":1,"134":1,"135":1},"2":{"86":1}}],["dispatchdoctorchainrulescoreext",{"2":{"187":1,"276":1,"282":1}}],["dispatchdoctorenzymecoreext",{"2":{"187":1,"276":1,"282":1}}],["dispatchdoctor",{"2":{"8":1,"57":1,"160":1,"187":3,"276":3,"282":3}}],["dispatches",{"2":{"3":1,"8":1,"51":1,"56":1,"67":2,"151":1}}],["dispatch",{"0":{"130":1,"134":1,"183":1},"2":{"3":1,"56":2,"57":4,"75":1,"94":1,"130":2,"134":1,"135":1,"156":1,"160":1,"183":3}}],["date",{"2":{"220":2}}],["dataaugmentation",{"2":{"270":1}}],["dataapi",{"2":{"192":1,"276":1,"282":1}}],["datapoints",{"2":{"264":1}}],["datadeps",{"2":{"229":1}}],["datastructures",{"2":{"276":1,"282":1}}],["datasize",{"2":{"223":1,"284":1}}],["datasets",{"0":{"241":1},"2":{"200":1,"241":2,"245":1}}],["dataset",{"0":{"200":1,"257":1,"264":1,"291":1},"2":{"32":1,"163":2,"200":7,"209":5,"225":1,"230":5,"241":2,"264":2,"272":5,"277":2,"291":2}}],["datavalueinterfaces",{"2":{"192":1,"276":1,"282":1}}],["datatypes",{"2":{"86":4}}],["datatype",{"2":{"8":1}}],["dataloaders",{"2":{"200":3,"204":2,"245":3}}],["dataloader",{"0":{"224":1},"2":{"5":8,"112":1,"119":4,"163":5,"200":5,"209":3,"211":2,"212":6,"224":5,"225":4,"229":1,"230":3,"233":2,"234":5,"241":2,"244":2,"245":9,"253":6,"272":1,"274":5,"291":3,"295":3}}],["data",{"0":{"2":1,"136":1,"163":1,"223":1,"252":1,"277":1},"1":{"137":1,"138":1,"139":1,"140":1,"141":1},"2":{"0":1,"2":1,"3":3,"5":1,"32":2,"35":5,"37":9,"38":1,"40":1,"41":6,"42":2,"49":6,"50":1,"56":10,"77":2,"78":1,"81":4,"86":9,"90":14,"100":1,"112":1,"118":1,"119":2,"137":3,"153":1,"158":1,"163":5,"196":1,"200":8,"209":6,"223":3,"224":3,"225":3,"226":2,"230":6,"244":2,"245":16,"251":5,"252":10,"253":63,"257":
7,"262":1,"264":4,"267":1,"268":5,"277":6,"278":1,"279":7,"284":3,"285":2,"286":1,"288":2,"291":3}}],["danger",{"2":{"2":1,"4":2,"19":1,"28":1}}],["de",{"2":{"165":2}}],["demonstration",{"2":{"225":1,"230":1,"247":1,"272":1}}],["demonstrative",{"2":{"155":1}}],["demonstrate",{"2":{"165":1,"174":1,"196":1,"207":1}}],["demonstrates",{"2":{"77":1,"290":1}}],["derive",{"2":{"86":1}}],["derivatives",{"2":{"84":1,"86":1,"165":1,"166":1,"194":1,"249":1,"285":1}}],["derivative",{"2":{"84":3,"86":1,"167":1}}],["deadlocks",{"2":{"141":1}}],["deal",{"2":{"86":3,"167":1,"231":1}}],["deactivate",{"2":{"38":3}}],["deg2rad",{"2":{"79":4}}],["dec",{"2":{"271":4}}],["decode",{"2":{"271":2,"273":4,"274":3}}],["decoder=st",{"2":{"271":2}}],["decoder",{"2":{"271":14}}],["decouples",{"2":{"41":1}}],["decay=1e",{"2":{"274":1}}],["decay",{"2":{"260":3,"274":1}}],["decision",{"2":{"178":1}}],["decides",{"2":{"176":1}}],["decibel",{"2":{"86":1}}],["decimal",{"2":{"58":1}}],["decrease",{"2":{"60":1}}],["declared",{"2":{"56":1}}],["debuggable",{"2":{"86":2}}],["debugging",{"0":{"24":1,"125":1},"1":{"126":1,"127":1,"128":1},"2":{"24":3,"54":1,"86":2,"119":1,"125":2}}],["debuglayer",{"2":{"24":3,"126":4,"127":4}}],["debug",{"2":{"24":5,"80":2,"115":1,"125":4,"126":6,"127":10,"158":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["density",{"0":{"290":1},"1":{"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1},"2":{"278":1}}],["densityinterface",{"2":{"276":1}}],["dense=dense",{"2":{"258":1}}],["denselayerparameters",{"2":{"155":3}}],["denseconvdims",{"2":{"77":2}}],["dense",{"2":{"22":2,"23":18,"25":4,"34":12,"39":2,"40":1,"41":12,"45":2,"47":3,"50":2,"56":10,"65":2,"74":3,"77":1,"84":1,"86":1,"89":17,"90":6,"105":1,"116":1,"118":18,"119":3,"126":17,"127":14,"132":6,"133":2,"135":2,"143":9,"144":3,"146":3,"147":10,"153":1,"155":2,"158":1,"161":1,"165":2,"166":2,"167":4,"168":4,"172":4,"173":5,"183":1,"196":2,"201":2,"202":1,"210"
:9,"225":3,"232":5,"237":45,"243":4,"250":4,"257":1,"258":1,"265":4,"268":8,"278":4,"285":6,"293":3}}],["denom",{"2":{"284":3,"285":3}}],["denominator",{"2":{"41":4,"66":4}}],["denotes",{"2":{"3":4}}],["delaunaytriangulation",{"2":{"276":1,"282":1}}],["delimitedfiles",{"2":{"276":1}}],["delta=0",{"2":{"50":1}}],["delta",{"2":{"50":2}}],["delving",{"2":{"15":2}}],["deleted",{"2":{"1":1}}],["descent",{"2":{"196":2}}],["describes",{"2":{"47":1,"125":1,"284":1,"285":1}}],["described",{"2":{"15":3,"34":2}}],["desperate",{"2":{"86":1}}],["despite",{"2":{"49":1}}],["destination",{"2":{"81":6}}],["destructure",{"2":{"45":3,"100":2,"177":1}}],["desirable",{"2":{"184":1,"278":1}}],["desired",{"2":{"1":1,"35":1,"117":1}}],["design",{"0":{"99":1},"2":{"5":1,"98":1,"174":1}}],["det",{"2":{"292":11,"293":4}}],["detour",{"2":{"192":1}}],["detection",{"2":{"50":2}}],["detected",{"2":{"24":1,"127":2,"253":1}}],["deterministic",{"2":{"15":1,"100":1}}],["determined",{"2":{"77":1,"89":1}}],["determines",{"2":{"47":1}}],["determine",{"2":{"3":1,"52":1,"64":2,"66":2,"140":1,"278":1}}],["details",{"0":{"175":1},"1":{"176":1},"2":{"24":1,"25":1,"41":1,"42":1,"45":1,"47":2,"55":1,"56":2,"57":1,"61":1,"64":2,"66":4,"73":1,"75":3,"81":1,"83":1,"90":1,"96":1,"104":1,"107":1,"109":1,"114":1,"116":2,"153":1,"179":1,"182":1,"183":1,"189":1}}],["detailed",{"2":{"7":1,"22":1,"126":1}}],["depots",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["depot",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["depwarn",{"2":{"114":1}}],["depwthwise",{"2":{"86":1}}],["deprecated",{"2":{"52":10,"104":1}}],["deprecation",{"2":{"21":1,"52":5,"110":2,"165":2}}],["deprecations",{"2":{"2":1,"114":1}}],["depth",{"2":{"78":1}}],["depthwiseconvdims",{"2":{"77":1}}],["depthwiseconv",{"2":{"77":1,"86":2}}],["depthwise",{"2":{"35":2,"77":2,"86":8}}],["depths",{"2":{"23":1}}],["dependent",{"0":
{"132":1},"2":{"131":1}}],["dependencies",{"0":{"162":1},"2":{"94":1,"107":1,"184":1,"187":1,"192":5,"199":1,"229":9,"240":3,"263":2,"276":5,"278":1,"282":16}}],["dependency",{"2":{"6":1,"12":1,"94":1,"151":1,"166":1,"192":7,"199":5,"229":29,"240":11,"263":7,"276":16,"282":47}}],["depend",{"2":{"6":1,"66":1,"86":1,"151":1,"153":1}}],["depending",{"2":{"6":1,"8":1,"15":1,"82":1}}],["defiing",{"0":{"285":1}}],["definition",{"0":{"249":1,"258":1,"271":1,"293":1}}],["definitions",{"2":{"154":1}}],["definitely",{"2":{"172":1}}],["defining",{"0":{"90":1,"203":1},"2":{"0":1,"3":2,"11":1,"56":1,"90":1,"118":1,"130":1,"151":1,"153":1,"154":1,"155":1,"201":1,"202":2,"210":1,"242":1}}],["define",{"0":{"210":1,"212":1,"224":1,"231":1,"233":1,"244":1,"250":1,"251":1,"283":1,"291":1},"2":{"7":1,"67":1,"81":1,"86":1,"90":1,"127":2,"130":2,"131":1,"134":1,"147":2,"151":2,"153":3,"154":1,"155":2,"174":2,"196":1,"201":3,"202":1,"203":1,"224":1,"225":1,"231":1,"250":1,"251":1,"271":2,"278":3,"283":2,"285":2,"286":2,"291":1}}],["definesingletons",{"2":{"276":1}}],["defines",{"2":{"6":1,"7":2,"52":1,"283":1,"284":1}}],["defined",{"2":{"3":1,"8":1,"11":1,"79":2,"84":1,"86":1,"133":1,"147":1}}],["defer",{"2":{"194":1}}],["def",{"2":{"147":1}}],["defaulting",{"2":{"86":1}}],["defaults",{"0":{"116":1},"2":{"38":3,"49":2,"55":1,"76":9,"116":4,"285":1}}],["default",{"2":{"2":1,"3":2,"8":1,"13":6,"15":8,"16":24,"23":1,"38":1,"40":4,"41":1,"45":1,"47":3,"53":1,"54":1,"56":1,"66":10,"70":4,"73":4,"74":1,"75":3,"77":3,"79":1,"82":2,"85":1,"86":6,"89":1,"90":1,"100":1,"104":1,"107":2,"114":1,"116":2,"118":5,"119":1,"146":1,"147":2,"148":1,"153":1,"155":5,"158":1,"159":1,"173":2,"174":2,"176":2,"180":1,"183":1,"184":1,"185":4,"186":1,"187":1,"188":1,"190":1,"196":1,"197":1,"201":1,"204":1,"206":1,"208":1,"212":1,"214":1,"224":1,"225":1,"227":1,"232":1,"237":1,"238":1,"242":1,"246":1,"253":1,"255":1,"260":1,"261":1,"269":1,"271":2,"275":1,"277":1,"280":1,"285":1,"289":1,"291":1,"295":1,"296":1,"297"
:1}}],["dev=gpu",{"2":{"232":1}}],["dev=cpu",{"2":{"212":1}}],["developer",{"2":{"118":1}}],["developed",{"2":{"100":1,"220":1}}],["deviate",{"2":{"285":1}}],["deviation",{"2":{"15":3,"41":1}}],["devicememory",{"2":{"5":3,"185":2,"237":18}}],["deviceiterator",{"2":{"5":3,"112":2,"119":2,"224":1}}],["device=missing",{"2":{"2":1}}],["device",{"0":{"163":1},"2":{"2":27,"3":32,"4":11,"5":3,"28":1,"56":1,"67":1,"69":5,"70":4,"89":2,"90":1,"110":1,"111":3,"115":2,"118":6,"119":5,"140":4,"147":1,"148":1,"149":7,"150":2,"163":5,"174":3,"204":3,"212":1,"213":1,"222":2,"224":1,"232":1,"234":2,"237":1,"238":1,"245":1,"246":1,"248":2,"256":2,"267":2,"270":2,"273":3,"290":2}}],["devices",{"2":{"0":1,"28":4,"174":1}}],["dev",{"2":{"2":1,"3":1,"4":2,"5":4,"67":2,"69":4,"70":3,"89":5,"90":5,"147":3,"204":5,"212":4,"232":2,"234":3,"245":4}}],["deepcopy",{"2":{"49":1}}],["deep",{"2":{"2":1,"12":1,"15":6,"58":3,"66":1,"71":1,"75":1,"100":1,"158":1,"161":1,"185":1,"186":1}}],["aac",{"2":{"276":1,"282":1}}],["a100",{"2":{"238":1,"246":1}}],["a=layer",{"2":{"174":1}}],["a=0",{"2":{"58":1}}],["a×b×x",{"2":{"174":1}}],["a∈rd×d",{"2":{"169":1}}],["aware",{"0":{"180":1},"2":{"140":2,"141":1,"180":4}}],["ah",{"2":{"127":1}}],["ahead",{"2":{"90":1}}],["ahmadi",{"2":{"50":1}}],["ahmad",{"2":{"50":1}}],["aesthetic",{"2":{"86":1}}],["ax",{"2":{"223":4,"226":6,"254":4,"264":4,"268":5,"277":3,"284":4,"285":6,"288":11,"291":2,"296":2}}],["axisalgorithms",{"2":{"276":1,"282":1}}],["axisarrays",{"2":{"276":1,"282":1}}],["axislegend",{"2":{"223":1,"226":1,"264":1,"268":1,"284":1,"285":1,"288":1}}],["axis",{"2":{"86":1,"223":1,"226":1,"237":44,"254":1,"264":1,"268":1,"277":1,"279":1,"284":1,"285":1,"288":2,"291":1,"296":1}}],["axes",{"2":{"86":2,"242":3}}],["ai",{"2":{"194":1}}],["aid",{"2":{"86":2}}],["aims",{"2":{"15":1}}],["audio",{"2":{"86":2}}],["author",{"2":{"71":2}}],["automa",{"2":{"276":1,"282":1}}],["automatically",{"2":{"5":1,"7":2,"8":1,"38":2,"64":3,"66":2,"70":1,"140":1,"148":1,"162":1,"1
64":1,"176":1,"177":1,"200":1,"201":2,"202":1,"209":1,"230":1,"278":1}}],["automatic",{"0":{"17":1,"120":1,"149":1,"164":1,"179":1,"182":1,"192":1},"1":{"18":1,"19":1,"20":1,"121":1,"122":1,"123":1,"124":1,"165":1,"166":1,"167":1,"168":1,"169":1,"170":1,"171":1,"172":1,"193":1,"194":1,"195":1},"2":{"2":1,"3":1,"8":1,"20":1,"118":1,"148":1,"149":2,"150":1,"155":1,"164":2,"179":4,"181":1,"187":1,"194":1}}],["autotuner",{"2":{"253":7,"260":102,"295":2}}],["autotuning",{"2":{"253":7,"260":102,"295":2}}],["autotracker",{"2":{"49":1,"96":1}}],["autojacvec=reversediffvjp",{"2":{"234":1}}],["autojacvec=zygotevjp",{"2":{"232":1,"234":1}}],["auto",{"2":{"158":1,"176":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["autofinitediff",{"2":{"96":1}}],["autoforwarddiff",{"2":{"18":1,"19":1,"96":1,"166":1,"171":1,"194":2}}],["autodiff",{"2":{"64":2,"66":2,"84":1,"98":1,"155":1,"164":1,"165":2,"192":2,"260":1,"274":1}}],["autoencoder",{"2":{"270":1}}],["autoencoders",{"2":{"58":1}}],["autoenzyme",{"2":{"49":1,"90":1,"96":1,"119":2,"204":1,"212":1,"253":1,"260":1,"268":2,"274":1,"295":1}}],["autoreactant",{"2":{"49":1}}],["autoreversediff",{"2":{"49":2,"96":1}}],["autozygote",{"2":{"18":1,"19":1,"49":1,"89":2,"96":1,"170":1,"195":1,"196":1,"204":1,"212":1,"225":1,"234":1,"245":1,"287":1}}],["autoselection",{"2":{"2":1}}],["aorb",{"2":{"80":1}}],["aka",{"2":{"63":1,"65":1}}],["affected",{"2":{"169":1}}],["affects",{"2":{"35":1}}],["affinebijector",{"2":{"292":5,"293":2}}],["affine",{"2":{"41":1}}],["affine=false",{"2":{"41":5,"116":1}}],["affine=true",{"2":{"34":2,"41":20,"116":1,"126":1,"127":2}}],["afterwards",{"2":{"41":1}}],["after",{"2":{"24":1,"34":2,"36":1,"37":3,"41":4,"56":1,"64":2,"73":1,"83":1,"90":1,"100":1,"125":1,"137":1,"140":1,"163":1,"196":14,"201":1}}],["average",{"2":{"279":3}}],["averages",{"2":{"30":2,"31":1}}],["avik",{"2":{"71":2}}],["avoids",{"2":{"235":1}}],["avoiding",{"2":{"166":1}}],["avoid",{"2"
:{"38":1,"53":4,"64":1,"86":1,"100":1,"140":1,"235":1,"237":1,"257":1,"278":2}}],["avg",{"2":{"30":2}}],["available",{"2":{"26":1,"32":1,"60":2,"63":1,"65":1,"86":1,"88":1,"91":1,"118":1,"120":1,"122":1,"148":1,"149":1,"150":1,"162":1,"177":1,"238":1,"246":1}}],["ab7d",{"2":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["abyss",{"2":{"187":1}}],["ability",{"2":{"151":1}}],["abitrary",{"2":{"76":1}}],["able",{"2":{"76":1,"86":1,"107":1,"198":1}}],["above",{"2":{"34":2,"51":1,"84":1,"86":2,"88":1,"90":1,"168":1,"186":1,"189":1,"279":1}}],["about",{"2":{"24":1,"54":1,"58":1,"77":1,"86":1,"100":1,"153":1,"188":1,"191":1,"201":1,"252":1,"279":1}}],["abstol",{"2":{"237":13}}],["abstol=1",{"2":{"232":1}}],["abstractbijector",{"2":{"292":3}}],["abstractbatchedmatrix",{"2":{"80":2,"86":2}}],["abstractppl",{"2":{"276":1}}],["abstractmcmc",{"2":{"276":1}}],["abstractmatrix",{"2":{"40":1,"65":2,"153":1,"154":1,"231":1}}],["abstracttrees",{"2":{"276":1,"282":1}}],["abstracttimeseriesdatabatchordering=batchlastindex",{"2":{"38":2}}],["abstractfftstestext",{"2":{"276":1,"282":1}}],["abstractfftschainrulescoreext",{"2":{"192":1,"276":1,"282":1}}],["abstractffts",{"2":{"192":2,"276":3,"282":3}}],["abstractfloat",{"2":{"84":1,"291":2}}],["abstraction",{"2":{"110":1}}],["abstractions",{"2":{"67":1}}],["abstractexplicitcontainerlayer",{"2":{"107":2,"108":1}}],["abstractexplicitlayer",{"2":{"107":1}}],["abstractvector",{"2":{"40":1,"63":1,"65":1,"278":1,"283":2}}],["abstractrange",{"2":{"272":1}}],["abstractrule",{"2":{"49":1}}],["abstractrecurrentcell",{"2":{"38":2}}],["abstractrng=utils",{"2":{"15":8,"16":24}}],["abstractrng",{"2":{"7":2,"8":2,"9":1,"10":1,"15":3,"64":4,"153":2,"174":2,"242":1,"264":1,"291":1,"293":2}}],["abstractadtype",{"2":{"18":2,"19":1,"49":1}}],["abstractarrays",{"2":{"96":1}}],["abstractarray",{"2":{"15":16,"16":60,"19":1,"36":3,"39":6,"40":6,"41":1,"51":1,"52":1,"60":2,"63":2,"66":1,"67":1,"78":10,"
79":2,"80":2,"82":5,"86":6,"134":1,"164":3,"168":1,"174":2,"201":1,"202":1,"226":1,"251":3,"273":1,"292":9,"293":2}}],["abstractluxdistributedbacked",{"2":{"31":1}}],["abstractluxdistributedbackend",{"2":{"28":3,"29":2,"30":7,"32":1}}],["abstractluxwrapperlayer",{"2":{"7":4,"23":1,"107":1,"108":2,"132":1,"154":1,"231":1,"235":1}}],["abstractluxcontainerlayer",{"2":{"7":5,"107":1,"154":3,"201":2,"250":1,"271":1,"293":1}}],["abstractluxlayers",{"2":{"34":1}}],["abstractluxlayer",{"2":{"7":5,"8":2,"22":4,"23":3,"24":1,"34":4,"41":1,"47":3,"49":1,"55":1,"107":1,"130":2,"133":3,"134":2,"153":2,"174":1,"231":3,"235":2,"242":2,"271":2}}],["abstract",{"0":{"7":1},"2":{"3":2,"6":1,"7":3,"39":4,"225":1}}],["abstractdevice",{"2":{"2":1,"3":9,"4":4,"5":1}}],["abstractgpudevice",{"2":{"1":1,"150":1}}],["absolute",{"2":{"50":1,"158":1}}],["absolutely",{"2":{"49":1}}],["abs2",{"2":{"50":1,"56":3,"96":1,"165":2,"166":2,"167":2,"168":1,"251":3,"278":1,"293":1}}],["abs",{"2":{"15":1,"40":1,"78":1,"254":2,"283":1}}],["amazing",{"2":{"220":1}}],["am",{"2":{"167":1,"285":1}}],["amin",{"2":{"86":2}}],["amount",{"2":{"86":1}}],["ambiguous",{"2":{"52":1,"107":1}}],["amplitude",{"2":{"86":1}}],["amp",{"0":{"18":1,"147":1,"157":1,"187":1,"291":1},"1":{"158":1,"159":1,"160":1,"161":1,"162":1,"163":1,"188":1,"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1},"2":{"18":2,"19":1,"35":4,"38":1,"39":1,"40":1,"41":1,"49":1,"78":1,"84":2,"98":2}}],["amd",{"2":{"3":1,"122":1,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["amdgpudevice",{"2":{"4":2,"140":1,"150":2,"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["amdgpu",{"2":{"2":2,"3":1,"13":2,"28":1,"69":2,"89":1,"93":2,"123":1,"141":1,"148":2,"197":2,"206":2,"214":2,"227":2,"238":2,"246":2,"255":2,"261":2,"269":2,"275":2,"280":2,"289":2,"297":2}}],["academic",{"2":{"71":1}}],["acts",{"2":{"278":1}}],["action
",{"2":{"57":1}}],["activity",{"2":{"96":2}}],["actively",{"2":{"105":1}}],["active",{"2":{"45":2}}],["activates",{"2":{"183":1}}],["activate",{"2":{"96":1}}],["activated",{"2":{"38":4}}],["activations",{"2":{"58":1,"60":1,"63":1,"83":1,"278":1}}],["activation=gelu",{"2":{"293":2}}],["activation=tanh",{"2":{"38":1}}],["activation=identity",{"2":{"35":2,"39":4,"41":4}}],["activation",{"0":{"58":1,"60":1,"62":1},"2":{"35":6,"38":3,"39":10,"41":12,"56":1,"58":19,"60":7,"62":12,"63":4,"65":3,"66":5,"74":2,"83":2,"84":2,"114":3,"116":3,"161":7,"183":1,"278":1,"285":3,"293":2}}],["act",{"2":{"56":8,"60":2,"66":2,"84":4,"86":2,"90":5,"250":4}}],["act=relu",{"2":{"56":2,"90":1}}],["actually",{"2":{"107":1,"151":1}}],["actual",{"2":{"24":1,"193":2,"268":1}}],["acc",{"2":{"204":3,"212":8,"213":4,"234":4,"245":13,"260":66}}],["accumulated",{"2":{"86":1,"225":1}}],["accumulate",{"2":{"86":1}}],["accumulates",{"2":{"41":2,"77":1,"81":1}}],["accumulation",{"2":{"86":2}}],["accuracy",{"0":{"203":1},"2":{"60":1,"203":2,"204":1,"211":2,"212":4,"213":40,"233":2,"234":94,"236":18,"244":2,"245":212,"259":2,"260":3}}],["accurate",{"2":{"58":3}}],["accompanied",{"2":{"86":1}}],["accounting",{"2":{"77":1}}],["account",{"2":{"56":1}}],["accordingly",{"2":{"41":2,"66":2}}],["according",{"2":{"15":1,"81":5,"82":1}}],["accidental",{"2":{"54":1}}],["acceptance",{"2":{"278":1}}],["acceptable",{"2":{"80":1}}],["accepted",{"2":{"80":2}}],["accept",{"2":{"74":1,"155":1,"278":1}}],["accepts",{"2":{"42":1,"86":1}}],["accelerate",{"2":{"184":1}}],["accelerators",{"2":{"118":1}}],["accelerator",{"2":{"72":1}}],["accelerating",{"2":{"66":1}}],["access",{"2":{"34":2,"56":1,"69":1,"70":1,"86":1,"153":1,"154":1,"155":1}}],["accessors",{"2":{"25":1,"276":7,"282":7}}],["accessing",{"2":{"23":1,"45":1,"47":1,"72":1}}],["achieved",{"2":{"35":1,"166":1}}],["achieve",{"2":{"15":1,"38":1,"77":1}}],["across",{"2":{"0":1,"30":2,"31":1,"32":1,"66":2,"76":3,"99":2,"137":3,"188":2,"279":1}}],["adj",{"2":{"258":5,"259
":2,"260":10}}],["adjacency",{"2":{"257":1}}],["adjust",{"2":{"253":1}}],["adjoint",{"2":{"58":1,"77":2,"79":2,"80":11,"86":6,"225":2,"257":1}}],["adtype=autotracker",{"2":{"278":1}}],["adtype",{"2":{"96":1,"287":1}}],["adtypeschainrulescoreext",{"2":{"187":1,"276":1,"282":1}}],["adtypesconstructionbaseext",{"2":{"187":1,"276":1,"282":1}}],["adtypesenzymecoreext",{"2":{"187":1,"276":1,"282":1}}],["adtypes",{"2":{"49":1,"164":1,"187":4,"199":1,"263":1,"268":1,"276":4,"282":4}}],["advances",{"2":{"64":1}}],["advancedvi",{"2":{"276":2}}],["advancedmhmcmcchainsext",{"2":{"276":1}}],["advancedmhforwarddiffext",{"2":{"276":1}}],["advancedmhstructarraysext",{"2":{"276":1}}],["advancedmh",{"2":{"276":4}}],["advancedpslibtaskext",{"2":{"276":1}}],["advancedps",{"2":{"276":2}}],["advancedhmcmcmcchainsext",{"2":{"276":1}}],["advancedhmc",{"2":{"276":2}}],["advanced",{"0":{"218":1},"2":{"7":2,"118":1}}],["adamw",{"2":{"260":1,"274":1}}],["adam",{"2":{"56":1,"89":15,"90":1,"119":1,"204":1,"212":1,"225":4,"234":1,"245":1,"253":1,"260":1,"266":3,"268":2,"295":1}}],["adapted",{"2":{"281":2}}],["adaptext",{"2":{"187":1,"276":1,"282":1}}],["adaptstaticarraysext",{"2":{"187":1,"276":1,"282":1}}],["adapting",{"2":{"58":1}}],["adaptive=false",{"2":{"284":1,"285":1,"286":1,"288":1}}],["adaptivepredicates",{"2":{"276":1,"282":1}}],["adaptivemeanpool",{"2":{"37":1}}],["adaptivemaxpool",{"2":{"37":1}}],["adaptive",{"2":{"37":3}}],["adaptivelppool",{"2":{"37":1,"117":1}}],["adaptor",{"2":{"47":4,"111":1,"210":1}}],["adapt",{"2":{"3":4,"45":6,"47":6,"187":2,"276":2,"282":2}}],["ads",{"2":{"52":1}}],["adding",{"2":{"141":1}}],["addition",{"2":{"52":1,"62":1}}],["additional",{"0":{"91":1},"2":{"51":1,"74":1,"91":1,"93":1,"95":1,"96":1,"173":1,"189":1,"194":1,"196":1}}],["additionally",{"2":{"7":1,"49":1,"50":1,"51":2,"56":1,"83":1,"120":1,"148":1,"151":1,"163":1,"169":1,"174":1,"205":1,"224":1,"251":1}}],["address",{"2":{"98":1}}],["add",{"2":{"52":3,"68":2,"69":5,"70":1,"86":1,"88":2,"89":1,"9
1":1,"94":1,"117":1,"123":1,"147":5,"165":1,"178":3,"188":1,"189":1,"220":1}}],["added",{"2":{"35":2,"37":3,"40":1,"41":4,"50":2,"62":1,"66":4,"73":1,"86":1,"102":1,"117":1}}],["adds",{"2":{"24":1,"61":1,"86":2,"188":1}}],["ad",{"0":{"19":1,"20":1},"2":{"18":6,"19":1,"20":1,"24":2,"49":3,"50":1,"52":2,"55":2,"63":1,"65":1,"89":2,"99":1,"100":3,"120":4,"121":2,"124":4,"147":1,"157":1,"164":4,"167":2,"179":1,"187":1,"190":1,"192":2,"193":5,"194":2,"204":2,"234":1,"247":1,"249":2,"251":1,"254":1,"268":1}}],["atomsbase",{"2":{"229":1}}],["atomixcudaext",{"2":{"229":1,"240":1}}],["atomix",{"2":{"187":1,"229":1,"240":1,"276":1,"282":1}}],["at=train",{"2":{"209":1,"230":1}}],["at=0",{"2":{"200":1}}],["attempt",{"2":{"86":2}}],["attempts",{"2":{"45":1,"61":1,"65":1}}],["attention",{"0":{"73":1},"2":{"73":18,"86":1}}],["atleast",{"2":{"66":1}}],["at",{"2":{"4":2,"22":1,"30":2,"35":2,"37":3,"38":1,"42":1,"50":1,"56":1,"58":1,"64":1,"75":1,"81":1,"82":1,"84":1,"86":7,"89":1,"118":1,"126":4,"127":10,"141":1,"142":1,"148":1,"155":1,"183":2,"194":1,"201":1,"220":2,"253":70,"260":1027,"279":2,"283":2,"284":1,"295":16}}],["april",{"2":{"71":1}}],["append",{"2":{"277":2}}],["appendix",{"0":{"197":1,"206":1,"214":1,"227":1,"238":1,"246":1,"255":1,"261":1,"269":1,"275":1,"280":1,"289":1,"297":1}}],["appears",{"2":{"77":1}}],["appreciate",{"2":{"56":1}}],["approx",{"2":{"96":1}}],["approximation",{"2":{"40":1,"58":3}}],["approximately",{"2":{"15":1}}],["approach",{"0":{"149":1},"2":{"72":1,"118":1,"225":1}}],["appropriate",{"2":{"8":1,"15":1,"86":1}}],["applications",{"2":{"100":1,"190":1}}],["applicable",{"2":{"2":1,"4":1}}],["applies",{"2":{"34":1,"41":6,"42":2,"62":1,"66":1,"74":1,"76":1,"86":1,"137":1}}],["applied",{"2":{"8":1,"15":3,"34":1,"36":4,"41":4,"50":2,"52":1,"73":2,"76":5,"81":2,"83":1,"86":6,"174":2}}],["applychain",{"2":{"237":6}}],["applying",{"2":{"15":1,"34":1,"64":2,"73":1,"80":2,"86":2}}],["apply",{"0":{"60":1},"2":{"8":7,"34":1,"35":2,"36":2,"37":3,"49":5,"52":1,"
58":1,"77":1,"80":1,"86":2,"89":3,"130":1,"132":1,"133":4,"134":3,"153":1,"154":1,"183":1,"268":1,"272":1,"278":1,"292":3}}],["appleaccelerate",{"2":{"65":1}}],["apple",{"2":{"3":1,"93":1}}],["api",{"0":{"14":1,"49":1,"119":1,"202":1},"1":{"15":1,"16":1},"2":{"3":1,"30":3,"34":9,"40":2,"45":1,"49":2,"50":2,"60":2,"61":1,"62":2,"63":1,"64":2,"65":1,"66":4,"84":3,"85":1,"86":1,"89":3,"102":1,"104":2,"109":1,"118":2,"119":2,"142":1,"173":1,"196":1,"198":1,"202":1,"207":1,"221":1,"231":1,"235":1,"237":1,"238":1,"246":1,"267":1,"285":2}}],["astroinformatics",{"2":{"281":1}}],["ask",{"2":{"188":3}}],["ascii",{"2":{"58":2}}],["asymmetric",{"2":{"35":2,"37":3}}],["assuming",{"2":{"273":1}}],["assume",{"2":{"118":1,"147":1,"283":1}}],["assumed",{"2":{"83":1,"86":2,"96":1}}],["assumes",{"2":{"11":1,"34":1,"82":1,"86":1,"116":1}}],["assert",{"2":{"191":1,"273":1,"278":1,"283":5,"293":1}}],["associated",{"2":{"83":1,"98":1}}],["assymetric",{"2":{"76":1}}],["assigned",{"2":{"56":1}}],["assign",{"2":{"28":1,"81":2}}],["as",{"2":{"3":2,"4":2,"5":3,"7":4,"15":5,"18":2,"24":1,"25":1,"34":10,"35":1,"37":3,"38":3,"40":3,"41":1,"42":1,"45":1,"47":1,"49":2,"50":8,"51":2,"52":1,"54":2,"56":11,"58":2,"60":2,"62":2,"64":2,"66":3,"68":1,"71":1,"73":1,"74":1,"75":5,"76":2,"77":3,"78":4,"80":5,"82":5,"83":1,"86":11,"90":2,"95":2,"96":2,"97":4,"100":2,"102":1,"114":2,"115":2,"118":6,"122":3,"127":1,"131":1,"137":1,"138":1,"140":2,"147":3,"154":3,"155":2,"158":2,"165":1,"166":1,"167":2,"168":1,"172":1,"182":1,"186":2,"187":1,"188":1,"190":1,"191":2,"193":1,"202":1,"207":2,"213":1,"221":1,"224":1,"225":1,"231":1,"237":1,"242":1,"276":1,"278":6,"279":1,"285":2}}],["agent",{"2":{"126":1,"133":3,"165":2,"204":2,"237":3,"260":2,"274":1}}],["aggressive",{"2":{"110":1}}],["aggregating",{"2":{"81":1}}],["aggregation",{"2":{"50":1,"81":1}}],["aggregate",{"2":{"81":1}}],["aggregated",{"2":{"50":1}}],["agg=sum",{"2":{"50":2,"273":1}}],["agg=mean",{"2":{"50":1,"259":1}}],["agg",{"2":{"50":26}}],["agnostic"
,{"2":{"3":3,"30":3,"163":1}}],["against",{"2":{"123":1,"169":1}}],["again",{"2":{"3":1,"86":1,"137":1,"166":1}}],["algebras",{"2":{"194":1}}],["algebraic",{"2":{"166":1}}],["algorithm",{"2":{"2":1,"169":1,"276":1}}],["alpha=0",{"2":{"264":1,"268":2,"284":2,"285":4,"288":7}}],["alpha=1",{"2":{"86":12}}],["alpha",{"2":{"64":4,"86":8,"278":2}}],["alphadropout",{"2":{"36":4}}],["already",{"2":{"57":1,"89":1,"98":1,"187":1,"191":1,"192":12,"199":6,"229":38,"240":14,"263":9,"267":1,"276":20,"282":62}}],["aliastables",{"2":{"276":1,"282":1}}],["aliases",{"2":{"62":1}}],["aliased",{"2":{"49":3}}],["alias",{"2":{"51":1}}],["alignment",{"2":{"83":1}}],["aligned",{"2":{"42":1}}],["align",{"2":{"42":2,"78":9,"82":1,"116":2,"117":1}}],["almost",{"2":{"45":1,"86":1}}],["alternate",{"0":{"235":1},"2":{"118":1}}],["alternatives",{"2":{"161":1}}],["alternatively",{"2":{"38":2,"42":1,"88":1,"96":1,"119":1,"137":1,"158":1}}],["alternative",{"2":{"3":1,"78":3}}],["alter",{"2":{"86":1}}],["altered",{"2":{"78":1}}],["alts",{"2":{"40":2}}],["always",{"2":{"18":2,"21":1,"22":1,"45":1,"50":1,"52":1,"74":1,"75":3,"76":1,"80":1,"86":3,"116":1,"156":1}}],["al",{"2":{"15":2,"50":2,"64":2,"78":1,"83":1,"281":1}}],["along",{"2":{"15":1,"34":1,"35":1,"36":2,"38":2,"41":1,"50":1,"51":1,"62":1,"66":1,"74":1,"78":1,"84":1,"86":1,"100":1,"132":1}}],["also",{"2":{"3":1,"7":1,"15":2,"22":1,"28":2,"34":2,"36":5,"40":1,"41":7,"42":2,"51":1,"52":2,"54":1,"56":1,"58":10,"62":2,"73":1,"74":2,"75":1,"76":5,"77":2,"78":1,"79":2,"80":5,"81":1,"84":3,"86":5,"88":1,"100":2,"102":1,"110":1,"153":2,"185":1,"188":1,"202":1,"204":1,"231":1,"234":2,"278":1,"279":1}}],["allocations",{"2":{"86":1}}],["allocating",{"2":{"81":2}}],["allocated",{"2":{"86":1}}],["allocate",{"2":{"86":2}}],["allowed",{"2":{"78":2}}],["allow",{"2":{"25":1,"52":1,"96":1,"110":1,"114":1,"158":1}}],["allowscalar",{"2":{"159":1,"229":1,"240":1}}],["allows",{"2":{"6":1,"7":1,"8":1,"18":2,"30":2,"34":1,"38":1,"40":1,"52":1,"56":1,"101":1,"117":1,"
154":1,"155":1,"234":1}}],["allreduce",{"2":{"30":3,"31":1,"139":1}}],["all",{"2":{"3":1,"6":1,"7":1,"8":1,"10":3,"15":1,"21":1,"22":3,"23":2,"30":5,"34":3,"35":2,"37":6,"39":1,"40":2,"42":2,"47":2,"49":2,"52":1,"53":1,"55":1,"56":1,"72":1,"75":4,"76":1,"79":1,"80":3,"84":1,"86":7,"89":1,"91":2,"95":2,"96":1,"99":3,"100":3,"110":1,"112":1,"118":1,"137":3,"140":1,"145":2,"147":1,"153":1,"163":1,"183":1,"184":1,"186":1,"188":3,"189":1,"190":1,"224":1,"250":1,"273":1,"277":1,"278":2}}],["artifact",{"2":{"238":1,"246":1,"276":1,"282":1}}],["artificially",{"2":{"127":1}}],["artificial",{"2":{"15":2,"277":2}}],["architecture",{"2":{"153":2,"174":1}}],["architectures",{"2":{"73":1,"100":1}}],["architectural",{"2":{"153":1}}],["arranged",{"2":{"188":1,"277":1}}],["arraylayoutssparsearraysext",{"2":{"229":1,"282":1}}],["arraylayouts",{"2":{"229":2,"282":2}}],["arrayinterfacereversediffext",{"2":{"282":1}}],["arrayinterfacetrackerext",{"2":{"276":1,"282":1}}],["arrayinterfacecudaext",{"2":{"229":2,"240":2}}],["arrayinterfacechainrulesext",{"2":{"192":2,"229":1,"276":1,"282":1}}],["arrayinterfacechainrulescoreext",{"2":{"187":1,"276":1,"282":1}}],["arrayinterfacesparsearraysext",{"2":{"192":2,"276":1,"282":1}}],["arrayinterfacestaticarrayscoreext",{"2":{"187":1,"276":1,"282":1}}],["arrayinterfacegpuarrayscoreext",{"2":{"187":1,"276":1,"282":1}}],["arrayinterface",{"2":{"187":4,"192":2,"229":2,"240":1,"276":7,"282":8}}],["arrayandtime",{"2":{"134":8,"135":1}}],["array",{"2":{"8":1,"13":3,"15":9,"19":2,"35":5,"37":6,"38":2,"39":4,"40":6,"41":16,"47":7,"50":1,"52":2,"56":1,"58":1,"60":5,"64":4,"66":5,"67":1,"73":6,"74":2,"76":7,"77":7,"78":17,"79":10,"80":14,"81":5,"82":5,"83":1,"84":1,"85":1,"86":20,"118":1,"134":3,"147":6,"166":1,"167":1,"186":1,"188":8,"189":1,"190":6,"211":1,"223":1,"225":1,"231":3,"232":1,"237":9,"260":6,"277":4,"278":1,"284":1,"285":1,"286":1,"288":1,"296":1}}],["arrays",{"0":{"159":1,"188":1,"189":1},"1":{"189":1},"2":{"3":1,"11":1,"34":1,"39":1,"40":1,"42
":5,"52":1,"53":1,"63":2,"65":3,"67":8,"73":3,"78":1,"79":2,"80":2,"84":1,"118":1,"147":1,"153":1,"174":2,"188":4,"190":1}}],["arr",{"2":{"79":18,"86":2}}],["argcheck",{"2":{"187":1,"276":1,"282":1}}],["arg9",{"2":{"147":2}}],["arg8",{"2":{"147":2}}],["arg7",{"2":{"147":2}}],["arg6",{"2":{"147":2}}],["arg5",{"2":{"147":2}}],["arg4",{"2":{"147":2}}],["arg3",{"2":{"147":2}}],["arg2",{"2":{"147":2}}],["arg12",{"2":{"147":4}}],["arg11",{"2":{"147":4}}],["arg10",{"2":{"147":2}}],["arg1",{"2":{"147":2}}],["arg0",{"2":{"147":2}}],["args",{"2":{"52":2,"54":6,"77":1,"95":2,"96":4,"115":1,"258":2,"291":2}}],["argumenterror",{"2":{"76":1,"253":1}}],["argument",{"2":{"8":2,"16":1,"34":4,"38":1,"40":1,"56":1,"78":3,"80":1,"81":1,"85":1,"86":1,"107":1,"116":1,"117":1,"119":1,"133":1,"144":1}}],["arguments",{"2":{"2":2,"4":2,"8":1,"15":3,"18":2,"19":1,"22":1,"24":3,"25":1,"31":1,"34":11,"35":4,"36":5,"37":9,"38":7,"39":8,"40":7,"41":9,"42":2,"45":2,"47":2,"49":6,"55":1,"56":2,"60":2,"62":1,"63":1,"64":2,"65":1,"66":4,"73":1,"75":3,"77":1,"78":4,"79":1,"81":1,"82":2,"86":16,"95":3,"96":5,"104":1,"153":1,"186":1,"237":3}}],["arbitrary",{"2":{"42":1,"73":1,"75":1,"86":1,"234":1}}],["around",{"2":{"18":2,"35":2,"37":3,"45":1,"61":1,"76":1,"77":1,"79":5,"86":1,"100":1,"154":1,"168":1}}],["arxiv",{"2":{"15":1,"40":1,"41":2,"66":4,"78":1}}],["aren",{"2":{"53":1,"219":1}}],["are",{"2":{"2":3,"3":2,"6":1,"7":2,"8":1,"11":1,"15":4,"19":2,"21":2,"22":6,"24":4,"25":1,"26":1,"28":2,"39":3,"40":2,"41":5,"42":2,"47":2,"49":10,"50":6,"51":2,"52":6,"54":2,"56":2,"57":1,"60":1,"65":1,"66":3,"67":1,"73":1,"76":1,"77":2,"78":3,"80":3,"81":1,"82":3,"84":1,"86":13,"88":1,"90":1,"91":1,"96":2,"99":2,"100":2,"107":1,"115":1,"117":1,"118":1,"122":4,"123":5,"124":2,"125":1,"127":1,"133":1,"137":1,"140":1,"141":1,"143":1,"147":3,"148":2,"153":3,"154":2,"155":2,"156":1,"158":2,"161":1,"162":2,"163":1,"164":2,"165":2,"166":1,"169":1,"172":3,"180":2,"181":1,"183":1,"184":4,"188":1,"190":1,"192":1,"201":2,"219"
:2,"220":2,"225":1,"231":1,"242":1,"256":1,"260":12,"268":1,"273":1,"274":1,"278":1,"279":1,"284":1,"285":1}}],["animation",{"2":{"279":1}}],["animations",{"2":{"276":1,"282":1}}],["analytical",{"2":{"252":4,"254":1}}],["analogous",{"2":{"80":2,"86":2}}],["anticlockwise",{"2":{"198":1,"200":4}}],["anonymous",{"2":{"114":1}}],["another",{"2":{"39":1,"154":1}}],["angle",{"2":{"79":2}}],["answers",{"2":{"84":1,"101":1}}],["ans",{"2":{"58":6,"78":3,"86":1}}],["anything",{"2":{"153":1}}],["anywhere",{"2":{"107":1}}],["any",{"2":{"8":2,"11":1,"21":1,"24":1,"34":1,"38":3,"40":1,"47":1,"49":2,"50":3,"52":2,"56":2,"58":1,"63":1,"65":1,"76":1,"80":1,"84":3,"86":2,"96":1,"99":1,"110":2,"118":2,"119":1,"124":1,"126":1,"128":1,"133":7,"140":1,"151":2,"153":2,"155":1,"164":1,"174":1,"184":1,"189":1,"190":2,"191":1,"192":1,"205":1,"220":1,"234":1,"237":15,"243":1,"267":1}}],["an",{"2":{"2":5,"3":2,"4":1,"5":1,"8":3,"15":19,"16":36,"18":2,"24":1,"27":2,"30":1,"31":1,"34":10,"35":3,"36":3,"37":3,"38":2,"39":8,"40":2,"41":7,"49":2,"50":2,"52":1,"54":1,"56":1,"58":2,"60":1,"64":4,"66":3,"73":2,"74":2,"75":2,"76":4,"77":2,"78":4,"79":4,"80":2,"81":2,"84":2,"85":1,"86":15,"89":1,"97":3,"100":1,"107":1,"114":1,"116":2,"117":1,"119":3,"120":1,"122":1,"124":2,"125":1,"127":1,"128":2,"131":1,"141":1,"147":3,"149":1,"158":1,"162":1,"164":1,"165":2,"166":1,"168":1,"178":2,"188":3,"191":1,"207":1,"219":1,"220":3,"225":1,"260":1,"273":1,"274":1,"277":1,"278":1,"279":1}}],["andrea",{"2":{"41":1,"66":1}}],["and",{"0":{"43":1,"81":1,"163":1,"203":1,"232":1,"239":1,"243":1},"1":{"44":1,"45":1,"46":1,"47":1,"240":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1},"2":{"1":1,"2":13,"3":3,"4":7,"5":4,"7":7,"8":7,"15":13,"19":1,"21":2,"22":4,"23":7,"24":5,"25":3,"28":4,"30":2,"32":2,"34":10,"35":9,"37":3,"38":31,"39":8,"40":3,"41":22,"42":8,"45":5,"47":2,"49":8,"50":19,"51":2,"52":11,"53":1,"54":4,"55":1,"56":5,"57":1,"58":5,"60":1,"61":1,"62":1,"63":4,"64":2,"65":3,"66":20,"73":3,"74":1,"75":10,"76":1
5,"77":8,"78":3,"79":4,"80":13,"81":14,"82":4,"83":4,"84":5,"86":56,"88":1,"89":4,"90":7,"92":1,"94":2,"95":2,"97":2,"98":3,"99":2,"100":6,"101":2,"107":5,"112":2,"114":5,"115":3,"116":3,"117":4,"118":6,"119":2,"122":7,"123":2,"124":2,"126":3,"127":6,"128":3,"135":1,"137":6,"138":2,"140":1,"141":1,"143":1,"147":4,"149":3,"150":1,"151":2,"153":9,"154":8,"155":3,"156":1,"158":3,"161":2,"162":1,"163":5,"164":2,"165":5,"167":1,"168":1,"169":2,"170":2,"172":1,"174":10,"176":1,"181":2,"183":2,"184":4,"186":2,"188":7,"189":2,"190":1,"191":3,"192":3,"193":1,"195":1,"196":3,"198":2,"200":3,"201":10,"202":1,"205":3,"209":1,"210":1,"213":1,"219":1,"220":3,"224":1,"225":3,"230":1,"231":2,"234":2,"237":1,"247":1,"250":1,"252":2,"264":1,"267":2,"268":1,"271":1,"277":1,"278":12,"279":3,"283":3,"284":2,"285":6,"292":11,"293":2}}],["a",{"0":{"46":1,"143":1,"146":1,"198":1,"201":1,"239":1,"242":1,"247":1,"262":1,"281":1,"285":1},"1":{"47":1,"199":1,"200":1,"201":1,"202":1,"203":1,"204":1,"205":1,"206":1,"240":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1,"248":1,"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1,"263":1,"264":1,"265":1,"266":1,"267":1,"268":1,"269":1,"282":1,"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1},"2":{"0":1,"1":2,"2":5,"3":5,"4":5,"5":3,"6":1,"7":10,"8":12,"10":1,"11":4,"12":1,"15":20,"16":13,"18":2,"19":4,"21":1,"22":4,"23":1,"24":3,"25":4,"28":2,"30":3,"31":1,"34":27,"35":21,"37":18,"38":43,"39":8,"40":10,"41":20,"42":6,"45":9,"47":9,"49":5,"50":4,"51":5,"52":5,"54":3,"55":4,"56":21,"58":11,"61":1,"62":3,"63":2,"64":4,"65":2,"66":2,"69":1,"70":1,"73":3,"74":4,"75":5,"76":6,"77":6,"78":4,"79":3,"80":55,"81":9,"83":5,"84":9,"85":6,"86":87,"88":1,"89":2,"90":4,"94":2,"95":1,"96":4,"97":5,"98":2,"99":1,"100":4,"101":1,"102":2,"107":9,"108":2,"109":2,"112":3,"118":3,"119":4,"124":1,"125":1,"126":3,"127":5,"130":2,"131":1,"134":3,"135":1,"136":2,"137":1,"140":1,"141":1,"142":1,"143":1,"145":2,"147":2,"149":7,"151":3,"153":8,"154":8,"155":2,"158
":5,"163":1,"164":6,"165":7,"166":2,"167":3,"168":1,"169":10,"172":1,"174":17,"176":1,"177":1,"180":1,"181":1,"185":2,"186":2,"187":3,"188":12,"189":1,"191":3,"192":12,"193":1,"194":2,"196":4,"198":1,"200":2,"201":3,"202":1,"207":2,"213":1,"219":1,"221":3,"224":2,"225":6,"231":3,"234":2,"235":1,"236":1,"243":1,"247":2,"250":3,"251":1,"252":1,"253":7,"257":1,"260":103,"262":2,"265":1,"268":2,"271":4,"273":3,"274":1,"276":1,"277":3,"278":7,"279":5,"283":9,"285":3,"286":1,"287":1,"290":1,"291":1,"295":2}}]],"serializationVersion":2}';export{e as default}; diff --git a/dev/assets/chunks/@localSearchIndexroot.CoigOnOV.js b/dev/assets/chunks/@localSearchIndexroot.CoigOnOV.js deleted file mode 100644 index 5a67f40ad2..0000000000 --- a/dev/assets/chunks/@localSearchIndexroot.CoigOnOV.js +++ /dev/null @@ -1 +0,0 @@ -const e='{"documentCount":299,"nextId":299,"documentIds":{"0":"/dev/api/Accelerator_Support/MLDataDevices#MLDataDevices-API","1":"/dev/api/Accelerator_Support/MLDataDevices#preferences","2":"/dev/api/Accelerator_Support/MLDataDevices#Data-Transfer","3":"/dev/api/Accelerator_Support/MLDataDevices#miscellaneous","4":"/dev/api/Accelerator_Support/MLDataDevices#Multi-GPU-Support","5":"/dev/api/Accelerator_Support/MLDataDevices#iteration","6":"/dev/api/Building_Blocks/LuxCore#luxcore","7":"/dev/api/Building_Blocks/LuxCore#Abstract-Types","8":"/dev/api/Building_Blocks/LuxCore#general","9":"/dev/api/Building_Blocks/LuxCore#parameters","10":"/dev/api/Building_Blocks/LuxCore#states","11":"/dev/api/Building_Blocks/LuxCore#Layer-size","12":"/dev/api/Building_Blocks/WeightInitializers#WeightInitializers-API","13":"/dev/api/Building_Blocks/WeightInitializers#Supported-RNG-Types-WeightInit","14":"/dev/api/Building_Blocks/WeightInitializers#API-Reference","15":"/dev/api/Building_Blocks/WeightInitializers#Main-Functions","16":"/dev/api/Building_Blocks/WeightInitializers#Other-Convenience-Functions","17":"/dev/api/Lux/autodiff#autodiff-lux-helpers","18":"/dev/api/Lux/autodiff#JVP
-and-VJP-Wrappers","19":"/dev/api/Lux/autodiff#Batched-AD","20":"/dev/api/Lux/autodiff#Nested-2nd-Order-AD","21":"/dev/api/Lux/contrib#Experimental-Features","22":"/dev/api/Lux/contrib#Parameter-Freezing","23":"/dev/api/Lux/contrib#Map-over-Layer","24":"/dev/api/Lux/contrib#Debugging-Functionality","25":"/dev/api/Lux/contrib#Tied-Parameters","26":"/dev/api/Lux/distributed_utils#Distributed-Utils","27":"/dev/api/Lux/distributed_utils#communication-backends","28":"/dev/api/Lux/distributed_utils#initialization","29":"/dev/api/Lux/distributed_utils#Helper-Functions","30":"/dev/api/Lux/distributed_utils#Communication-Primitives","31":"/dev/api/Lux/distributed_utils#Optimizers.jl-Integration","32":"/dev/api/Lux/distributed_utils#MLUtils.jl-Integration","33":"/dev/api/Lux/interop#Interoperability-between-Lux-and-other-packages","34":"/dev/api/Lux/interop#Switching-from-older-frameworks","35":"/dev/api/Lux/interop#flux-to-lux-migrate-api","36":"/dev/api/Lux/interop#Using-a-different-backend-for-Lux","37":"/dev/api/Lux/interop#Lux-Models-to-Simple-Chains","38":"/dev/api/Lux/layers#Built-In-Layers","39":"/dev/api/Lux/layers#containers","40":"/dev/api/Lux/layers#Convolutional-Layers","41":"/dev/api/Lux/layers#Dropout-Layers","42":"/dev/api/Lux/layers#Pooling-Layers","43":"/dev/api/Lux/layers#Recurrent-Layers","44":"/dev/api/Lux/layers#Linear-Layers","45":"/dev/api/Lux/layers#Misc.-Helper-Layers","46":"/dev/api/Lux/layers#Normalization-Layers","47":"/dev/api/Lux/layers#upsampling","48":"/dev/api/Lux/utilities#utilities","49":"/dev/api/Lux/utilities#Training-API","50":"/dev/api/Lux/utilities#Loss-Functions","51":"/dev/api/Lux/utilities#LuxOps-Module","52":"/dev/api/Lux/utilities#Recursive-Operations","53":"/dev/api/Lux/utilities#Updating-Floating-Point-Precision","54":"/dev/api/Lux/utilities#Element-Type-Matching","55":"/dev/api/Lux/utilities#Stateful-Layer","56":"/dev/api/Lux/utilities#Compact-Layer","57":"/dev/api/Lux/utilities#miscellaneous","58":"/dev/api/NN_Primitives/LuxLi
b#LuxLib-API","59":"/dev/api/NN_Primitives/LuxLib#Apply-Activation","60":"/dev/api/NN_Primitives/LuxLib#Batched-Operations","61":"/dev/api/NN_Primitives/LuxLib#Bias-Activation","62":"/dev/api/NN_Primitives/LuxLib#Convolutional-Layers","63":"/dev/api/NN_Primitives/LuxLib#dropout","64":"/dev/api/NN_Primitives/LuxLib#Fully-Connected-Layers","65":"/dev/api/NN_Primitives/LuxLib#normalization","66":"/dev/api/NN_Primitives/LuxLib#Helper-Functions","67":"/dev/api/NN_Primitives/ActivationFunctions#NNlib-ActivationFunctions-API","68":"/dev/api/Testing_Functionality/LuxTestUtils#luxtestutils","69":"/dev/api/Testing_Functionality/LuxTestUtils#Testing-using-JET.jl","70":"/dev/api/Testing_Functionality/LuxTestUtils#Gradient-Correctness","71":"/dev/api/Testing_Functionality/LuxTestUtils#Extensions-to-@test","72":"/dev/#How-to-Install-Lux.jl?","73":"/dev/#Want-GPU-Support?","74":"/dev/#Want-Reactant-(XLA)-Support?","75":"/dev/api/NN_Primitives/NNlib#NNlib-API","76":"/dev/api/NN_Primitives/NNlib#attention","77":"/dev/api/NN_Primitives/NNlib#softmax","78":"/dev/api/NN_Primitives/NNlib#pooling","79":"/dev/api/NN_Primitives/NNlib#padding","80":"/dev/api/NN_Primitives/NNlib#convolution","81":"/dev/api/NN_Primitives/NNlib#upsampling","82":"/dev/api/NN_Primitives/NNlib#rotation","83":"/dev/api/NN_Primitives/NNlib#Batched-Operations","84":"/dev/api/NN_Primitives/NNlib#Gather-and-Scatter","85":"/dev/api/NN_Primitives/NNlib#sampling","86":"/dev/api/NN_Primitives/NNlib#losses","87":"/dev/api/NN_Primitives/NNlib#miscellaneous","88":"/dev/api/NN_Primitives/NNlib#dropout","89":"/dev/api/NN_Primitives/NNlib#Internal-NNlib-Functions","90":"/dev/introduction/citation#citation","91":"/dev/introduction/overview#Why-we-wrote-Lux?","92":"/dev/introduction/overview#Design-Principles","93":"/dev/introduction/overview#Why-use-Lux-over-Flux?","94":"/dev/introduction/#getting-started","95":"/dev/introduction/#installation","96":"/dev/introduction/#quickstart","97":"/dev/introduction/#Defining-Custom-Layers"
,"98":"/dev/introduction/#Additional-Packages","99":"/dev/introduction/#XLA-(CPU/GPU/TPU)-Support","100":"/dev/introduction/#GPU-Support","101":"/dev/introduction/resources#Resources-to-Get-Started","102":"/dev/introduction/updating_to_v1#updating-to-v1","103":"/dev/introduction/updating_to_v1#LuxLib.jl","104":"/dev/introduction/updating_to_v1#Breaking-Changes","105":"/dev/introduction/updating_to_v1#New-Major-Features","106":"/dev/introduction/updating_to_v1#LuxCore.jl","107":"/dev/introduction/updating_to_v1#Breaking-Changes-2","108":"/dev/introduction/updating_to_v1#New-Major-Features-2","109":"/dev/introduction/updating_to_v1#WeightInitializers.jl","110":"/dev/introduction/updating_to_v1#MLDataDevices.jl","111":"/dev/introduction/updating_to_v1#Breaking-Changes-3","112":"/dev/introduction/updating_to_v1#New-Major-Features-3","113":"/dev/introduction/updating_to_v1#Lux.jl","114":"/dev/introduction/updating_to_v1#Breaking-Changes-(Removed-Functionality)","115":"/dev/introduction/updating_to_v1#Breaking-Changes-(Moved-Functionality)","116":"/dev/introduction/updating_to_v1#Breaking-Changes-(Changes-in-Defaults)","117":"/dev/introduction/updating_to_v1#New-Features","118":"/dev/manual/autodiff#autodiff-lux","119":"/dev/manual/autodiff#overview","120":"/dev/manual/autodiff#autodiff-recommendations","121":"/dev/manual/autodiff#Support-Class","122":"/dev/manual/autodiff#footnotes","123":"/dev/manual/compiling_lux_models#reactant-compilation","124":"/dev/manual/compiling_lux_models#compile_lux_model_trainstate","125":"/dev/manual/debugging#debug-lux-layers","126":"/dev/manual/debugging#Incorrect-Model-Specification:-Dimension-Mismatch-Problems","127":"/dev/manual/debugging#Tracking-down-NaNs","128":"/dev/manual/debugging#conclusion","129":"/dev/manual/dispatch_custom_input#Dispatching-on-Custom-Input-Types","130":"/dev/manual/dispatch_custom_input#Which-function-should-participate-in-dispatch?","131":"/dev/manual/dispatch_custom_input#Concrete-Example","132":"/dev/manua
l/dispatch_custom_input#Time-Dependent-Chain-Implementation","133":"/dev/manual/dispatch_custom_input#Running-the-TDChain","134":"/dev/manual/dispatch_custom_input#Writing-the-Correct-Dispatch-Rules","135":"/dev/manual/dispatch_custom_input#Using-the-Same-Input-for-Non-TD-Models","136":"/dev/manual/distributed_utils#Distributed-Data-Parallel-Training","137":"/dev/manual/distributed_utils#Guide-to-Integrating-DistributedUtils-into-your-code","138":"/dev/manual/distributed_utils#Migration-Guide-from-FluxMPI.jl","139":"/dev/manual/distributed_utils#Removed-Functionality","140":"/dev/manual/distributed_utils#Key-Differences","141":"/dev/manual/distributed_utils#Known-Shortcomings","142":"/dev/manual/freezing_model_parameters#freezing-model-parameters","143":"/dev/manual/freezing_model_parameters#Freezing-Layers-of-a-Particular-Kind","144":"/dev/manual/freezing_model_parameters#Freezing-by-Layer-Name","145":"/dev/manual/freezing_model_parameters#Freezing-Part-of-the-Parameters","146":"/dev/manual/freezing_model_parameters#Freezing-Part-of-a-Chain","147":"/dev/manual/exporting_to_jax#Exporting-Lux-Models-to-Jax-(via-EnzymeJAX-and-Reactant)","148":"/dev/manual/gpu_management#GPU-Management","149":"/dev/manual/gpu_management#Automatic-Backend-Management-(Recommended-Approach)","150":"/dev/manual/gpu_management#Manual-Backend-Management","151":"/dev/manual/interface#lux-interface","152":"/dev/manual/interface#Layer-Interface","153":"/dev/manual/interface#Singular-Layer","154":"/dev/manual/interface#Container-Layer","155":"/dev/manual/interface#Parameter-Interface","156":"/dev/manual/interface#State-Interface","157":"/dev/manual/migrate_from_flux#migrate-from-flux","158":"/dev/manual/migrate_from_flux#Implementing-Custom-Layers","159":"/dev/manual/migrate_from_flux#Certain-Important-Implementation-Details","160":"/dev/manual/migrate_from_flux#Training/Inference-Mode","161":"/dev/manual/migrate_from_flux#Can-we-still-use-Flux-Layers?","162":"/dev/manual/nested_autodiff#nested_
autodiff","163":"/dev/manual/nested_autodiff#Loss-Function-containing-Jacobian-Computation","164":"/dev/manual/nested_autodiff#Using-Batched-Jacobian-for-Multiple-Inputs","165":"/dev/manual/nested_autodiff#Loss-Function-contains-Gradient-Computation","166":"/dev/manual/nested_autodiff#Loss-Function-computing-the-Jacobian-of-the-Parameters","167":"/dev/manual/nested_autodiff#Hutchinson-Trace-Estimation","168":"/dev/manual/nested_autodiff#Computing-using-the-Vector-Jacobian-Product","169":"/dev/manual/nested_autodiff#Computing-using-the-Jacobian-Vector-Product","170":"/dev/manual/nested_autodiff#Computing-using-the-Full-Jacobian","171":"/dev/manual/nn_inside_gpu_kernels#Neural-Networks-Inside-GPU-Kernels","172":"/dev/manual/performance_pitfalls#Performance-Pitfalls-and-How-to-Catch-Them","173":"/dev/manual/performance_pitfalls#Spurious-Type-Promotion","174":"/dev/manual/performance_pitfalls#Scalar-Indexing-on-GPU-Arrays","175":"/dev/manual/performance_pitfalls#Type-Instabilities","176":"/dev/manual/performance_pitfalls#Faster-Primitives","177":"/dev/manual/performance_pitfalls#Optional-Dependencies-for-Performance","178":"/dev/manual/performance_pitfalls#Data-Loading-and-Device-Transfer","179":"/dev/manual/preferences#Preferences-for-Lux.jl","180":"/dev/manual/preferences#Nested-Automatic-Differentiation","181":"/dev/manual/preferences#gpu-aware-mpi-preferences","182":"/dev/manual/preferences#GPU-Backend-Selection","183":"/dev/manual/preferences#automatic-eltypes-preference","184":"/dev/manual/preferences#dispatch-doctor-preference","185":"/dev/manual/preferences#disable_loop_vectorization","186":"/dev/manual/weight_initializers#Initializing-Weights","187":"/dev/manual/weight_initializers#Quick-examples","188":"/dev/tutorials/beginner/1_Basics#Julia-and-Lux-for-the-Uninitiated","189":"/dev/tutorials/beginner/1_Basics#arrays","190":"/dev/tutorials/beginner/1_Basics#CUDA-Arrays","191":"/dev/tutorials/beginner/1_Basics#im-mutability","192":"/dev/tutorials/beginner/1_Basi
cs#Managing-Randomness","193":"/dev/tutorials/beginner/1_Basics#Automatic-Differentiation","194":"/dev/tutorials/beginner/1_Basics#gradients","195":"/dev/tutorials/beginner/1_Basics#Jacobian-Vector-Product","196":"/dev/tutorials/beginner/1_Basics#Vector-Jacobian-Product","197":"/dev/tutorials/beginner/1_Basics#Linear-Regression","198":"/dev/tutorials/beginner/1_Basics#appendix","199":"/dev/tutorials/beginner/3_SimpleRNN#Training-a-Simple-LSTM","200":"/dev/tutorials/beginner/3_SimpleRNN#Package-Imports","201":"/dev/tutorials/beginner/3_SimpleRNN#dataset","202":"/dev/tutorials/beginner/3_SimpleRNN#Creating-a-Classifier","203":"/dev/tutorials/beginner/3_SimpleRNN#Using-the-@compact-API","204":"/dev/tutorials/beginner/3_SimpleRNN#Defining-Accuracy,-Loss-and-Optimiser","205":"/dev/tutorials/beginner/3_SimpleRNN#Training-the-Model","206":"/dev/tutorials/beginner/3_SimpleRNN#Saving-the-Model","207":"/dev/tutorials/beginner/3_SimpleRNN#appendix","208":"/dev/tutorials/beginner/4_SimpleChains#MNIST-Classification-with-SimpleChains","209":"/dev/tutorials/beginner/4_SimpleChains#Package-Imports","210":"/dev/tutorials/beginner/4_SimpleChains#Loading-MNIST","211":"/dev/tutorials/beginner/4_SimpleChains#Define-the-Model","212":"/dev/tutorials/beginner/4_SimpleChains#Helper-Functions","213":"/dev/tutorials/beginner/4_SimpleChains#Define-the-Training-Loop","214":"/dev/tutorials/beginner/4_SimpleChains#Finally-Training-the-Model","215":"/dev/tutorials/beginner/4_SimpleChains#appendix","216":"/dev/tutorials/beginner/5_OptimizationIntegration#Optimization-Lux-Tutorial","217":"/dev/tutorials/beginner/5_OptimizationIntegration#Imports-packages","218":"/dev/tutorials/beginner/5_OptimizationIntegration#Generate-some-training-data","219":"/dev/tutorials/beginner/5_OptimizationIntegration#Define-the-DataLoader","220":"/dev/tutorials/beginner/5_OptimizationIntegration#Training-the-model","221":"/dev/tutorials/beginner/5_OptimizationIntegration#Plotting-the-results","222":"/dev/tutorials/begin
ner/5_OptimizationIntegration#appendix","223":"/dev/tutorials/#tutorials","224":"/dev/tutorials/#beginner-tutorials","225":"/dev/tutorials/#intermediate-tutorials","226":"/dev/tutorials/#advanced-tutorials","227":"/dev/tutorials/#larger-models","228":"/dev/tutorials/#selected-3rd-party-tutorials","229":"/dev/tutorials/intermediate/1_NeuralODE#MNIST-Classification-using-Neural-ODEs","230":"/dev/tutorials/intermediate/1_NeuralODE#Package-Imports","231":"/dev/tutorials/intermediate/1_NeuralODE#Loading-MNIST","232":"/dev/tutorials/intermediate/1_NeuralODE#Define-the-Neural-ODE-Layer","233":"/dev/tutorials/intermediate/1_NeuralODE#Create-and-Initialize-the-Neural-ODE-Layer","234":"/dev/tutorials/intermediate/1_NeuralODE#Define-Utility-Functions","235":"/dev/tutorials/intermediate/1_NeuralODE#training","236":"/dev/tutorials/intermediate/1_NeuralODE#Alternate-Implementation-using-Stateful-Layer","237":"/dev/tutorials/intermediate/1_NeuralODE#Train-the-new-Stateful-Neural-ODE","238":"/dev/tutorials/intermediate/1_NeuralODE#Type-Stability","239":"/dev/tutorials/intermediate/1_NeuralODE#appendix","240":"/dev/tutorials/intermediate/3_HyperNet#Training-a-HyperNetwork-on-MNIST-and-FashionMNIST","241":"/dev/tutorials/intermediate/3_HyperNet#Package-Imports","242":"/dev/tutorials/intermediate/3_HyperNet#Loading-Datasets","243":"/dev/tutorials/intermediate/3_HyperNet#Implement-a-HyperNet-Layer","244":"/dev/tutorials/intermediate/3_HyperNet#Create-and-Initialize-the-HyperNet","245":"/dev/tutorials/intermediate/3_HyperNet#Define-Utility-Functions","246":"/dev/tutorials/intermediate/3_HyperNet#training","247":"/dev/tutorials/intermediate/3_HyperNet#appendix","248":"/dev/tutorials/intermediate/4_PINN2DPDE#Training-a-PINN-on-2D-PDE","249":"/dev/tutorials/intermediate/4_PINN2DPDE#Package-Imports","250":"/dev/tutorials/intermediate/4_PINN2DPDE#Problem-Definition","251":"/dev/tutorials/intermediate/4_PINN2DPDE#Define-the-Neural-Networks","252":"/dev/tutorials/intermediate/4_PINN2DPDE#Defin
e-the-Loss-Functions","253":"/dev/tutorials/intermediate/4_PINN2DPDE#Generate-the-Data","254":"/dev/tutorials/intermediate/4_PINN2DPDE#training","255":"/dev/tutorials/intermediate/4_PINN2DPDE#Visualizing-the-Results","256":"/dev/tutorials/intermediate/4_PINN2DPDE#appendix","257":"/dev/tutorials/intermediate/5_ConvolutionalVAE#Convolutional-VAE-Tutorial","258":"/dev/tutorials/intermediate/5_ConvolutionalVAE#Model-Definition","259":"/dev/tutorials/intermediate/5_ConvolutionalVAE#Loading-MNIST","260":"/dev/tutorials/intermediate/5_ConvolutionalVAE#Helper-Functions","261":"/dev/tutorials/intermediate/5_ConvolutionalVAE#Training-the-Model","262":"/dev/tutorials/intermediate/5_ConvolutionalVAE#appendix","263":"/dev/tutorials/intermediate/2_BayesianNN#Bayesian-Neural-Network","264":"/dev/tutorials/intermediate/2_BayesianNN#Generating-data","265":"/dev/tutorials/intermediate/2_BayesianNN#Building-the-Neural-Network","266":"/dev/tutorials/intermediate/2_BayesianNN#Prediction-Visualization","267":"/dev/tutorials/intermediate/2_BayesianNN#appendix","268":"/dev/tutorials/beginner/2_PolynomialFitting#Fitting-a-Polynomial-using-MLP","269":"/dev/tutorials/beginner/2_PolynomialFitting#Package-Imports","270":"/dev/tutorials/beginner/2_PolynomialFitting#dataset","271":"/dev/tutorials/beginner/2_PolynomialFitting#Neural-Network","272":"/dev/tutorials/beginner/2_PolynomialFitting#optimizer","273":"/dev/tutorials/beginner/2_PolynomialFitting#Loss-Function","274":"/dev/tutorials/beginner/2_PolynomialFitting#training","275":"/dev/tutorials/beginner/2_PolynomialFitting#appendix","276":"/dev/tutorials/intermediate/6_GCN_Cora#GCN-Tutorial-Cora","277":"/dev/tutorials/intermediate/6_GCN_Cora#Loading-Cora-Dataset","278":"/dev/tutorials/intermediate/6_GCN_Cora#Model-Definition","279":"/dev/tutorials/intermediate/6_GCN_Cora#Helper-Functions","280":"/dev/tutorials/intermediate/6_GCN_Cora#Training-the-Model","281":"/dev/tutorials/intermediate/6_GCN_Cora#appendix","282":"/dev/tutorials/intermediate/
7_RealNVP#RealNVP-Tutorial","283":"/dev/tutorials/intermediate/7_RealNVP#Define-and-Load-the-Moons-Dataset","284":"/dev/tutorials/intermediate/7_RealNVP#Bijectors-Implementation","285":"/dev/tutorials/intermediate/7_RealNVP#Model-Definition","286":"/dev/tutorials/intermediate/7_RealNVP#Helper-Functions","287":"/dev/tutorials/intermediate/7_RealNVP#Training-the-Model","288":"/dev/tutorials/intermediate/7_RealNVP#Visualizing-the-Results","289":"/dev/tutorials/intermediate/7_RealNVP#appendix","290":"/dev/tutorials/advanced/1_GravitationalWaveForm#Training-a-Neural-ODE-to-Model-Gravitational-Waveforms","291":"/dev/tutorials/advanced/1_GravitationalWaveForm#Package-Imports","292":"/dev/tutorials/advanced/1_GravitationalWaveForm#Define-some-Utility-Functions","293":"/dev/tutorials/advanced/1_GravitationalWaveForm#Simulating-the-True-Model","294":"/dev/tutorials/advanced/1_GravitationalWaveForm#Defiing-a-Neural-Network-Model","295":"/dev/tutorials/advanced/1_GravitationalWaveForm#Setting-Up-for-Training-the-Neural-Network","296":"/dev/tutorials/advanced/1_GravitationalWaveForm#Training-the-Neural-Network","297":"/dev/tutorials/advanced/1_GravitationalWaveForm#Visualizing-the-Results","298":"/dev/tutorials/advanced/1_GravitationalWaveForm#appendix"},"fieldIds":{"title":0,"titles":1,"text":2},"fieldLength":{"0":[1,1,14],"1":[1,1,46],"2":[2,1,163],"3":[1,1,193],"4":[3,1,93],"5":[1,1,114],"6":[1,1,38],"7":[2,1,160],"8":[1,1,214],"9":[1,1,19],"10":[1,1,62],"11":[2,1,77],"12":[1,1,16],"13":[3,1,28],"14":[2,1,1],"15":[2,3,342],"16":[3,3,92],"17":[3,1,1],"18":[4,3,72],"19":[2,3,86],"20":[4,3,13],"21":[2,1,54],"22":[2,2,131],"23":[3,2,124],"24":[2,2,140],"25":[2,2,118],"26":[2,1,11],"27":[1,2,29],"28":[1,2,81],"29":[2,2,18],"30":[2,2,58],"31":[3,2,31],"32":[3,2,35],"33":[6,1,1],"34":[4,6,1],"35":[4,10,172],"36":[6,6,1],"37":[5,11,188],"38":[3,1,1],"39":[1,3,292],"40":[2,3,257],"41":[2,3,115],"42":[2,3,168],"43":[2,3,361],"44":[2,3,191],"45":[3,3,243],"46":[2,3,294],"47":[1,3,165],"
48":[1,1,1],"49":[2,1,241],"50":[2,1,449],"51":[2,1,134],"52":[2,1,186],"53":[4,1,75],"54":[3,1,108],"55":[2,1,112],"56":[2,1,365],"57":[1,1,54],"58":[1,1,5],"59":[2,1,100],"60":[2,1,44],"61":[2,1,68],"62":[2,1,128],"63":[1,1,160],"64":[3,1,132],"65":[1,1,232],"66":[2,1,92],"67":[2,1,536],"68":[1,1,48],"69":[4,1,91],"70":[2,1,130],"71":[3,1,33],"72":[6,1,47],"73":[4,1,34],"74":[5,1,33],"75":[1,1,33],"76":[1,1,139],"77":[1,1,152],"78":[1,1,126],"79":[1,1,166],"80":[1,1,229],"81":[1,1,247],"82":[1,1,119],"83":[2,1,220],"84":[3,1,181],"85":[1,1,127],"86":[1,1,96],"87":[1,1,243],"88":[1,1,99],"89":[3,1,959],"90":[1,1,67],"91":[5,1,63],"92":[2,5,58],"93":[6,5,218],"94":[2,1,1],"95":[1,2,51],"96":[1,2,399],"97":[3,2,230],"98":[2,2,36],"99":[5,2,13],"100":[2,2,17],"101":[4,1,48],"102":[4,1,41],"103":[2,4,1],"104":[2,6,27],"105":[3,6,22],"106":[2,4,1],"107":[2,6,119],"108":[3,6,19],"109":[2,4,30],"110":[2,4,48],"111":[2,6,22],"112":[3,6,30],"113":[2,4,1],"114":[5,5,117],"115":[5,5,54],"116":[5,5,68],"117":[2,5,63],"118":[2,1,35],"119":[1,2,37],"120":[1,2,71],"121":[2,2,65],"122":[1,2,86],"123":[6,1,589],"124":[4,6,191],"125":[3,1,76],"126":[6,3,155],"127":[3,3,190],"128":[1,3,51],"129":[5,1,1],"130":[7,5,36],"131":[2,5,40],"132":[4,7,72],"133":[3,7,82],"134":[5,7,48],"135":[8,7,46],"136":[4,1,27],"137":[7,4,122],"138":[5,4,42],"139":[2,9,28],"140":[2,9,79],"141":[2,4,52],"142":[3,1,27],"143":[6,3,123],"144":[4,3,53],"145":[5,3,43],"146":[5,3,49],"147":[10,1,1561],"148":[2,1,58],"149":[6,2,76],"150":[3,2,44],"151":[2,1,80],"152":[2,2,1],"153":[2,3,242],"154":[2,3,158],"155":[2,2,151],"156":[2,2,47],"157":[5,1,67],"158":[3,5,164],"159":[4,5,1],"160":[3,9,49],"161":[7,5,34],"162":[3,1,169],"163":[5,3,262],"164":[6,8,155],"165":[5,3,220],"166":[7,3,216],"167":[3,3,88],"168":[6,6,46],"169":[6,6,31],"170":[5,6,111],"171":[5,1,286],"172":[7,1,18],"173":[3,7,149],"174":[5,7,32],"175":[2,7,39],"176":[2,7,43],"177":[4,7,39],"178":[5,7,75],"179":[4,1,49],"180":[3,4,21],"181":[4,4,40],
"182":[3,4,33],"183":[3,4,23],"184":[2,4,61],"185":[4,4,58],"186":[2,1,133],"187":[2,2,64],"188":[6,1,77],"189":[1,6,286],"190":[2,7,65],"191":[3,6,110],"192":[2,6,130],"193":[2,6,96],"194":[1,8,72],"195":[3,8,106],"196":[3,8,30],"197":[2,6,253],"198":[1,6,101],"199":[4,1,38],"200":[2,4,212],"201":[1,4,96],"202":[3,4,160],"203":[4,4,74],"204":[5,4,56],"205":[3,4,260],"206":[3,4,48],"207":[1,4,108],"208":[4,1,33],"209":[2,4,24],"210":[2,4,71],"211":[3,4,64],"212":[2,4,38],"213":[4,4,81],"214":[4,4,155],"215":[1,4,108],"216":[6,1,78],"217":[2,6,27],"218":[4,6,58],"219":[3,6,75],"220":[3,6,240],"221":[3,6,47],"222":[1,6,101],"223":[1,1,1],"224":[2,1,1],"225":[2,1,1],"226":[2,1,1],"227":[2,1,41],"228":[4,1,65],"229":[5,1,24],"230":[2,5,319],"231":[2,5,74],"232":[5,5,139],"233":[7,5,60],"234":[3,5,37],"235":[1,5,236],"236":[5,5,60],"237":[6,5,71],"238":[2,5,181],"239":[1,5,151],"240":[7,1,1],"241":[2,7,220],"242":[2,7,53],"243":[4,7,76],"244":[5,7,42],"245":[3,7,39],"246":[1,7,251],"247":[1,7,151],"248":[6,1,45],"249":[2,6,27],"250":[2,6,26],"251":[4,6,55],"252":[4,6,71],"253":[3,6,86],"254":[1,6,774],"255":[3,6,73],"256":[1,6,108],"257":[6,1,45],"258":[2,6,129],"259":[2,6,70],"260":[2,6,118],"261":[3,6,677],"262":[1,6,108],"263":[3,1,930],"264":[2,3,105],"265":[4,3,549],"266":[2,3,174],"267":[1,3,100],"268":[5,1,16],"269":[2,5,323],"270":[1,5,274],"271":[2,5,37],"272":[1,5,18],"273":[2,5,115],"274":[1,5,158],"275":[1,5,108],"276":[5,1,47],"277":[3,5,54],"278":[2,5,42],"279":[2,5,28],"280":[3,5,2151],"281":[1,5,108],"282":[5,1,43],"283":[6,5,93],"284":[2,5,55],"285":[2,5,107],"286":[2,5,25],"287":[3,5,261],"288":[3,5,43],"289":[1,5,108],"290":[8,1,21],"291":[2,8,861],"292":[4,8,202],"293":[4,8,124],"294":[5,8,1353],"295":[7,8,73],"296":[4,8,1251],"297":[3,8,83],"298":[1,8,101]},"averageFieldLength":[2.8929765886287626,3.5250836120401337,133.438127090301],"storedFields":{"0":{"title":"MLDataDevices","titles":[]},"1":{"title":"Preferences","titles":["MLDataDevices"]},"2":{
"title":"Data Transfer","titles":["MLDataDevices"]},"3":{"title":"Miscellaneous","titles":["MLDataDevices"]},"4":{"title":"Multi-GPU Support","titles":["MLDataDevices"]},"5":{"title":"Iteration","titles":["MLDataDevices"]},"6":{"title":"LuxCore","titles":[]},"7":{"title":"Abstract Types","titles":["LuxCore"]},"8":{"title":"General","titles":["LuxCore"]},"9":{"title":"Parameters","titles":["LuxCore"]},"10":{"title":"States","titles":["LuxCore"]},"11":{"title":"Layer size","titles":["LuxCore"]},"12":{"title":"WeightInitializers","titles":[]},"13":{"title":"Supported RNG Types","titles":["WeightInitializers"]},"14":{"title":"API Reference","titles":["WeightInitializers"]},"15":{"title":"Main Functions","titles":["WeightInitializers","API Reference"]},"16":{"title":"Other Convenience Functions","titles":["WeightInitializers","API Reference"]},"17":{"title":"Automatic Differentiation Helpers","titles":[]},"18":{"title":"JVP & VJP Wrappers","titles":["Automatic Differentiation Helpers"]},"19":{"title":"Batched AD","titles":["Automatic Differentiation Helpers"]},"20":{"title":"Nested 2nd Order AD","titles":["Automatic Differentiation Helpers"]},"21":{"title":"Experimental Features","titles":[]},"22":{"title":"Parameter Freezing","titles":["Experimental Features"]},"23":{"title":"Map over Layer","titles":["Experimental Features"]},"24":{"title":"Debugging Functionality","titles":["Experimental Features"]},"25":{"title":"Tied Parameters","titles":["Experimental Features"]},"26":{"title":"Distributed Utils","titles":[]},"27":{"title":"Backends","titles":["Distributed Utils"]},"28":{"title":"Initialization","titles":["Distributed Utils"]},"29":{"title":"Helper Functions","titles":["Distributed Utils"]},"30":{"title":"Communication Primitives","titles":["Distributed Utils"]},"31":{"title":"Optimizers.jl Integration","titles":["Distributed Utils"]},"32":{"title":"MLUtils.jl Integration","titles":["Distributed Utils"]},"33":{"title":"Interoperability between Lux and other 
packages","titles":[]},"34":{"title":"Switching from older frameworks","titles":["Interoperability between Lux and other packages"]},"35":{"title":"Flux Models to Lux Models","titles":["Interoperability between Lux and other packages","Switching from older frameworks"]},"36":{"title":"Using a different backend for Lux","titles":["Interoperability between Lux and other packages"]},"37":{"title":"Lux Models to Simple Chains","titles":["Interoperability between Lux and other packages","Using a different backend for Lux"]},"38":{"title":"Built-In Layers","titles":[]},"39":{"title":"Containers","titles":["Built-In Layers"]},"40":{"title":"Convolutional Layers","titles":["Built-In Layers"]},"41":{"title":"Dropout Layers","titles":["Built-In Layers"]},"42":{"title":"Pooling Layers","titles":["Built-In Layers"]},"43":{"title":"Recurrent Layers","titles":["Built-In Layers"]},"44":{"title":"Linear Layers","titles":["Built-In Layers"]},"45":{"title":"Misc. Helper Layers","titles":["Built-In Layers"]},"46":{"title":"Normalization Layers","titles":["Built-In Layers"]},"47":{"title":"Upsampling","titles":["Built-In Layers"]},"48":{"title":"Utilities","titles":[]},"49":{"title":"Training API","titles":["Utilities"]},"50":{"title":"Loss Functions","titles":["Utilities"]},"51":{"title":"LuxOps Module","titles":["Utilities"]},"52":{"title":"Recursive Operations","titles":["Utilities"]},"53":{"title":"Updating Floating Point Precision","titles":["Utilities"]},"54":{"title":"Element Type Matching","titles":["Utilities"]},"55":{"title":"Stateful Layer","titles":["Utilities"]},"56":{"title":"Compact Layer","titles":["Utilities"]},"57":{"title":"Miscellaneous","titles":["Utilities"]},"58":{"title":"LuxLib","titles":[]},"59":{"title":"Apply Activation","titles":["LuxLib"]},"60":{"title":"Batched Operations","titles":["LuxLib"]},"61":{"title":"Bias Activation","titles":["LuxLib"]},"62":{"title":"Convolutional 
Layers","titles":["LuxLib"]},"63":{"title":"Dropout","titles":["LuxLib"]},"64":{"title":"Fully Connected Layers","titles":["LuxLib"]},"65":{"title":"Normalization","titles":["LuxLib"]},"66":{"title":"Helper Functions","titles":["LuxLib"]},"67":{"title":"Activation Functions","titles":[]},"68":{"title":"LuxTestUtils","titles":[]},"69":{"title":"Testing using JET.jl","titles":["LuxTestUtils"]},"70":{"title":"Gradient Correctness","titles":["LuxTestUtils"]},"71":{"title":"Extensions to @test","titles":["LuxTestUtils"]},"72":{"title":"How to Install Lux.jl?","titles":[]},"73":{"title":"Want GPU Support?","titles":[]},"74":{"title":"Want Reactant (XLA) Support?","titles":[]},"75":{"title":"NNlib","titles":[]},"76":{"title":"Attention","titles":["NNlib"]},"77":{"title":"Softmax","titles":["NNlib"]},"78":{"title":"Pooling","titles":["NNlib"]},"79":{"title":"Padding","titles":["NNlib"]},"80":{"title":"Convolution","titles":["NNlib"]},"81":{"title":"Upsampling","titles":["NNlib"]},"82":{"title":"Rotation","titles":["NNlib"]},"83":{"title":"Batched Operations","titles":["NNlib"]},"84":{"title":"Gather and Scatter","titles":["NNlib"]},"85":{"title":"Sampling","titles":["NNlib"]},"86":{"title":"Losses","titles":["NNlib"]},"87":{"title":"Miscellaneous","titles":["NNlib"]},"88":{"title":"Dropout","titles":["NNlib"]},"89":{"title":"Internal NNlib Functions","titles":["NNlib"]},"90":{"title":"Citation","titles":[]},"91":{"title":"Why we wrote Lux?","titles":[]},"92":{"title":"Design Principles","titles":["Why we wrote Lux?"]},"93":{"title":"Why use Lux over Flux?","titles":["Why we wrote Lux?"]},"94":{"title":"Getting Started","titles":[]},"95":{"title":"Installation","titles":["Getting Started"]},"96":{"title":"Quickstart","titles":["Getting Started"]},"97":{"title":"Defining Custom Layers","titles":["Getting Started"]},"98":{"title":"Additional Packages","titles":["Getting Started"]},"99":{"title":"XLA (CPU/GPU/TPU) Support","titles":["Getting Started"]},"100":{"title":"GPU 
Support","titles":["Getting Started"]},"101":{"title":"Resources to Get Started","titles":[]},"102":{"title":"Updating to Lux v1","titles":[]},"103":{"title":"LuxLib.jl","titles":["Updating to Lux v1"]},"104":{"title":"Breaking Changes","titles":["Updating to Lux v1","LuxLib.jl"]},"105":{"title":"New Major Features","titles":["Updating to Lux v1","LuxLib.jl"]},"106":{"title":"LuxCore.jl","titles":["Updating to Lux v1"]},"107":{"title":"Breaking Changes","titles":["Updating to Lux v1","LuxCore.jl"]},"108":{"title":"New Major Features","titles":["Updating to Lux v1","LuxCore.jl"]},"109":{"title":"WeightInitializers.jl","titles":["Updating to Lux v1"]},"110":{"title":"MLDataDevices.jl","titles":["Updating to Lux v1"]},"111":{"title":"Breaking Changes","titles":["Updating to Lux v1","MLDataDevices.jl"]},"112":{"title":"New Major Features","titles":["Updating to Lux v1","MLDataDevices.jl"]},"113":{"title":"Lux.jl","titles":["Updating to Lux v1"]},"114":{"title":"Breaking Changes (Removed Functionality)","titles":["Updating to Lux v1","Lux.jl"]},"115":{"title":"Breaking Changes (Moved Functionality)","titles":["Updating to Lux v1","Lux.jl"]},"116":{"title":"Breaking Changes (Changes in Defaults)","titles":["Updating to Lux v1","Lux.jl"]},"117":{"title":"New Features","titles":["Updating to Lux v1","Lux.jl"]},"118":{"title":"Automatic Differentiation","titles":[]},"119":{"title":"Overview","titles":["Automatic Differentiation"]},"120":{"title":"Recommendations","titles":["Automatic Differentiation"]},"121":{"title":"Support Class","titles":["Automatic Differentiation"]},"122":{"title":"Footnotes","titles":["Automatic Differentiation"]},"123":{"title":"Compiling Lux Models using Reactant.jl","titles":[]},"124":{"title":"Using the TrainState API","titles":["Compiling Lux Models using Reactant.jl"]},"125":{"title":"Debugging Lux Models","titles":[]},"126":{"title":"Incorrect Model Specification: Dimension Mismatch Problems","titles":["Debugging Lux 
Models"]},"127":{"title":"Tracking down NaNs","titles":["Debugging Lux Models"]},"128":{"title":"Conclusion","titles":["Debugging Lux Models"]},"129":{"title":"Dispatching on Custom Input Types","titles":[]},"130":{"title":"Which function should participate in dispatch?","titles":["Dispatching on Custom Input Types"]},"131":{"title":"Concrete Example","titles":["Dispatching on Custom Input Types"]},"132":{"title":"Time-Dependent Chain Implementation","titles":["Dispatching on Custom Input Types","Concrete Example"]},"133":{"title":"Running the TDChain","titles":["Dispatching on Custom Input Types","Concrete Example"]},"134":{"title":"Writing the Correct Dispatch Rules","titles":["Dispatching on Custom Input Types","Concrete Example"]},"135":{"title":"Using the Same Input for Non-TD Models","titles":["Dispatching on Custom Input Types","Concrete Example"]},"136":{"title":"Distributed Data Parallel Training","titles":[]},"137":{"title":"Guide to Integrating DistributedUtils into your code","titles":["Distributed Data Parallel Training"]},"138":{"title":"Migration Guide from FluxMPI.jl","titles":["Distributed Data Parallel Training"]},"139":{"title":"Removed Functionality","titles":["Distributed Data Parallel Training","Migration Guide from FluxMPI.jl"]},"140":{"title":"Key Differences","titles":["Distributed Data Parallel Training","Migration Guide from FluxMPI.jl"]},"141":{"title":"Known Shortcomings","titles":["Distributed Data Parallel Training"]},"142":{"title":"Freezing Model Parameters","titles":[]},"143":{"title":"Freezing Layers of a Particular Kind","titles":["Freezing Model Parameters"]},"144":{"title":"Freezing by Layer Name","titles":["Freezing Model Parameters"]},"145":{"title":"Freezing Part of the Parameters","titles":["Freezing Model Parameters"]},"146":{"title":"Freezing Part of a Chain","titles":["Freezing Model Parameters"]},"147":{"title":"Exporting Lux Models to Jax (via EnzymeJAX & Reactant)","titles":[]},"148":{"title":"GPU 
Management","titles":[]},"149":{"title":"Automatic Backend Management (Recommended Approach)","titles":["GPU Management"]},"150":{"title":"Manual Backend Management","titles":["GPU Management"]},"151":{"title":"Lux Interface","titles":[]},"152":{"title":"Layer Interface","titles":["Lux Interface"]},"153":{"title":"Singular Layer","titles":["Lux Interface","Layer Interface"]},"154":{"title":"Container Layer","titles":["Lux Interface","Layer Interface"]},"155":{"title":"Parameter Interface","titles":["Lux Interface"]},"156":{"title":"State Interface","titles":["Lux Interface"]},"157":{"title":"Migrating from Flux to Lux","titles":[]},"158":{"title":"Implementing Custom Layers","titles":["Migrating from Flux to Lux"]},"159":{"title":"Certain Important Implementation Details","titles":["Migrating from Flux to Lux"]},"160":{"title":"Training/Inference Mode","titles":["Migrating from Flux to Lux","Certain Important Implementation Details"]},"161":{"title":"Can we still use Flux Layers?","titles":["Migrating from Flux to Lux"]},"162":{"title":"Nested Automatic Differentiation","titles":[]},"163":{"title":"Loss Function containing Jacobian Computation","titles":["Nested Automatic Differentiation"]},"164":{"title":"Using Batched Jacobian for Multiple Inputs","titles":["Nested Automatic Differentiation","Loss Function containing Jacobian Computation"]},"165":{"title":"Loss Function contains Gradient Computation","titles":["Nested Automatic Differentiation"]},"166":{"title":"Loss Function computing the Jacobian of the Parameters","titles":["Nested Automatic Differentiation"]},"167":{"title":"Hutchinson Trace Estimation","titles":["Nested Automatic Differentiation"]},"168":{"title":"Computing using the Vector-Jacobian Product","titles":["Nested Automatic Differentiation","Hutchinson Trace Estimation"]},"169":{"title":"Computing using the Jacobian-Vector Product","titles":["Nested Automatic Differentiation","Hutchinson Trace Estimation"]},"170":{"title":"Computing using the 
Full Jacobian","titles":["Nested Automatic Differentiation","Hutchinson Trace Estimation"]},"171":{"title":"Neural Networks Inside GPU Kernels","titles":[]},"172":{"title":"Performance Pitfalls & How to Catch Them","titles":[]},"173":{"title":"Spurious Type-Promotion","titles":["Performance Pitfalls & How to Catch Them"]},"174":{"title":"Scalar Indexing on GPU Arrays","titles":["Performance Pitfalls & How to Catch Them"]},"175":{"title":"Type Instabilities","titles":["Performance Pitfalls & How to Catch Them"]},"176":{"title":"Faster Primitives","titles":["Performance Pitfalls & How to Catch Them"]},"177":{"title":"Optional Dependencies for Performance","titles":["Performance Pitfalls & How to Catch Them"]},"178":{"title":"Data Loading and Device Transfer","titles":["Performance Pitfalls & How to Catch Them"]},"179":{"title":"Preferences for Lux.jl","titles":[]},"180":{"title":"Nested Automatic Differentiation","titles":["Preferences for Lux.jl"]},"181":{"title":"GPU-Aware MPI Support","titles":["Preferences for Lux.jl"]},"182":{"title":"GPU Backend Selection","titles":["Preferences for Lux.jl"]},"183":{"title":"Automatic Eltype Conversion","titles":["Preferences for Lux.jl"]},"184":{"title":"Dispatch Doctor","titles":["Preferences for Lux.jl"]},"185":{"title":"Disabling Loop Vectorization / Octavian","titles":["Preferences for Lux.jl"]},"186":{"title":"Initializing Weights","titles":[]},"187":{"title":"Quick examples","titles":["Initializing Weights"]},"188":{"title":"Julia & Lux for the Uninitiated","titles":[]},"189":{"title":"Arrays","titles":["Julia & Lux for the Uninitiated"]},"190":{"title":"CUDA Arrays","titles":["Julia & Lux for the Uninitiated","Arrays"]},"191":{"title":"(Im)mutability","titles":["Julia & Lux for the Uninitiated"]},"192":{"title":"Managing Randomness","titles":["Julia & Lux for the Uninitiated"]},"193":{"title":"Automatic Differentiation","titles":["Julia & Lux for the Uninitiated"]},"194":{"title":"Gradients","titles":["Julia & Lux for 
the Uninitiated","Automatic Differentiation"]},"195":{"title":"Jacobian-Vector Product","titles":["Julia & Lux for the Uninitiated","Automatic Differentiation"]},"196":{"title":"Vector-Jacobian Product","titles":["Julia & Lux for the Uninitiated","Automatic Differentiation"]},"197":{"title":"Linear Regression","titles":["Julia & Lux for the Uninitiated"]},"198":{"title":"Appendix","titles":["Julia & Lux for the Uninitiated"]},"199":{"title":"Training a Simple LSTM","titles":[]},"200":{"title":"Package Imports","titles":["Training a Simple LSTM"]},"201":{"title":"Dataset","titles":["Training a Simple LSTM"]},"202":{"title":"Creating a Classifier","titles":["Training a Simple LSTM"]},"203":{"title":"Using the @compact API","titles":["Training a Simple LSTM"]},"204":{"title":"Defining Accuracy, Loss and Optimiser","titles":["Training a Simple LSTM"]},"205":{"title":"Training the Model","titles":["Training a Simple LSTM"]},"206":{"title":"Saving the Model","titles":["Training a Simple LSTM"]},"207":{"title":"Appendix","titles":["Training a Simple LSTM"]},"208":{"title":"MNIST Classification with SimpleChains","titles":[]},"209":{"title":"Package Imports","titles":["MNIST Classification with SimpleChains"]},"210":{"title":"Loading MNIST","titles":["MNIST Classification with SimpleChains"]},"211":{"title":"Define the Model","titles":["MNIST Classification with SimpleChains"]},"212":{"title":"Helper Functions","titles":["MNIST Classification with SimpleChains"]},"213":{"title":"Define the Training Loop","titles":["MNIST Classification with SimpleChains"]},"214":{"title":"Finally Training the Model","titles":["MNIST Classification with SimpleChains"]},"215":{"title":"Appendix","titles":["MNIST Classification with SimpleChains"]},"216":{"title":"Training Lux Models using Optimization.jl","titles":[]},"217":{"title":"Imports packages","titles":["Training Lux Models using Optimization.jl"]},"218":{"title":"Generate some training data","titles":["Training Lux Models using 
Optimization.jl"]},"219":{"title":"Define the DataLoader","titles":["Training Lux Models using Optimization.jl"]},"220":{"title":"Training the model","titles":["Training Lux Models using Optimization.jl"]},"221":{"title":"Plotting the results","titles":["Training Lux Models using Optimization.jl"]},"222":{"title":"Appendix","titles":["Training Lux Models using Optimization.jl"]},"223":{"title":"Tutorials","titles":[]},"224":{"title":"Beginner Tutorials","titles":["Tutorials"]},"225":{"title":"Intermediate Tutorials","titles":["Tutorials"]},"226":{"title":"Advanced Tutorials","titles":["Tutorials"]},"227":{"title":"Larger Models","titles":["Tutorials"]},"228":{"title":"Selected 3rd Party Tutorials","titles":["Tutorials"]},"229":{"title":"MNIST Classification using Neural ODEs","titles":[]},"230":{"title":"Package Imports","titles":["MNIST Classification using Neural ODEs"]},"231":{"title":"Loading MNIST","titles":["MNIST Classification using Neural ODEs"]},"232":{"title":"Define the Neural ODE Layer","titles":["MNIST Classification using Neural ODEs"]},"233":{"title":"Create and Initialize the Neural ODE Layer","titles":["MNIST Classification using Neural ODEs"]},"234":{"title":"Define Utility Functions","titles":["MNIST Classification using Neural ODEs"]},"235":{"title":"Training","titles":["MNIST Classification using Neural ODEs"]},"236":{"title":"Alternate Implementation using Stateful Layer","titles":["MNIST Classification using Neural ODEs"]},"237":{"title":"Train the new Stateful Neural ODE","titles":["MNIST Classification using Neural ODEs"]},"238":{"title":"Type Stability","titles":["MNIST Classification using Neural ODEs"]},"239":{"title":"Appendix","titles":["MNIST Classification using Neural ODEs"]},"240":{"title":"Training a HyperNetwork on MNIST and FashionMNIST","titles":[]},"241":{"title":"Package Imports","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"242":{"title":"Loading Datasets","titles":["Training a HyperNetwork on MNIST and 
FashionMNIST"]},"243":{"title":"Implement a HyperNet Layer","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"244":{"title":"Create and Initialize the HyperNet","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"245":{"title":"Define Utility Functions","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"246":{"title":"Training","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"247":{"title":"Appendix","titles":["Training a HyperNetwork on MNIST and FashionMNIST"]},"248":{"title":"Training a PINN on 2D PDE","titles":[]},"249":{"title":"Package Imports","titles":["Training a PINN on 2D PDE"]},"250":{"title":"Problem Definition","titles":["Training a PINN on 2D PDE"]},"251":{"title":"Define the Neural Networks","titles":["Training a PINN on 2D PDE"]},"252":{"title":"Define the Loss Functions","titles":["Training a PINN on 2D PDE"]},"253":{"title":"Generate the Data","titles":["Training a PINN on 2D PDE"]},"254":{"title":"Training","titles":["Training a PINN on 2D PDE"]},"255":{"title":"Visualizing the Results","titles":["Training a PINN on 2D PDE"]},"256":{"title":"Appendix","titles":["Training a PINN on 2D PDE"]},"257":{"title":"Convolutional VAE for MNIST using Reactant","titles":[]},"258":{"title":"Model Definition","titles":["Convolutional VAE for MNIST using Reactant"]},"259":{"title":"Loading MNIST","titles":["Convolutional VAE for MNIST using Reactant"]},"260":{"title":"Helper Functions","titles":["Convolutional VAE for MNIST using Reactant"]},"261":{"title":"Training the Model","titles":["Convolutional VAE for MNIST using Reactant"]},"262":{"title":"Appendix","titles":["Convolutional VAE for MNIST using Reactant"]},"263":{"title":"Bayesian Neural Network","titles":[]},"264":{"title":"Generating data","titles":["Bayesian Neural Network"]},"265":{"title":"Building the Neural Network","titles":["Bayesian Neural Network"]},"266":{"title":"Prediction Visualization","titles":["Bayesian Neural 
Network"]},"267":{"title":"Appendix","titles":["Bayesian Neural Network"]},"268":{"title":"Fitting a Polynomial using MLP","titles":[]},"269":{"title":"Package Imports","titles":["Fitting a Polynomial using MLP"]},"270":{"title":"Dataset","titles":["Fitting a Polynomial using MLP"]},"271":{"title":"Neural Network","titles":["Fitting a Polynomial using MLP"]},"272":{"title":"Optimizer","titles":["Fitting a Polynomial using MLP"]},"273":{"title":"Loss Function","titles":["Fitting a Polynomial using MLP"]},"274":{"title":"Training","titles":["Fitting a Polynomial using MLP"]},"275":{"title":"Appendix","titles":["Fitting a Polynomial using MLP"]},"276":{"title":"Graph Convolutional Networks on Cora","titles":[]},"277":{"title":"Loading Cora Dataset","titles":["Graph Convolutional Networks on Cora"]},"278":{"title":"Model Definition","titles":["Graph Convolutional Networks on Cora"]},"279":{"title":"Helper Functions","titles":["Graph Convolutional Networks on Cora"]},"280":{"title":"Training the Model","titles":["Graph Convolutional Networks on Cora"]},"281":{"title":"Appendix","titles":["Graph Convolutional Networks on Cora"]},"282":{"title":"Normalizing Flows for Density Estimation","titles":[]},"283":{"title":"Define & Load the Moons Dataset","titles":["Normalizing Flows for Density Estimation"]},"284":{"title":"Bijectors Implementation","titles":["Normalizing Flows for Density Estimation"]},"285":{"title":"Model Definition","titles":["Normalizing Flows for Density Estimation"]},"286":{"title":"Helper Functions","titles":["Normalizing Flows for Density Estimation"]},"287":{"title":"Training the Model","titles":["Normalizing Flows for Density Estimation"]},"288":{"title":"Visualizing the Results","titles":["Normalizing Flows for Density Estimation"]},"289":{"title":"Appendix","titles":["Normalizing Flows for Density Estimation"]},"290":{"title":"Training a Neural ODE to Model Gravitational Waveforms","titles":[]},"291":{"title":"Package Imports","titles":["Training a 
Neural ODE to Model Gravitational Waveforms"]},"292":{"title":"Define some Utility Functions","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"293":{"title":"Simulating the True Model","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"294":{"title":"Defining a Neural Network Model","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"295":{"title":"Setting Up for Training the Neural Network","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"296":{"title":"Training the Neural Network","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"297":{"title":"Visualizing the Results","titles":["Training a Neural ODE to Model Gravitational Waveforms"]},"298":{"title":"Appendix","titles":["Training a Neural ODE to Model Gravitational Waveforms"]}},"dirtCount":0,"index":[["↦",{"2":{"292":1}}],["ϕ̇",{"2":{"293":2,"294":2}}],["ϕ",{"2":{"292":5,"293":1,"294":1}}],["χ̇",{"2":{"293":2,"294":2}}],["χ",{"2":{"292":4,"293":4,"294":2}}],["~",{"2":{"265":2}}],["~=",{"2":{"89":1}}],["μ",{"2":{"258":5,"260":3}}],["₋₋₋kwargs₋₋₋",{"2":{"238":4}}],["₋₋₋no",{"2":{"238":3}}],["─",{"2":{"238":3}}],["\u001b",{"2":{"200":2,"269":3}}],["✓",{"2":{"200":59,"230":128,"241":76,"263":448,"269":105,"291":403}}],["∥22we",{"2":{"197":1}}],["⟶∑i=1k12∥yi−fw",{"2":{"197":1}}],["$",{"2":{"255":1,"287":1,"288":1}}],["$i",{"2":{"192":2,"266":1}}],["$name",{"2":{"23":1}}],["∞",{"2":{"163":4,"164":4,"165":4,"166":4,"170":8}}],["∘",{"2":{"162":4,"165":1,"252":3,"259":2}}],["⋱",{"2":{"147":4}}],["⋮",{"2":{"147":8,"171":4}}],["└──",{"2":{"238":3}}],["└──────────────────────────────┘",{"2":{"89":2}}],["└────────────────────────────────────────┘",{"2":{"67":34,"89":1}}],["└",{"2":{"126":1,"163":2,"200":1,"205":2,"261":1,"269":2,"280":2}}],["┌",{"2":{"126":1,"163":2,"200":1,"205":2,"261":1,"269":2,"280":2}}],["┌──────────────────────────────┐",{"2":{"89":2}}],["┌────────────────────────────────────────┐",{"2":{"67":34,"89":1}}
],["∂w",{"2":{"252":2}}],["∂v",{"2":{"252":2}}],["∂y",{"2":{"252":4}}],["∂u",{"2":{"252":10}}],["∂t",{"2":{"165":7,"252":2}}],["∂xyt",{"2":{"252":4}}],["∂x",{"2":{"163":7,"164":7,"166":7,"170":11,"252":4}}],["∂ps",{"2":{"123":4,"163":8,"164":8,"165":8,"166":8,"170":11}}],["∂f∂x",{"2":{"18":2}}],["↩︎",{"2":{"122":9}}],["❓",{"2":{"119":6}}],["❌",{"2":{"119":13}}],["\\tval",{"2":{"280":27}}],["\\tfashionmnist\\ttraining",{"2":{"246":1}}],["\\tfashionmnist\\ttime",{"2":{"246":50}}],["\\ttraining",{"2":{"287":11}}],["\\ttest",{"2":{"235":45,"237":9,"246":102}}],["\\ttime",{"2":{"235":45,"237":9}}],["\\tloss",{"2":{"124":11,"205":50}}],["\\t",{"2":{"97":11,"214":60,"246":51,"254":200}}],["ω",{"2":{"87":2,"89":2}}],["⊡",{"2":{"83":1}}],["∇f",{"2":{"194":3}}],["∇filter",{"2":{"89":3}}],["∇offending",{"2":{"127":2}}],["∇depthwiseconv",{"2":{"89":4}}],["∇conv",{"2":{"89":4}}],["∇grid",{"2":{"85":1}}],["∇imrotate",{"2":{"82":1}}],["∇upsample",{"2":{"81":4}}],["θ|x",{"2":{"266":2}}],["θ",{"2":{"82":5,"89":3,"220":2,"265":2,"266":9,"295":3}}],["⊠",{"2":{"80":2,"83":7}}],["∑ᵢ",{"2":{"78":1}}],["÷",{"2":{"76":1,"82":4,"89":4,"201":4,"258":16,"260":2,"283":1,"284":1,"288":1}}],["qoi",{"2":{"263":1,"269":1,"291":1}}],["qk",{"2":{"76":2}}],["q",{"2":{"76":6}}],["quantiles",{"2":{"265":1}}],["quadrupole",{"2":{"292":10}}],["quadratic",{"2":{"67":1,"270":1,"274":1}}],["quadgkenzymeext",{"2":{"230":1,"269":2,"291":1}}],["quadgk",{"2":{"230":2,"263":1,"269":2,"291":2}}],["questions",{"2":{"101":3,"162":1,"193":1}}],["query",{"2":{"76":4}}],["quick",{"0":{"187":1},"2":{"188":1}}],["quickstart",{"0":{"96":1},"2":{"101":1,"153":1}}],["quite",{"2":{"89":1,"91":1,"191":1,"193":1}}],["quoting",{"2":{"123":1}}],["quot",{"2":{"1":2,"8":2,"15":12,"39":4,"45":2,"46":2,"50":8,"52":2,"54":16,"56":2,"57":6,"63":4,"64":1,"65":8,"67":32,"78":2,"79":2,"81":2,"83":2,"87":2,"89":10,"96":4,"123":4,"140":2,"148":2,"158":4,"173":2,"182":8}}],["✖",{"2":{"70":4}}],["✔️",{"2":{"119":17}}],["✔",{"2":{"70":8}}],[
"λ=0",{"2":{"67":1}}],["λ",{"2":{"67":6}}],["λβ",{"2":{"63":1}}],["`θ`",{"2":{"266":1}}],["`x`",{"2":{"266":1}}],["`p",{"2":{"232":1}}],["`ps",{"2":{"154":1}}],["`ps`",{"2":{"147":1}}],["`tasklocalrng`",{"2":{"205":4,"280":2}}],["`training`",{"2":{"163":2,"261":1,"280":1}}],["`replicate`",{"2":{"205":2,"280":1}}],["`carry`",{"2":{"202":2}}],["`iterators",{"2":{"202":1}}],["`eachslice`",{"2":{"202":1}}],["`∇`",{"2":{"194":1}}],["`val",{"2":{"163":2,"261":1,"280":1}}],["`zygote",{"2":{"163":1}}],["`b`",{"2":{"158":2}}],["`denselayerparameters`",{"2":{"155":2}}],["`namedtuple`",{"2":{"155":2}}],["`model",{"2":{"154":1}}],["`lstm",{"2":{"202":1}}],["`linear",{"2":{"154":2}}],["`l",{"2":{"153":1}}],["`luxcore",{"2":{"163":2,"261":1,"280":1}}],["`luxcore`",{"2":{"153":1}}],["`lux",{"2":{"163":2,"261":1,"280":1}}],["`lux``",{"2":{"190":1}}],["`lux`",{"2":{"153":1}}],["```",{"2":{"149":1}}],["`st",{"2":{"154":1}}],["`st`",{"2":{"147":1,"154":1,"294":2}}],["`softmax",{"2":{"77":1}}],["`autoforwarddiff",{"2":{"164":1}}],["`autozygote",{"2":{"164":1}}],["`apply`",{"2":{"133":1}}],["`a`",{"2":{"67":1,"158":3}}],["`",{"2":{"67":1,"69":1,"77":1,"96":2,"149":1,"153":1,"154":1,"163":2,"164":2,"194":1,"261":1,"280":1}}],["`u",{"2":{"67":1}}],["`octavian",{"2":{"64":1}}],["π",{"2":{"67":1,"89":3,"283":1,"292":2,"293":1}}],["√",{"2":{"67":1,"292":2}}],["√t",{"2":{"15":2}}],["⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀200⠀",{"2":{"89":1}}],["⠀0⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀100⠀",{"2":{"89":2}}],["⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀",{"2":{"67":1}}],["⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀",{"2":{"67":32}}],["⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀x",{"2":{"67":1}}],["⠀",{"2":{"67":34}}],["│",{"2":{"200":1,"238":6,"269":2}}],["│⣇⣇⣸⣀⣸⣀⣀⣟⣀⣀⣸⣃⣀⣀⣀⣿⣀⣀⣀⣀⣀⣿⣀⣀⣀⣀⣀⣀⣈⣇⣀⣀⣀⣀⣀⣀⣀⣀⣀⣱│",{"2":{"89":1}}],["│⡇⡇⢸⠇⢸⡇⠀⣿⠀⠀⢣⡇⠀⠀⠸⣄⠇⠀⠀⠀⠸⡀⡇⠀⠀⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⠸⡄│",{"2":{"89":1}}],["│⡇⢸⢸⡇⡇⡇⠸⣰⠃⠀⡇⡸⠀⠀⢸⠀⡜⠀⠀⠀⢣⠀⢸⠁⠀⠀⠀⠈⡆⠀⠀⠀⠀⠀⠀⠀⠀⠈⢇⠀│",{"2":{"89":1}}],["│⡇⢸⡇⡇⡇⡇⢸⠀⡇⠈⡆⢰⠁⠀⡇⠀⢰⠁⠀⠈⡆⠀⠀⡎⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀│",{"2":{"89":1}}],["│⢸⠀⡇⢸⠀⠀⡇⠀⠀⡇⠀⠀⠀⡇⠀⠀⠀⠀⣿⠀⠀⠀⠀⠀⠀⣿⠀⠀⠀⠀⠀⠀⠀⠀⢱⡀⠀⠀⠀⠀│",{"2":{"
89":1}}],["│⢸⢸⡇⡇⡇⡸⢸⠀⡎⢸⠀⠀⡇⠈⡆⠀⠀⡇⠀⢸⠀⠀⠀⢰⠁⠀⠘⡆⠀⠀⠀⠀⠀⠀⠀⠀⠸⡄⠀⠀│",{"2":{"89":1}}],["│⢸⢸⡇⡸⡇⢸⡇⠀⢸⢸⠀⠀⡜⢸⠀⠀⠀⢸⠀⡇⠀⠀⠀⠀⡎⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠘⡆⠀⠀⠀│",{"2":{"89":1}}],["│⢸⢸⡇⢸⠀⢸⡇⠀⢸⡇⠀⠀⢸⢇⠀⠀⠀⢀⠿⡀⠀⠀⠀⠀⢰⠛⡄⠀⠀⠀⠀⠀⠀⠀⠀⢣⠀⠀⠀⠀│",{"2":{"89":1}}],["│⢸⢸⡇⢸⠀⢸⡇⠀⢸⡇⠀⠀⢸⡎⠀⠀⠀⠈⣶⠁⠀⠀⠀⠀⠸⣤⠃⠀⠀⠀⠀⠀⠀⠘⡆⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⢸⢸⡇⢱⡇⢸⡇⠀⢸⢸⠀⠀⢣⢸⠀⠀⠀⢸⠀⡇⠀⠀⠀⠀⢇⠀⡜⠀⠀⠀⠀⠀⠈⢇⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⢠⢻⡇⡇⡇⢱⢸⠀⢇⢸⠀⠀⡇⢀⠇⠀⠀⡇⠀⢸⠀⠀⠀⠸⡀⠀⢠⠇⠀⠀⠀⠀⢱⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⡠⠴⠒⠊⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠒⠒⠢⠤⢄⣀⡀⠀⠀⠀⠀⠱⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠎⠀│",{"2":{"67":1}}],["│⠒⠒⠒⠒⠒⠊⠉⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⣒⣒⣒⣒⣒⣊⣉⣉⣉⣉⣁⣀⣀⡠⠤⠒⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠊⣉⠤⠔⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠤⠤⠖⠒⠒⠒⠒⠒⠒⠒⠉⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⣤⡤⠤⠤⠤⠤⠤⠤⡷⠶⠶⠶⠶⠶⠮⠭⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"67":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⣤⡤⠤⠤⠤⠤⠤⠤⡧⠤⠤⠤⠤⠶⠮⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"67":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⣤⣤⡤⡧⠴⠶⠯⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"67":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡯⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"67":2}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡤⡷⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"67":2}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡤⡧⠶⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"67":2}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⣤⣤⡤⡧⠶⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"67":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⢤⡤⠤⣤⣤⢤⣤⣤⠤⠤⠤⢼⠮⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"67":1}}],["│⠤⠤⠤⠤⠤⠤⠤⠤⠤⣤⣤⣤⣤⡤⠤⠤⠤⠤⠤⠤⡷⠶⠶⠶⠶⠶⠾⠿⠯⠭⠭⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"67":1}}],["│⠤⠤⠤⠤⠔⠒⠒⠒⠊⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠃⠉⠙⠘⠃⠈⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⣰⢀⣆⡄⣄⡄⡠⡰⠦⠷⡜⢢⠷⠳⠢⠊⠉⠉⠀⠀⠁⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⡤⠖⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠉⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉│",{"2":{"89":1}}],["│⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⢣⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡜│",{"2":{"67":1}}],["│⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⣉⠭⠛⡏⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│",{"2":{"67":1}}],["│⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠒⠒⠤⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸│",{"2":{"67":1}}],["│⠉⠒⠒⠒⠒⠉⠉⠉⠉⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠉⠑⠒⠒⠒⠒⠒⠒⠒⠒⠒⠒⠉⠉⠉⠉⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠉⠢⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠋│",{"2":{"67":1}}],["│⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠢⣄⠀⠀⠈⠣⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠁⠀⠀⣀⠔│",{"2":{"67":1}}],["│⠢⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔│",{"2":{"67":2}}],["│⠀⣿⡇⡇⡇⡇⢸⠀⡇⢀⠇⠸⡀⠀⡇⠀⠸⡀⠀⢀⠇⠀⠀⢇⠀⠀⠀⡸⠀⠀⠀⠸⡄⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⣿⢸⡇⡇⡇⢰⠹⡄⠀⡇⢱⠀⠀⢸⠀⢣⠀⠀⠀⡜⠀⢸⡀⠀⠀⠀⢀⠇⠀⠈⡇⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⡇⢸⡆⢸⡇⠀⣿⠀⠀⡜⡇⠀⠀⢰⠋⡆⠀⠀⠀⢰⠁⡇⠀⠀⠀⠀⠀⡸⠀⢣⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⡀⢸⠀⢸⠀⠀⣧⠀⠀⢸⡄⠀⠀⠀⣷⠀⠀⠀⠀⠀⣷⠀⠀⠀⠀⠀⠀⢀⣿⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⢀⡠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⣀⠀│",{"2":{"89":1}}],["│⠀⢀⣀⡠⠤⠖⢒⣋⠭⠗⠒⠉⠁⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⣀⠤⠔⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⢀⠴⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠘⢄⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⢠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠣⡀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⣀⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠀⠀⠀⠑⠢⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠊⠁⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⢀⠎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢦⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠈⠣⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠁⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⡰⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠱⡀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⢀⡜⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢇⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⣀⡠⠴⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⢰⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠱⡄⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⠈⠉⠑⠒⠦⢄⣘⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡴⠃⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⢀⠞⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢆⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⢀⣀⠤⠖⠒⠉⠁⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⢀⡤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠑⠦⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠊⠁⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⣠⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠳⡀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠑⢆⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⢰⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠱⡀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⡎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢦⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⢀⡎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⢣⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⠤⠤⠒⠒⠋⠉⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⠦⣌⡓⢄⡀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⢀⡠⠖⣁⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⠊⠀⠀⠀⠀⠀⠀⠀⠀⠀⢣⡀⠀⠀⠀⠀
⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠞⠀⠀⠀⠀⠀⠀⠀⠀⠈⢆⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⠦⡀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠢⡄⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⠔⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⠎⠁⠀⠀⠀⠀⠀⠈⢢⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⠤⠤⠒⢋⠕⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⠤⠤⠒⠋⠁⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠒⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⢄⠀⠀⠀⠀⠀⠀⠀⠀⢠⡇⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡔⠁⠀⠀⠀⠀⠀⠘⢄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⢀⣀⠤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠖⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠔⠒⠉⠁⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠔⠒⠉│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⠦⡀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⢀⡤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⢄⡀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⢀⡠⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠚⠉⠉⠉⠢⡄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣠⠚⠉⠉⠉⠢⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"89":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⠪⠷⣦⣄⣀⣀⣇⣀⣀⣤⠶⠕⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⡏⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⣠⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠴⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠑⡖⠦⢄⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⢔⠏⠁⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠓⣄⠀⠀⠀⠀⠀⢠⡞⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⡤⠤⠒⠊⠉⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠊⠉⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠑⠢⢄⣀⣀⣇⣀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⠦⣄⣀⣀⣇⣀⣀⠤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠣⣄⠀⠉⠑⠒⠦⠤⢄⣀⣀⣀⣀⡠⠤⠖⣊⠕⠁⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠊⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡴⠃⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠁⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠓⠤⡀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠁⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⡧⠤⠒⠊⣉⠥⠚⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⡧⠤⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⡗⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠒⠢⠤⠤⠔⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣧⡞⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⣀⡠⠤⠒⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⢀⣀⠤⠖⠊⠉⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠤⠒│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⢀⡠⠔⠊⠁⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢸⠀⠀⠀⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⡔⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⢀⡤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⢀⡤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⣀⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⡔⣃⡤⠖⠒⠋⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⡔⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⡠⠔⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⣀⡤⠖⠒⠋⠉⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⣀⠤⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":3}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⡠⠖⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡤⠖⠊⠉⠉⠉⣉⣉⣉⣉⣉⠭⠭⠭⠭⠭│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡤⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⢀⡠⠖⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⢀⣀⡤⠔⠊⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}
}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⢀⡠⠔⠊⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⣀⡤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⣀⡠⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⣀⡠⢤⡲⠝⠋⠁⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⣀⡠⠤⠒⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⣀⡤⠒⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⡠⠎⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⡠⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⢀⡠⠖⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⡤⠃⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⠔⠋⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⡠⠤⠖⠒⠒⠋⠉⠉⠉⠉⠉⠉│",{"2":{"67":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⣀⡤⠖⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":3}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⣀⡤⠤⣒⣋⠥⠤⠒⠊⠉⠁⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⣀⡤⠤⠒⠋⠁⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⣀⡠⠤⠒⠊⠉⠁⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠁⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋⠉⠉⠉⠉⠉⠉⠉⠉│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠖⠋⠁⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣀⣀⣀⣀⠤⠤⠤⠤⠤│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⣤⡴⠞⠋⠁⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠔⠊⠁⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠎⠉⠉⠉⠉⠉⠉⠉⠉│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠔⠋⠁⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠊⠁⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠉⠀⠀⠀⠀│",{"2":{"67":3}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⡤⠔⠒⣉⡡│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⡤⠔⠒⠉⠁│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠤⠖⠊│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":8}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠋│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠖⠋│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉│",{"2":{"67":3}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠖⠋⠁⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠊⠁⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣠⡴⠞⠋⠁│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡠⠔⠊⠁⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡤⠒⠁⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠔⠊⠁⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠤⠤⠒⠒⠒⠊⠉⠉⠉│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠒⠉⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡧⠤⠔⠒⠒⠒⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉⠉│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⡏⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠈⠓⣄⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢠⠇⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠖⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡔⠋⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠓⢄⣀⣀⡤⢣⠃⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠖⠋⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⠤⠊⠁⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":4}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣀⡠⠔⠊⠁⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠖⠋⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⣀⣀⠤⠔⠒⠋⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⠀⠀⠀⠀⠀⠀⠀⠀⠈⠉⠉⠉⠉⠉⠉⠉⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠉⠢⢄⡀⠉⠢⡄⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⢀⠔⠋⠀⡠⠔⠋⠁⠀⠀⠀⠀│",{"2":{"67":1}}],["│⠀⠀⠀⠀⠀⠈⠉⠒⠤⣀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠑⢆⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⣸⠁│",{"2":{"67":1}}],["│⠀⠈⠑⠢⣀⠀⠀⠑⢆⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡠⠊⠁⠀⣀⠔⠊⠁⠀│",{"2":{"67":1}}],["│⠀⠈⠑⢦⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⢀⡤⠊⠁⠀│",{"2":{"67":2}}],["│⣀⣀⠔⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠙⢤⣀│",{"2":{"89":1}}],["│⣀⣀⠤⠤⠒⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀
⠀⠀⠀⠀│",{"2":{"67":1}}],["│⣀⣀⣀⡠⠤⠤⠤⠖⠒⠊⠉⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣠⣤⣤⣤⣤⣔⣒⣒⣚⣉⣉⣁⣀⣇⠴⠒⠉⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⣀⣀⣀⣀⣀⣀⣀⡠⠤⠤⠤⠤⠔⠒⠒⠚⠉⠉⠁⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣇⣀⣀⣀⣀⣀⣀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⡧⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣇⠔⠋⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣧⣔⣊⣁⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀│",{"2":{"67":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣇⣤⣤⣖⣚⣉⣁⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀│",{"2":{"67":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⣀⠔⠋⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":1}}],["│⣀⣀⣀⣀⣀⣀⣀⣀⣀⠤⠊⠁⠀⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":2}}],["│⣀⣀⣀⣀⣀⣀⣀⠤⠤⠤⠒⠊⠉⠀⠀⠀⠀⠀⠀⠀⡇⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀│",{"2":{"67":4}}],["│⠑⠒⠢⠤⣄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠉⠓⢄⡀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⡇│",{"2":{"67":1}}],["│⣤⣤⣤⣤⣤⣤⣤⣤⡤⠤⠤⠤⠤⠤⠤⠤⣤⣤⣤⡤⡧⠶⠶⠭⠥⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤⠤│",{"2":{"67":1}}],["σ=identity",{"2":{"65":1}}],["σ",{"2":{"59":10,"61":5,"62":3,"64":3,"65":7,"67":13,"87":4,"176":3,"258":3}}],["^bb0",{"2":{"147":2}}],["^2",{"2":{"78":2,"293":2,"294":1}}],["^",{"2":{"56":1,"65":9,"78":1,"97":1,"123":1,"124":1,"260":1,"292":2}}],["δ",{"2":{"50":1,"81":12,"85":2,"127":3,"218":2}}],["ϵ",{"2":{"50":1,"258":2}}],["ŷ",{"2":{"50":6}}],["α=0",{"2":{"89":1}}],["α=1",{"2":{"67":2,"83":1}}],["α",{"2":{"50":3,"63":3,"67":5,"76":1,"83":1,"89":3,"218":2,"266":3}}],["∈",{"2":{"50":2,"78":3,"83":1,"292":1}}],["∗y+α∗size",{"2":{"50":1}}],["∗y+α∗0",{"2":{"50":1}}],["∗y^−logσ",{"2":{"50":1}}],["∗log⁡",{"2":{"50":1}}],["−j2πωkn",{"2":{"89":1}}],["−log⁡",{"2":{"50":1}}],["−∑y~log⁡",{"2":{"50":1}}],["−",{"2":{"50":1}}],["−y~∗log⁡",{"2":{"50":1}}],["−x",{"2":{"15":1}}],["≈∑θ∼p",{"2":{"266":1}}],["≈",{"2":{"50":25,"56":1,"67":2,"78":2,"89":3,"149":1,"150":1,"192":2}}],["β=0",{"2":{"83":1,"89":1}}],["β",{"2":{"46":1,"65":4,"83":1,"89":3,"218":2}}],["γ=0",{"2":{"50":2}}],["γ",{"2":{"46":1,"50":1,"65":4,"218":2}}],["⋅n+z⋅hprevarguments",{"2":{"43":1}}],["↗",{"2":{"39":2}}],["↘",{"2":{"39":2}}],["→",{"2":{"39":5,"200":34,"230":56,"241":37,"263":147,"269":56,"29
1":158}}],["8\\ttrain",{"2":{"280":1}}],["8c79",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["8x8x16x4xi1>",{"2":{"147":2}}],["8x8x16x4xf32>",{"2":{"147":8}}],["868265560298705e",{"2":{"296":1}}],["8685084f",{"2":{"294":1}}],["868",{"2":{"291":1}}],["86434166798617e",{"2":{"296":1}}],["86434f",{"2":{"294":1}}],["864",{"2":{"280":3}}],["864781",{"2":{"261":1}}],["865",{"2":{"280":8}}],["865611",{"2":{"280":1}}],["865606",{"2":{"280":1}}],["865604",{"2":{"280":1}}],["865601",{"2":{"280":1}}],["865598",{"2":{"280":1}}],["865595",{"2":{"280":1}}],["865592",{"2":{"280":1}}],["865589",{"2":{"280":1}}],["865586",{"2":{"280":1}}],["865583",{"2":{"280":1}}],["865569",{"2":{"280":1}}],["8652344",{"2":{"261":1}}],["863690\\tthroughput",{"2":{"287":1}}],["863456",{"2":{"280":1}}],["863452",{"2":{"280":1}}],["863449",{"2":{"280":1}}],["863446",{"2":{"280":1}}],["863443",{"2":{"280":1}}],["863440",{"2":{"280":1}}],["863437",{"2":{"280":1}}],["863434",{"2":{"280":1}}],["863431",{"2":{"280":1}}],["863428",{"2":{"280":1}}],["863415",{"2":{"280":1}}],["8632812",{"2":{"261":1}}],["860",{"2":{"291":1}}],["8603811",{"2":{"270":1}}],["86081326",{"2":{"270":1}}],["8601828553599953",{"2":{"192":1}}],["8601870861800127",{"2":{"171":2}}],["866",{"2":{"291":1}}],["86614174",{"2":{"270":2}}],["8667",{"2":{"265":1}}],["86624",{"2":{"147":1}}],["861",{"2":{"291":1}}],["861394",{"2":{"280":1}}],["861390",{"2":{"280":1}}],["861387",{"2":{"280":1}}],["861384",{"2":{"280":1}}],["861381",{"2":{"280":1}}],["861378",{"2":{"280":1}}],["861375",{"2":{"280":1}}],["861371",{"2":{"280":1}}],["861368",{"2":{"280":1}}],["861365",{"2":{"280":1}}],["861349",{"2":{"280":1}}],["86135",{"2":{"280":1}}],["8613281",{"2":{"261":1}}],["861023e",{"2":{"165":1}}],["86",{"2":{"246":1,"263":1,"280":15}}],["862480861009647",{"2":{"171":2}}],["86299",{"2":{"147":1}}],["86290836",{"2":{"143":1}}],["869375914682902e",{"2":{"296":1}}],["869286041744173e",{"2"
:{"296":1}}],["869243f",{"2":{"294":1}}],["869927",{"2":{"280":1}}],["869922",{"2":{"280":1}}],["869919",{"2":{"280":1}}],["869916",{"2":{"280":1}}],["869913",{"2":{"280":1}}],["869911",{"2":{"280":1}}],["869908",{"2":{"280":1}}],["869905",{"2":{"280":1}}],["869902",{"2":{"280":1}}],["869899",{"2":{"280":1}}],["869885",{"2":{"280":1}}],["86975616",{"2":{"270":1}}],["8694s",{"2":{"261":1}}],["8691406",{"2":{"261":2}}],["869",{"2":{"214":1,"280":9,"291":1}}],["869632",{"2":{"163":1}}],["86957",{"2":{"147":1}}],["867",{"2":{"291":1}}],["867798",{"2":{"280":1}}],["867794",{"2":{"280":1}}],["867791",{"2":{"280":1}}],["867786",{"2":{"280":1}}],["867784",{"2":{"280":1}}],["867781",{"2":{"280":1}}],["867778",{"2":{"280":1}}],["867775",{"2":{"280":1}}],["867772",{"2":{"280":1}}],["867769",{"2":{"280":1}}],["867755",{"2":{"280":1}}],["8671875",{"2":{"261":1}}],["8674763752817414",{"2":{"171":1}}],["8674763752817416",{"2":{"171":1}}],["867202",{"2":{"163":1}}],["867646",{"2":{"147":1}}],["854",{"2":{"291":1}}],["8541596",{"2":{"163":1}}],["855",{"2":{"291":2}}],["855268",{"2":{"280":1}}],["855263",{"2":{"280":1}}],["855261",{"2":{"280":1}}],["855258",{"2":{"280":1}}],["855255",{"2":{"280":1}}],["855252",{"2":{"280":1}}],["855249",{"2":{"280":1}}],["855246",{"2":{"280":1}}],["855243",{"2":{"280":1}}],["855240",{"2":{"280":1}}],["855224",{"2":{"280":1}}],["8554s",{"2":{"261":1}}],["852",{"2":{"280":3}}],["857314",{"2":{"280":1}}],["857309",{"2":{"280":1}}],["857306",{"2":{"280":1}}],["857303",{"2":{"280":1}}],["857300",{"2":{"280":1}}],["8573820474674845",{"2":{"189":1}}],["857297",{"2":{"280":1}}],["857294",{"2":{"280":1}}],["857291",{"2":{"280":1}}],["857288",{"2":{"280":1}}],["857285",{"2":{"280":1}}],["857271",{"2":{"280":1}}],["8571",{"2":{"280":6}}],["85747254",{"2":{"270":1}}],["853198",{"2":{"280":1}}],["853195",{"2":{"280":1}}],["853192",{"2":{"280":1}}],["853189",{"2":{"280":1}}],["853186",{"2":{"280":1}}],["853183",{"2":{"280":1}}],["853170",{"2":{"280":1}}],["853212"
,{"2":{"280":1}}],["853207",{"2":{"280":1}}],["853204",{"2":{"280":1}}],["853201",{"2":{"280":1}}],["8532",{"2":{"265":1}}],["8536",{"2":{"265":1}}],["853",{"2":{"263":1,"280":1}}],["8534s",{"2":{"261":1}}],["851187",{"2":{"280":1}}],["851182",{"2":{"280":1}}],["851179",{"2":{"280":1}}],["851176",{"2":{"280":1}}],["851173",{"2":{"280":1}}],["851170",{"2":{"280":1}}],["851167",{"2":{"280":1}}],["851164",{"2":{"280":1}}],["851160",{"2":{"280":1}}],["851156",{"2":{"280":1}}],["851143",{"2":{"280":1}}],["85107",{"2":{"280":1}}],["8515625",{"2":{"261":1}}],["85185",{"2":{"235":1}}],["8581494",{"2":{"270":1}}],["858",{"2":{"230":1,"263":2}}],["8560912346856114e",{"2":{"296":1}}],["856181f",{"2":{"294":1}}],["856",{"2":{"230":1,"280":1,"291":1}}],["856428",{"2":{"147":1}}],["85",{"2":{"214":2,"235":4,"246":1,"280":16}}],["850642303425243e",{"2":{"296":1}}],["8506884f",{"2":{"294":1}}],["850",{"2":{"211":2,"291":1}}],["859351",{"2":{"280":1}}],["859346",{"2":{"280":1}}],["859343",{"2":{"280":1}}],["859341",{"2":{"280":1}}],["859338",{"2":{"280":1}}],["859335",{"2":{"280":1}}],["859332",{"2":{"280":1}}],["859329",{"2":{"280":1}}],["859326",{"2":{"280":1}}],["859323",{"2":{"280":1}}],["859309",{"2":{"280":1}}],["859",{"2":{"200":1,"280":8,"291":1}}],["859452+0",{"2":{"186":1}}],["859405",{"2":{"165":1,"166":1}}],["8595328f",{"2":{"123":1}}],["893062571680914e",{"2":{"296":1}}],["893005f",{"2":{"294":1}}],["8931",{"2":{"241":1}}],["8958",{"2":{"291":1}}],["895668",{"2":{"147":1}}],["891154288068659e",{"2":{"296":1}}],["891108f",{"2":{"294":1}}],["891",{"2":{"291":1}}],["891006",{"2":{"147":1}}],["8927280407803166e",{"2":{"296":1}}],["8929433f",{"2":{"294":1}}],["892",{"2":{"291":1}}],["8908595093011135e",{"2":{"296":1}}],["890915f",{"2":{"294":1}}],["8900",{"2":{"291":1}}],["890",{"2":{"280":6}}],["894083646787928e",{"2":{"296":1}}],["894006f",{"2":{"294":1}}],["8946",{"2":{"265":1}}],["894248",{"2":{"147":1}}],["8976378",{"2":{"270":2}}],["8971",{"2":{"265":1}}],["897",{"2":{
"263":1,"280":20}}],["89",{"2":{"246":1,"265":2,"280":10}}],["896",{"2":{"211":2,"280":14,"291":1}}],["899405641953519e",{"2":{"296":1}}],["899405247850415e",{"2":{"296":1}}],["899461f",{"2":{"294":1}}],["899495f",{"2":{"294":1}}],["899",{"2":{"269":1,"280":23}}],["8997",{"2":{"254":1}}],["899766",{"2":{"189":1}}],["899997",{"2":{"166":1}}],["898",{"2":{"280":20}}],["89859",{"2":{"165":1}}],["89874",{"2":{"147":1}}],["825706f",{"2":{"294":1}}],["82273089201041e",{"2":{"296":1}}],["822358358875405e",{"2":{"296":1}}],["822358f",{"2":{"294":1}}],["8226982f",{"2":{"294":1}}],["822",{"2":{"280":1,"291":1}}],["8229s",{"2":{"261":1}}],["823277439518023e",{"2":{"296":1}}],["823947",{"2":{"280":1}}],["823937",{"2":{"280":1}}],["823930",{"2":{"280":1}}],["823924",{"2":{"280":1}}],["823917",{"2":{"280":1}}],["823911",{"2":{"280":1}}],["823904",{"2":{"280":1}}],["823897",{"2":{"280":1}}],["823891",{"2":{"280":1}}],["823884",{"2":{"280":1}}],["823854",{"2":{"280":1}}],["82379496",{"2":{"270":1}}],["826832247611537e",{"2":{"296":1}}],["8265894f",{"2":{"294":1}}],["8262526f",{"2":{"294":1}}],["826244",{"2":{"280":1}}],["826234",{"2":{"280":1}}],["826227",{"2":{"280":1}}],["826221",{"2":{"280":1}}],["826214",{"2":{"280":1}}],["826207",{"2":{"280":1}}],["826201",{"2":{"280":1}}],["826126467847036e",{"2":{"296":1}}],["826104511965575e",{"2":{"296":1}}],["8261045f",{"2":{"294":1}}],["826194",{"2":{"280":1}}],["826185",{"2":{"280":1}}],["826178",{"2":{"280":1}}],["826144",{"2":{"280":1}}],["8261s",{"2":{"261":1}}],["826088\\tval",{"2":{"280":1}}],["8213308f",{"2":{"294":1}}],["8211219681205216e",{"2":{"296":1}}],["821168",{"2":{"287":1}}],["821161",{"2":{"287":1}}],["821159",{"2":{"287":1}}],["821156",{"2":{"287":1}}],["821153",{"2":{"287":1}}],["821151",{"2":{"287":1}}],["821148",{"2":{"287":1}}],["821145",{"2":{"287":1}}],["821142",{"2":{"287":1}}],["821146",{"2":{"261":1}}],["821139",{"2":{"287":1}}],["821113",{"2":{"287":1}}],["821548",{"2":{"280":1}}],["821538",{"2":{"280":1}}],["
164896",{"2":{"166":1}}],["91685375713648e",{"2":{"296":1}}],["9168",{"2":{"265":1}}],["916684",{"2":{"261":1}}],["91667",{"2":{"81":1}}],["939828548297046e",{"2":{"296":1}}],["939884f",{"2":{"294":1}}],["9397587f",{"2":{"123":1}}],["930",{"2":{"291":1}}],["9307",{"2":{"265":1}}],["931",{"2":{"291":1}}],["931977629685243e",{"2":{"296":1}}],["93197",{"2":{"147":1}}],["935870857312072e",{"2":{"296":1}}],["93526192990018e",{"2":{"296":1}}],["93594f",{"2":{"294":1}}],["935177f",{"2":{"294":1}}],["935",{"2":{"280":3,"291":1}}],["9355469",{"2":{"261":1}}],["938",{"2":{"280":4}}],["9382",{"2":{"265":1}}],["938214",{"2":{"261":1}}],["9372884",{"2":{"273":1}}],["9370079",{"2":{"270":2}}],["9360056",{"2":{"270":1}}],["9327640044975404e",{"2":{"296":1}}],["9320526037594694e",{"2":{"296":1}}],["9323952f",{"2":{"294":1}}],["9324214f",{"2":{"294":1}}],["9321865f",{"2":{"294":1}}],["9322",{"2":{"265":1}}],["932",{"2":{"263":1}}],["932907",{"2":{"147":1}}],["93",{"2":{"200":1,"241":1,"246":6,"280":6}}],["933374076758323e",{"2":{"296":1}}],["933300992238576e",{"2":{"296":1}}],["9333175f",{"2":{"294":1}}],["933527f",{"2":{"294":1}}],["9338737",{"2":{"166":1}}],["933106",{"2":{"147":1}}],["934",{"2":{"230":1,"263":1,"280":2}}],["9340098",{"2":{"165":1}}],["93466",{"2":{"147":1}}],["9348226f",{"2":{"123":1}}],["946577355117967e",{"2":{"296":1}}],["9467824f",{"2":{"294":1}}],["946637",{"2":{"147":1}}],["94298194745899e",{"2":{"296":1}}],["9422212377400927e",{"2":{"296":1}}],["9421195f",{"2":{"294":1}}],["9426603f",{"2":{"294":1}}],["942",{"2":{"280":4}}],["942482486300257e",{"2":{"296":1}}],["9424",{"2":{"265":1}}],["9442386560467134e",{"2":{"296":1}}],["944013118965557e",{"2":{"296":1}}],["944",{"2":{"280":3}}],["9449",{"2":{"265":1}}],["94444",{"2":{"81":1}}],["9491734226378475e",{"2":{"296":1}}],["949689003892607e",{"2":{"296":1}}],["949024986692077e",{"2":{"296":1}}],["9490266f",{"2":{"294":1}}],["9492173",{"2":{"270":1}}],["949",{"2":{"263":1,"280":8,"291":1}}],["94879f",{"2":{"294
":1}}],["9487925f",{"2":{"123":1}}],["948",{"2":{"280":7}}],["948015",{"2":{"270":1}}],["948099",{"2":{"261":1}}],["9450424f",{"2":{"294":1}}],["945628",{"2":{"261":1}}],["9452923791531558",{"2":{"171":2}}],["9434475735285212e",{"2":{"296":1}}],["9434f",{"2":{"294":1}}],["943694651903882e",{"2":{"296":1}}],["9436797",{"2":{"197":1}}],["943281f",{"2":{"294":1}}],["9431573f",{"2":{"294":1}}],["9439437f",{"2":{"294":1}}],["9437694f",{"2":{"294":1}}],["943",{"2":{"263":1,"280":6,"291":1}}],["9433594",{"2":{"261":1}}],["94387",{"2":{"254":1}}],["94",{"2":{"246":6,"280":17}}],["9415703",{"2":{"270":1}}],["9414062",{"2":{"261":1}}],["941",{"2":{"200":1,"269":1,"280":8}}],["941303",{"2":{"147":1}}],["9405848223512736",{"2":{"192":4}}],["940585",{"2":{"189":1}}],["940073",{"2":{"147":1}}],["940139",{"2":{"147":1}}],["9475934369674194e",{"2":{"296":1}}],["947595f",{"2":{"294":1}}],["9472796554823704e",{"2":{"296":1}}],["947429f",{"2":{"294":1}}],["9470537f",{"2":{"294":1}}],["9470896",{"2":{"146":1}}],["947371",{"2":{"147":1}}],["977832855146648e",{"2":{"296":1}}],["977469f",{"2":{"123":1}}],["979991458023563e",{"2":{"296":1}}],["97999f",{"2":{"294":1}}],["9799082",{"2":{"270":1}}],["97957f",{"2":{"123":1}}],["974",{"2":{"263":1}}],["974060",{"2":{"261":1}}],["9746094",{"2":{"261":1}}],["973815844367526e",{"2":{"296":1}}],["973901f",{"2":{"294":1}}],["973904",{"2":{"261":1}}],["973196\\ttrain",{"2":{"280":1}}],["9734574",{"2":{"163":1}}],["970944",{"2":{"261":1}}],["9705f",{"2":{"123":1}}],["975165302043561e",{"2":{"296":1}}],["97511f",{"2":{"294":1}}],["975813",{"2":{"261":1}}],["975088",{"2":{"147":1}}],["97",{"2":{"246":3,"263":1,"265":1,"280":7,"291":3}}],["9766699015845924",{"2":{"192":4}}],["97667",{"2":{"189":1}}],["971431",{"2":{"147":1}}],["971899",{"2":{"126":1}}],["9725",{"2":{"265":1}}],["9721554734769875",{"2":{"171":2}}],["9727281",{"2":{"165":1}}],["972335",{"2":{"147":1}}],["97222",{"2":{"81":1}}],["922617776541621e",{"2":{"296":1}}],["9227306301632845e",{"2":
{"296":1}}],["9224338f",{"2":{"294":1}}],["922374f",{"2":{"294":1}}],["9229235f",{"2":{"294":1}}],["92295444",{"2":{"270":1}}],["922240",{"2":{"261":1}}],["9291751862516236e",{"2":{"296":1}}],["92913383",{"2":{"270":2}}],["929734409168132e",{"2":{"296":1}}],["929657f",{"2":{"294":1}}],["929",{"2":{"291":1}}],["9295",{"2":{"230":1}}],["928",{"2":{"280":5}}],["92863405",{"2":{"270":1}}],["928609",{"2":{"163":1}}],["92880917",{"2":{"270":1}}],["9280",{"2":{"265":1}}],["9287109",{"2":{"261":1}}],["927642194308173e",{"2":{"296":1}}],["92744761801707e",{"2":{"296":1}}],["927937f",{"2":{"294":1}}],["9275405f",{"2":{"294":1}}],["927",{"2":{"280":4}}],["927238",{"2":{"261":1}}],["92788666",{"2":{"123":1}}],["9233653",{"2":{"270":1}}],["9238",{"2":{"269":1}}],["9238281",{"2":{"261":1}}],["923950547913545",{"2":{"171":2}}],["92",{"2":{"246":4,"280":8}}],["925494368529956e",{"2":{"296":1}}],["925051617807157e",{"2":{"296":1}}],["925019f",{"2":{"294":1}}],["925",{"2":{"280":6}}],["9257812",{"2":{"261":1}}],["9253",{"2":{"254":1}}],["92593",{"2":{"235":1,"237":1}}],["9258",{"2":{"200":1}}],["924",{"2":{"230":1,"280":1,"291":2}}],["924262",{"2":{"147":1}}],["920205403827503e",{"2":{"296":1}}],["920894f",{"2":{"294":1}}],["920",{"2":{"239":1}}],["9206433",{"2":{"163":1}}],["920099f",{"2":{"123":1}}],["926346f",{"2":{"294":1}}],["926",{"2":{"280":3}}],["92652786",{"2":{"270":1}}],["926518",{"2":{"123":1}}],["9269",{"2":{"265":1,"291":1}}],["92663",{"2":{"147":1}}],["921002\\tval",{"2":{"280":1}}],["92155594",{"2":{"194":4}}],["9214194",{"2":{"166":1}}],["921201",{"2":{"147":1}}],["921374",{"2":{"147":1}}],["921",{"2":{"56":1}}],["96497f",{"2":{"294":1}}],["964",{"2":{"280":5}}],["964035",{"2":{"254":1}}],["963044f",{"2":{"294":2}}],["963052",{"2":{"270":1}}],["963",{"2":{"280":8}}],["96357",{"2":{"280":2}}],["960991",{"2":{"270":1}}],["96062994",{"2":{"270":2}}],["967143996219695e",{"2":{"296":1}}],["9673",{"2":{"265":1}}],["9670442",{"2":{"123":1}}],["966966f",{"2":{"294":1}}],["96
6",{"2":{"280":1}}],["9668",{"2":{"265":1}}],["966289",{"2":{"147":1}}],["968504",{"2":{"270":2}}],["968",{"2":{"263":1,"269":1,"280":1}}],["965247146236758e",{"2":{"296":1}}],["965581",{"2":{"261":1}}],["965117",{"2":{"143":1}}],["96",{"2":{"246":8,"280":7}}],["961",{"2":{"200":1}}],["969030203080347e",{"2":{"296":1}}],["9690475f",{"2":{"294":1}}],["9693534027106658e",{"2":{"296":1}}],["9693907",{"2":{"166":1}}],["9698942f",{"2":{"294":1}}],["96927",{"2":{"147":1}}],["962753388145616e",{"2":{"296":1}}],["962192808182691e",{"2":{"296":1}}],["962",{"2":{"269":1,"291":1}}],["96296",{"2":{"235":1,"237":1}}],["9626999",{"2":{"165":1}}],["96227f",{"2":{"96":1}}],["9g",{"2":{"97":1}}],["908",{"2":{"280":8}}],["90597f",{"2":{"294":1}}],["90554",{"2":{"280":3}}],["9055119",{"2":{"270":2}}],["905",{"2":{"280":19}}],["90537953",{"2":{"270":1}}],["90508",{"2":{"143":1,"146":1}}],["9031531937412355e",{"2":{"296":1}}],["903",{"2":{"269":1,"280":14}}],["9032",{"2":{"265":1}}],["904667782658295e",{"2":{"296":1}}],["9048661387528156e",{"2":{"296":1}}],["904794f",{"2":{"294":1}}],["9044973f",{"2":{"294":1}}],["904",{"2":{"280":20}}],["9049499",{"2":{"270":1}}],["904907",{"2":{"261":1}}],["9043228",{"2":{"163":1}}],["90651088822658e",{"2":{"296":1}}],["906372699849947e",{"2":{"296":1}}],["9064158f",{"2":{"294":1}}],["906",{"2":{"263":1,"280":8,"291":1}}],["9062500",{"2":{"261":1}}],["90661",{"2":{"205":1}}],["9020681f",{"2":{"294":1}}],["902",{"2":{"280":18}}],["90215015",{"2":{"270":1}}],["902982902377702",{"2":{"171":2}}],["902204",{"2":{"147":1}}],["901013\\ttrain",{"2":{"280":1}}],["901",{"2":{"230":1,"280":20,"291":1}}],["9019142280758281",{"2":{"171":2}}],["901397",{"2":{"147":1}}],["9091032",{"2":{"166":1}}],["9094505",{"2":{"163":1}}],["90908",{"2":{"147":1}}],["9070533",{"2":{"270":1}}],["907",{"2":{"263":2,"280":8}}],["9073486e",{"2":{"166":1}}],["9076533",{"2":{"163":1}}],["90772897",{"2":{"143":1}}],["9000",{"2":{"287":1}}],["900330",{"2":{"287":1}}],["9001",{"2":{"197":1
,"254":1}}],["900",{"2":{"124":1,"280":20}}],["90",{"2":{"82":2,"246":15,"280":3,"291":1}}],["995052838194472e",{"2":{"296":1}}],["995047f",{"2":{"294":1}}],["99575",{"2":{"88":1}}],["996",{"2":{"280":6}}],["996490e",{"2":{"220":1}}],["994869224486138e",{"2":{"296":1}}],["9949754f",{"2":{"294":1}}],["994",{"2":{"280":4}}],["9940",{"2":{"265":1}}],["994433",{"2":{"147":1}}],["991372",{"2":{"261":1}}],["99125",{"2":{"88":1}}],["993",{"2":{"280":3}}],["9931641",{"2":{"261":1}}],["9930s\\ttraining",{"2":{"235":1}}],["990327688345358e",{"2":{"296":1}}],["990632522912101e",{"2":{"296":1}}],["9906325f",{"2":{"294":1}}],["990",{"2":{"269":2}}],["99047",{"2":{"254":1}}],["990099f0",{"2":{"67":1}}],["997851592357001e",{"2":{"296":1}}],["997832516337966e",{"2":{"296":1}}],["9979205f",{"2":{"294":1}}],["9979343f",{"2":{"294":1}}],["99799",{"2":{"254":1}}],["997003",{"2":{"96":6}}],["992126",{"2":{"270":2}}],["9920",{"2":{"200":1}}],["992662",{"2":{"163":1}}],["99",{"2":{"166":1,"246":5,"280":2,"291":1}}],["998722876844519e",{"2":{"296":1}}],["9980469",{"2":{"261":1}}],["998",{"2":{"88":1}}],["999049f",{"2":{"294":1}}],["99900760833609",{"2":{"50":1}}],["999",{"2":{"96":7,"272":1,"274":2}}],["9999881",{"2":{"127":2}}],["999986",{"2":{"126":1}}],["99998605",{"2":{"126":1}}],["9999092f0",{"2":{"67":1}}],["9999546f0",{"2":{"67":2}}],["9",{"2":{"39":2,"50":4,"67":4,"79":34,"80":1,"81":3,"83":6,"84":1,"87":1,"96":9,"97":1,"114":1,"123":1,"143":2,"146":2,"147":2,"163":1,"165":1,"166":2,"189":1,"197":1,"200":6,"205":2,"210":1,"213":1,"214":2,"230":12,"231":1,"235":52,"237":10,"238":3,"241":6,"242":2,"246":2,"254":3,"261":2,"263":41,"265":4,"269":18,"272":1,"274":2,"280":17,"291":32,"294":73,"296":88}}],["×dn−2×1×1",{"2":{"65":1}}],["×dn−2×1×1`",{"2":{"46":1}}],["×dn−2×1×dn",{"2":{"46":1,"65":1}}],["×",{"2":{"19":6,"198":1,"201":2,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["≥",{"2":{"19":2,"51":1,"67":4,"89":1,"254":1,"285":1,"287
":1}}],["zstd",{"2":{"263":1,"291":1}}],["zlib",{"2":{"263":1}}],["znver2",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["zipfile",{"2":{"230":1}}],["zip",{"2":{"124":1,"254":1,"278":1,"285":2}}],["zips",{"2":{"23":1}}],["zyg",{"2":{"123":2}}],["zygotetrackerext",{"2":{"291":1}}],["zygotecolorsext",{"2":{"241":2,"291":2}}],["zygoterules",{"2":{"241":1,"263":1,"291":1}}],["zygotevjp",{"2":{"238":13}}],["zygote",{"2":{"18":1,"19":1,"49":1,"50":1,"56":2,"70":1,"87":4,"93":1,"96":3,"119":1,"120":3,"123":3,"127":2,"157":2,"158":2,"162":3,"163":1,"164":1,"165":3,"166":3,"170":3,"188":1,"191":1,"193":3,"194":4,"199":1,"209":1,"230":1,"241":4,"291":3}}],["zenodo",{"2":{"90":3}}],["zeroing",{"2":{"23":4,"52":1}}],["zero",{"2":{"15":3,"23":3,"40":1,"41":3,"44":1,"50":2,"51":2,"52":9,"67":1,"84":2,"88":2,"164":1,"189":1}}],["zeroed",{"2":{"15":3}}],["zerosc64",{"2":{"16":1}}],["zerosc32",{"2":{"16":1}}],["zerosc16",{"2":{"16":1}}],["zeros64",{"2":{"16":1}}],["zeros32",{"2":{"16":1,"153":2}}],["zeros16",{"2":{"16":1}}],["zeros",{"2":{"15":3,"16":6,"78":3,"79":5,"80":1,"82":2,"85":7,"89":2,"171":2,"189":1,"253":3,"264":1,"265":1,"285":1}}],["z=",{"2":{"70":1}}],["z=σ",{"2":{"43":1}}],["z",{"2":{"44":3,"70":2,"80":4,"89":5,"258":8,"260":3,"261":2,"265":2,"266":8,"283":7,"288":7}}],["≤",{"2":{"15":2,"44":2,"70":2,"76":1,"89":1,"285":2}}],["ylabel=",{"2":{"255":1,"264":1,"270":1,"274":1,"283":1,"293":1,"294":1,"297":2}}],["ys",{"2":{"255":6}}],["y∈",{"2":{"253":1}}],["yᵢ",{"2":{"124":2}}],["y=x2−2x",{"2":{"270":1}}],["y=x−e",{"2":{"46":1,"65":1}}],["y==0",{"2":{"89":1}}],["year",{"2":{"90":2}}],["yes",{"2":{"89":1,"92":1}}],["yet",{"2":{"35":1,"53":1,"89":1,"123":1,"277":1}}],["yrot",{"2":{"89":3}}],["yuxin",{"2":{"65":1}}],["yann",{"2":{"50":1}}],["y+ϵ",{"2":{"50":1}}],["yi∈rm",{"2":{"197":1}}],["yi",{"2":{"50":2,"197":1}}],["y~=",{"2":{"50":2}}],["y~",{"2":{"50":4}}],["y^2+y∗max",{"2":{"50":1}}],["y^−y∗log⁡",{"2"
:{"50":1}}],["y^−y",{"2":{"50":1}}],["y^",{"2":{"50":7}}],["y^+ϵ",{"2":{"50":3}}],["ŷ",{"2":{"50":11,"86":8,"163":2,"164":2,"165":2,"204":2,"205":5}}],["y3",{"2":{"39":1}}],["y2",{"2":{"39":1,"80":1,"264":2}}],["y1",{"2":{"39":1,"80":1,"264":2}}],["y",{"2":{"15":3,"19":5,"39":8,"40":2,"42":9,"44":10,"45":6,"46":4,"50":63,"51":3,"52":6,"55":1,"56":10,"60":2,"61":2,"65":1,"70":3,"76":1,"80":4,"81":2,"85":4,"86":3,"87":5,"89":25,"96":1,"97":6,"123":11,"124":2,"127":2,"134":4,"153":3,"154":4,"163":7,"164":7,"171":2,"178":5,"197":11,"201":4,"202":5,"203":3,"204":12,"205":12,"210":6,"212":2,"213":2,"218":4,"231":6,"234":2,"235":2,"242":4,"243":2,"245":2,"246":2,"253":7,"255":1,"258":5,"260":3,"264":1,"270":5,"274":5,"279":9,"283":5,"284":11,"292":6,"294":3}}],["yoshua",{"2":{"15":2}}],["yourself",{"2":{"228":1}}],["your",{"0":{"137":1},"2":{"35":2,"46":3,"56":2,"67":1,"68":1,"102":1,"125":2,"127":1,"141":1,"151":1,"153":2,"154":2,"155":3,"161":2,"162":1,"163":2,"175":1,"179":1,"181":2,"206":1}}],["you",{"2":{"2":1,"6":2,"24":2,"35":1,"49":3,"56":5,"67":4,"68":1,"69":1,"72":3,"76":1,"87":1,"89":5,"90":2,"95":3,"96":1,"97":1,"98":1,"114":1,"122":1,"123":2,"124":2,"125":3,"126":2,"127":2,"128":1,"137":2,"141":1,"144":1,"148":3,"151":1,"153":9,"154":6,"155":5,"157":2,"161":2,"162":5,"163":4,"164":1,"171":1,"174":1,"175":2,"181":2,"186":1,"189":1,"190":3,"191":1,"193":1,"199":1,"203":1,"206":1,"227":1,"228":1,"235":2,"243":1,"253":1,"261":1,"271":1,"280":1,"294":1}}],["5\\ttrain",{"2":{"280":1}}],["5g",{"2":{"274":1}}],["5gb",{"2":{"239":1,"247":1}}],["5+0",{"2":{"239":1,"247":1}}],["5fs",{"2":{"246":1}}],["5f",{"2":{"205":3,"235":2}}],["5f0",{"2":{"67":8,"89":2,"143":1,"146":1,"170":1,"204":1,"264":12}}],["5e9a32e7af2",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["56431967259246e",{"2":{"296":1}}],["5642",{"2":{"263":1}}],["5644531",{"2":{"261":1}}],["564142f",{"2":{"294":1}}],["5641",{"2":{"147":1}}],["5623
08\\tval",{"2":{"280":1}}],["562357",{"2":{"270":1}}],["5625000",{"2":{"261":1}}],["5627227",{"2":{"163":1}}],["561",{"2":{"241":1,"263":3,"291":1}}],["5613775f",{"2":{"123":1}}],["560515662882762e",{"2":{"296":1}}],["5605157f",{"2":{"294":1}}],["56055",{"2":{"189":1}}],["560",{"2":{"239":1,"247":1}}],["56",{"2":{"235":1,"246":3,"280":26}}],["563968906779954e",{"2":{"296":1}}],["5639845f",{"2":{"294":1}}],["5638073752387038e",{"2":{"296":1}}],["563868841405652e",{"2":{"296":1}}],["563867f",{"2":{"294":1}}],["5635985f",{"2":{"294":1}}],["563",{"2":{"230":1}}],["565181422905006e",{"2":{"296":1}}],["5651641f",{"2":{"294":1}}],["565",{"2":{"263":1,"291":1}}],["565469e",{"2":{"220":1}}],["56542",{"2":{"147":1}}],["56538",{"2":{"205":1}}],["5674",{"2":{"269":1}}],["5679",{"2":{"265":1}}],["5679s\\ttraining",{"2":{"235":1}}],["5670",{"2":{"263":1}}],["567",{"2":{"263":3}}],["5675557670686644",{"2":{"192":1}}],["56779",{"2":{"189":1}}],["567794",{"2":{"189":2}}],["567746",{"2":{"186":1}}],["568299359469238e",{"2":{"296":1}}],["568298",{"2":{"147":1}}],["5680174f",{"2":{"294":1}}],["5680565f",{"2":{"294":1}}],["5680085614623662",{"2":{"171":2}}],["568935\\tval",{"2":{"280":1}}],["568",{"2":{"263":5,"280":1}}],["568888",{"2":{"261":1}}],["56817",{"2":{"147":1}}],["5692",{"2":{"287":1}}],["56926",{"2":{"147":1}}],["569339",{"2":{"287":1}}],["569892\\ttrain",{"2":{"280":1}}],["569",{"2":{"263":2}}],["569012",{"2":{"147":1}}],["570313726075517e",{"2":{"296":1}}],["570324",{"2":{"270":1}}],["57007f",{"2":{"294":1}}],["5708s\\ttraining",{"2":{"235":1}}],["579873625268897e",{"2":{"296":1}}],["579598f",{"2":{"294":1}}],["5799",{"2":{"265":1}}],["5797962f",{"2":{"294":1}}],["5797",{"2":{"147":1,"280":1}}],["572545103509233e",{"2":{"296":1}}],["572647f",{"2":{"294":1}}],["572315f",{"2":{"294":1}}],["5728182f",{"2":{"294":1}}],["572",{"2":{"263":1,"291":1}}],["573044155891073e",{"2":{"296":1}}],["5733",{"2":{"265":1}}],["573",{"2":{"263":1}}],["5737s\\ttraining",{"2":{"235":1}}],["5715
22860804267e",{"2":{"296":1}}],["5715213f",{"2":{"294":1}}],["571318979492865e",{"2":{"296":1}}],["571319f",{"2":{"294":1}}],["5714",{"2":{"280":3}}],["5712",{"2":{"265":1}}],["571",{"2":{"263":1,"280":1,"291":1}}],["5719s\\ttraining",{"2":{"237":1}}],["5718s\\ttraining",{"2":{"235":1}}],["5718",{"2":{"147":1,"292":2}}],["575",{"2":{"263":3}}],["5750s\\ttraining",{"2":{"235":1}}],["57555",{"2":{"147":1}}],["5747520285184625e",{"2":{"296":1}}],["574",{"2":{"263":2}}],["5744s\\ttraining",{"2":{"235":1}}],["5742s\\ttraining",{"2":{"235":1}}],["5745s\\ttraining",{"2":{"235":1}}],["574128f",{"2":{"123":1}}],["576822008828056e",{"2":{"296":1}}],["5768774f",{"2":{"294":1}}],["576786621522814e",{"2":{"296":1}}],["57671577",{"2":{"155":1}}],["5769644f",{"2":{"294":1}}],["5769",{"2":{"291":1}}],["576",{"2":{"230":1,"263":2,"280":6,"291":1}}],["57806788398924e",{"2":{"296":1}}],["578872205579631e",{"2":{"296":1}}],["578243732",{"2":{"254":2}}],["578",{"2":{"230":1,"263":1,"280":3}}],["578686",{"2":{"147":1}}],["57s",{"2":{"214":1}}],["57",{"2":{"200":1,"235":4,"241":1,"246":1,"263":1,"280":19,"291":1}}],["577646869960186e",{"2":{"296":1}}],["5776052f0",{"2":{"50":1}}],["577562f",{"2":{"294":1}}],["57744",{"2":{"205":1}}],["577181",{"2":{"189":1}}],["5779067839062536",{"2":{"171":2}}],["5918049184321998e",{"2":{"296":1}}],["5918604f",{"2":{"294":1}}],["591231677906073e",{"2":{"296":1}}],["591978254079201e",{"2":{"296":1}}],["5919569827766042e",{"2":{"296":1}}],["5910605f",{"2":{"294":1}}],["591752f",{"2":{"294":1}}],["591793",{"2":{"147":1}}],["5911",{"2":{"291":1}}],["591",{"2":{"291":1}}],["59365081777121e",{"2":{"296":1}}],["593563170156848e",{"2":{"296":1}}],["593582f",{"2":{"294":1}}],["593546f",{"2":{"294":1}}],["593f",{"2":{"294":1}}],["5930",{"2":{"291":1}}],["593",{"2":{"263":1,"280":2,"291":1}}],["59377456",{"2":{"147":1}}],["598",{"2":{"280":1}}],["5980846275178036e",{"2":{"296":1}}],["5980862f",{"2":{"294":1}}],["5980",{"2":{"263":1}}],["598391712",{"2":{"254":2}}],
["598188",{"2":{"147":1}}],["595737",{"2":{"261":1}}],["595",{"2":{"230":1,"263":1,"280":1,"291":1}}],["595443",{"2":{"186":1}}],["59s",{"2":{"214":4}}],["590970593475195e",{"2":{"296":1}}],["5905511",{"2":{"270":2}}],["590",{"2":{"200":1,"263":1,"291":2}}],["59",{"2":{"166":1,"214":1,"241":1,"246":4,"263":1,"265":1,"280":631,"291":1}}],["5943626",{"2":{"270":1}}],["594",{"2":{"263":1,"280":7}}],["5947266",{"2":{"261":1}}],["594261",{"2":{"166":1}}],["594664",{"2":{"147":1}}],["5975793909755746e",{"2":{"296":1}}],["5975794f",{"2":{"294":1}}],["59751064",{"2":{"96":1}}],["597",{"2":{"263":5,"291":1}}],["5979974",{"2":{"163":1}}],["596318",{"2":{"147":1}}],["599",{"2":{"291":1}}],["599733\\ttrain",{"2":{"280":1}}],["5996094",{"2":{"261":1}}],["599632",{"2":{"147":1}}],["5999714",{"2":{"143":1}}],["592762f",{"2":{"294":1}}],["59277534",{"2":{"123":1}}],["5922284f",{"2":{"294":1}}],["5929635100818784e",{"2":{"296":1}}],["592982",{"2":{"270":1}}],["5929",{"2":{"265":1}}],["592",{"2":{"263":1,"280":3}}],["59259",{"2":{"237":1}}],["5925",{"2":{"230":1}}],["592152",{"2":{"147":1}}],["586",{"2":{"263":1,"269":1,"291":1}}],["5865265807660498",{"2":{"171":2}}],["585",{"2":{"263":1,"280":3,"291":1}}],["58597",{"2":{"186":1}}],["5844289897328725e",{"2":{"296":1}}],["584434140166735e",{"2":{"296":1}}],["5847765372435807e",{"2":{"296":1}}],["5847781f",{"2":{"294":1}}],["584",{"2":{"263":1}}],["5843s\\ttraining",{"2":{"235":1}}],["5829416f",{"2":{"294":1}}],["5826772",{"2":{"270":2}}],["582",{"2":{"263":3}}],["5809698548712676e",{"2":{"296":1}}],["58017106253865e",{"2":{"296":1}}],["58088f",{"2":{"294":1}}],["58065f",{"2":{"294":1}}],["580",{"2":{"263":1}}],["5800781",{"2":{"261":1}}],["5803s\\ttraining",{"2":{"235":1}}],["5807s\\ttraining",{"2":{"235":1}}],["58",{"2":{"235":5,"237":1,"280":8,"291":1}}],["58s",{"2":{"214":1}}],["58762349086803e",{"2":{"296":1}}],["5873464f",{"2":{"294":1}}],["587352",{"2":{"205":1}}],["58715504",{"2":{"270":1}}],["587",{"2":{"263":2,"280":4,"291":1
}}],["58721",{"2":{"189":1}}],["58720636",{"2":{"189":1}}],["58720636f0",{"2":{"189":1}}],["587206",{"2":{"189":2}}],["587816",{"2":{"165":1}}],["588",{"2":{"263":1}}],["5880",{"2":{"241":1}}],["588045",{"2":{"186":1}}],["5886s\\ttraining",{"2":{"235":1}}],["588749",{"2":{"147":1}}],["58873f",{"2":{"96":1}}],["581354",{"2":{"147":1}}],["5898740521908645e",{"2":{"296":1}}],["5898585f",{"2":{"294":1}}],["589782f",{"2":{"294":1}}],["589929\\tthroughput",{"2":{"287":1}}],["5896s\\ttraining",{"2":{"235":1}}],["589",{"2":{"230":1,"263":1}}],["589140e",{"2":{"220":1}}],["589177",{"2":{"147":1}}],["58912802",{"2":{"124":1}}],["5890444",{"2":{"147":1}}],["5835",{"2":{"265":1}}],["583",{"2":{"263":4}}],["583191",{"2":{"147":1}}],["583349f",{"2":{"294":1}}],["5833",{"2":{"241":1}}],["58338",{"2":{"147":1}}],["58333",{"2":{"81":1}}],["58328676",{"2":{"147":1}}],["53589711832492e",{"2":{"296":1}}],["535",{"2":{"280":2}}],["53522",{"2":{"280":1}}],["5354",{"2":{"265":1}}],["5301196637787933e",{"2":{"296":1}}],["5301181",{"2":{"147":1}}],["5306",{"2":{"291":1}}],["530390\\tthroughput",{"2":{"287":1}}],["530",{"2":{"269":1}}],["5341797",{"2":{"261":1}}],["5340",{"2":{"241":1}}],["5343s\\ttraining",{"2":{"235":1}}],["5348",{"2":{"230":1}}],["5345728",{"2":{"96":1}}],["533",{"2":{"200":1,"241":1}}],["53333336f0",{"2":{"67":1}}],["5316129887754148e",{"2":{"296":1}}],["5315974f",{"2":{"294":1}}],["5314368f",{"2":{"294":1}}],["531",{"2":{"280":1}}],["5317138603176017e",{"2":{"296":1}}],["5317",{"2":{"241":1}}],["5310851",{"2":{"197":1}}],["53181f",{"2":{"96":1}}],["538993233130405e",{"2":{"296":1}}],["538980323372986e",{"2":{"296":1}}],["5389185f",{"2":{"294":1}}],["538632\\tval",{"2":{"280":1}}],["5386295f",{"2":{"123":1}}],["5385857",{"2":{"270":1}}],["5381",{"2":{"265":1}}],["5381101016248151",{"2":{"171":2}}],["538",{"2":{"263":1}}],["538367",{"2":{"261":1}}],["538796",{"2":{"261":1}}],["5364738280550114e",{"2":{"296":1}}],["53614f",{"2":{"294":1}}],["536155\\tthroughput",{"2":{"287
":1}}],["5365513f",{"2":{"294":1}}],["5365167",{"2":{"166":1}}],["536",{"2":{"291":1}}],["5367613",{"2":{"165":1}}],["539008366440695e",{"2":{"296":1}}],["5395174f",{"2":{"294":1}}],["539191f",{"2":{"294":1}}],["539292f",{"2":{"294":1}}],["5392",{"2":{"291":1}}],["53926",{"2":{"147":1}}],["539",{"2":{"280":3}}],["5394",{"2":{"241":1}}],["539333",{"2":{"147":1}}],["53978",{"2":{"147":1}}],["5374121369268324e",{"2":{"296":1}}],["537721",{"2":{"147":1}}],["53753",{"2":{"147":1}}],["53789175",{"2":{"96":1}}],["532207332072905e",{"2":{"296":1}}],["532207f",{"2":{"294":1}}],["532299",{"2":{"147":1}}],["532076\\tthroughput",{"2":{"287":1}}],["5320771",{"2":{"147":1}}],["5329",{"2":{"241":1}}],["532",{"2":{"241":1,"291":2}}],["5323",{"2":{"241":1}}],["5325065f",{"2":{"123":1}}],["53",{"2":{"127":1,"214":4,"246":5,"269":1,"280":20}}],["5265173f",{"2":{"294":1}}],["526",{"2":{"291":2}}],["5261",{"2":{"269":1}}],["5267549745189349",{"2":{"171":2}}],["520916708362308e",{"2":{"296":1}}],["5200",{"2":{"291":1}}],["5204",{"2":{"265":1}}],["5203194",{"2":{"147":1}}],["521194f",{"2":{"294":1}}],["521",{"2":{"280":2,"291":1}}],["5213735990850587e",{"2":{"296":1}}],["5213",{"2":{"265":1}}],["5212719f",{"2":{"294":1}}],["5212",{"2":{"265":1}}],["5210742834996898",{"2":{"171":2}}],["52",{"2":{"214":1,"246":1,"254":77,"265":1,"269":1,"280":15}}],["524",{"2":{"291":2}}],["524228",{"2":{"287":1}}],["5246",{"2":{"265":1}}],["5249",{"2":{"200":1}}],["524008",{"2":{"189":1}}],["5247808",{"2":{"166":1}}],["5285188399938945e",{"2":{"296":1}}],["528563",{"2":{"147":1}}],["5284862f",{"2":{"294":1}}],["5284",{"2":{"241":1}}],["52876\\taccuracy",{"2":{"205":1}}],["5281",{"2":{"90":2}}],["5292097631191662e",{"2":{"296":1}}],["529258",{"2":{"147":1}}],["5291409f",{"2":{"294":1}}],["529",{"2":{"269":1}}],["5293",{"2":{"254":1}}],["529416im",{"2":{"186":1}}],["529701",{"2":{"147":1}}],["522",{"2":{"280":2}}],["5224609",{"2":{"261":1}}],["5221656",{"2":{"197":1}}],["522138",{"2":{"166":1}}],["522541",{"
2":{"147":1}}],["52272",{"2":{"147":1}}],["522238f",{"2":{"123":1}}],["52564013",{"2":{"147":1}}],["5255064f",{"2":{"123":1}}],["527058117738606e",{"2":{"296":1}}],["527",{"2":{"291":2}}],["527924",{"2":{"280":1}}],["527559",{"2":{"270":2}}],["527162",{"2":{"261":1}}],["527171",{"2":{"165":1}}],["5271416",{"2":{"134":1}}],["52778",{"2":{"81":1}}],["54102412068817e",{"2":{"296":1}}],["54106732759271e",{"2":{"296":1}}],["541777333852939e",{"2":{"296":1}}],["5417522044284086e",{"2":{"296":1}}],["5417983f",{"2":{"294":1}}],["5419303f",{"2":{"294":1}}],["541368570010827e",{"2":{"296":1}}],["5413144f",{"2":{"294":1}}],["5413",{"2":{"265":1,"291":1}}],["547",{"2":{"263":1,"291":1}}],["5476424498276177",{"2":{"192":4}}],["547642",{"2":{"189":1}}],["5486969525076378e",{"2":{"296":1}}],["5489919f",{"2":{"294":1}}],["5489093",{"2":{"270":1}}],["548569\\ttest",{"2":{"280":1}}],["5484",{"2":{"265":1}}],["548",{"2":{"263":2,"291":1}}],["548012",{"2":{"147":1}}],["54601397702874e",{"2":{"296":1}}],["546014f",{"2":{"294":1}}],["546",{"2":{"263":1}}],["545282037516485e",{"2":{"296":1}}],["545525f",{"2":{"294":1}}],["545",{"2":{"263":1,"291":2}}],["5458s\\ttraining",{"2":{"235":1}}],["5499725f",{"2":{"294":1}}],["5490334f",{"2":{"294":1}}],["549",{"2":{"263":2}}],["54983395f0",{"2":{"67":2}}],["540530700940394e",{"2":{"296":1}}],["540567f",{"2":{"294":1}}],["5406592",{"2":{"270":1}}],["5406784f",{"2":{"123":1}}],["540",{"2":{"238":8}}],["5428822719140895e",{"2":{"296":1}}],["542816\\ttrain",{"2":{"280":1}}],["5422",{"2":{"291":1}}],["542682",{"2":{"287":1}}],["5426s\\ttraining",{"2":{"237":1}}],["542727",{"2":{"261":1}}],["54248",{"2":{"186":1}}],["543848307821684e",{"2":{"296":1}}],["543198f",{"2":{"294":1}}],["543",{"2":{"263":1,"280":3}}],["5437s\\ttraining",{"2":{"237":1}}],["5436s\\ttraining",{"2":{"237":1}}],["544968",{"2":{"261":1}}],["5446",{"2":{"230":1}}],["54465",{"2":{"147":1}}],["544",{"2":{"200":1,"263":1}}],["54",{"2":{"89":1,"214":1,"235":1,"269":1,"280":7,"291":1}}],
["5×3",{"2":{"189":8}}],["5×3×1",{"2":{"80":1}}],["5×10×1×1",{"2":{"81":1}}],["5×9",{"2":{"79":1}}],["5×5",{"2":{"15":1,"88":1,"189":3}}],["5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀",{"2":{"67":12}}],["5⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀0⠀",{"2":{"67":1}}],["5x5x6x16xf32>",{"2":{"147":3}}],["5x5x1x6xf32>",{"2":{"147":3}}],["5x",{"2":{"67":1}}],["5∗δ",{"2":{"50":1}}],["5∗|y−y^|2if",{"2":{"50":1}}],["552",{"2":{"291":1}}],["5524933572539585731",{"2":{"269":1}}],["5525",{"2":{"265":1}}],["5579",{"2":{"265":1}}],["55744",{"2":{"147":1}}],["5574339",{"2":{"147":1}}],["551747207908346e",{"2":{"296":1}}],["5517578",{"2":{"261":1}}],["551",{"2":{"291":1}}],["5511811",{"2":{"270":2}}],["55131584",{"2":{"123":1}}],["555",{"2":{"241":1,"263":1}}],["5553797706980106",{"2":{"192":1}}],["55556",{"2":{"81":2,"235":1}}],["5592",{"2":{"287":1}}],["5590551",{"2":{"270":2}}],["5593",{"2":{"265":1}}],["559",{"2":{"263":1}}],["5598s\\ttraining",{"2":{"235":1}}],["55989707",{"2":{"166":1}}],["5595843665394066",{"2":{"192":1}}],["5597971",{"2":{"147":1}}],["553579\\tval",{"2":{"280":1}}],["5535s\\ttraining",{"2":{"235":1}}],["553",{"2":{"263":2}}],["5530449923320618e",{"2":{"296":1}}],["5530s\\ttraining",{"2":{"237":1}}],["55306894",{"2":{"165":1}}],["5533s\\ttraining",{"2":{"237":1}}],["5532s\\ttraining",{"2":{"235":1}}],["5532",{"2":{"189":1}}],["553633",{"2":{"165":1}}],["554363854785273e",{"2":{"296":1}}],["554",{"2":{"263":1,"291":2}}],["5544s\\ttraining",{"2":{"235":1}}],["554008",{"2":{"147":1}}],["55419f",{"2":{"96":1}}],["550026545484949e",{"2":{"296":1}}],["550",{"2":{"263":1,"291":1}}],["550935",{"2":{"186":1}}],["550164",{"2":{"147":1}}],["5505513",{"2":{"147":1}}],["558",{"2":{"263":1}}],["5584",{"2":{"200":1}}],["55841",{"2":{"147":1}}],["55864",{"2":{"147":1}}],["5563364812823254e",{"2":{"296":1}}],["556338f",{"2":{"294":1}}],["556733f",{"2":{"294":1}}],["556019\\ttrain",{"2":{"280":1}}],["5565324",{"2":{"123":1}}],["556467",{"2":{"123":1}}],["55",{"2":{"50":1,"237":1,"24
6":1,"265":1,"280":7}}],["5where",{"2":{"50":1}}],["5d",{"2":{"47":1,"80":2,"220":1,"295":1}}],["516",{"2":{"280":1,"291":1}}],["5164032",{"2":{"197":1}}],["519354832205625e",{"2":{"296":1}}],["5192774f",{"2":{"294":1}}],["519",{"2":{"291":1}}],["519665",{"2":{"270":1}}],["51968503",{"2":{"270":2}}],["51987",{"2":{"241":1}}],["513753214535398e",{"2":{"296":1}}],["513248f",{"2":{"294":1}}],["513476f",{"2":{"294":1}}],["513",{"2":{"269":1,"280":3}}],["513944",{"2":{"261":1}}],["51385",{"2":{"147":1}}],["5170",{"2":{"265":1}}],["5171875",{"2":{"270":1}}],["5171",{"2":{"265":1}}],["51712",{"2":{"186":1}}],["5175781",{"2":{"261":1}}],["518435",{"2":{"280":1}}],["518900",{"2":{"280":1}}],["5182",{"2":{"241":1}}],["5181",{"2":{"241":1}}],["51852",{"2":{"235":1}}],["518659",{"2":{"147":1}}],["5109",{"2":{"230":1}}],["514409789703838e",{"2":{"296":1}}],["514463186952786e",{"2":{"296":1}}],["514570825462949e",{"2":{"296":1}}],["5146139f",{"2":{"294":1}}],["514605\\ttrain",{"2":{"280":1}}],["5146",{"2":{"263":1}}],["514",{"2":{"200":1}}],["514301",{"2":{"147":1}}],["515333612354784e",{"2":{"296":1}}],["515301f",{"2":{"294":1}}],["515",{"2":{"280":1,"291":2}}],["5158",{"2":{"263":1}}],["51524514",{"2":{"197":1}}],["5155236",{"2":{"147":1}}],["511",{"2":{"263":1,"280":12}}],["511911",{"2":{"261":1}}],["511156",{"2":{"147":1}}],["51152f",{"2":{"96":1}}],["512534f",{"2":{"294":1}}],["5125",{"2":{"287":1}}],["512984\\tthroughput",{"2":{"287":1}}],["51285f",{"2":{"96":1}}],["512",{"2":{"56":3,"253":1}}],["51",{"2":{"46":3,"126":1,"220":2,"246":3,"254":1,"265":1,"280":11,"291":1}}],["5076018665202704e",{"2":{"296":1}}],["507163f",{"2":{"294":1}}],["5075692f",{"2":{"294":1}}],["5078125",{"2":{"261":1}}],["509",{"2":{"291":1}}],["509239",{"2":{"147":1}}],["503860913922144e",{"2":{"296":1}}],["5039164f",{"2":{"294":1}}],["503",{"2":{"280":1}}],["50378",{"2":{"147":1}}],["504214049809485e",{"2":{"296":1}}],["504",{"2":{"263":2}}],["504491",{"2":{"147":1}}],["506144f",{"2":{"294":1}}],["5
063776117056426e",{"2":{"296":1}}],["5063",{"2":{"263":1,"291":1}}],["506982",{"2":{"147":1}}],["5026965",{"2":{"273":1}}],["502",{"2":{"254":2,"263":1,"280":8}}],["5027",{"2":{"241":1,"291":1}}],["502312852219817",{"2":{"50":1}}],["508",{"2":{"280":3}}],["5080",{"2":{"265":1}}],["5087",{"2":{"230":1}}],["5083343",{"2":{"147":1}}],["5058451121565134e",{"2":{"296":1}}],["5051",{"2":{"263":1}}],["50507\\taccuracy",{"2":{"205":1}}],["50502",{"2":{"147":1}}],["505",{"2":{"200":1}}],["50536",{"2":{"147":1}}],["5018",{"2":{"241":1}}],["5016018",{"2":{"165":1}}],["501695",{"2":{"147":1}}],["50136f",{"2":{"96":1}}],["50063384",{"2":{"270":1}}],["5005703",{"2":{"270":1}}],["5000×30×1",{"2":{"265":1}}],["50000",{"2":{"254":50}}],["5000",{"2":{"254":1,"265":5,"280":1,"287":1}}],["5001",{"2":{"197":1,"254":1}}],["50096",{"2":{"147":1}}],["500",{"2":{"84":1,"124":1,"201":2,"263":3}}],["50",{"2":{"40":2,"46":3,"84":1,"97":1,"200":1,"205":1,"214":1,"246":126,"254":1,"261":3,"265":1,"269":1,"274":1,"280":1,"287":1}}],["5",{"2":{"15":8,"23":2,"37":4,"40":2,"46":6,"50":15,"52":5,"56":11,"65":9,"67":33,"77":2,"78":3,"79":24,"80":7,"81":47,"83":5,"84":14,"85":4,"88":4,"89":5,"90":1,"96":11,"114":1,"119":1,"123":55,"132":5,"134":1,"146":1,"147":13,"165":2,"166":2,"186":6,"189":29,"190":1,"191":3,"195":3,"197":6,"200":11,"205":4,"211":14,"214":2,"218":1,"230":15,"235":5,"236":2,"237":1,"238":40,"239":2,"241":18,"246":2,"247":2,"258":1,"261":3,"263":51,"264":4,"265":20,"266":1,"269":11,"270":9,"274":2,"278":1,"280":12,"285":2,"291":46,"292":4,"293":2,"294":759,"296":718,"297":3}}],["6\\ttrain",{"2":{"280":1}}],["6f",{"2":{"261":2,"280":3,"287":2}}],["6e",{"2":{"220":1}}],["6d",{"2":{"197":1,"254":2,"287":2}}],["6xf32>",{"2":{"147":2}}],["6x1x5x5xf32>",{"2":{"147":2}}],["656",{"2":{"291":1}}],["656396",{"2":{"280":1}}],["656391",{"2":{"280":1}}],["656388",{"2":{"280":1}}],["656385",{"2":{"280":1}}],["656382",{"2":{"280":1}}],["656379",{"2":{"280":1}}],["656376",{"2":{"280":1}}],["656373",{
"2":{"280":1}}],["656370",{"2":{"280":1}}],["656367",{"2":{"280":1}}],["656355",{"2":{"280":1}}],["658514074062714e",{"2":{"296":1}}],["65856f",{"2":{"294":1}}],["658",{"2":{"291":2}}],["6584",{"2":{"265":1}}],["6599",{"2":{"287":1}}],["659400",{"2":{"280":1}}],["659395",{"2":{"280":1}}],["659392",{"2":{"280":1}}],["659389",{"2":{"280":1}}],["659386",{"2":{"280":1}}],["659383",{"2":{"280":1}}],["659380",{"2":{"280":1}}],["659377",{"2":{"280":1}}],["659374",{"2":{"280":1}}],["659370",{"2":{"280":1}}],["659359",{"2":{"280":1}}],["6595012",{"2":{"147":1}}],["6524158",{"2":{"270":1}}],["652",{"2":{"269":1}}],["6521",{"2":{"200":1}}],["6533643887533557e",{"2":{"296":1}}],["6533817f",{"2":{"294":1}}],["653225",{"2":{"280":1}}],["653211",{"2":{"280":1}}],["653205",{"2":{"280":1}}],["653198",{"2":{"280":1}}],["653191",{"2":{"280":1}}],["653185",{"2":{"280":1}}],["653178",{"2":{"280":1}}],["653171",{"2":{"280":1}}],["653164",{"2":{"280":1}}],["653156",{"2":{"280":1}}],["653130",{"2":{"280":1}}],["6535434",{"2":{"270":2}}],["653",{"2":{"263":1}}],["654445166646065e",{"2":{"296":1}}],["65445",{"2":{"280":3}}],["654575479460169e",{"2":{"296":1}}],["654202f",{"2":{"294":1}}],["6543",{"2":{"254":3}}],["65476",{"2":{"200":1}}],["65",{"2":{"214":1,"246":10,"280":34}}],["6575044",{"2":{"270":1}}],["6572",{"2":{"254":1}}],["657",{"2":{"200":1}}],["657670184695808",{"2":{"171":2}}],["6510675433865744e",{"2":{"296":1}}],["651",{"2":{"200":2}}],["65178",{"2":{"165":1}}],["65176255",{"2":{"147":1}}],["655654999622882e",{"2":{"296":1}}],["6556360448565952",{"2":{"192":1}}],["655534686413504e",{"2":{"296":1}}],["6551064034667553e",{"2":{"296":1}}],["6557743f",{"2":{"294":1}}],["6554806f",{"2":{"294":1}}],["655881",{"2":{"189":1}}],["650011609270983e",{"2":{"296":1}}],["6500547f",{"2":{"294":1}}],["650561506556804e",{"2":{"296":1}}],["65058047",{"2":{"123":1}}],["650203",{"2":{"147":1}}],["650",{"2":{"46":3,"263":1,"280":1}}],["69559273648144e",{"2":{"296":1}}],["695886933111921e",{"2":{"29
6":1}}],["6958885f",{"2":{"294":1}}],["695629f",{"2":{"294":1}}],["695020201126724e",{"2":{"296":1}}],["695016",{"2":{"280":1}}],["695011",{"2":{"280":1}}],["695008",{"2":{"280":1}}],["695005",{"2":{"280":1}}],["695002",{"2":{"280":1}}],["699935738377742e",{"2":{"296":1}}],["699846f",{"2":{"294":1}}],["699",{"2":{"280":2}}],["699691\\ttrain",{"2":{"280":1}}],["691116246527768e",{"2":{"296":1}}],["691047f",{"2":{"294":1}}],["691019\\ttrain",{"2":{"280":1}}],["691",{"2":{"291":1}}],["691973",{"2":{"280":1}}],["691969",{"2":{"280":1}}],["691966",{"2":{"280":1}}],["691963",{"2":{"280":1}}],["691960",{"2":{"280":1}}],["691957",{"2":{"280":1}}],["691954",{"2":{"280":1}}],["691951",{"2":{"280":1}}],["691948",{"2":{"280":1}}],["691944",{"2":{"280":1}}],["691933",{"2":{"280":1}}],["6913705",{"2":{"163":1}}],["690608435210316e",{"2":{"296":1}}],["690377356828372e",{"2":{"296":1}}],["690341f",{"2":{"294":1}}],["6905395f",{"2":{"294":1}}],["690",{"2":{"280":3}}],["6909446526419574",{"2":{"171":2}}],["698153148654638e",{"2":{"296":1}}],["698364f",{"2":{"294":1}}],["698017",{"2":{"280":1}}],["698013",{"2":{"280":1}}],["698010",{"2":{"280":1}}],["698007",{"2":{"280":1}}],["698004",{"2":{"280":1}}],["698001",{"2":{"280":1}}],["698",{"2":{"280":2,"291":1}}],["698953",{"2":{"280":1}}],["6985s\\ttraining",{"2":{"235":1}}],["6948114f",{"2":{"294":1}}],["694997",{"2":{"280":1}}],["694994",{"2":{"280":1}}],["694991",{"2":{"280":1}}],["694988",{"2":{"280":1}}],["694985",{"2":{"280":1}}],["694973",{"2":{"280":1}}],["6946",{"2":{"265":1}}],["694",{"2":{"263":1,"280":1}}],["6944s\\ttraining",{"2":{"235":1}}],["69444",{"2":{"81":1}}],["692657",{"2":{"273":1}}],["692",{"2":{"263":1}}],["697998",{"2":{"280":1}}],["697995",{"2":{"280":1}}],["697992",{"2":{"280":1}}],["697988",{"2":{"280":1}}],["697978",{"2":{"280":1}}],["697829",{"2":{"261":1}}],["697421",{"2":{"147":1}}],["693934\\tval",{"2":{"280":1}}],["6930",{"2":{"269":1}}],["6932",{"2":{"254":3}}],["693396",{"2":{"147":1}}],["69",{"2":{"13
2":1,"214":1,"230":1,"235":2,"280":17,"291":1}}],["682826061856272e",{"2":{"296":1}}],["6828815f",{"2":{"294":1}}],["68299204",{"2":{"147":1}}],["689063",{"2":{"280":1}}],["689058",{"2":{"280":1}}],["689055",{"2":{"280":1}}],["689052",{"2":{"280":1}}],["689049",{"2":{"280":1}}],["689046",{"2":{"280":1}}],["689043",{"2":{"280":1}}],["689040",{"2":{"280":1}}],["689037",{"2":{"280":1}}],["689034",{"2":{"280":1}}],["689023",{"2":{"280":1}}],["689576\\tval",{"2":{"280":1}}],["68927",{"2":{"270":1}}],["6893",{"2":{"265":1}}],["6815048238749006e",{"2":{"296":1}}],["68142f",{"2":{"294":1}}],["681684900062376e",{"2":{"296":1}}],["681681f",{"2":{"123":1}}],["681658f",{"2":{"294":1}}],["681639f",{"2":{"294":1}}],["6816",{"2":{"291":1}}],["6812",{"2":{"265":1}}],["680232",{"2":{"280":1}}],["680227",{"2":{"280":1}}],["680224",{"2":{"280":1}}],["680221",{"2":{"280":1}}],["680218",{"2":{"280":1}}],["680215",{"2":{"280":1}}],["680212",{"2":{"280":1}}],["680209",{"2":{"280":1}}],["680206",{"2":{"280":1}}],["680202",{"2":{"280":1}}],["680191",{"2":{"280":1}}],["680",{"2":{"263":1}}],["680748",{"2":{"143":1}}],["68",{"2":{"235":3,"237":2,"246":14,"280":12}}],["6845",{"2":{"291":1}}],["684",{"2":{"230":1,"291":1}}],["6846315",{"2":{"147":1}}],["683150",{"2":{"280":1}}],["683145",{"2":{"280":1}}],["683143",{"2":{"280":1}}],["683140",{"2":{"280":1}}],["683137",{"2":{"280":1}}],["683134",{"2":{"280":1}}],["683131",{"2":{"280":1}}],["683127",{"2":{"280":1}}],["683124",{"2":{"280":1}}],["683121",{"2":{"280":1}}],["683110",{"2":{"280":1}}],["6835938",{"2":{"261":1}}],["6830945313415872",{"2":{"192":1}}],["683456",{"2":{"147":1}}],["686146",{"2":{"280":1}}],["686141",{"2":{"280":1}}],["686138",{"2":{"280":1}}],["686135",{"2":{"280":1}}],["686132",{"2":{"280":1}}],["686129",{"2":{"280":1}}],["686126",{"2":{"280":1}}],["686123",{"2":{"280":1}}],["686120",{"2":{"280":1}}],["686116",{"2":{"280":1}}],["686103",{"2":{"280":1}}],["686",{"2":{"230":1}}],["6867144",{"2":{"166":1}}],["68639",{"2":{"147
":1}}],["686369",{"2":{"147":1}}],["685444614630656e",{"2":{"296":1}}],["685894639342157e",{"2":{"296":1}}],["6858583f",{"2":{"294":1}}],["685743f",{"2":{"294":1}}],["685129f",{"2":{"294":1}}],["6850394",{"2":{"270":2}}],["685079",{"2":{"189":1}}],["68539",{"2":{"166":1}}],["6856",{"2":{"147":1}}],["68522483",{"2":{"123":1}}],["688",{"2":{"280":3}}],["6884766",{"2":{"261":1}}],["688665",{"2":{"166":1}}],["6886741",{"2":{"143":1}}],["6882201",{"2":{"147":1}}],["6879709296454276e",{"2":{"296":1}}],["687611",{"2":{"186":1}}],["687264",{"2":{"166":1}}],["687716",{"2":{"147":1}}],["6877121",{"2":{"147":1}}],["6875213",{"2":{"147":1}}],["613980434982519e",{"2":{"296":1}}],["6138905f",{"2":{"294":1}}],["61375",{"2":{"280":3}}],["6135",{"2":{"200":1}}],["614279288111362e",{"2":{"296":1}}],["614648f",{"2":{"294":1}}],["614",{"2":{"280":7}}],["61417323",{"2":{"270":2}}],["6143286",{"2":{"143":1}}],["618",{"2":{"280":6}}],["6184501",{"2":{"270":1}}],["6186",{"2":{"265":1}}],["6181641",{"2":{"261":1}}],["617487891816275e",{"2":{"296":1}}],["61768365",{"2":{"273":1}}],["617",{"2":{"263":1,"291":1}}],["615725036940715e",{"2":{"296":1}}],["6157415f",{"2":{"294":1}}],["6151",{"2":{"291":1}}],["615",{"2":{"263":1,"280":6,"291":1}}],["61507\\taccuracy",{"2":{"205":1}}],["616",{"2":{"263":1}}],["6113281",{"2":{"261":1}}],["611",{"2":{"230":1,"263":1,"280":4}}],["611584190904272",{"2":{"171":2}}],["619",{"2":{"263":1,"280":6,"291":1}}],["6191406",{"2":{"261":1}}],["619161e",{"2":{"220":1}}],["61977077",{"2":{"123":1}}],["61",{"2":{"214":1,"246":2,"280":7}}],["61216",{"2":{"280":3}}],["612179",{"2":{"186":1}}],["612",{"2":{"263":1,"280":1}}],["6123047",{"2":{"261":1}}],["6120",{"2":{"15":1}}],["610460166741359e",{"2":{"296":1}}],["6103024242692998e",{"2":{"296":1}}],["6103799f",{"2":{"294":1}}],["610755f",{"2":{"294":1}}],["6105",{"2":{"291":1}}],["610",{"2":{"166":1,"263":1,"269":1,"280":4}}],["6244",{"2":{"287":1}}],["624",{"2":{"280":10}}],["624383955429775",{"2":{"192":1}}],["624384
",{"2":{"189":1}}],["627",{"2":{"280":7}}],["6279",{"2":{"230":1}}],["6238638942811507e",{"2":{"296":1}}],["6231064f",{"2":{"294":1}}],["623907f",{"2":{"294":1}}],["623",{"2":{"269":1,"280":1}}],["628371742866767e",{"2":{"296":1}}],["6283391f",{"2":{"294":1}}],["6288941",{"2":{"270":1}}],["628",{"2":{"263":2,"291":2}}],["628994",{"2":{"147":1}}],["626",{"2":{"280":9}}],["6268",{"2":{"265":1}}],["62683",{"2":{"261":1}}],["62625877100596",{"2":{"189":1}}],["62626415",{"2":{"147":1}}],["62900804485342e",{"2":{"296":1}}],["6290253f",{"2":{"294":1}}],["629",{"2":{"263":1,"291":1}}],["6299",{"2":{"241":1}}],["62963",{"2":{"235":2}}],["622790451149549e",{"2":{"296":1}}],["6220472",{"2":{"270":2}}],["622",{"2":{"230":1,"280":1,"291":1}}],["622511",{"2":{"147":1}}],["62",{"2":{"214":2,"246":26,"265":1,"269":1,"280":25}}],["62129945",{"2":{"270":1}}],["621",{"2":{"269":1,"280":6}}],["6210",{"2":{"265":1}}],["621958",{"2":{"189":1}}],["621333",{"2":{"147":1}}],["620453143622839e",{"2":{"296":1}}],["62043f",{"2":{"96":1}}],["620609f",{"2":{"294":1}}],["620796",{"2":{"287":1}}],["6207278",{"2":{"147":1}}],["620",{"2":{"263":1,"280":2}}],["625",{"2":{"50":1}}],["601584210891391e",{"2":{"296":1}}],["601777f",{"2":{"294":1}}],["60171676",{"2":{"273":1}}],["601",{"2":{"280":1,"291":1}}],["60148f",{"2":{"123":1}}],["608275953969919e",{"2":{"296":1}}],["608266738489823e",{"2":{"296":1}}],["608851454932399e",{"2":{"296":1}}],["608665235353539e",{"2":{"296":1}}],["6089532f",{"2":{"294":1}}],["6087113f",{"2":{"294":1}}],["608",{"2":{"263":1,"291":2}}],["608401f",{"2":{"123":1}}],["60931549620726e",{"2":{"296":1}}],["609393f",{"2":{"294":1}}],["60944",{"2":{"280":1}}],["6091344",{"2":{"273":1}}],["609",{"2":{"263":1,"280":3,"291":1}}],["605628349664316e",{"2":{"296":1}}],["605748f",{"2":{"294":1}}],["6054688",{"2":{"261":1}}],["60596",{"2":{"269":1}}],["6059",{"2":{"254":2}}],["6030802204195574e",{"2":{"296":1}}],["6030975f",{"2":{"294":1}}],["603",{"2":{"280":2}}],["60395515",{"2":{"270"
:1}}],["6035156",{"2":{"261":1}}],["603439",{"2":{"147":1}}],["60",{"2":{"241":1,"246":1,"280":2}}],["6069607225691798e",{"2":{"296":1}}],["6066837f",{"2":{"294":1}}],["6067s\\ttraining",{"2":{"235":1}}],["606",{"2":{"230":1,"263":1,"280":3}}],["60s",{"2":{"214":3}}],["604495200949848e",{"2":{"296":1}}],["6046575f",{"2":{"294":1}}],["604",{"2":{"280":2}}],["6047473710807704e",{"2":{"296":1}}],["6047666f",{"2":{"294":1}}],["6047",{"2":{"265":1}}],["6049256",{"2":{"166":1}}],["604107",{"2":{"147":1}}],["6025",{"2":{"265":1}}],["602513",{"2":{"147":1}}],["602",{"2":{"263":1,"291":2}}],["60234f",{"2":{"96":1}}],["6000",{"2":{"280":7,"287":1}}],["6001",{"2":{"197":1,"254":1}}],["600764",{"2":{"147":1}}],["600",{"2":{"124":1,"263":1,"280":3}}],["6071794706096284e",{"2":{"296":1}}],["6071106f",{"2":{"294":1}}],["607951f",{"2":{"294":1}}],["607",{"2":{"263":1}}],["6077032f",{"2":{"123":1}}],["6076053f0",{"2":{"50":2}}],["635199387186866e",{"2":{"296":1}}],["635232f",{"2":{"294":1}}],["635",{"2":{"280":1}}],["63537f",{"2":{"96":1}}],["6367908381110006e",{"2":{"296":1}}],["636248802390752e",{"2":{"296":1}}],["6365531f",{"2":{"294":1}}],["636",{"2":{"280":5}}],["6366220221551666e",{"2":{"296":1}}],["6366",{"2":{"230":1}}],["633",{"2":{"280":2}}],["633136",{"2":{"261":1}}],["63391256",{"2":{"147":1}}],["6303644547697315e",{"2":{"296":1}}],["6307333f",{"2":{"294":1}}],["6308594",{"2":{"261":1}}],["630",{"2":{"241":1}}],["630092",{"2":{"147":1}}],["6321725159625427e",{"2":{"296":1}}],["632189f",{"2":{"294":1}}],["632",{"2":{"280":5,"291":1}}],["63222\\taccuracy",{"2":{"205":1}}],["6323408",{"2":{"143":1}}],["63",{"2":{"200":1,"246":2,"269":2,"280":31}}],["6346849",{"2":{"270":1}}],["634",{"2":{"230":1,"291":1}}],["6340362477836592",{"2":{"192":1}}],["634469",{"2":{"147":1}}],["637",{"2":{"230":1,"269":1,"280":7}}],["6370531107315014",{"2":{"171":2}}],["637219",{"2":{"166":1}}],["6390291405346965e",{"2":{"296":1}}],["639202f",{"2":{"294":1}}],["639",{"2":{"291":1}}],["6396484",{"2
":{"261":2}}],["63936",{"2":{"147":1}}],["6395181",{"2":{"147":1}}],["63952f",{"2":{"96":1}}],["6315886f",{"2":{"294":1}}],["6311",{"2":{"269":1}}],["6311s\\ttraining",{"2":{"235":1}}],["6312",{"2":{"265":1}}],["631217",{"2":{"186":1}}],["631952260187872e",{"2":{"296":1}}],["6319",{"2":{"265":1}}],["631",{"2":{"263":1,"291":2}}],["631482+0",{"2":{"186":1}}],["63161",{"2":{"147":1}}],["63175",{"2":{"147":1}}],["638574792244055e",{"2":{"296":1}}],["638572917886143e",{"2":{"296":1}}],["638449f",{"2":{"294":1}}],["63838f",{"2":{"294":1}}],["6383875",{"2":{"165":1}}],["638",{"2":{"280":6}}],["6386719",{"2":{"261":1}}],["63823",{"2":{"147":1}}],["638731",{"2":{"147":1}}],["63889",{"2":{"81":1}}],["675198938919608e",{"2":{"296":1}}],["6752317f",{"2":{"294":1}}],["67555165",{"2":{"270":1}}],["675",{"2":{"263":1,"280":2}}],["674171226991112e",{"2":{"296":1}}],["6749552f",{"2":{"294":1}}],["674380",{"2":{"280":1}}],["674375",{"2":{"280":1}}],["674372",{"2":{"280":1}}],["674367",{"2":{"280":1}}],["674364",{"2":{"280":1}}],["674361",{"2":{"280":1}}],["674358",{"2":{"280":1}}],["674355",{"2":{"280":1}}],["674352",{"2":{"280":1}}],["674348",{"2":{"280":1}}],["674337",{"2":{"280":1}}],["674827",{"2":{"261":1}}],["674",{"2":{"230":1,"263":1,"291":1}}],["67469",{"2":{"147":1}}],["6720",{"2":{"265":1}}],["672",{"2":{"263":1}}],["672475e",{"2":{"220":1}}],["6726513",{"2":{"163":1}}],["67",{"2":{"214":1,"230":1,"235":1,"246":1,"280":24}}],["6739f",{"2":{"294":1}}],["673",{"2":{"263":1,"269":1,"291":1}}],["673312",{"2":{"261":1}}],["67334",{"2":{"205":1}}],["673229",{"2":{"147":1}}],["67326",{"2":{"67":1}}],["6706038",{"2":{"270":1}}],["670",{"2":{"200":1}}],["6702257",{"2":{"163":1}}],["676785970430642e",{"2":{"296":1}}],["676633f",{"2":{"294":1}}],["6766358",{"2":{"123":1}}],["676",{"2":{"280":6}}],["67689",{"2":{"147":1}}],["671578036057946e",{"2":{"296":1}}],["6715936f",{"2":{"294":1}}],["671219f",{"2":{"294":1}}],["671201",{"2":{"147":1}}],["671320995186048e",{"2":{"296":1}}],["671
376",{"2":{"280":1}}],["671371",{"2":{"280":1}}],["671369",{"2":{"280":1}}],["671366",{"2":{"280":1}}],["671363",{"2":{"280":1}}],["671360",{"2":{"280":1}}],["671356",{"2":{"280":1}}],["671353",{"2":{"280":1}}],["671350",{"2":{"280":1}}],["671347",{"2":{"280":1}}],["671336",{"2":{"280":1}}],["671487",{"2":{"186":1}}],["671869",{"2":{"147":1}}],["677475611729583e",{"2":{"296":1}}],["6774449",{"2":{"270":1}}],["677769293814691e",{"2":{"296":1}}],["6778765f",{"2":{"294":1}}],["6770396f",{"2":{"294":1}}],["677041",{"2":{"147":1}}],["677311014927782e",{"2":{"296":1}}],["677317",{"2":{"280":1}}],["677346",{"2":{"280":1}}],["677342",{"2":{"280":1}}],["677339",{"2":{"280":1}}],["677336",{"2":{"280":1}}],["677333",{"2":{"280":1}}],["677330",{"2":{"280":1}}],["677327",{"2":{"280":1}}],["677324",{"2":{"280":1}}],["677320",{"2":{"280":1}}],["677306",{"2":{"280":1}}],["6771653",{"2":{"270":2}}],["6776220912718907",{"2":{"192":1}}],["6772497f",{"2":{"294":1}}],["67728",{"2":{"147":1}}],["677237",{"2":{"123":1}}],["6781479016262425e",{"2":{"296":1}}],["6781903f",{"2":{"294":1}}],["678978f",{"2":{"294":1}}],["678566",{"2":{"280":1}}],["67852753",{"2":{"147":1}}],["678",{"2":{"269":1}}],["67861",{"2":{"189":1}}],["67841595",{"2":{"96":1}}],["67979201925031e",{"2":{"296":1}}],["679186499653968e",{"2":{"296":1}}],["679918f",{"2":{"294":1}}],["679822e",{"2":{"220":1}}],["679885",{"2":{"165":1}}],["67909",{"2":{"189":1}}],["67908657",{"2":{"189":1}}],["679087",{"2":{"189":2}}],["67922f",{"2":{"123":1}}],["6795253",{"2":{"123":1}}],["6×3×1",{"2":{"81":1}}],["6×3×3",{"2":{"79":1}}],["6×10",{"2":{"79":1}}],["6×6",{"2":{"79":3}}],["668437",{"2":{"280":1}}],["668432",{"2":{"280":1}}],["668429",{"2":{"280":1}}],["668426",{"2":{"280":1}}],["668423",{"2":{"280":1}}],["668420",{"2":{"280":1}}],["668417",{"2":{"280":1}}],["668414",{"2":{"280":1}}],["668411",{"2":{"280":1}}],["668407",{"2":{"280":1}}],["668396",{"2":{"280":1}}],["66826s\\ttraining",{"2":{"246":1}}],["665490",{"2":{"280":1}}],["665
485",{"2":{"280":1}}],["665482",{"2":{"280":1}}],["665479",{"2":{"280":1}}],["665476",{"2":{"280":1}}],["665473",{"2":{"280":1}}],["665470",{"2":{"280":1}}],["665467",{"2":{"280":1}}],["665462",{"2":{"280":1}}],["665459",{"2":{"280":1}}],["665448",{"2":{"280":1}}],["6652409557748218",{"2":{"77":1}}],["665241",{"2":{"50":10,"77":1}}],["662406",{"2":{"280":1}}],["662401",{"2":{"280":1}}],["662355f",{"2":{"294":1}}],["662399",{"2":{"280":1}}],["662396",{"2":{"280":1}}],["662393",{"2":{"280":1}}],["662390",{"2":{"280":1}}],["662387",{"2":{"280":1}}],["662384",{"2":{"280":1}}],["662381",{"2":{"280":1}}],["662377",{"2":{"280":1}}],["662366",{"2":{"280":1}}],["6628",{"2":{"200":1}}],["6699",{"2":{"291":1}}],["669696\\tval",{"2":{"280":1}}],["6692604",{"2":{"270":1}}],["6693",{"2":{"265":1}}],["669",{"2":{"263":1,"280":7,"291":1}}],["6694499f",{"2":{"123":1}}],["6674",{"2":{"230":1}}],["660744990143062e",{"2":{"296":1}}],["660702f",{"2":{"294":1}}],["6609",{"2":{"265":1}}],["6601562",{"2":{"261":1}}],["660",{"2":{"230":1,"263":1}}],["66s",{"2":{"214":1}}],["664977",{"2":{"214":1}}],["6642563709240284",{"2":{"171":2}}],["66",{"2":{"200":1,"235":1,"246":2,"261":1,"269":1,"280":9,"294":1}}],["666042938806584e",{"2":{"296":1}}],["666510250806197e",{"2":{"296":1}}],["666327497675332e",{"2":{"296":1}}],["6663245",{"2":{"165":1}}],["666391f",{"2":{"294":1}}],["6666434f",{"2":{"294":1}}],["6666821",{"2":{"165":1}}],["6666666666666666",{"2":{"87":1}}],["666667",{"2":{"67":1}}],["66667",{"2":{"67":1,"81":2,"235":15,"237":4}}],["663491198229877e",{"2":{"296":1}}],["663507f",{"2":{"294":1}}],["663",{"2":{"291":2}}],["6638",{"2":{"287":1}}],["6637254",{"2":{"165":1}}],["663223",{"2":{"147":1}}],["661",{"2":{"230":1,"241":1,"291":1}}],["6616",{"2":{"147":1}}],["6615898",{"2":{"123":1}}],["64329454038995e",{"2":{"296":1}}],["643",{"2":{"263":1,"291":1}}],["643172",{"2":{"186":1}}],["6498382031809333e",{"2":{"296":1}}],["649889",{"2":{"280":1}}],["649972",{"2":{"280":1}}],["649953",{"2":{"
280":1}}],["649946",{"2":{"280":1}}],["649939",{"2":{"280":1}}],["649933",{"2":{"280":1}}],["649926",{"2":{"280":1}}],["649919",{"2":{"280":1}}],["649912",{"2":{"280":1}}],["649904",{"2":{"280":1}}],["649785",{"2":{"280":1}}],["649",{"2":{"263":1}}],["6414452f",{"2":{"294":1}}],["6414",{"2":{"269":1}}],["641402",{"2":{"147":1}}],["641",{"2":{"263":1}}],["644144\\tval",{"2":{"280":1}}],["64426",{"2":{"269":1}}],["6449848f",{"2":{"294":1}}],["6449",{"2":{"265":1}}],["644",{"2":{"263":3}}],["646",{"2":{"263":1,"269":2}}],["646342",{"2":{"261":1}}],["6469914",{"2":{"143":1}}],["6480795f",{"2":{"294":1}}],["648507\\ttrain",{"2":{"280":1}}],["64850235",{"2":{"123":1}}],["648",{"2":{"263":2}}],["6486",{"2":{"254":1}}],["642",{"2":{"269":1,"291":1}}],["6428",{"2":{"200":1}}],["6423511984038721",{"2":{"171":2}}],["64241976",{"2":{"165":1}}],["64293",{"2":{"147":1}}],["6401841313083126e",{"2":{"296":1}}],["640",{"2":{"263":2,"277":1,"291":1}}],["640935",{"2":{"165":1}}],["640625",{"2":{"147":1}}],["647168f",{"2":{"294":1}}],["64720088536703e",{"2":{"296":1}}],["647255",{"2":{"280":1}}],["64721453",{"2":{"147":1}}],["6478",{"2":{"269":1}}],["647",{"2":{"263":1,"291":1}}],["6474s\\ttraining",{"2":{"235":1}}],["6477456",{"2":{"163":1}}],["645353598509099e",{"2":{"296":1}}],["645224f",{"2":{"294":1}}],["645233",{"2":{"147":1}}],["645",{"2":{"291":1}}],["6457",{"2":{"269":1}}],["6456693",{"2":{"270":2}}],["6456",{"2":{"263":1}}],["6456416023530093",{"2":{"192":1}}],["6454607f",{"2":{"123":1}}],["64",{"2":{"46":18,"64":1,"97":1,"198":2,"207":2,"214":2,"215":2,"222":2,"239":2,"244":2,"247":2,"256":2,"261":2,"262":2,"267":3,"275":2,"280":18,"281":2,"289":2,"294":1,"298":2}}],["6",{"2":{"15":1,"37":2,"39":3,"67":7,"79":20,"80":9,"81":24,"84":7,"87":1,"96":10,"123":25,"147":9,"163":1,"165":5,"166":7,"170":1,"189":2,"191":3,"197":1,"198":1,"200":2,"205":2,"207":1,"211":8,"214":2,"215":1,"222":1,"230":14,"235":5,"237":1,"239":6,"241":15,"246":2,"247":6,"256":1,"261":2,"262":1,"263":48,"2
65":10,"266":6,"267":1,"269":8,"270":5,"275":1,"280":26,"281":1,"289":1,"291":48,"293":2,"294":181,"296":175,"298":1}}],["+=",{"2":{"205":5,"212":2,"234":2,"245":2,"254":1,"261":2,"265":1,"280":1,"283":1,"285":2,"287":1}}],["+exp",{"2":{"89":1}}],["+ϵ∗γ+βand",{"2":{"65":1}}],["+ϵ∗γ+βwhere",{"2":{"46":1}}],["+",{"2":{"15":3,"23":1,"35":1,"39":5,"40":4,"42":6,"44":4,"47":1,"49":1,"52":1,"56":4,"62":1,"64":1,"67":9,"70":2,"80":1,"81":3,"82":4,"84":5,"87":1,"89":6,"105":1,"119":1,"120":2,"153":2,"163":1,"164":1,"176":2,"189":4,"197":3,"201":2,"218":1,"252":4,"253":3,"255":1,"258":1,"260":8,"264":4,"265":1,"270":1,"283":3,"284":2,"292":13,"293":1,"294":2}}],["krylov",{"2":{"230":1,"291":1}}],["knet",{"2":{"89":1,"91":1}}],["know",{"2":{"121":1,"126":1,"162":1,"294":1}}],["knows",{"2":{"87":1}}],["known",{"0":{"141":1},"2":{"15":2,"49":1,"78":1,"87":1,"121":1}}],["kv",{"2":{"76":5}}],["kwarg",{"2":{"114":2,"115":1}}],["kwargsstorage",{"2":{"238":4}}],["kwargs",{"2":{"16":24,"24":1,"28":2,"39":9,"45":2,"56":4,"70":4,"80":1,"89":1,"115":1,"186":2,"187":2,"213":1,"232":7,"235":2,"236":4,"238":4,"278":5,"283":2}}],["kws",{"2":{"80":3}}],["kw",{"2":{"56":2}}],["kldiv",{"2":{"260":3}}],["kldivergenceloss",{"2":{"50":5}}],["klu",{"2":{"230":1,"291":1}}],["klambauer",{"2":{"63":1}}],["kl",{"2":{"50":1}}],["kullback",{"2":{"50":1}}],["kiros",{"2":{"65":1}}],["ki−1",{"2":{"40":1,"42":3}}],["kind",{"0":{"143":1},"2":{"25":1,"37":1,"39":1,"56":2,"126":1,"143":1}}],["kinds",{"2":{"8":1}}],["k",{"2":{"40":6,"44":4,"76":2,"78":26,"83":26,"84":16,"89":5,"197":1,"232":2,"236":2}}],["kaiming",{"2":{"15":4,"40":2,"44":1,"65":1,"186":6}}],["keith",{"2":{"290":1}}],["keep",{"2":{"160":1}}],["kept",{"2":{"157":1}}],["kerneldensity",{"2":{"263":1,"269":1,"291":1}}],["kernelabstractions",{"2":{"66":1,"82":1,"171":7,"200":4,"263":4,"269":4,"291":4}}],["kernels",{"0":{"171":1},"2":{"62":1,"89":2,"171":3}}],["kernel",{"2":{"15":1,"40":5,"62":1,"66":1,"78":1,"80":6,"89":6,"171":6}}],["keys",{"2":{"1
14":1,"132":1,"156":2}}],["keypath=keypath",{"2":{"24":1}}],["keypath",{"2":{"23":7,"114":1,"115":1,"126":4,"127":10,"144":2,"145":4}}],["key",{"0":{"140":1},"2":{"10":2,"76":4,"136":1}}],["keywords",{"2":{"47":1,"56":1,"82":1,"187":1}}],["keyword",{"2":{"2":1,"10":1,"24":2,"35":1,"39":5,"40":2,"41":4,"42":3,"43":2,"44":4,"45":2,"46":4,"47":2,"49":2,"69":1,"70":2,"77":1,"81":3,"84":1,"87":1,"88":1,"89":5,"104":1,"116":1,"117":1,"124":1,"187":1}}],["hₓ",{"2":{"292":4}}],["h₊",{"2":{"292":4}}],["h12",{"2":{"292":12}}],["h11",{"2":{"292":12}}],["h22",{"2":{"292":12}}],["hmc",{"2":{"265":2}}],["hwloctrees",{"2":{"263":2,"291":2}}],["hwloc",{"2":{"263":2,"291":3}}],["hcat",{"2":{"253":1,"265":1}}],["hn",{"2":{"243":2}}],["hnew",{"2":{"43":6}}],["hnew=activation",{"2":{"43":1}}],["hnew=",{"2":{"43":1}}],["httpext",{"2":{"230":1,"241":1}}],["http",{"2":{"230":1,"241":1}}],["https",{"2":{"15":1,"43":1,"45":1,"72":1,"81":1,"89":1,"90":1,"195":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"292":2,"298":1}}],["hdf5",{"2":{"230":2,"241":2}}],["hypernet",{"0":{"243":1,"244":1},"2":{"243":4,"244":1}}],["hypernetwork",{"0":{"240":1},"1":{"241":1,"242":1,"243":1,"244":1,"245":1,"246":1,"247":1}}],["hypergeometricfunctions",{"2":{"230":1,"263":1,"291":1}}],["hyperbolic",{"2":{"67":1}}],["hlo",{"2":{"124":5,"147":6}}],["hz",{"2":{"89":2}}],["hostcpufeatures",{"2":{"291":1}}],["hosts",{"2":{"98":1}}],["home",{"2":{"155":1}}],["hold",{"2":{"92":1,"96":1}}],["holds",{"2":{"89":2}}],["hops",{"2":{"89":1}}],["hop",{"2":{"89":5}}],["hot",{"2":{"52":1}}],["how",{"0":{"72":1,"172":1},"1":{"173":1,"174":1,"175":1,"176":1,"177":1,"178":1},"2":{"37":1,"39":2,"43":3,"46":8,"50":2,"52":1,"83":2,"84":1,"89":2,"93":1,"95":1,"96":1,"97":1,"123":1,"124":1,"125":1,"127":2,"142":1,"147":1,"153":1,"155":1,"158":1,"162":1,"166":1,"171":1,"175":1,"179":2,"189":2,"195":1,"208":1,"216":1,"232":1,"243":1,"263":1,"265":1,"266":1,"282":1}}],["however",{"2":{"
8":1,"11":1,"39":1,"52":1,"56":1,"82":1,"83":1,"89":1,"91":1,"96":1,"116":1,"122":1,"128":1,"140":1,"166":1,"185":1,"191":2,"193":1,"216":3,"227":1,"232":1,"248":1,"292":1,"294":2}}],["hutchinson",{"0":{"167":1},"1":{"168":1,"169":1,"170":1},"2":{"167":2,"168":3,"169":2,"170":8}}],["huge",{"2":{"89":1}}],["huber",{"2":{"50":1}}],["huberloss",{"2":{"50":3}}],["human",{"2":{"15":2}}],["hh",{"2":{"43":6,"117":1}}],["h",{"2":{"42":6,"43":3,"47":4,"81":4,"85":7,"89":1,"210":1,"231":1,"278":2,"292":14}}],["historical",{"2":{"232":1}}],["historically",{"2":{"21":1}}],["hi=1",{"2":{"67":1}}],["hinton",{"2":{"65":1}}],["hinge",{"2":{"50":2}}],["hingeloss",{"2":{"50":2}}],["hierarchically",{"2":{"56":1}}],["hidden",{"2":{"43":43,"202":6,"203":3,"251":11,"254":2,"265":1,"280":2,"285":7,"287":2}}],["highlight",{"2":{"89":1,"140":1}}],["highlights",{"2":{"89":1}}],["highly",{"2":{"50":1}}],["highest",{"2":{"62":1,"121":1,"173":1,"266":1}}],["higher",{"2":{"47":1,"165":1,"173":1}}],["high",{"2":{"39":1,"81":1,"171":1,"189":1,"195":1}}],["hi",{"2":{"15":2,"67":1}}],["hit",{"2":{"8":1}}],["heed",{"2":{"87":1}}],["heatmap",{"2":{"255":1}}],["head",{"2":{"93":1,"202":1}}],["heads",{"2":{"76":1}}],["heavily",{"2":{"55":1}}],["heavy",{"2":{"6":1}}],["height=10",{"2":{"89":2}}],["height=7",{"2":{"67":28}}],["height",{"2":{"40":1,"67":2,"260":6}}],["hence",{"2":{"35":1,"52":5,"68":1,"89":2,"158":1,"160":1,"168":1,"192":1,"193":1,"214":1,"220":1,"232":1,"294":1}}],["helps",{"2":{"89":1,"232":1}}],["helpful",{"2":{"80":1}}],["helper",{"0":{"29":1,"45":1,"66":1,"212":1,"260":1,"279":1,"286":1},"2":{"49":1,"54":1,"89":4,"266":1}}],["helpers",{"0":{"17":1},"1":{"18":1,"19":1,"20":1},"2":{"51":1}}],["help",{"2":{"21":1,"22":1,"23":1,"24":2,"39":6,"40":2,"42":3,"43":2,"45":1,"46":4,"47":1,"50":2,"54":1,"56":1,"125":1,"175":1}}],["here",{"2":{"21":1,"24":1,"49":2,"51":1,"56":3,"83":1,"89":1,"91":1,"96":1,"121":1,"131":1,"137":1,"154":1,"163":3,"164":1,"165":1,"178":1,"179":1,"189":3,"195":2,"220
":1,"232":1,"251":1,"264":1,"265":2,"277":1,"283":1}}],["hessian",{"2":{"15":1}}],["he",{"2":{"15":2,"65":1}}],["hamiltonian",{"2":{"265":2}}],["hamming",{"2":{"89":8}}],["harfbuzz",{"2":{"263":1,"291":1}}],["harder",{"2":{"93":1,"125":1}}],["hardware",{"2":{"92":1}}],["hardswish",{"2":{"67":6}}],["hardsigmoid",{"2":{"67":3}}],["hardtanh",{"2":{"67":3}}],["hardσ",{"2":{"67":5}}],["hard",{"2":{"11":1,"50":1,"67":2,"127":1,"153":1,"198":1,"207":1,"215":1,"222":1,"239":2,"247":2,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["hat",{"2":{"221":2}}],["had",{"2":{"193":1,"200":1,"269":2}}],["hadsell",{"2":{"50":1}}],["happened",{"2":{"173":1}}],["happening",{"2":{"164":1,"178":1}}],["happens",{"2":{"126":1,"162":1,"178":1,"183":1}}],["hann",{"2":{"89":9}}],["hand",{"2":{"189":1}}],["handwriting",{"2":{"86":1}}],["handling",{"2":{"51":1,"54":1,"151":1,"173":1,"183":1}}],["handles",{"2":{"89":1,"93":1,"203":1}}],["handled",{"2":{"83":1,"85":2}}],["handle",{"2":{"43":1,"148":1,"171":1,"232":1,"265":1}}],["half",{"2":{"87":1}}],["haven",{"2":{"35":1}}],["have",{"2":{"15":1,"19":1,"21":2,"40":1,"43":1,"45":1,"76":2,"78":1,"80":1,"85":1,"93":2,"101":1,"107":1,"111":2,"114":3,"115":1,"116":2,"117":1,"121":3,"122":2,"126":1,"127":2,"128":2,"141":1,"153":1,"154":1,"155":1,"157":2,"161":1,"162":2,"163":1,"164":1,"170":1,"173":1,"190":1,"191":1,"192":1,"193":3,"202":3,"220":1,"244":1,"265":1,"266":2,"292":1}}],["having",{"2":{"6":1,"25":1,"56":1}}],["hasharraymappedtries",{"2":{"291":1}}],["has",{"2":{"8":3,"22":1,"35":2,"40":1,"42":3,"47":1,"52":5,"56":1,"67":1,"83":3,"87":1,"89":4,"91":1,"93":1,"104":2,"107":6,"111":1,"114":7,"115":4,"116":1,"117":1,"125":1,"126":4,"136":1,"139":1,"147":1,"154":1,"166":1,"188":1,"189":2,"193":1,"265":1,"290":1,"294":1}}],["2π",{"2":{"285":1}}],["2πnn−1",{"2":{"89":2}}],["2\\ttrain",{"2":{"280":1}}],["2f",{"2":{"213":2,"246":4}}],["2fs",{"2":{"213":1}}],["2f0",{"2":{"67":3}}],["2`",{"2":{"154":1}}],["294133240722606e",{"2":{"296":1}}],
["294326f",{"2":{"294":1}}],["2947",{"2":{"291":1}}],["2942",{"2":{"263":1}}],["293169802299342e",{"2":{"296":1}}],["293404376840209e",{"2":{"296":1}}],["2934628",{"2":{"270":1}}],["2930885f",{"2":{"294":1}}],["293565478005743e",{"2":{"296":1}}],["2935",{"2":{"291":1}}],["2955985f",{"2":{"294":1}}],["295654f",{"2":{"294":1}}],["29561",{"2":{"291":1}}],["29561827",{"2":{"197":1}}],["29507",{"2":{"291":1}}],["295229",{"2":{"270":1}}],["2906",{"2":{"291":1}}],["2909",{"2":{"265":1}}],["29001",{"2":{"254":1}}],["2904677",{"2":{"146":1}}],["2962267800736885e",{"2":{"296":1}}],["296660391697759e",{"2":{"296":1}}],["2961416f",{"2":{"294":1}}],["2969374f",{"2":{"294":1}}],["296",{"2":{"280":5}}],["29630",{"2":{"235":2,"237":1}}],["296496",{"2":{"189":1}}],["2973533647150512e",{"2":{"296":1}}],["2974551f",{"2":{"294":1}}],["2976",{"2":{"291":1}}],["2970",{"2":{"265":1}}],["297",{"2":{"200":1}}],["297959481822617",{"2":{"171":2}}],["299",{"2":{"280":6}}],["2999",{"2":{"280":1,"291":1}}],["2999034",{"2":{"270":1}}],["29916334",{"2":{"270":1}}],["299262",{"2":{"270":1}}],["2992126",{"2":{"270":2}}],["2990853",{"2":{"197":1}}],["29944375",{"2":{"197":1}}],["29955",{"2":{"186":1}}],["2929",{"2":{"280":1}}],["292002",{"2":{"186":1}}],["292533",{"2":{"147":1}}],["29",{"2":{"147":2,"246":3,"261":2,"263":1,"265":1,"280":15,"291":1}}],["291891176873161e",{"2":{"296":1}}],["291772f",{"2":{"294":1}}],["2916",{"2":{"291":1}}],["2919922",{"2":{"261":1}}],["2913",{"2":{"147":1}}],["2915",{"2":{"265":1}}],["291513",{"2":{"147":1}}],["29156f",{"2":{"96":1}}],["2985233f",{"2":{"294":1}}],["298375f",{"2":{"294":1}}],["298480265503702e",{"2":{"296":1}}],["2984",{"2":{"265":1}}],["298",{"2":{"263":1}}],["298651896799662e",{"2":{"296":1}}],["2986",{"2":{"254":2}}],["2986417",{"2":{"123":1}}],["29828635",{"2":{"197":1}}],["298787",{"2":{"166":1}}],["29872745",{"2":{"123":1}}],["298099",{"2":{"147":1}}],["2x",{"2":{"81":1}}],["269563266835809e",{"2":{"296":1}}],["2698216f",{"2":{"294":1}}],["269611
0635469216e",{"2":{"296":1}}],["2696894f",{"2":{"294":1}}],["2696",{"2":{"291":1}}],["269968",{"2":{"147":1}}],["26\\ttrain",{"2":{"280":1}}],["263",{"2":{"280":1}}],["2630",{"2":{"265":1}}],["26397",{"2":{"254":1}}],["2632644",{"2":{"147":1}}],["261098632735822e",{"2":{"296":1}}],["261373350302043e",{"2":{"296":1}}],["2613407f",{"2":{"294":1}}],["261",{"2":{"280":1}}],["2614254",{"2":{"270":1}}],["2614398",{"2":{"147":1}}],["2611",{"2":{"265":1}}],["261636",{"2":{"186":1}}],["2615936",{"2":{"165":1}}],["26124424",{"2":{"147":1}}],["2660292974010864e",{"2":{"296":1}}],["266054",{"2":{"147":1}}],["2661476f",{"2":{"294":1}}],["26615036",{"2":{"147":1}}],["266",{"2":{"280":8}}],["2668537",{"2":{"147":1}}],["26664025",{"2":{"147":1}}],["2666627",{"2":{"147":1}}],["266974",{"2":{"147":1}}],["267292003798961e",{"2":{"296":1}}],["2670017f",{"2":{"294":1}}],["2670698",{"2":{"147":1}}],["26771653",{"2":{"270":2}}],["26774603",{"2":{"147":1}}],["267644",{"2":{"165":1,"166":1}}],["267658",{"2":{"147":1}}],["2643698522401245e",{"2":{"296":1}}],["264",{"2":{"280":1}}],["264485710362863e",{"2":{"296":1}}],["2644",{"2":{"265":1}}],["26468",{"2":{"147":1}}],["26493692",{"2":{"147":1}}],["26477206",{"2":{"147":1}}],["2640756f",{"2":{"123":1}}],["2620028f",{"2":{"294":1}}],["2626992f",{"2":{"294":1}}],["2629",{"2":{"291":1}}],["262371589417253e",{"2":{"296":1}}],["2623",{"2":{"291":1}}],["2624278619860427e",{"2":{"296":1}}],["2624",{"2":{"291":1}}],["262",{"2":{"280":1}}],["26229796",{"2":{"147":1}}],["26255828",{"2":{"96":1}}],["2685547",{"2":{"261":1}}],["26858228",{"2":{"147":1}}],["26867408",{"2":{"147":1}}],["268992",{"2":{"147":1}}],["268941",{"2":{"50":2,"77":2}}],["260",{"2":{"280":1}}],["2603",{"2":{"265":1}}],["26001",{"2":{"254":1}}],["260692",{"2":{"166":1}}],["26076",{"2":{"147":1}}],["26072562",{"2":{"147":1}}],["26055062",{"2":{"147":1}}],["2609411",{"2":{"147":1}}],["26012868",{"2":{"123":1}}],["2601667",{"2":{"123":1}}],["265944f",{"2":{"294":1}}],["26586",{"2":{"269
":1}}],["2658",{"2":{"263":1}}],["265788",{"2":{"186":1}}],["265372",{"2":{"186":1}}],["2654086f",{"2":{"294":1}}],["26543173",{"2":{"147":1}}],["26545027",{"2":{"147":1}}],["26520002",{"2":{"147":1}}],["26553962",{"2":{"147":1}}],["26565230753145e",{"2":{"296":1}}],["2656273",{"2":{"147":1}}],["26564768",{"2":{"147":1}}],["265",{"2":{"97":1,"280":1}}],["26",{"2":{"79":2,"147":2,"220":2,"235":1,"246":3,"261":2,"269":1,"280":4,"291":2}}],["22\\ttrain",{"2":{"280":1}}],["228135872569555e",{"2":{"296":1}}],["2284129f",{"2":{"294":1}}],["22893",{"2":{"291":1}}],["228",{"2":{"280":16}}],["22827768",{"2":{"273":1}}],["22m\\u001b",{"2":{"269":1}}],["22034",{"2":{"280":1}}],["22005874",{"2":{"270":1}}],["22001",{"2":{"254":1}}],["2205",{"2":{"263":1}}],["2204",{"2":{"241":1}}],["220",{"2":{"238":6}}],["22094624",{"2":{"147":1}}],["221466",{"2":{"280":1}}],["221462",{"2":{"280":1}}],["221459",{"2":{"280":1}}],["221456",{"2":{"280":1}}],["221453",{"2":{"280":1}}],["221450",{"2":{"280":1}}],["221448",{"2":{"280":1}}],["221445",{"2":{"280":1}}],["221442",{"2":{"280":1}}],["221439",{"2":{"280":1}}],["221413",{"2":{"280":1}}],["2214",{"2":{"265":1}}],["2217",{"2":{"230":1}}],["22157605",{"2":{"147":1}}],["22s",{"2":{"214":1}}],["223290118869231e",{"2":{"296":1}}],["2233919f",{"2":{"294":1}}],["223332e",{"2":{"220":1}}],["2230004f",{"2":{"294":1}}],["2239297f",{"2":{"294":1}}],["223541",{"2":{"287":1}}],["223529",{"2":{"269":1}}],["22313",{"2":{"261":1}}],["223739910629945e",{"2":{"296":1}}],["22374f",{"2":{"294":1}}],["2237",{"2":{"241":1}}],["22379348",{"2":{"96":1}}],["2234",{"2":{"200":1}}],["2297604645573056e",{"2":{"296":1}}],["2295",{"2":{"291":1}}],["2299",{"2":{"280":1}}],["229",{"2":{"280":13}}],["2291s\\ttraining",{"2":{"235":1}}],["2290224145974982",{"2":{"171":2}}],["22900078",{"2":{"147":1}}],["2293",{"2":{"147":1}}],["2262887057476924e",{"2":{"296":1}}],["226222",{"2":{"147":1}}],["22680467",{"2":{"273":1}}],["2264s\\ttraining",{"2":{"235":1}}],["226",{"2":{"220":1,
"280":11}}],["22635892",{"2":{"197":1}}],["22677277",{"2":{"147":1}}],["227416399802949e",{"2":{"296":1}}],["2272903f",{"2":{"294":1}}],["227940",{"2":{"280":1}}],["227935",{"2":{"280":1}}],["227932",{"2":{"280":1}}],["227930",{"2":{"280":1}}],["227927",{"2":{"280":1}}],["227924",{"2":{"280":1}}],["227921",{"2":{"280":1}}],["227918",{"2":{"280":1}}],["227916",{"2":{"280":1}}],["227913",{"2":{"280":1}}],["227",{"2":{"280":13}}],["227897",{"2":{"280":1}}],["2278",{"2":{"265":1}}],["2277",{"2":{"241":1}}],["227513",{"2":{"186":1}}],["227502",{"2":{"147":1}}],["22752927",{"2":{"147":1}}],["227338",{"2":{"147":1}}],["22719024",{"2":{"147":1}}],["222756670100149e",{"2":{"296":1}}],["2227837233928576",{"2":{"171":2}}],["2224",{"2":{"280":1}}],["222485",{"2":{"186":1}}],["2226562",{"2":{"261":1}}],["222528",{"2":{"166":1}}],["2221471",{"2":{"166":1}}],["222885",{"2":{"147":1}}],["22202058",{"2":{"147":1}}],["22230405",{"2":{"147":1}}],["2222222222222222",{"2":{"87":1}}],["22222",{"2":{"81":2,"235":2,"237":1}}],["225193f",{"2":{"294":1}}],["22517107",{"2":{"147":1}}],["225",{"2":{"280":13}}],["2252102368062537e",{"2":{"296":1}}],["22521684",{"2":{"147":1}}],["22529833",{"2":{"273":1}}],["2257",{"2":{"265":1}}],["22542597",{"2":{"96":1}}],["2243314f",{"2":{"294":1}}],["22439213",{"2":{"123":1}}],["224689",{"2":{"280":1}}],["224685",{"2":{"280":1}}],["224682",{"2":{"280":1}}],["224679",{"2":{"280":1}}],["224676",{"2":{"280":1}}],["224673",{"2":{"280":1}}],["224671",{"2":{"280":1}}],["224668",{"2":{"280":1}}],["224665",{"2":{"280":1}}],["224662",{"2":{"280":1}}],["224647",{"2":{"280":1}}],["2245107",{"2":{"270":1}}],["224470584601832e",{"2":{"296":1}}],["2244",{"2":{"269":1}}],["224",{"2":{"263":1,"280":9}}],["224205336128725e",{"2":{"296":1}}],["2242",{"2":{"241":1}}],["22421768",{"2":{"123":1}}],["224139",{"2":{"200":1}}],["22412",{"2":{"147":1}}],["2240",{"2":{"269":1}}],["224029",{"2":{"147":1}}],["22409724",{"2":{"147":1}}],["22",{"2":{"79":2,"81":8,"147":2,"163":1,"200":1
,"205":3,"214":3,"246":2,"254":2,"261":2,"265":5,"269":1,"280":622,"287":1,"292":14}}],["27\\ttrain",{"2":{"280":1}}],["27176",{"2":{"280":2}}],["2714",{"2":{"254":1}}],["2714335",{"2":{"147":1}}],["275993",{"2":{"287":1}}],["2755905",{"2":{"270":2}}],["27561936",{"2":{"147":1}}],["2727636f",{"2":{"294":1}}],["2727",{"2":{"287":1}}],["27245",{"2":{"280":3}}],["2724",{"2":{"265":1}}],["272974093195133e",{"2":{"296":1}}],["2729",{"2":{"241":1}}],["27284",{"2":{"205":1}}],["27281",{"2":{"205":1}}],["270525f",{"2":{"294":1}}],["27051234",{"2":{"197":1}}],["270",{"2":{"280":2}}],["27001",{"2":{"254":1}}],["27035674",{"2":{"197":1}}],["270823845621428e",{"2":{"296":1}}],["2708",{"2":{"147":1,"230":1,"277":1}}],["279",{"2":{"291":1}}],["27952",{"2":{"261":1}}],["2791",{"2":{"241":1}}],["2791573282471997",{"2":{"171":2}}],["2797048270773437",{"2":{"171":2}}],["279297",{"2":{"147":1}}],["2794292",{"2":{"147":1}}],["2789",{"2":{"280":1}}],["2782",{"2":{"263":1}}],["27841",{"2":{"147":1}}],["27800062",{"2":{"147":1}}],["27834624",{"2":{"147":1}}],["276518631879414e",{"2":{"296":1}}],["2765513f",{"2":{"294":1}}],["27658862",{"2":{"147":1}}],["2763796",{"2":{"270":1}}],["2768941f",{"2":{"294":1}}],["2768",{"2":{"265":1}}],["27682805",{"2":{"147":1}}],["276",{"2":{"220":1}}],["276725",{"2":{"147":1}}],["27664563",{"2":{"147":1}}],["27664083",{"2":{"147":1}}],["2764903",{"2":{"147":1}}],["27611518",{"2":{"147":1}}],["2749",{"2":{"291":1}}],["2741",{"2":{"265":1}}],["274107",{"2":{"147":1}}],["2743",{"2":{"263":1}}],["2747",{"2":{"230":1}}],["274203",{"2":{"147":1}}],["2742818",{"2":{"147":1}}],["274844",{"2":{"147":1}}],["2732",{"2":{"263":1}}],["27343664",{"2":{"147":1}}],["27319488",{"2":{"147":1}}],["27373654",{"2":{"143":1}}],["2770628641464566e",{"2":{"296":1}}],["2770168f",{"2":{"294":1}}],["2771789",{"2":{"197":1}}],["2771862617010155",{"2":{"171":2}}],["2774315",{"2":{"163":1}}],["2779469",{"2":{"147":1}}],["27778",{"2":{"81":1}}],["27",{"2":{"79":3,"147":2,"200":1,"214":1
,"246":2,"261":2,"280":6,"287":1}}],["27th",{"2":{"15":1}}],["2n",{"2":{"79":4}}],["2nd",{"0":{"20":1},"2":{"89":2,"119":1,"132":1,"250":2,"252":1}}],["2×10",{"2":{"88":1}}],["2×4",{"2":{"84":1,"191":1}}],["2×5",{"2":{"84":2,"186":6}}],["2×32",{"2":{"123":2}}],["2×3×4×1",{"2":{"81":1}}],["2×3×1×1",{"2":{"81":1}}],["2×3",{"2":{"77":2}}],["2×2×1×1",{"2":{"85":1}}],["2×2",{"2":{"50":2,"79":1,"189":1}}],["2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀4⠀",{"2":{"67":1}}],["2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀5⠀",{"2":{"67":1}}],["2⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀",{"2":{"67":12}}],["25\\ttrain",{"2":{"280":1}}],["252057f",{"2":{"294":1}}],["252",{"2":{"291":1}}],["2524",{"2":{"230":1}}],["25278285",{"2":{"197":1}}],["2525357728685884",{"2":{"171":2}}],["252981",{"2":{"166":1}}],["2529244",{"2":{"166":1}}],["2523668",{"2":{"166":1}}],["258",{"2":{"280":7}}],["258473",{"2":{"147":1}}],["25860527",{"2":{"147":1}}],["2536",{"2":{"265":1}}],["2539",{"2":{"263":1,"265":1,"269":1}}],["253224",{"2":{"147":1}}],["25338f",{"2":{"96":1}}],["250365970902723e",{"2":{"296":1}}],["250362",{"2":{"147":1}}],["2501638329749527e",{"2":{"296":1}}],["250148f",{"2":{"294":1}}],["25005f",{"2":{"294":1}}],["25001",{"2":{"254":1}}],["2506",{"2":{"265":1}}],["250691e",{"2":{"220":1}}],["250",{"2":{"261":1,"266":1,"274":2,"293":1}}],["25055695",{"2":{"195":1,"196":1}}],["250818",{"2":{"147":1}}],["25020984",{"2":{"147":1}}],["2546999389286456e",{"2":{"296":1}}],["2547f",{"2":{"294":1}}],["254",{"2":{"280":3}}],["25483933",{"2":{"270":1}}],["25488",{"2":{"261":1}}],["2548422f0",{"2":{"67":1}}],["2541723",{"2":{"147":1}}],["2512",{"2":{"265":1}}],["2515",{"2":{"263":1}}],["2510",{"2":{"230":1}}],["251",{"2":{"220":1}}],["2516714",{"2":{"147":1}}],["2514698",{"2":{"147":1}}],["255384f",{"2":{"294":1}}],["2553826",{"2":{"147":1}}],["2553",{"2":{"263":1}}],["255203",{"2":{"186":1}}],["25543177",{"2":{"147":1}}],["2556785f",{"2":{"123":1}}],["259400710954987e",{"2":{"296":1}}],["259041543532805e",{"2":
{"296":1}}],["259097f",{"2":{"294":1}}],["25988",{"2":{"280":1}}],["259867",{"2":{"147":1}}],["259",{"2":{"280":1}}],["25926",{"2":{"235":3,"237":1}}],["259247",{"2":{"147":1}}],["25912565",{"2":{"147":1}}],["25911948",{"2":{"123":1}}],["2597034",{"2":{"147":1}}],["2570",{"2":{"291":1}}],["25700033",{"2":{"147":1}}],["25723198",{"2":{"147":1}}],["257",{"2":{"96":1,"280":7}}],["25",{"2":{"79":2,"81":10,"88":17,"147":3,"189":1,"205":3,"214":2,"220":2,"246":9,"254":2,"261":2,"265":1,"280":4,"287":2}}],["2569513",{"2":{"273":1}}],["256x128xf32>",{"2":{"147":2}}],["256412f",{"2":{"294":1}}],["2564",{"2":{"147":1}}],["25654957",{"2":{"147":1}}],["2567278307533954e",{"2":{"296":1}}],["2567",{"2":{"265":1}}],["2567473",{"2":{"147":1}}],["25673f",{"2":{"96":1}}],["25662464",{"2":{"143":1}}],["256",{"2":{"37":1,"56":1,"89":2,"96":6,"147":2,"211":3,"244":2,"280":1}}],["24\\ttrain",{"2":{"280":1}}],["24x24x6x4xi1>",{"2":{"147":2}}],["24x24x6x4xf32>",{"2":{"147":8}}],["2432",{"2":{"291":1}}],["24321",{"2":{"147":1}}],["2433",{"2":{"230":1}}],["24344929",{"2":{"147":1}}],["24349861",{"2":{"147":1}}],["2412",{"2":{"230":1}}],["2415",{"2":{"263":1}}],["2415926",{"2":{"166":1}}],["24154694",{"2":{"147":1}}],["24199852",{"2":{"147":1}}],["247",{"2":{"280":3}}],["2470898",{"2":{"147":1}}],["24719",{"2":{"123":1}}],["2488284f",{"2":{"294":1}}],["24878",{"2":{"147":1}}],["24873069",{"2":{"147":1}}],["248654",{"2":{"147":1}}],["2483782",{"2":{"147":1}}],["24817711",{"2":{"147":1}}],["24602139039425e",{"2":{"296":1}}],["2460",{"2":{"263":1}}],["246007",{"2":{"261":1}}],["2460938",{"2":{"261":1}}],["2469",{"2":{"265":1}}],["24691\\taccuracy",{"2":{"205":1}}],["24695547",{"2":{"147":1}}],["24626082",{"2":{"147":1}}],["245856f",{"2":{"294":1}}],["2457",{"2":{"230":1}}],["2459",{"2":{"200":1}}],["24539067",{"2":{"147":1}}],["24503033",{"2":{"147":1}}],["2420164f",{"2":{"294":1}}],["24203484",{"2":{"147":1}}],["2421",{"2":{"291":1}}],["242",{"2":{"280":3}}],["242777",{"2":{"147":1}}],["2427716
f",{"2":{"123":1}}],["24238436",{"2":{"147":1}}],["242468",{"2":{"147":1}}],["24900620158046e",{"2":{"296":1}}],["2495",{"2":{"291":1}}],["2497",{"2":{"280":1}}],["24978368",{"2":{"147":1}}],["2492",{"2":{"265":1}}],["24991",{"2":{"147":1}}],["24997774",{"2":{"147":1}}],["24982567",{"2":{"147":1}}],["2445",{"2":{"291":1}}],["244",{"2":{"280":6}}],["2440945",{"2":{"270":2}}],["2449",{"2":{"254":2}}],["24490844",{"2":{"147":1}}],["24472847105479764",{"2":{"77":1}}],["244728",{"2":{"50":10,"77":1}}],["24",{"2":{"79":2,"132":2,"147":2,"205":2,"207":2,"215":2,"235":1,"239":3,"246":3,"247":3,"254":2,"256":2,"261":2,"262":2,"275":2,"280":30,"281":2,"287":1,"289":2}}],["240314249975422e",{"2":{"296":1}}],["240347f",{"2":{"294":1}}],["2409",{"2":{"265":1}}],["240960",{"2":{"261":1}}],["2401",{"2":{"263":1}}],["2401375",{"2":{"147":1}}],["24001",{"2":{"254":1}}],["24079",{"2":{"147":1}}],["24045505",{"2":{"147":1}}],["24046357",{"2":{"147":1}}],["2408",{"2":{"263":1}}],["24080847",{"2":{"147":1}}],["24086508",{"2":{"147":1}}],["24063988",{"2":{"147":1}}],["240",{"2":{"46":3}}],["21\\ttrain",{"2":{"280":1}}],["2174847657902704e",{"2":{"296":1}}],["217068f",{"2":{"294":1}}],["2175396f",{"2":{"294":1}}],["217241f",{"2":{"294":1}}],["2171",{"2":{"291":1}}],["217756484157493e",{"2":{"296":1}}],["217783342798653e",{"2":{"296":1}}],["2177",{"2":{"287":1}}],["2178",{"2":{"263":1}}],["2173",{"2":{"263":1}}],["21799661",{"2":{"147":1}}],["21796629",{"2":{"147":1}}],["21s",{"2":{"214":1}}],["21384",{"2":{"291":1}}],["213",{"2":{"280":6}}],["2133",{"2":{"263":1}}],["2139",{"2":{"263":1}}],["21398924",{"2":{"147":1}}],["213618",{"2":{"147":1}}],["21301",{"2":{"147":1}}],["2143551987492544e",{"2":{"296":1}}],["214773149711706e",{"2":{"296":1}}],["2147716f",{"2":{"294":1}}],["2147609",{"2":{"147":1}}],["2146323f",{"2":{"294":1}}],["21469435",{"2":{"147":1}}],["21412",{"2":{"269":1}}],["21496",{"2":{"200":1}}],["2158313f",{"2":{"294":1}}],["215",{"2":{"280":3}}],["215208935135869e",{"2":{"29
6":1}}],["2152",{"2":{"280":1}}],["21524015",{"2":{"96":1}}],["2153",{"2":{"280":1}}],["215142383746798e",{"2":{"296":1}}],["215129",{"2":{"280":1}}],["215124",{"2":{"280":1}}],["215121",{"2":{"280":1}}],["215118",{"2":{"280":1}}],["215116",{"2":{"280":1}}],["215113",{"2":{"280":1}}],["215110",{"2":{"280":1}}],["215107",{"2":{"280":1}}],["215104",{"2":{"280":1}}],["215101",{"2":{"280":1}}],["215087",{"2":{"280":1}}],["21501\\taccuracy",{"2":{"205":1}}],["2154",{"2":{"230":1}}],["21542016",{"2":{"147":1}}],["212244f",{"2":{"294":1}}],["21248710499469e",{"2":{"296":1}}],["2124",{"2":{"287":1}}],["21254",{"2":{"291":1}}],["2125",{"2":{"280":2}}],["2125984",{"2":{"270":2}}],["2120",{"2":{"263":1}}],["2128906",{"2":{"261":1}}],["212",{"2":{"230":1}}],["21271591",{"2":{"147":1}}],["21295139",{"2":{"147":1}}],["21215f",{"2":{"96":1}}],["2101947292805683e",{"2":{"296":1}}],["21014",{"2":{"280":2}}],["2104384f",{"2":{"294":1}}],["210988f",{"2":{"294":1}}],["21096227",{"2":{"147":1}}],["2108",{"2":{"265":1}}],["2108695",{"2":{"147":1}}],["21031",{"2":{"261":1}}],["2100",{"2":{"291":1}}],["21001",{"2":{"254":1}}],["21004838",{"2":{"147":1}}],["210",{"2":{"238":18}}],["21062",{"2":{"200":1}}],["219133687867678e",{"2":{"296":1}}],["219101f",{"2":{"294":1}}],["2191",{"2":{"291":1}}],["2197981041108443",{"2":{"189":1}}],["2192001",{"2":{"166":1}}],["21957569",{"2":{"147":1}}],["21983564",{"2":{"123":1}}],["2112140326560686e",{"2":{"296":1}}],["21121861",{"2":{"147":1}}],["2116",{"2":{"291":1}}],["21161",{"2":{"205":1}}],["211941",{"2":{"280":1}}],["211936",{"2":{"280":1}}],["211933",{"2":{"280":1}}],["211930",{"2":{"280":1}}],["211928",{"2":{"280":1}}],["211925",{"2":{"280":1}}],["211922",{"2":{"280":1}}],["211919",{"2":{"280":1}}],["211916",{"2":{"280":1}}],["211913",{"2":{"280":1}}],["211900",{"2":{"280":1}}],["2114",{"2":{"265":1}}],["2111",{"2":{"265":1}}],["2111297",{"2":{"166":1}}],["211",{"2":{"238":6}}],["211378",{"2":{"147":1}}],["21178308",{"2":{"147":1}}],["218301",{"2"
:{"280":1}}],["21833563",{"2":{"147":1}}],["218296",{"2":{"280":1}}],["218293",{"2":{"280":1}}],["218291",{"2":{"280":1}}],["218288",{"2":{"280":1}}],["218285",{"2":{"280":1}}],["218282",{"2":{"280":1}}],["218279",{"2":{"280":1}}],["218276",{"2":{"280":1}}],["218273",{"2":{"280":1}}],["218258",{"2":{"280":1}}],["2180",{"2":{"263":1}}],["21886098",{"2":{"147":1}}],["21899778",{"2":{"147":1}}],["21854877",{"2":{"124":1}}],["2164",{"2":{"291":1}}],["2165",{"2":{"291":1}}],["216",{"2":{"280":7}}],["216841",{"2":{"261":1}}],["21686329",{"2":{"147":1}}],["2169",{"2":{"265":1}}],["2169s\\ttraining",{"2":{"235":1}}],["21690917",{"2":{"197":1}}],["2167969",{"2":{"261":1}}],["21678\\taccuracy",{"2":{"205":1}}],["21672015",{"2":{"143":1}}],["21636114",{"2":{"147":1}}],["21634498",{"2":{"147":1}}],["21624334",{"2":{"123":1}}],["21608135",{"2":{"96":1}}],["21",{"2":{"46":1,"69":1,"79":2,"81":8,"147":2,"200":3,"205":2,"246":3,"254":2,"261":4,"265":2,"269":5,"280":20,"291":2}}],["20\\ttrain",{"2":{"280":1}}],["20s",{"2":{"214":2}}],["2093268261896655e",{"2":{"296":1}}],["2093402",{"2":{"147":1}}],["209584697112226e",{"2":{"296":1}}],["2094563973928504e",{"2":{"296":1}}],["2094564f",{"2":{"294":1}}],["2096217f",{"2":{"294":1}}],["209628f",{"2":{"294":1}}],["2090",{"2":{"287":1}}],["20977046",{"2":{"270":1}}],["2099609",{"2":{"261":1}}],["209906",{"2":{"147":1}}],["209",{"2":{"254":1,"280":4}}],["208354429092567e",{"2":{"296":1}}],["208362415278059e",{"2":{"296":1}}],["208378",{"2":{"147":1}}],["208456328824419e",{"2":{"296":1}}],["2084233f",{"2":{"294":1}}],["2084102f",{"2":{"294":1}}],["2080675f",{"2":{"294":1}}],["208709",{"2":{"280":1}}],["208705",{"2":{"280":1}}],["208702",{"2":{"280":1}}],["208699",{"2":{"280":1}}],["208696",{"2":{"280":1}}],["208694",{"2":{"280":1}}],["208691",{"2":{"280":1}}],["208687",{"2":{"280":1}}],["208684",{"2":{"280":1}}],["208681",{"2":{"280":1}}],["208666",{"2":{"280":1}}],["20862025",{"2":{"147":1}}],["20824",{"2":{"269":1}}],["2085667166519694e",{
"2":{"296":1}}],["2085683f",{"2":{"294":1}}],["2085",{"2":{"230":1}}],["20881107",{"2":{"143":1}}],["2042423958752527e",{"2":{"296":1}}],["20415725f",{"2":{"294":1}}],["20489",{"2":{"291":1}}],["20487879",{"2":{"147":1}}],["20472442",{"2":{"270":2}}],["20497",{"2":{"269":1}}],["20495",{"2":{"269":1}}],["20460",{"2":{"269":1}}],["20461",{"2":{"200":1}}],["2043",{"2":{"265":1}}],["20436017",{"2":{"147":1}}],["204075",{"2":{"261":1}}],["20443262487065e",{"2":{"296":1}}],["2044787f",{"2":{"294":1}}],["2044s\\ttraining",{"2":{"235":1}}],["20448",{"2":{"147":1}}],["20457",{"2":{"200":1}}],["2067",{"2":{"291":1}}],["206",{"2":{"280":4}}],["20623",{"2":{"269":1}}],["20631",{"2":{"269":1}}],["20636114",{"2":{"147":1}}],["20685",{"2":{"269":1}}],["2068",{"2":{"265":1}}],["20610",{"2":{"200":1}}],["20617373",{"2":{"147":1}}],["206476",{"2":{"186":1}}],["20668766",{"2":{"165":1}}],["206943",{"2":{"147":1}}],["20694472",{"2":{"147":1}}],["20656037",{"2":{"147":1}}],["2053984f",{"2":{"294":1}}],["205903",{"2":{"287":1}}],["205468",{"2":{"280":1}}],["205463",{"2":{"280":1}}],["205461",{"2":{"280":1}}],["205458",{"2":{"280":1}}],["205455",{"2":{"280":1}}],["205452",{"2":{"280":1}}],["205449",{"2":{"280":1}}],["205447",{"2":{"280":1}}],["205444",{"2":{"280":1}}],["205441",{"2":{"280":1}}],["205427",{"2":{"280":1}}],["20548",{"2":{"186":1}}],["205",{"2":{"280":3}}],["20521",{"2":{"200":1}}],["20586",{"2":{"200":1}}],["20585205",{"2":{"147":1}}],["20508",{"2":{"200":1}}],["205559",{"2":{"147":1}}],["205642078642352e",{"2":{"296":1}}],["2056",{"2":{"230":1}}],["20567945",{"2":{"147":1}}],["20561251",{"2":{"147":1}}],["207223432006019e",{"2":{"296":1}}],["2072923f",{"2":{"294":1}}],["20780",{"2":{"269":1}}],["20779805",{"2":{"147":1}}],["20753737",{"2":{"147":1}}],["207107",{"2":{"82":2}}],["202203",{"2":{"280":1}}],["2021",{"2":{"290":1}}],["202198",{"2":{"280":1}}],["202195",{"2":{"280":1}}],["202192",{"2":{"280":1}}],["202190",{"2":{"280":1}}],["202187",{"2":{"280":1}}],["202184",{"2
":{"280":1}}],["202181",{"2":{"280":1}}],["202178",{"2":{"280":1}}],["202175",{"2":{"280":1}}],["202160",{"2":{"280":1}}],["2027",{"2":{"263":1}}],["20277858",{"2":{"197":1}}],["202443859769488e",{"2":{"296":1}}],["202444f",{"2":{"294":1}}],["2024",{"2":{"198":1,"207":1,"215":1,"222":1,"239":2,"247":2,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["2026148f",{"2":{"294":1}}],["2026",{"2":{"230":1}}],["20266858",{"2":{"147":1}}],["20265023",{"2":{"147":1}}],["20206891",{"2":{"147":1}}],["20296505",{"2":{"147":1}}],["2025",{"2":{"97":1,"205":1,"214":1,"254":8,"261":1,"280":114,"287":3}}],["2023888743800232e",{"2":{"296":1}}],["2023",{"2":{"90":2}}],["2038",{"2":{"291":1}}],["20385",{"2":{"269":1}}],["20346",{"2":{"147":1}}],["20343f",{"2":{"96":1}}],["2031",{"2":{"265":1}}],["203184e",{"2":{"220":1}}],["20310251",{"2":{"147":1}}],["20316577",{"2":{"147":1}}],["20314303",{"2":{"96":1}}],["2008837f",{"2":{"294":1}}],["2008",{"2":{"291":1}}],["2008716",{"2":{"147":1}}],["2004",{"2":{"280":2}}],["2001",{"2":{"197":1,"254":1}}],["20033634",{"2":{"147":1}}],["20025302",{"2":{"147":1}}],["2002995",{"2":{"147":1}}],["20071083",{"2":{"147":1}}],["20001",{"2":{"254":1}}],["20001543",{"2":{"197":1}}],["2000",{"2":{"84":1,"280":3,"287":1}}],["200",{"2":{"84":1,"124":1,"238":18}}],["2009391207179958e",{"2":{"296":1}}],["2009",{"2":{"67":1}}],["20061705",{"2":{"273":1}}],["2006",{"2":{"50":2,"86":1}}],["20⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠀",{"2":{"67":1}}],["20",{"2":{"39":1,"46":3,"67":1,"76":2,"79":2,"84":3,"96":1,"97":1,"133":1,"147":3,"165":1,"189":2,"197":3,"198":1,"200":2,"205":3,"207":1,"214":2,"215":1,"222":1,"233":5,"238":24,"239":1,"246":3,"247":1,"254":10,"256":1,"258":1,"261":3,"262":1,"265":7,"267":1,"269":1,"275":1,"280":116,"281":1,"287":7,"289":1,"296":1,"298":1}}],["2013121395275781e",{"2":{"296":1}}],["201",{"2":{"220":1,"238":18}}],["20192719",{"2":{"147":1}}],["2012",{"2":{"86":1}}],["20114",{"2":{"189":1}}],["201145",{"2":{"189":2}}],["2011",{"
2":{"67":1}}],["2018",{"2":{"65":1}}],["2017526",{"2":{"147":1}}],["2017",{"2":{"50":2,"63":1,"263":1,"291":1}}],["2016109f",{"2":{"294":1}}],["2016",{"2":{"46":1,"50":2,"65":2,"81":1}}],["2014",{"2":{"15":1,"63":1,"241":1}}],["2015",{"2":{"15":2,"65":1}}],["20101166",{"2":{"147":1}}],["2010",{"2":{"15":3}}],["23\\ttrain",{"2":{"280":1}}],["2398322065749257e",{"2":{"296":1}}],["23987544",{"2":{"147":1}}],["23955515f",{"2":{"294":1}}],["2395",{"2":{"265":1}}],["2390311f",{"2":{"294":1}}],["2390",{"2":{"265":1}}],["23912",{"2":{"205":1}}],["23992883",{"2":{"197":1}}],["23921251",{"2":{"165":1}}],["238393f",{"2":{"294":1}}],["238",{"2":{"269":1,"280":2}}],["238232",{"2":{"147":1}}],["238470677249613e",{"2":{"296":1}}],["23847201",{"2":{"147":1}}],["238426",{"2":{"147":1}}],["2348026154421175e",{"2":{"296":1}}],["2348457f",{"2":{"294":1}}],["2345",{"2":{"291":1}}],["234501",{"2":{"280":1}}],["234",{"2":{"280":6}}],["234497",{"2":{"280":1}}],["234494",{"2":{"280":1}}],["234491",{"2":{"280":1}}],["234488",{"2":{"280":1}}],["234486",{"2":{"280":1}}],["234483",{"2":{"280":1}}],["234480",{"2":{"280":1}}],["234477",{"2":{"280":1}}],["234474",{"2":{"280":1}}],["234459",{"2":{"280":1}}],["234211",{"2":{"147":1}}],["23402624",{"2":{"123":1}}],["2314363f",{"2":{"294":1}}],["231226",{"2":{"280":1}}],["231221",{"2":{"280":1}}],["231219",{"2":{"280":1}}],["231216",{"2":{"280":1}}],["231213",{"2":{"280":1}}],["231209",{"2":{"280":1}}],["231206",{"2":{"280":1}}],["231203",{"2":{"280":1}}],["231200",{"2":{"280":1}}],["231197",{"2":{"280":1}}],["231171",{"2":{"280":1}}],["231",{"2":{"263":1,"280":7}}],["23159832",{"2":{"147":1}}],["23162602",{"2":{"147":1}}],["23192994",{"2":{"147":1}}],["23076019873696e",{"2":{"296":1}}],["230032f",{"2":{"294":1}}],["23001",{"2":{"254":1}}],["2308",{"2":{"265":1}}],["23084445",{"2":{"147":1}}],["230",{"2":{"202":1,"219":2,"232":1,"238":18,"251":1,"280":10}}],["23095",{"2":{"189":1}}],["230954",{"2":{"189":2}}],["23037",{"2":{"147":1}}],["23047f",{"2":{
"294":1}}],["2304493",{"2":{"147":1}}],["23049244",{"2":{"147":1}}],["2305717f",{"2":{"123":1}}],["2324022116207556e",{"2":{"296":1}}],["23242",{"2":{"165":1}}],["2321945864940837e",{"2":{"296":1}}],["2321917f",{"2":{"294":1}}],["232128",{"2":{"261":1}}],["2325634f",{"2":{"294":1}}],["2325",{"2":{"291":1}}],["23258851",{"2":{"147":1}}],["232",{"2":{"280":11}}],["23224601",{"2":{"147":1}}],["23234344",{"2":{"147":1}}],["235264f",{"2":{"294":1}}],["2356",{"2":{"291":1}}],["23564403",{"2":{"147":1}}],["235",{"2":{"280":15}}],["23531934932611e",{"2":{"296":1}}],["2353",{"2":{"265":1}}],["23534852",{"2":{"96":1}}],["23598626",{"2":{"147":1}}],["23552088",{"2":{"147":1}}],["233",{"2":{"280":9}}],["2334",{"2":{"265":1}}],["2337",{"2":{"241":1}}],["23372321",{"2":{"147":1}}],["2333",{"2":{"230":1}}],["2335",{"2":{"200":1}}],["23352",{"2":{"147":1}}],["23389685",{"2":{"147":1}}],["23395808",{"2":{"147":1}}],["23392133",{"2":{"123":1}}],["23325463",{"2":{"143":1}}],["2369258851732236e",{"2":{"296":1}}],["236",{"2":{"280":2}}],["2365",{"2":{"280":1}}],["23622048",{"2":{"270":2}}],["23608004",{"2":{"147":1}}],["23669082",{"2":{"147":1}}],["2361472",{"2":{"143":1}}],["2363781",{"2":{"135":1}}],["2378437805872703e",{"2":{"296":1}}],["2378454f",{"2":{"294":1}}],["23722",{"2":{"261":1}}],["23710957",{"2":{"155":1}}],["23715453",{"2":{"123":1}}],["2379414",{"2":{"147":1}}],["23791258",{"2":{"147":1}}],["23735927",{"2":{"147":1}}],["23707482",{"2":{"147":1}}],["2370692",{"2":{"147":1}}],["23703608",{"2":{"123":1}}],["23",{"2":{"39":2,"79":2,"81":8,"97":1,"147":2,"205":2,"214":2,"235":3,"246":2,"254":80,"261":3,"263":1,"265":1,"280":565,"287":22,"291":1}}],["2830536f",{"2":{"294":1}}],["2834571",{"2":{"123":1}}],["28433\\taccuracy",{"2":{"205":1}}],["284967",{"2":{"166":1}}],["284991",{"2":{"147":1}}],["2842",{"2":{"165":1}}],["281052486527042e",{"2":{"296":1}}],["2810528",{"2":{"147":1}}],["28131",{"2":{"280":1}}],["281978",{"2":{"270":1}}],["2812500",{"2":{"261":1}}],["28114",{"2":{
"147":1}}],["2864484233419668e",{"2":{"296":1}}],["2864484f",{"2":{"294":1}}],["286431f",{"2":{"294":1}}],["2864517",{"2":{"166":1}}],["2863756539376833e",{"2":{"296":1}}],["2863",{"2":{"280":1}}],["2865",{"2":{"241":1}}],["2869913410456831",{"2":{"171":2}}],["28691",{"2":{"123":1}}],["286642",{"2":{"147":1}}],["2857",{"2":{"280":4}}],["285777",{"2":{"147":1}}],["28524",{"2":{"147":1}}],["285972",{"2":{"147":1}}],["2879865f",{"2":{"294":1}}],["287",{"2":{"263":1,"280":1}}],["2871094",{"2":{"261":1}}],["28766",{"2":{"147":1}}],["28780195",{"2":{"123":1}}],["288978",{"2":{"287":1}}],["2889",{"2":{"265":1}}],["2882814528585292e",{"2":{"296":1}}],["2882",{"2":{"263":1}}],["288641",{"2":{"147":1}}],["2880513",{"2":{"147":1}}],["28×28×1×4",{"2":{"147":1}}],["289895f",{"2":{"294":1}}],["289623719097284e",{"2":{"296":1}}],["2896",{"2":{"265":1}}],["2896144",{"2":{"165":1}}],["289039",{"2":{"147":1}}],["28952244",{"2":{"147":1}}],["28993338",{"2":{"147":1}}],["282782211938267e",{"2":{"296":1}}],["2826",{"2":{"230":1}}],["28209427",{"2":{"147":1}}],["2821546",{"2":{"147":1}}],["28283697",{"2":{"123":1}}],["28288874",{"2":{"96":1}}],["280295470986105e",{"2":{"296":1}}],["2802190113243786e",{"2":{"296":1}}],["280504f",{"2":{"294":1}}],["280975f",{"2":{"294":1}}],["280496f",{"2":{"294":1}}],["2801026",{"2":{"270":1}}],["2801",{"2":{"265":1}}],["2807",{"2":{"265":1}}],["28001",{"2":{"254":1}}],["2806",{"2":{"254":1,"291":1}}],["2806326",{"2":{"147":1}}],["280865e",{"2":{"220":1}}],["280846",{"2":{"147":1}}],["28",{"2":{"37":4,"147":10,"211":2,"214":2,"235":3,"238":2,"241":1,"242":4,"246":2,"261":2,"265":3,"280":27,"291":1}}],["2=dense",{"2":{"23":1}}],["2d",{"0":{"248":1},"1":{"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1,"256":1},"2":{"15":2,"40":3,"41":2,"42":3,"80":2,"89":4,"213":2,"248":1}}],["2",{"2":{"5":5,"15":5,"19":2,"22":5,"23":4,"25":8,"35":4,"37":4,"39":30,"40":9,"42":15,"43":11,"44":2,"45":15,"46":6,"47":3,"50":22,"56":13,"67":43,"69":1,"70":1,"76":3,"77":1
0,"78":4,"79":51,"80":11,"81":78,"82":11,"83":14,"84":13,"85":14,"87":2,"88":3,"89":19,"96":34,"97":5,"119":2,"123":39,"124":5,"126":21,"127":41,"132":1,"133":1,"143":3,"144":2,"145":2,"146":4,"147":58,"149":1,"150":1,"153":3,"154":12,"155":2,"157":6,"158":6,"163":7,"164":4,"165":14,"166":13,"170":1,"171":23,"173":5,"186":8,"189":20,"191":4,"192":3,"194":1,"195":1,"198":1,"200":11,"201":5,"202":1,"203":1,"205":3,"206":1,"207":3,"210":1,"211":18,"213":1,"214":2,"215":3,"218":6,"220":13,"221":5,"222":1,"230":14,"231":1,"232":1,"233":1,"235":5,"237":1,"238":67,"239":5,"241":19,"242":1,"244":1,"246":4,"247":5,"252":4,"253":10,"255":4,"256":3,"258":13,"259":1,"260":3,"261":3,"262":3,"263":51,"264":2,"265":34,"267":1,"269":10,"270":14,"271":1,"273":6,"274":5,"275":3,"278":1,"280":36,"281":3,"283":4,"284":2,"285":4,"288":1,"289":3,"291":69,"292":50,"293":8,"294":115,"296":119,"298":1}}],["v=v",{"2":{"252":1}}],["vtav",{"2":{"167":1}}],["vvt",{"2":{"167":1}}],["v∈rd",{"2":{"167":1}}],["vscodeserver",{"2":{"261":1}}],["vscode",{"2":{"261":3}}],["vs",{"2":{"151":1,"220":1}}],["v0",{"2":{"52":5,"90":1,"114":1,"236":1}}],["voila",{"2":{"126":1}}],["volterra",{"2":{"218":2}}],["vol",{"2":{"50":1}}],["volumetric",{"2":{"50":1}}],["vocabulary",{"2":{"44":2}}],["vcat",{"2":{"43":1,"201":1,"261":1,"278":1,"292":1}}],["vae",{"0":{"257":1},"1":{"258":1,"259":1,"260":1,"261":1,"262":1}}],["vanilla",{"2":{"123":1}}],["var=dense",{"2":{"258":1}}],["various",{"2":{"89":1,"93":1,"98":1,"118":1,"216":1}}],["variational",{"2":{"257":1}}],["variationalhiddendropout",{"2":{"41":5}}],["variants",{"2":{"176":1}}],["variance",{"2":{"46":6,"63":2,"65":8,"265":1}}],["variable",{"2":{"56":2,"114":1,"124":1,"265":1}}],["variables",{"2":{"49":2,"56":1,"89":1,"93":2,"96":1,"251":1,"265":1,"292":1}}],["var",{"2":{"46":7,"56":1,"65":5,"87":1,"126":2,"127":2,"133":3,"143":1,"146":1,"163":2,"202":1,"205":2,"219":2,"232":1,"238":24,"251":1,"258":1,"261":1,"280":2}}],["validate",{"2":{"205":1}}],["validated"
,{"2":{"1":1}}],["validation",{"2":{"201":1,"205":51}}],["valid",{"2":{"46":1,"153":1,"182":1}}],["val",{"2":{"10":2,"24":3,"37":1,"39":7,"41":1,"50":5,"51":5,"52":4,"63":4,"65":2,"79":3,"87":2,"126":1,"127":2,"143":2,"146":2,"201":4,"202":1,"203":1,"205":2,"219":1,"238":13,"253":8,"255":3,"280":20}}],["valued",{"2":{"194":1}}],["value>",{"2":{"179":3}}],["value",{"2":{"3":1,"8":1,"10":2,"24":1,"30":2,"39":1,"43":5,"46":6,"50":6,"51":1,"52":3,"56":6,"63":1,"65":4,"71":1,"76":3,"78":1,"79":1,"82":1,"84":1,"88":1,"89":5,"127":1,"164":2,"197":16,"201":1,"251":1,"254":4,"265":1,"266":4}}],["valuestorage",{"2":{"238":3}}],["values",{"2":{"3":2,"28":2,"47":1,"49":4,"50":1,"52":1,"53":4,"79":6,"84":4,"85":6,"86":1,"89":3,"258":1,"273":1}}],["v",{"2":{"18":2,"46":1,"50":1,"51":3,"76":4,"107":1,"168":5,"169":5,"170":10,"194":4,"195":2,"196":1,"251":4,"252":10,"292":21}}],["vjp",{"0":{"18":1},"2":{"18":1,"168":7,"170":16,"193":3,"196":4,"213":2,"274":3}}],["visualize",{"2":{"266":1,"270":1,"283":1,"297":1}}],["visualization",{"0":{"266":1}}],["visualizing",{"0":{"255":1,"288":1,"297":1}}],["vision",{"2":{"15":2,"50":4,"65":1}}],["virtual",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["vi",{"2":{"167":1}}],["vitj",{"2":{"167":1}}],["video",{"2":{"81":1}}],["victor",{"2":{"46":1,"65":1}}],["viewaxis",{"2":{"238":68}}],["views",{"2":{"232":2,"252":1,"292":1}}],["view",{"2":{"45":4,"89":3,"191":1,"260":1,"265":1,"266":1}}],["via",{"0":{"147":1},"2":{"3":1,"5":2,"6":1,"7":1,"15":1,"26":1,"49":2,"54":2,"64":1,"93":2,"98":2,"114":1,"115":1,"117":1,"122":1,"123":1,"137":1,"140":1,"148":1,"171":1,"192":1,"194":4,"195":1,"265":1}}],["vec",{"2":{"83":6,"170":2,"202":1,"203":1,"232":4,"243":1,"253":1,"255":1}}],["vectorizationbase",{"2":{"291":1}}],["vectorization",{"0":{"185":1},"2":{"66":1,"177":1,"185":1}}],["vectorize",{"2":{"59":1}}],["vectors",{"2":{"43":1,"44":2,"46":3,"77":1,"83":2,"84":1,"167":1,"192":6,"263":1}
}],["vector",{"0":{"168":1,"169":1,"195":1,"196":1},"2":{"15":2,"18":9,"25":2,"43":16,"44":5,"50":3,"56":2,"61":1,"64":1,"67":1,"69":1,"77":1,"83":3,"84":7,"89":7,"93":1,"96":9,"133":2,"162":2,"167":4,"168":1,"169":1,"171":3,"189":4,"192":2,"193":5,"194":1,"195":3,"196":1,"197":1,"206":1,"219":1,"232":3,"235":1,"258":1,"259":1,"260":1,"265":4,"266":1,"285":1,"292":1}}],["vendor",{"2":{"66":1}}],["vedaldi",{"2":{"46":1,"65":1}}],["verbatim",{"2":{"56":1}}],["verified",{"2":{"170":1}}],["verification",{"2":{"8":1}}],["verify",{"2":{"52":1,"163":1,"164":1,"165":1,"166":1,"170":1}}],["versioninfo",{"2":{"198":3,"207":3,"215":3,"222":3,"239":3,"247":3,"256":3,"262":3,"267":3,"275":3,"281":3,"289":3,"298":3}}],["versioning",{"2":{"163":2}}],["versions",{"2":{"114":1,"177":1,"185":2}}],["version",{"2":{"22":1,"49":2,"52":1,"56":1,"67":2,"72":3,"81":4,"84":2,"89":1,"90":1,"95":1,"107":1,"131":1,"153":1,"168":1,"198":1,"207":1,"215":1,"222":1,"228":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["very",{"2":{"7":1,"44":1,"67":1,"93":1,"96":1,"123":2,"125":1,"126":1,"127":2,"157":1,"158":1,"188":1,"189":1,"292":1,"296":1}}],["v1",{"0":{"102":1},"1":{"103":1,"104":1,"105":1,"106":1,"107":1,"108":1,"109":1,"110":1,"111":1,"112":1,"113":1,"114":1,"115":1,"116":1,"117":1},"2":{"7":1,"40":2,"52":5,"95":4,"102":2,"105":1}}],["||",{"2":{"89":4,"97":1,"124":1,"197":1,"254":1,"261":2,"274":1,"287":2}}],["|>",{"2":{"77":1,"83":9,"89":2,"96":2,"97":2,"123":5,"124":2,"147":2,"149":1,"150":3,"171":2,"178":2,"197":1,"205":3,"213":2,"219":1,"220":2,"221":1,"233":2,"235":1,"243":2,"246":3,"254":3,"259":1,"260":3,"261":4,"273":1,"274":1,"280":2,"285":1,"287":3,"294":1}}],["|x|",{"2":{"67":1}}],["|y^−y|",{"2":{"50":1}}],["|y−y^|−0",{"2":{"50":1}}],["|y−y^|≤δδ∗",{"2":{"50":1}}],["|p|−di×",{"2":{"40":1,"42":3}}],["|",{"2":{"3":4,"112":1,"126":5,"127":13,"149":1,"255":1}}],["x~",{"2":{"266":1}}],["x~|θ",{"2":{"266":1}}],["x~|x",{"2":{"266":1}}],["xml2",{"2":{"263":1,
"291":1}}],["xz",{"2":{"263":1,"291":1}}],["xorg",{"2":{"263":8,"291":8}}],["xoshiro",{"2":{"15":3,"25":1,"56":5,"126":1,"143":2,"171":1,"173":1,"192":3,"246":1,"261":1}}],["x∈",{"2":{"253":1}}],["xyt",{"2":{"252":17,"253":18,"254":10}}],["xi∈rn",{"2":{"197":1}}],["xi",{"2":{"197":2}}],["xᵢ",{"2":{"124":4}}],["xᵢ^p",{"2":{"78":1}}],["xdev",{"2":{"123":5,"124":3,"249":1,"254":3,"257":1,"261":4,"273":2,"274":3,"276":1,"280":2,"282":1,"287":2}}],["xrot",{"2":{"89":3}}],["xlabel=",{"2":{"255":1,"264":1,"270":1,"274":1,"283":1,"293":1,"294":1,"297":2}}],["xla",{"0":{"74":1,"99":1},"2":{"97":4,"99":1,"123":2,"200":3,"205":3,"209":1,"214":3,"254":17,"261":3,"269":3,"280":229,"287":7}}],["xlogy",{"2":{"51":1}}],["xlogx",{"2":{"51":1}}],["x=",{"2":{"70":1}}],["xslt",{"2":{"263":1,"291":1}}],["xs",{"2":{"66":2,"67":2,"77":1,"255":6,"264":1,"265":3}}],["xt0s",{"2":{"264":5}}],["xt1s",{"2":{"264":5}}],["xtrans",{"2":{"263":1,"291":1}}],["xt",{"2":{"65":2,"134":1,"135":1}}],["x86",{"2":{"64":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["x3",{"2":{"39":1}}],["x3c",{"2":{"3":3,"4":2,"7":2,"28":3,"40":4,"43":2,"62":1,"64":1,"81":1,"83":2,"89":5,"132":2,"134":2,"147":111,"153":1,"154":1,"158":1,"179":6,"194":1,"202":1,"220":1,"232":2,"236":2,"238":2,"251":1,"254":2,"258":3,"259":1,"280":1,"284":5,"285":2,"292":2}}],["x2s",{"2":{"264":8}}],["x2",{"2":{"39":3,"264":2,"266":18}}],["x264",{"2":{"263":1,"291":1}}],["x265",{"2":{"263":1,"291":1}}],["x26",{"2":{"25":6,"90":1,"124":2,"144":2,"145":4,"149":1,"198":4,"207":4,"215":4,"220":2,"222":4,"239":4,"247":4,"254":4,"256":4,"261":2,"262":4,"267":4,"275":4,"281":4,"283":2,"285":2,"287":4,"289":4,"292":4,"298":4}}],["x1s",{"2":{"264":8}}],["x1",{"2":{"39":3,"264":2,"266":18}}],["xavier",{"2":{"15":4}}],["x",{"2":{"3":12,"5":12,"7":3,"8":12,"11":2,"15":7,"18":6,"19":10,"35":2,"37":2,"39":24,"40":23,"41":9,"42":27,"43":20,"44":15,"45":30,"46":13,"47":14,"50":2,"51":18,"52":16,
"55":2,"56":31,"59":10,"60":2,"61":10,"62":3,"63":8,"64":3,"65":26,"66":1,"67":151,"70":5,"76":3,"77":14,"78":21,"79":26,"80":17,"81":28,"83":3,"85":7,"87":25,"89":43,"96":4,"97":14,"123":14,"124":3,"126":4,"127":10,"130":3,"132":9,"133":2,"134":9,"138":3,"143":3,"146":2,"147":10,"149":4,"150":9,"153":6,"154":5,"155":7,"157":6,"158":10,"162":6,"163":10,"164":12,"166":8,"168":2,"169":2,"170":9,"171":3,"173":3,"176":3,"178":5,"189":7,"190":2,"191":4,"194":7,"195":5,"196":1,"197":11,"201":6,"202":8,"203":8,"204":2,"205":4,"210":6,"212":2,"213":4,"218":4,"231":6,"232":10,"234":2,"235":2,"236":2,"238":13,"242":4,"243":2,"245":2,"246":2,"253":7,"255":1,"258":12,"260":7,"261":6,"264":1,"265":5,"266":4,"270":9,"274":9,"278":10,"279":2,"283":5,"284":13,"285":14,"286":4,"287":4,"292":6,"296":2}}],["3\\ttrain",{"2":{"280":1}}],["3rd",{"0":{"228":1},"2":{"165":1}}],["375632477959769e",{"2":{"296":1}}],["3753422f",{"2":{"294":1}}],["3759",{"2":{"263":1,"291":1}}],["375862",{"2":{"147":1}}],["37486005",{"2":{"270":1}}],["374",{"2":{"263":1}}],["3742405e",{"2":{"197":1}}],["3700787",{"2":{"270":2}}],["37001",{"2":{"254":1}}],["3708",{"2":{"265":1}}],["370",{"2":{"263":1}}],["37037",{"2":{"235":3}}],["3707023",{"2":{"163":1}}],["373168998214119e",{"2":{"296":1}}],["373516f",{"2":{"294":1}}],["3733468f",{"2":{"294":1}}],["37367159641918e",{"2":{"296":1}}],["3736",{"2":{"265":1}}],["3734",{"2":{"265":1}}],["373",{"2":{"230":1,"280":4}}],["373847",{"2":{"147":1}}],["37629542520897e",{"2":{"296":1}}],["376328f",{"2":{"294":1}}],["376386",{"2":{"166":1}}],["376",{"2":{"220":1,"263":3,"280":2}}],["37246",{"2":{"205":1}}],["3729237",{"2":{"165":1}}],["37",{"2":{"200":1,"235":5,"237":1,"246":2,"261":2,"269":1,"280":10,"287":1}}],["3711",{"2":{"265":1}}],["371",{"2":{"200":1,"241":1}}],["3719285",{"2":{"166":1}}],["3781595",{"2":{"270":1}}],["378",{"2":{"200":1}}],["3792036701308052e",{"2":{"296":1}}],["3794807f",{"2":{"294":1}}],["379",{"2":{"263":1}}],["37979",{"2":{"147":1}}],["3798853",
{"2":{"147":1}}],["377",{"2":{"263":1}}],["37745",{"2":{"147":1}}],["3777897",{"2":{"147":1}}],["37783703",{"2":{"124":1}}],["377829f",{"2":{"123":1}}],["3198372f",{"2":{"294":1}}],["3196",{"2":{"263":1}}],["3138",{"2":{"291":1}}],["3136",{"2":{"280":1}}],["3182670583428215e",{"2":{"296":1}}],["3182307f",{"2":{"294":1}}],["3181",{"2":{"265":1,"291":1}}],["3189",{"2":{"263":1}}],["3187",{"2":{"263":1}}],["31860\\taccuracy",{"2":{"205":1}}],["31861955",{"2":{"163":1}}],["311769",{"2":{"261":1}}],["312111023020203e",{"2":{"296":1}}],["3120784f",{"2":{"294":1}}],["312",{"2":{"263":1,"280":2}}],["31240",{"2":{"261":1}}],["3128",{"2":{"241":1}}],["310829f",{"2":{"294":1}}],["3108",{"2":{"291":1}}],["3108878",{"2":{"165":1}}],["3103005",{"2":{"273":1}}],["310",{"2":{"263":1}}],["3101",{"2":{"263":1}}],["31001",{"2":{"254":1}}],["317067im",{"2":{"186":1}}],["31",{"2":{"147":2,"200":1,"214":2,"246":2,"261":2,"263":1,"265":1,"280":516,"287":1,"291":1}}],["3154642147125585e",{"2":{"296":1}}],["3154642f",{"2":{"294":1}}],["315431",{"2":{"261":1}}],["31578436",{"2":{"270":1}}],["315267",{"2":{"147":1}}],["315031",{"2":{"147":1}}],["316552",{"2":{"147":1}}],["31697956",{"2":{"147":1}}],["31615f",{"2":{"123":1}}],["314770564915953e",{"2":{"296":1}}],["31473723",{"2":{"96":1}}],["314455f",{"2":{"294":1}}],["31438",{"2":{"205":1}}],["3143208",{"2":{"147":1}}],["314",{"2":{"163":2}}],["3149016",{"2":{"147":1}}],["353196242878853e",{"2":{"296":1}}],["35307f",{"2":{"294":1}}],["353",{"2":{"280":3}}],["3537",{"2":{"269":1}}],["353753",{"2":{"147":1}}],["3577",{"2":{"263":1}}],["357",{"2":{"263":1,"280":3}}],["35745",{"2":{"230":1}}],["3503622473563574e",{"2":{"296":1}}],["3501534f",{"2":{"294":1}}],["3504",{"2":{"291":1}}],["3505",{"2":{"291":1}}],["3505859",{"2":{"261":1}}],["35001",{"2":{"254":1}}],["35005f",{"2":{"123":1}}],["354",{"2":{"263":1}}],["354699",{"2":{"261":1}}],["35452",{"2":{"147":1}}],["35278271712624e",{"2":{"296":1}}],["35299227",{"2":{"270":1}}],["352",{"2":{"263":1
}}],["352411",{"2":{"254":1}}],["352405",{"2":{"254":1}}],["352402",{"2":{"254":1}}],["352400",{"2":{"254":1}}],["352397",{"2":{"254":1}}],["352395",{"2":{"254":1}}],["352392",{"2":{"254":1}}],["352389",{"2":{"254":1}}],["352387",{"2":{"254":1}}],["352384",{"2":{"254":1}}],["352365",{"2":{"254":1}}],["3523193",{"2":{"197":1}}],["35",{"2":{"235":1,"239":2,"241":1,"246":4,"247":2,"261":2,"265":1,"280":107,"287":1}}],["35998288738646e",{"2":{"296":1}}],["35970",{"2":{"261":1}}],["359",{"2":{"230":1,"263":1}}],["3595447f",{"2":{"123":1}}],["351",{"2":{"220":1,"263":1,"280":9}}],["35149138733595564",{"2":{"192":4}}],["351491",{"2":{"189":1}}],["35181466",{"2":{"143":1}}],["355732f",{"2":{"294":1}}],["3556",{"2":{"230":1}}],["3558193508137715",{"2":{"171":2}}],["355299",{"2":{"147":1}}],["3564175119523264e",{"2":{"296":1}}],["3565193f",{"2":{"294":1}}],["35652733",{"2":{"147":1}}],["356",{"2":{"263":2}}],["35638645",{"2":{"165":1}}],["358265431065218e",{"2":{"296":1}}],["3587606f",{"2":{"294":1}}],["3580926f",{"2":{"294":1}}],["3584",{"2":{"291":1}}],["3586",{"2":{"291":1}}],["358",{"2":{"263":1,"280":3}}],["35817",{"2":{"189":1}}],["3585896f",{"2":{"123":1}}],["35837248",{"2":{"96":1}}],["364012f",{"2":{"294":1}}],["36485",{"2":{"280":2}}],["3642",{"2":{"265":1}}],["36428708",{"2":{"163":1}}],["3610642",{"2":{"273":1}}],["3613516",{"2":{"270":1}}],["361142",{"2":{"147":1}}],["363",{"2":{"263":1}}],["36348",{"2":{"147":1}}],["3667717",{"2":{"270":1}}],["3663",{"2":{"265":1}}],["366",{"2":{"263":3}}],["36220473",{"2":{"270":2}}],["36222499022985155",{"2":{"171":2}}],["362",{"2":{"263":1}}],["368734f",{"2":{"294":1}}],["3689325",{"2":{"270":1}}],["368",{"2":{"263":1,"280":1}}],["3688",{"2":{"254":3}}],["3680",{"2":{"230":1}}],["36829436",{"2":{"197":1}}],["3690677536234e",{"2":{"296":1}}],["3690662f",{"2":{"294":1}}],["369",{"2":{"263":1}}],["3691778381831775",{"2":{"192":1}}],["369178",{"2":{"189":2}}],["36918",{"2":{"189":1}}],["3695437",{"2":{"147":1}}],["3607",{"2":{"29
1":1}}],["360745",{"2":{"186":1}}],["360",{"2":{"263":2}}],["36001",{"2":{"254":1}}],["3602988f",{"2":{"294":1}}],["36023283",{"2":{"165":1}}],["36027783",{"2":{"123":1}}],["365784514893617e",{"2":{"296":1}}],["365734",{"2":{"147":1}}],["3654",{"2":{"291":1}}],["3654292",{"2":{"165":1}}],["365187\\ttrain",{"2":{"280":1}}],["365896",{"2":{"261":1}}],["3650193059617517",{"2":{"171":2}}],["367254243263146e",{"2":{"296":1}}],["367246",{"2":{"147":1}}],["3675924f",{"2":{"294":1}}],["3674731046403816e",{"2":{"296":1}}],["3674271f",{"2":{"294":1}}],["36745527",{"2":{"270":1}}],["3671643f",{"2":{"294":1}}],["3676376264610214e",{"2":{"296":1}}],["3676",{"2":{"263":1}}],["367604",{"2":{"147":1}}],["367",{"2":{"200":1}}],["367747f",{"2":{"123":1}}],["36",{"2":{"97":1,"214":2,"246":3,"261":2,"269":1,"280":10,"287":1}}],["3421308674941e",{"2":{"296":1}}],["342174f",{"2":{"294":1}}],["3427787f",{"2":{"294":1}}],["3428478f",{"2":{"294":1}}],["3426",{"2":{"291":1}}],["34254",{"2":{"189":1}}],["34253937",{"2":{"189":1}}],["342539",{"2":{"189":2}}],["342532",{"2":{"147":1}}],["3486587764421316e",{"2":{"296":1}}],["3486914f",{"2":{"294":1}}],["3482",{"2":{"291":1}}],["348267",{"2":{"147":1}}],["348735",{"2":{"287":1}}],["349",{"2":{"280":2}}],["34999675",{"2":{"273":1}}],["3498",{"2":{"265":1,"291":1}}],["349666",{"2":{"254":1}}],["349661",{"2":{"254":1}}],["349659",{"2":{"254":1}}],["349656",{"2":{"254":1}}],["349653",{"2":{"254":1}}],["349651",{"2":{"254":1}}],["349648",{"2":{"254":1}}],["349646",{"2":{"254":1}}],["349643",{"2":{"254":1}}],["349640",{"2":{"254":1}}],["349626",{"2":{"254":1}}],["3497",{"2":{"230":1}}],["34652048",{"2":{"273":1}}],["346",{"2":{"263":1,"280":11}}],["346934",{"2":{"254":1}}],["346929",{"2":{"254":1}}],["346926",{"2":{"254":1}}],["346924",{"2":{"254":1}}],["346921",{"2":{"254":1}}],["346919",{"2":{"254":1}}],["346916",{"2":{"254":1}}],["346913",{"2":{"254":1}}],["346910",{"2":{"254":1}}],["346907",{"2":{"254":1}}],["346894",{"2":{"254":1}}],["3463",{"2":
{"254":2}}],["3444",{"2":{"263":1}}],["344",{"2":{"263":1}}],["344173",{"2":{"254":1}}],["344167",{"2":{"254":1}}],["344163",{"2":{"254":1}}],["344159",{"2":{"254":1}}],["344156",{"2":{"254":1}}],["344152",{"2":{"254":1}}],["344147",{"2":{"254":1}}],["344143",{"2":{"254":1}}],["344140",{"2":{"254":1}}],["344135",{"2":{"254":1}}],["344116",{"2":{"254":1}}],["34457564",{"2":{"147":1}}],["341384",{"2":{"254":1}}],["341378",{"2":{"254":1}}],["341374",{"2":{"254":1}}],["341370",{"2":{"254":1}}],["341367",{"2":{"254":1}}],["341363",{"2":{"254":1}}],["341359",{"2":{"254":1}}],["341356",{"2":{"254":1}}],["341352",{"2":{"254":1}}],["341348",{"2":{"254":1}}],["341328",{"2":{"254":1}}],["3418s\\ttraining",{"2":{"235":1}}],["3410565",{"2":{"154":1}}],["3473",{"2":{"263":1}}],["347",{"2":{"263":1}}],["3478",{"2":{"254":3,"291":1}}],["34760",{"2":{"230":1}}],["34717593",{"2":{"197":1}}],["34001",{"2":{"254":1}}],["340",{"2":{"230":1,"263":1}}],["340804e",{"2":{"220":1}}],["3457184027037318e",{"2":{"296":1}}],["3457505f",{"2":{"294":1}}],["345835663754017e",{"2":{"296":1}}],["3455923f",{"2":{"294":1}}],["345",{"2":{"263":1,"280":10}}],["3452",{"2":{"230":1}}],["345026",{"2":{"166":1}}],["343077479965612e",{"2":{"296":1}}],["3436992223920287e",{"2":{"296":1}}],["3436353225087703",{"2":{"171":2}}],["343",{"2":{"263":3}}],["34351700231383653",{"2":{"192":1}}],["34336",{"2":{"147":1}}],["34383082",{"2":{"143":1}}],["34",{"2":{"80":2,"214":2,"246":3,"261":2,"269":1,"280":15,"287":1}}],["386199004623343e",{"2":{"296":1}}],["38697105170153e",{"2":{"296":1}}],["386408f",{"2":{"294":1}}],["386",{"2":{"263":2,"269":1,"280":3}}],["387073f",{"2":{"294":1}}],["3879445f",{"2":{"294":1}}],["3875027",{"2":{"270":1}}],["387",{"2":{"263":1}}],["3844414",{"2":{"270":1}}],["384",{"2":{"263":1,"291":1}}],["380313728730091e",{"2":{"296":1}}],["380968113508274e",{"2":{"296":1}}],["380446834013477e",{"2":{"296":1}}],["3802286f",{"2":{"294":1}}],["380203f",{"2":{"294":1}}],["380673f",{"2":{"294":1}}],["38
00323",{"2":{"273":1}}],["38001",{"2":{"254":1}}],["380",{"2":{"200":1}}],["380777f0",{"2":{"164":1}}],["3880926939336946e",{"2":{"296":1}}],["3881222989384034e",{"2":{"296":1}}],["3882741828639766e",{"2":{"296":1}}],["3882188f",{"2":{"294":1}}],["3883068f",{"2":{"294":1}}],["388315",{"2":{"186":1}}],["3886",{"2":{"265":1}}],["38889",{"2":{"81":1}}],["3899584f",{"2":{"294":1}}],["389",{"2":{"263":2}}],["389748",{"2":{"147":1}}],["389206",{"2":{"147":1}}],["382322601250955e",{"2":{"296":1}}],["38241944",{"2":{"166":1}}],["382491",{"2":{"147":1}}],["382574",{"2":{"147":1}}],["383929",{"2":{"261":1}}],["383948",{"2":{"147":1}}],["3834447782571341",{"2":{"171":1}}],["3834447782571344",{"2":{"171":1}}],["383777",{"2":{"147":1}}],["385454",{"2":{"147":1}}],["3850993",{"2":{"143":1}}],["3817818f",{"2":{"294":1}}],["381774",{"2":{"186":1}}],["3818",{"2":{"265":1}}],["38187847",{"2":{"143":1}}],["3812",{"2":{"263":1}}],["381",{"2":{"263":2}}],["381141",{"2":{"147":1}}],["38",{"2":{"80":2,"214":2,"237":1,"241":1,"246":22,"261":2,"265":3,"280":5,"287":1}}],["3⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀3⠀",{"2":{"67":3}}],["3⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀2⠀",{"2":{"67":1}}],["322812331050906e",{"2":{"296":1}}],["3225227710919e",{"2":{"296":1}}],["322569f",{"2":{"294":1}}],["322965f",{"2":{"294":1}}],["3227286",{"2":{"270":1}}],["3224781f",{"2":{"123":1}}],["32m\\u001b",{"2":{"269":1}}],["32562151379396e",{"2":{"296":1}}],["325645820200876e",{"2":{"296":1}}],["3258304f",{"2":{"294":1}}],["3257455f",{"2":{"294":1}}],["3255684f",{"2":{"294":1}}],["3250673994701609345",{"2":{"280":1}}],["325",{"2":{"269":1}}],["3259",{"2":{"263":1}}],["323967786971796e",{"2":{"296":1}}],["323948425329717e",{"2":{"296":1}}],["3236",{"2":{"263":1,"291":1}}],["3231",{"2":{"263":1}}],["323",{"2":{"263":1}}],["32322538",{"2":{"143":1}}],["321811005605096e",{"2":{"296":1}}],["3218864f",{"2":{"294":1}}],["3218283f",{"2":{"294":1}}],["321",{"2":{"238":6}}],["321506",{"2":{"186":1}}],["320205993706372e",{"
2":{"296":1}}],["3206",{"2":{"269":1}}],["32001",{"2":{"254":1}}],["320",{"2":{"238":6}}],["327",{"2":{"263":2}}],["3277",{"2":{"230":1}}],["3276",{"2":{"230":1,"265":1}}],["3279041356366077",{"2":{"171":2}}],["3242749f",{"2":{"294":1}}],["324",{"2":{"200":1,"230":1,"261":1,"263":1,"280":1}}],["324167",{"2":{"147":1}}],["329143",{"2":{"147":1}}],["32983455",{"2":{"96":1}}],["326",{"2":{"220":1}}],["32698",{"2":{"147":1}}],["32651654",{"2":{"147":1}}],["32623518",{"2":{"143":1}}],["32649475",{"2":{"123":1}}],["3266256",{"2":{"123":1}}],["3282",{"2":{"265":1}}],["328235",{"2":{"123":1}}],["328",{"2":{"263":1,"280":2}}],["328058",{"2":{"261":1}}],["32833",{"2":{"241":1}}],["32839986131789933",{"2":{"171":2}}],["32838f",{"2":{"96":1}}],["32",{"2":{"35":2,"56":2,"83":10,"97":13,"123":13,"124":2,"126":1,"127":1,"147":4,"198":1,"211":2,"214":2,"218":1,"220":4,"222":1,"242":1,"244":2,"246":2,"254":1,"261":2,"263":1,"265":1,"267":1,"271":1,"280":8,"287":1,"291":1,"294":8,"298":1}}],["3=dense",{"2":{"23":1}}],["3dv",{"2":{"50":1}}],["3d",{"2":{"19":1,"50":1,"65":1,"80":4,"89":5,"164":1,"205":1,"246":2,"274":1,"280":1}}],["308304f",{"2":{"294":1}}],["308281",{"2":{"287":1}}],["308",{"2":{"263":1}}],["30316323",{"2":{"270":1}}],["303",{"2":{"263":2,"291":1}}],["30389",{"2":{"230":1}}],["3078414549973e",{"2":{"296":1}}],["3078429656656455e",{"2":{"296":1}}],["3078587f",{"2":{"294":1}}],["3073",{"2":{"291":1}}],["30797",{"2":{"280":1}}],["3070866",{"2":{"270":2}}],["307005",{"2":{"261":1}}],["3071294",{"2":{"166":1}}],["30267",{"2":{"291":1}}],["3025",{"2":{"265":1}}],["302",{"2":{"230":1,"280":3}}],["30215",{"2":{"189":1}}],["3092",{"2":{"230":1}}],["309233",{"2":{"147":1}}],["3097294",{"2":{"197":1}}],["306712f",{"2":{"294":1}}],["30674052",{"2":{"197":1}}],["306",{"2":{"280":3}}],["306695522505033e",{"2":{"296":1}}],["3066",{"2":{"265":1}}],["306641",{"2":{"186":1}}],["304737256864081e",{"2":{"296":1}}],["304946f",{"2":{"294":1}}],["304",{"2":{"280":6}}],["3048",{"2":{"265":1}
}],["30481228",{"2":{"147":1}}],["304635",{"2":{"147":1}}],["305414876830443e",{"2":{"296":1}}],["305",{"2":{"263":1,"280":1}}],["305844",{"2":{"165":1,"166":1}}],["30573",{"2":{"147":1}}],["30556",{"2":{"81":1}}],["30172",{"2":{"147":1}}],["301429",{"2":{"147":1}}],["3013572",{"2":{"147":1}}],["30112675",{"2":{"123":1}}],["301",{"2":{"96":1,"220":1}}],["30017487637572e",{"2":{"296":1}}],["3001",{"2":{"197":1,"254":1}}],["30009f",{"2":{"294":1}}],["30001",{"2":{"254":1}}],["30000",{"2":{"84":1}}],["3000",{"2":{"84":1,"287":1}}],["300",{"2":{"84":1,"124":1,"263":1}}],["30",{"2":{"15":1,"63":1,"76":2,"147":2,"200":2,"246":3,"261":2,"265":2,"269":2,"280":10}}],["3×6×1",{"2":{"81":1}}],["3×3×3",{"2":{"79":1}}],["3×3×1×1",{"2":{"15":1,"82":3}}],["3×3",{"2":{"79":4}}],["3×2×1×1",{"2":{"85":2}}],["3×2",{"2":{"56":2}}],["3×5",{"2":{"50":4,"56":1,"84":1}}],["3×7",{"2":{"5":2}}],["3×13",{"2":{"5":4}}],["3341",{"2":{"291":1}}],["334",{"2":{"263":1}}],["3344336",{"2":{"166":1}}],["3302614846771015e",{"2":{"296":1}}],["3302788f",{"2":{"294":1}}],["33070865",{"2":{"270":2}}],["330",{"2":{"269":1}}],["330154",{"2":{"261":1}}],["3300781",{"2":{"261":1}}],["33001",{"2":{"254":1}}],["33091125",{"2":{"123":1}}],["337214468729621e",{"2":{"296":1}}],["337",{"2":{"263":1}}],["33710",{"2":{"241":1}}],["3379819837970135e",{"2":{"296":1}}],["3379242",{"2":{"147":1}}],["337936",{"2":{"123":1}}],["3356",{"2":{"263":1}}],["335762",{"2":{"254":1}}],["335733",{"2":{"254":1}}],["335730",{"2":{"254":1}}],["335726",{"2":{"254":1}}],["335722",{"2":{"254":1}}],["335719",{"2":{"254":1}}],["335715",{"2":{"254":1}}],["335711",{"2":{"254":1}}],["335707",{"2":{"254":1}}],["335702",{"2":{"254":1}}],["335396",{"2":{"254":1}}],["335",{"2":{"230":1}}],["3359975f",{"2":{"123":1}}],["33281517",{"2":{"273":1}}],["332131",{"2":{"261":1}}],["332",{"2":{"200":1}}],["339459540718326e",{"2":{"296":1}}],["3392158f",{"2":{"294":1}}],["3396",{"2":{"230":1}}],["339081",{"2":{"166":1}}],["339815",{"2":{"147":1}}],["3338s\
\ttraining",{"2":{"235":1}}],["333503",{"2":{"147":1}}],["3333",{"2":{"263":1}}],["33335",{"2":{"147":1}}],["33333",{"2":{"81":2,"88":10,"235":15,"237":2}}],["3333333333333335",{"2":{"265":1}}],["3333333333333333",{"2":{"87":1}}],["333333",{"2":{"67":2,"77":3}}],["3316176",{"2":{"270":1}}],["3316",{"2":{"263":1}}],["33160\\taccuracy",{"2":{"205":1}}],["3316886f0",{"2":{"67":1}}],["331",{"2":{"200":1,"263":1}}],["331483",{"2":{"147":1}}],["3364947399156407e",{"2":{"296":1}}],["33655504999339e",{"2":{"296":1}}],["3365914f",{"2":{"294":1}}],["33691181985879e",{"2":{"296":1}}],["3369134f",{"2":{"294":1}}],["336197f",{"2":{"294":1}}],["3366141f",{"2":{"294":1}}],["336659",{"2":{"147":1}}],["3362",{"2":{"287":1}}],["336883e",{"2":{"220":1}}],["336721",{"2":{"147":1}}],["3367505",{"2":{"143":1}}],["3382257f",{"2":{"294":1}}],["33821836",{"2":{"123":1}}],["3383",{"2":{"291":1}}],["3387",{"2":{"291":1}}],["338",{"2":{"280":3}}],["3385",{"2":{"254":1}}],["338590",{"2":{"254":1}}],["3385826",{"2":{"270":2}}],["338584",{"2":{"254":1}}],["338580",{"2":{"254":1}}],["338576",{"2":{"254":1}}],["338573",{"2":{"254":1}}],["338569",{"2":{"254":1}}],["338565",{"2":{"254":1}}],["338562",{"2":{"254":1}}],["338558",{"2":{"254":1}}],["338554",{"2":{"254":1}}],["338530",{"2":{"254":1}}],["338672",{"2":{"147":1}}],["33802274",{"2":{"123":1}}],["33",{"2":{"5":1,"96":2,"97":1,"214":1,"241":1,"246":2,"261":2,"269":1,"287":1,"291":1}}],["391370067887998e",{"2":{"296":1}}],["391144f",{"2":{"294":1}}],["391086",{"2":{"261":1}}],["39m",{"2":{"269":1}}],["396034969635737e",{"2":{"296":1}}],["3963",{"2":{"291":1}}],["3961",{"2":{"269":1}}],["396",{"2":{"263":2}}],["396293f",{"2":{"123":1}}],["399085045171508e",{"2":{"296":1}}],["3990735",{"2":{"287":16}}],["3992459f",{"2":{"294":1}}],["399",{"2":{"269":1,"280":2}}],["3994141",{"2":{"261":1}}],["39950222",{"2":{"147":1}}],["39370078",{"2":{"270":2}}],["393313",{"2":{"261":1}}],["3939897",{"2":{"166":1}}],["392691",{"2":{"280":1}}],["3928",{"2":{"230":
1}}],["3928175",{"2":{"163":1}}],["39211",{"2":{"205":1}}],["3945",{"2":{"291":1}}],["3944451",{"2":{"280":1026}}],["39495495",{"2":{"273":1}}],["394958",{"2":{"147":1}}],["394",{"2":{"263":2}}],["3942307655311696",{"2":{"171":2}}],["3940224213371761",{"2":{"171":2}}],["398950082268685e",{"2":{"296":1}}],["3989509488540305e",{"2":{"296":1}}],["398002957362856e",{"2":{"296":1}}],["3980046",{"2":{"166":1}}],["398595381811298e",{"2":{"296":1}}],["3988137f",{"2":{"294":1}}],["398",{"2":{"280":3}}],["3983694f",{"2":{"294":1}}],["3983",{"2":{"230":1}}],["398299",{"2":{"147":1}}],["3959157f",{"2":{"294":1}}],["39599",{"2":{"147":1}}],["395",{"2":{"263":2}}],["3951999",{"2":{"254":70}}],["395306",{"2":{"163":1}}],["39783f",{"2":{"294":1}}],["3979",{"2":{"287":1}}],["3970",{"2":{"263":1,"265":1}}],["397",{"2":{"263":1}}],["3977316",{"2":{"166":1}}],["397588",{"2":{"147":1}}],["397521",{"2":{"147":1}}],["3971",{"2":{"147":1}}],["39763013",{"2":{"147":1}}],["390322278706127e",{"2":{"296":1}}],["3900014080365e",{"2":{"296":1}}],["3900235f",{"2":{"294":1}}],["39001",{"2":{"254":1}}],["390",{"2":{"263":2}}],["390459",{"2":{"287":1}}],["3904",{"2":{"200":1}}],["39049",{"2":{"147":1}}],["390200114697191",{"2":{"171":2}}],["39068",{"2":{"147":1}}],["39",{"2":{"2":1,"4":2,"8":1,"10":1,"16":1,"23":1,"25":1,"35":1,"39":1,"43":1,"45":1,"46":3,"47":1,"49":1,"50":1,"51":1,"53":1,"54":3,"56":7,"59":2,"62":1,"64":1,"67":1,"68":1,"75":1,"87":2,"88":1,"89":7,"93":2,"96":2,"97":1,"107":2,"110":1,"114":1,"120":1,"121":3,"122":1,"123":2,"124":1,"126":1,"127":1,"128":1,"130":1,"131":1,"138":2,"140":1,"141":1,"143":1,"153":5,"154":1,"155":3,"156":1,"158":2,"160":1,"161":1,"162":4,"163":4,"164":1,"165":2,"166":1,"170":3,"171":2,"173":1,"188":5,"189":8,"191":2,"194":1,"195":1,"197":1,"202":3,"204":1,"206":2,"216":2,"219":1,"230":1,"232":1,"235":1,"241":1,"243":1,"246":2,"261":2,"264":1,"265":4,"270":1,"271":1,"274":1,"280":16,"283":1,"287":1,"292":1,"293":1,"294":2}}],["3",{"2":{"2":1,"5":3,"15":5,"
23":7,"35":2,"37":1,"39":17,"40":2,"41":2,"43":4,"45":4,"46":3,"49":1,"50":30,"52":5,"56":27,"67":14,"69":1,"77":6,"78":3,"79":60,"80":24,"81":86,"82":8,"83":21,"84":15,"85":10,"87":4,"88":11,"89":13,"96":4,"97":4,"119":7,"123":24,"126":13,"127":7,"132":3,"133":4,"135":1,"143":4,"146":6,"147":23,"149":1,"150":1,"153":1,"155":1,"163":6,"165":10,"166":8,"171":2,"189":23,"190":1,"191":3,"192":7,"200":7,"202":1,"203":1,"205":2,"210":1,"211":9,"213":1,"214":3,"218":1,"230":12,"231":1,"233":2,"235":5,"237":1,"238":62,"239":6,"241":9,"246":7,"247":6,"251":3,"252":2,"253":1,"254":2,"258":17,"260":2,"261":3,"263":58,"265":18,"266":1,"269":5,"270":9,"273":1,"280":39,"287":1,"288":2,"291":39,"292":12,"293":1,"294":102,"296":106}}],["4\\ttrain",{"2":{"280":1}}],["4f",{"2":{"261":1,"280":4}}],["4fs",{"2":{"235":1,"261":1}}],["4t",{"2":{"253":1}}],["4th",{"2":{"144":1}}],["4+0",{"2":{"239":1,"247":1}}],["4x",{"2":{"214":1}}],["4x256xf32>",{"2":{"147":2}}],["4x2xi64>",{"2":{"147":2}}],["4x4x16x4xf32>",{"2":{"147":2}}],["4x16x4x4xf32>",{"2":{"147":2}}],["4x10xf32>",{"2":{"147":3}}],["4x1x28x28xf32>",{"2":{"147":2}}],["4849142209393e",{"2":{"296":1}}],["48990166",{"2":{"270":1}}],["489",{"2":{"263":1}}],["48983f",{"2":{"96":1}}],["48",{"2":{"207":3,"214":1,"215":3,"230":1,"239":3,"241":1,"246":2,"247":3,"256":3,"261":2,"262":3,"275":3,"280":5,"281":3,"289":3,"291":1}}],["482771282271026e",{"2":{"296":1}}],["482774f",{"2":{"294":1}}],["482481f",{"2":{"294":1}}],["4822886f",{"2":{"294":1}}],["482",{"2":{"280":6}}],["4825145084669266e",{"2":{"296":1}}],["48259094",{"2":{"270":1}}],["48257408",{"2":{"147":1}}],["482196",{"2":{"261":1}}],["48263642",{"2":{"197":1}}],["483017006370395e",{"2":{"296":1}}],["4834358480483478e",{"2":{"296":1}}],["483525\\ttrain",{"2":{"280":1}}],["48351598",{"2":{"197":1}}],["483",{"2":{"263":2,"280":2}}],["4831388f",{"2":{"123":1}}],["4862",{"2":{"265":1}}],["486214",{"2":{"186":1}}],["486",{"2":{"241":1,"263":1}}],["4864",{"2":{"230":1}}],["486550215883336"
,{"2":{"171":2}}],["485527831376266e",{"2":{"296":1}}],["485212f",{"2":{"294":1}}],["48525",{"2":{"147":1}}],["4856",{"2":{"265":1}}],["485",{"2":{"263":1,"280":1}}],["4851",{"2":{"254":1}}],["4851346",{"2":{"155":1}}],["4870",{"2":{"265":1}}],["4876",{"2":{"263":1,"265":1}}],["487558",{"2":{"147":1}}],["487275",{"2":{"147":1}}],["488476f",{"2":{"294":1}}],["488028235902512",{"2":{"171":2}}],["48801818",{"2":{"123":1}}],["488623",{"2":{"166":1}}],["48818898",{"2":{"270":2}}],["488105",{"2":{"165":1,"166":1}}],["48815",{"2":{"147":1}}],["488387",{"2":{"147":1}}],["480523127744434e",{"2":{"296":1}}],["4804332f",{"2":{"294":1}}],["480875\\tthroughput",{"2":{"287":1}}],["48001",{"2":{"254":1}}],["48005",{"2":{"147":1}}],["480",{"2":{"214":1,"238":3,"263":1}}],["480797",{"2":{"147":1}}],["48148",{"2":{"235":6,"237":1}}],["48193252",{"2":{"163":1}}],["4810884",{"2":{"163":1}}],["48107117",{"2":{"147":1}}],["48137343",{"2":{"143":1}}],["418831f",{"2":{"294":1}}],["418679f",{"2":{"294":1}}],["4182",{"2":{"263":1}}],["41815332",{"2":{"147":1}}],["411097761874896e",{"2":{"296":1}}],["4110002f",{"2":{"294":1}}],["4116536",{"2":{"273":1}}],["4115",{"2":{"269":1}}],["411",{"2":{"263":1}}],["4112755f",{"2":{"294":1}}],["41127",{"2":{"123":1}}],["413187652407419e",{"2":{"296":1}}],["413478f",{"2":{"294":1}}],["4139942f",{"2":{"294":1}}],["413",{"2":{"280":1}}],["4136",{"2":{"263":1}}],["4137662",{"2":{"123":1}}],["4173640883628515e",{"2":{"296":1}}],["4172",{"2":{"265":1}}],["417",{"2":{"263":1,"280":3,"291":1}}],["41799",{"2":{"147":1}}],["41",{"2":{"246":2,"261":2,"280":18,"287":1,"291":1}}],["410635709273246e",{"2":{"296":1}}],["4106842662012454e",{"2":{"296":1}}],["410427f",{"2":{"294":1}}],["41001",{"2":{"254":1}}],["410",{"2":{"200":1,"263":1}}],["4101036",{"2":{"165":1}}],["41017",{"2":{"147":1}}],["4192197856860511e",{"2":{"296":1}}],["419638\\ttrain",{"2":{"280":1}}],["419611",{"2":{"147":1}}],["419",{"2":{"263":2,"280":6}}],["41919118",{"2":{"197":1}}],["4142137172354915
e",{"2":{"296":1}}],["4142030708658745e",{"2":{"296":1}}],["4142293f",{"2":{"294":1}}],["41430866827519e",{"2":{"296":1}}],["4146246f",{"2":{"294":1}}],["414",{"2":{"269":2}}],["414769",{"2":{"189":1}}],["4144737",{"2":{"166":1}}],["415817760730499e",{"2":{"296":1}}],["415895002200944e",{"2":{"296":1}}],["4158005f",{"2":{"294":1}}],["4158693023143286",{"2":{"171":2}}],["4159314f",{"2":{"294":1}}],["4150",{"2":{"269":1}}],["415",{"2":{"269":1}}],["415368",{"2":{"147":1}}],["412",{"2":{"263":2,"280":4}}],["41238895",{"2":{"165":1}}],["41261",{"2":{"147":1}}],["41246277",{"2":{"143":1}}],["41229f",{"2":{"96":1}}],["41628727",{"2":{"273":1}}],["41623f",{"2":{"96":1}}],["416629e",{"2":{"220":1}}],["41667",{"2":{"81":1}}],["416",{"2":{"211":2,"230":1,"263":3}}],["41645882",{"2":{"147":1}}],["4164f",{"2":{"96":1}}],["41613227",{"2":{"123":1}}],["474634",{"2":{"287":1}}],["479",{"2":{"280":12}}],["4786",{"2":{"265":1}}],["4785156",{"2":{"261":2}}],["478",{"2":{"230":1}}],["47872233",{"2":{"163":1}}],["476",{"2":{"220":1,"291":1}}],["476158e",{"2":{"220":1}}],["4763277207638013",{"2":{"171":1}}],["4763277207638008",{"2":{"171":1}}],["471053125195037e",{"2":{"296":1}}],["4713",{"2":{"265":1}}],["471",{"2":{"263":1,"280":3}}],["471206+1",{"2":{"186":1}}],["471634",{"2":{"147":1}}],["47174177",{"2":{"123":1}}],["4734",{"2":{"291":1}}],["473499e",{"2":{"220":1}}],["4736",{"2":{"265":1}}],["4735",{"2":{"263":1}}],["473",{"2":{"263":1,"280":1}}],["4730743722547668",{"2":{"171":2}}],["473179",{"2":{"147":1}}],["47329637",{"2":{"123":1}}],["4774398263794144e",{"2":{"296":1}}],["4776487f",{"2":{"294":1}}],["477",{"2":{"291":1}}],["477329",{"2":{"147":1}}],["47787",{"2":{"147":1}}],["47792822",{"2":{"123":1}}],["470961946988632e",{"2":{"296":1}}],["4709f",{"2":{"294":1}}],["4709256f",{"2":{"294":1}}],["47092",{"2":{"147":1}}],["470",{"2":{"263":1}}],["470635",{"2":{"261":1}}],["47001",{"2":{"254":1}}],["47006413",{"2":{"123":1}}],["4705",{"2":{"254":1}}],["470282",{"2":{"147":1}}],["4
754",{"2":{"147":1}}],["47512773",{"2":{"147":1}}],["4755f",{"2":{"96":1}}],["47",{"2":{"97":1,"211":2,"214":1,"230":1,"246":3,"261":2,"263":1,"280":11,"291":1}}],["4724",{"2":{"291":1}}],["472684\\ttrain",{"2":{"280":1}}],["4727",{"2":{"265":1}}],["47222",{"2":{"81":1}}],["472",{"2":{"43":1,"269":1,"280":3,"291":1}}],["4668771315294324e",{"2":{"296":1}}],["466835493892637e",{"2":{"296":1}}],["4668616f",{"2":{"294":1}}],["466627f",{"2":{"294":1}}],["4678693f",{"2":{"294":1}}],["467",{"2":{"280":1}}],["4677734",{"2":{"261":1}}],["465",{"2":{"280":3}}],["4651",{"2":{"265":1}}],["46514034",{"2":{"197":1}}],["464019030814682e",{"2":{"296":1}}],["4640174f",{"2":{"294":1}}],["464",{"2":{"280":2}}],["46497717",{"2":{"270":1}}],["464567",{"2":{"270":2}}],["4631066",{"2":{"270":1}}],["463947",{"2":{"261":1}}],["4622",{"2":{"291":1}}],["462024\\ttrain",{"2":{"280":1}}],["462",{"2":{"269":1}}],["4626",{"2":{"265":1}}],["46211714f0",{"2":{"67":1}}],["46211717f0",{"2":{"67":1}}],["460703712539674e",{"2":{"296":1}}],["4607037f",{"2":{"294":1}}],["46079",{"2":{"280":3}}],["460",{"2":{"280":1}}],["46001",{"2":{"254":1}}],["46094",{"2":{"205":2}}],["46096706",{"2":{"147":1}}],["4604826",{"2":{"166":1}}],["461434f",{"2":{"294":1}}],["461134953903325e",{"2":{"296":1}}],["4611",{"2":{"265":1}}],["461",{"2":{"263":1}}],["4612",{"2":{"230":1}}],["46127",{"2":{"147":1}}],["461908",{"2":{"189":1}}],["4619778701480337",{"2":{"171":2}}],["461399",{"2":{"165":1}}],["469730\\tthroughput",{"2":{"287":1}}],["46976f",{"2":{"96":1}}],["469",{"2":{"263":1,"280":5}}],["469687",{"2":{"147":1}}],["468931437053212e",{"2":{"296":1}}],["4689855f",{"2":{"294":1}}],["46878588",{"2":{"166":1}}],["468792",{"2":{"147":1}}],["468023",{"2":{"147":1}}],["46",{"2":{"89":1,"205":1,"214":3,"246":5,"261":2,"280":8,"291":1}}],["43697",{"2":{"280":2}}],["4369135",{"2":{"273":1}}],["436",{"2":{"263":1}}],["43676856",{"2":{"165":1}}],["430",{"2":{"269":1}}],["43001",{"2":{"254":1}}],["4305115e",{"2":{"170":1}}],["430578
1",{"2":{"143":1}}],["43af",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["4380538735154734e",{"2":{"296":1}}],["4380043027140755e",{"2":{"296":1}}],["438139f",{"2":{"294":1}}],["43817532",{"2":{"270":1}}],["4382992f",{"2":{"294":1}}],["438248",{"2":{"287":1}}],["43856599391177e",{"2":{"296":1}}],["438520306951789e",{"2":{"296":1}}],["4385825f",{"2":{"294":1}}],["4385368f",{"2":{"294":1}}],["438",{"2":{"280":6}}],["4384766",{"2":{"261":1}}],["43846482",{"2":{"197":1}}],["4383",{"2":{"230":1}}],["43874\\taccuracy",{"2":{"205":1}}],["4389",{"2":{"45":1}}],["432689f",{"2":{"294":1}}],["432399",{"2":{"287":1}}],["43255",{"2":{"263":1}}],["432439im",{"2":{"186":1}}],["432701",{"2":{"147":1}}],["43462287007217e",{"2":{"296":1}}],["434922f",{"2":{"294":1}}],["434",{"2":{"200":1,"263":1}}],["434273",{"2":{"186":1}}],["434771",{"2":{"147":1}}],["4371523286227764e",{"2":{"296":1}}],["437",{"2":{"263":1,"269":1}}],["4375000",{"2":{"261":1}}],["4376572711003852",{"2":{"171":2}}],["437085f",{"2":{"123":1}}],["4330709",{"2":{"270":2}}],["433",{"2":{"263":1,"280":1}}],["4339262",{"2":{"197":1}}],["433656",{"2":{"186":1}}],["43350345",{"2":{"165":1}}],["433858",{"2":{"147":1}}],["4393227496349207e",{"2":{"296":1}}],["4395333f",{"2":{"294":1}}],["4398",{"2":{"291":1}}],["439068",{"2":{"270":1}}],["439",{"2":{"269":1}}],["4399",{"2":{"265":1}}],["43998",{"2":{"147":1}}],["4397957",{"2":{"163":1}}],["435445\\tthroughput",{"2":{"287":1}}],["435166\\ttrain",{"2":{"280":1}}],["435143",{"2":{"147":1}}],["4359436",{"2":{"273":1}}],["43599",{"2":{"147":1}}],["435230",{"2":{"261":1}}],["435",{"2":{"200":1,"291":1}}],["431",{"2":{"291":1}}],["4312565",{"2":{"273":1}}],["4319346874291314",{"2":{"189":1}}],["431456",{"2":{"166":1}}],["4318977",{"2":{"143":1}}],["43153",{"2":{"89":1}}],["43",{"2":{"107":1,"246":3,"261":2,"265":1,"280":20}}],["4523875505300814e",{"2":{"296":1}}],["4521105f",{"2":{"294":1}}],["452",{"2":{"263":2,"
280":18,"291":1}}],["4526",{"2":{"230":1}}],["45247704",{"2":{"197":1}}],["45298168",{"2":{"96":1}}],["4597",{"2":{"291":1}}],["4595859",{"2":{"273":1}}],["45953766",{"2":{"147":1}}],["4594",{"2":{"263":1}}],["459271",{"2":{"261":1}}],["459",{"2":{"230":1,"280":7}}],["4593922",{"2":{"194":4}}],["459019",{"2":{"189":1}}],["4550",{"2":{"265":1}}],["4550781",{"2":{"261":1}}],["455",{"2":{"263":3,"269":1,"280":18,"291":1}}],["45598",{"2":{"241":1}}],["4552384158732863",{"2":{"192":4}}],["455238",{"2":{"189":1}}],["45553648",{"2":{"147":1}}],["454",{"2":{"280":17}}],["454679",{"2":{"186":1}}],["454535",{"2":{"147":1}}],["453",{"2":{"280":18}}],["4537",{"2":{"265":1}}],["45325348",{"2":{"153":1}}],["453974",{"2":{"147":1}}],["456",{"2":{"280":18}}],["4566929",{"2":{"270":2}}],["4562",{"2":{"265":1}}],["4568695275612873e",{"2":{"296":1}}],["4568",{"2":{"265":1}}],["45682606",{"2":{"147":1}}],["45678f",{"2":{"96":1}}],["451749",{"2":{"270":1}}],["451",{"2":{"220":1,"269":1,"280":18}}],["4510427",{"2":{"143":1}}],["45199f",{"2":{"96":1}}],["4580",{"2":{"287":1}}],["45867",{"2":{"280":1}}],["458615e",{"2":{"220":1}}],["4589",{"2":{"241":1}}],["45895",{"2":{"147":1}}],["458",{"2":{"230":1,"280":7}}],["45855",{"2":{"205":1}}],["4585489",{"2":{"143":1}}],["458762",{"2":{"147":1}}],["458241f",{"2":{"123":1}}],["45001",{"2":{"254":1}}],["4500963619011972",{"2":{"171":2}}],["450",{"2":{"200":1,"280":17}}],["45016125",{"2":{"197":1}}],["45013",{"2":{"147":1}}],["450581f",{"2":{"123":1}}],["45046f",{"2":{"96":1}}],["457763855494437e",{"2":{"296":1}}],["4571409f",{"2":{"294":1}}],["457",{"2":{"280":17,"291":1}}],["4579",{"2":{"265":1}}],["45745936",{"2":{"123":1}}],["45763f",{"2":{"96":1}}],["45",{"2":{"82":2,"205":1,"241":1,"246":3,"261":2,"280":14,"291":1}}],["4×6×1×1",{"2":{"81":1}}],["4×3",{"2":{"81":1}}],["4×3×1",{"2":{"80":1}}],["4×9×1×1",{"2":{"81":1}}],["4×9",{"2":{"81":1}}],["4×1×1",{"2":{"80":2}}],["4×4×1×1",{"2":{"82":3}}],["4×4",{"2":{"79":1}}],["4065914000128018e",{"2":{"
296":1}}],["4063143f",{"2":{"294":1}}],["4066525",{"2":{"166":1}}],["408371575175171e",{"2":{"296":1}}],["408363600700849e",{"2":{"296":1}}],["4082106f",{"2":{"294":1}}],["408",{"2":{"280":1}}],["4024173061425155e",{"2":{"296":1}}],["402922f",{"2":{"294":1}}],["4026602f",{"2":{"294":1}}],["402",{"2":{"280":2}}],["4023438",{"2":{"261":1}}],["40229",{"2":{"189":1}}],["4094",{"2":{"291":1}}],["4099",{"2":{"291":1}}],["4097",{"2":{"291":1}}],["409743f",{"2":{"123":1}}],["409",{"2":{"254":3}}],["40gb",{"2":{"239":1,"247":1}}],["405210055505506e",{"2":{"296":1}}],["4052427f",{"2":{"294":1}}],["405782431583927e",{"2":{"296":1}}],["405736f",{"2":{"294":1}}],["405032f",{"2":{"294":1}}],["405",{"2":{"230":1}}],["4051151",{"2":{"194":4}}],["4013881886585616e",{"2":{"296":1}}],["40190733423352e",{"2":{"296":1}}],["4012104f",{"2":{"294":1}}],["4017144f",{"2":{"294":1}}],["401719",{"2":{"147":1}}],["4015749",{"2":{"270":2}}],["401",{"2":{"220":1,"263":1,"291":1}}],["403532485188505e",{"2":{"296":1}}],["40354207",{"2":{"197":1}}],["403217f",{"2":{"294":1}}],["403784",{"2":{"147":1}}],["40423456234346e",{"2":{"296":1}}],["40424\\taccuracy",{"2":{"205":1}}],["404",{"2":{"280":3}}],["404992",{"2":{"261":1}}],["40473",{"2":{"189":1}}],["404728",{"2":{"189":2}}],["40417f",{"2":{"96":1}}],["407384459477803e",{"2":{"296":1}}],["4076615f",{"2":{"294":1}}],["4076941f",{"2":{"123":1}}],["407709",{"2":{"261":1}}],["407790",{"2":{"261":1}}],["40741",{"2":{"235":3}}],["40789893",{"2":{"197":1}}],["40721768",{"2":{"135":1}}],["4009",{"2":{"269":1}}],["4001",{"2":{"197":1,"254":1,"265":1}}],["400779e",{"2":{"220":1}}],["4007297",{"2":{"165":1}}],["4007905",{"2":{"163":1}}],["400",{"2":{"124":1,"230":1,"263":2}}],["40001",{"2":{"254":1}}],["4000489932546559643",{"2":{"254":1}}],["4000",{"2":{"84":2,"280":6,"287":1}}],["40",{"2":{"80":9,"235":5,"237":1,"241":1,"246":3,"261":2,"263":1,"287":1,"291":1}}],["4⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀0⠀",{"2":{"67":1}}],["4275992282234933e",{"2":{"296":1}}
],["42752",{"2":{"147":1}}],["427",{"2":{"280":3}}],["429319016311954e",{"2":{"296":1}}],["429472f",{"2":{"294":1}}],["429",{"2":{"280":1}}],["429851f",{"2":{"123":1}}],["4250785f",{"2":{"294":1}}],["42502",{"2":{"280":1}}],["42519686",{"2":{"270":2}}],["425",{"2":{"263":1}}],["4257812",{"2":{"261":1}}],["422930643430968e",{"2":{"296":1}}],["422918",{"2":{"189":1}}],["4221",{"2":{"265":1}}],["422",{"2":{"263":1}}],["424748576265019e",{"2":{"296":1}}],["424764f",{"2":{"294":1}}],["4246759648355458e",{"2":{"296":1}}],["424867943178601e",{"2":{"296":1}}],["4245908f",{"2":{"294":1}}],["424",{"2":{"263":1,"291":1}}],["424038",{"2":{"261":1}}],["4242767",{"2":{"197":1}}],["428196038915285e",{"2":{"296":1}}],["4281805f",{"2":{"294":1}}],["428124",{"2":{"261":1}}],["428688513980065e",{"2":{"296":1}}],["4286",{"2":{"280":6}}],["428",{"2":{"230":1,"263":1}}],["4230",{"2":{"265":1}}],["4231",{"2":{"265":1}}],["423",{"2":{"230":2}}],["4260509979401333e",{"2":{"296":1}}],["4260665f",{"2":{"294":1}}],["4263631813337436e",{"2":{"296":1}}],["426732f",{"2":{"294":1}}],["4267578",{"2":{"261":1}}],["4268",{"2":{"291":1}}],["4269633",{"2":{"270":1}}],["426",{"2":{"220":1,"291":1}}],["4266",{"2":{"147":1}}],["4218189446592156e",{"2":{"296":1}}],["42189494",{"2":{"197":1}}],["4215803",{"2":{"273":1}}],["421",{"2":{"263":1}}],["4216236",{"2":{"143":1}}],["420808690380042e",{"2":{"296":1}}],["4208548f",{"2":{"294":1}}],["4203156629426456e",{"2":{"296":1}}],["420",{"2":{"280":3}}],["4200443f",{"2":{"294":1}}],["42001",{"2":{"254":1}}],["420058350197999",{"2":{"171":1}}],["42005835019799886",{"2":{"171":1}}],["420698",{"2":{"165":1,"166":1}}],["42",{"2":{"67":1,"186":2,"200":1,"235":1,"246":2,"261":2,"263":1,"269":1,"280":12,"291":1}}],["4914649520232076e",{"2":{"296":1}}],["4916579f",{"2":{"294":1}}],["491",{"2":{"291":1}}],["494447400514461e",{"2":{"296":1}}],["494450\\tthroughput",{"2":{"287":1}}],["494157f",{"2":{"294":1}}],["4941406",{"2":{"261":1}}],["49205",{"2":{"280":2}}],["4921",{"
2":{"269":1}}],["492",{"2":{"269":1}}],["498",{"2":{"280":4}}],["49889082",{"2":{"270":1}}],["4981",{"2":{"265":1}}],["4985",{"2":{"265":1}}],["499",{"2":{"263":1}}],["4994",{"2":{"241":1}}],["49923",{"2":{"147":1}}],["495479f",{"2":{"294":1}}],["495026f",{"2":{"294":1}}],["4950674f",{"2":{"123":1}}],["4958",{"2":{"291":1}}],["495",{"2":{"291":1}}],["4956",{"2":{"265":1}}],["4957",{"2":{"265":1}}],["495580507376246e",{"2":{"296":1}}],["4955s",{"2":{"261":1}}],["4955",{"2":{"241":1}}],["49319",{"2":{"291":1}}],["49312",{"2":{"280":3}}],["4938",{"2":{"263":1}}],["49384",{"2":{"205":1}}],["493",{"2":{"230":1,"263":1,"280":6}}],["4960184008690696e",{"2":{"296":1}}],["496063",{"2":{"270":2}}],["4962289f",{"2":{"294":1}}],["496801\\ttrain",{"2":{"280":1}}],["49684",{"2":{"280":1}}],["4965",{"2":{"265":1}}],["496",{"2":{"230":1,"263":1,"280":1,"291":1}}],["4974",{"2":{"241":1}}],["497",{"2":{"200":1,"241":1,"263":1,"291":1}}],["490448236263789e",{"2":{"296":1}}],["4903583f",{"2":{"294":1}}],["490",{"2":{"263":1}}],["49001",{"2":{"254":1}}],["4908",{"2":{"147":1}}],["4905",{"2":{"147":1}}],["490193",{"2":{"147":1}}],["490201f",{"2":{"123":1}}],["49",{"2":{"56":1,"246":2,"261":2,"269":1,"271":1,"274":2,"280":21,"291":1}}],["4d",{"2":{"47":3,"80":2,"124":2}}],["449647\\ttrain",{"2":{"280":1}}],["449",{"2":{"280":19}}],["447797589574892e",{"2":{"296":1}}],["447511746946438e",{"2":{"296":1}}],["447554f",{"2":{"294":1}}],["44768f",{"2":{"294":1}}],["4476895f",{"2":{"294":1}}],["447",{"2":{"263":1}}],["44746",{"2":{"189":1}}],["440494266436019e",{"2":{"296":1}}],["44048917",{"2":{"197":1}}],["44051f",{"2":{"294":1}}],["4405",{"2":{"265":1}}],["44001",{"2":{"254":1}}],["4480484697138e",{"2":{"296":1}}],["448989698923161e",{"2":{"296":1}}],["4489046f",{"2":{"294":1}}],["44894f",{"2":{"96":1}}],["448",{"2":{"263":1,"280":12}}],["4488101521328277",{"2":{"192":1}}],["445300135778785e",{"2":{"296":1}}],["445645607027763e",{"2":{"296":1}}],["445494f",{"2":{"294":1}}],["44593205f",{"2":{
"294":1}}],["445595713202687e",{"2":{"296":1}}],["4455915f",{"2":{"294":1}}],["445511f",{"2":{"294":1}}],["445",{"2":{"263":2}}],["44571495",{"2":{"166":1}}],["4450739",{"2":{"147":1}}],["446085060193778e",{"2":{"296":1}}],["446466",{"2":{"287":1}}],["446",{"2":{"280":1}}],["4462147",{"2":{"270":1}}],["446214",{"2":{"165":1}}],["446599",{"2":{"147":1}}],["4434",{"2":{"291":1}}],["4436",{"2":{"291":1}}],["443",{"2":{"200":1,"263":2,"291":1}}],["44397",{"2":{"166":1}}],["4437156",{"2":{"165":1}}],["443328",{"2":{"147":1}}],["44338",{"2":{"147":1}}],["441438436574161e",{"2":{"296":1}}],["441454",{"2":{"147":1}}],["441568910398035e",{"2":{"296":1}}],["441148f",{"2":{"294":1}}],["4416882f",{"2":{"294":1}}],["441",{"2":{"263":2,"291":2}}],["4418456",{"2":{"197":1}}],["44173f",{"2":{"96":1}}],["4422551898376484e",{"2":{"296":1}}],["442206",{"2":{"147":1}}],["4423745f",{"2":{"294":1}}],["4423828",{"2":{"261":1}}],["442",{"2":{"280":1}}],["4427",{"2":{"265":1}}],["442699",{"2":{"261":1}}],["442849",{"2":{"147":1}}],["442927",{"2":{"147":1}}],["4442135f",{"2":{"294":1}}],["444180862607065e",{"2":{"296":1}}],["4441",{"2":{"265":1}}],["444",{"2":{"263":2,"280":3}}],["444415986842068e",{"2":{"296":1}}],["44447",{"2":{"261":1}}],["44444",{"2":{"81":2,"235":2}}],["444516",{"2":{"147":1}}],["44",{"2":{"25":1,"214":1,"246":3,"261":2,"280":27}}],["4",{"2":{"2":1,"15":1,"22":1,"23":1,"25":8,"41":1,"43":4,"45":1,"46":5,"49":2,"50":2,"56":2,"67":7,"77":5,"79":28,"80":1,"81":69,"82":12,"83":6,"84":12,"85":11,"89":5,"96":6,"119":2,"123":4,"124":4,"132":8,"134":1,"135":6,"143":5,"146":8,"147":12,"153":2,"154":2,"157":8,"158":4,"163":9,"164":2,"165":3,"166":9,"170":7,"171":16,"173":2,"189":15,"191":5,"194":2,"197":1,"200":10,"205":5,"211":2,"213":1,"214":2,"220":2,"230":12,"235":5,"237":1,"238":33,"239":3,"241":9,"246":2,"247":3,"254":1,"258":6,"261":3,"263":45,"264":9,"265":20,"269":10,"270":5,"273":1,"280":21,"291":54,"292":7,"293":1,"294":100,"296":95}}],["0+560",{"2":{"239":1,"247":1}}]
,["0f4",{"2":{"293":1}}],["0f",{"2":{"213":1,"233":2}}],["0f0",{"2":{"56":2,"67":4,"89":1,"97":2,"134":2,"135":1,"170":2,"201":2,"205":3,"218":4,"232":4,"236":2,"253":8,"255":6,"258":2,"261":1,"264":4,"270":2,"293":1}}],["0k2025",{"2":{"200":1,"269":1}}],["0k",{"2":{"200":1,"269":1}}],["0>",{"2":{"147":2}}],["0x000000000705d7b0",{"2":{"209":1}}],["0x1911b814c02405e8",{"2":{"192":1}}],["0x12c522b8034ae186",{"2":{"143":1}}],["0x8c49bc52dc8a77ea",{"2":{"192":1}}],["0x8e0c3a65079041bb",{"2":{"143":1}}],["0x48d73dc42d195740",{"2":{"192":1}}],["0x4fa3403dd074e603",{"2":{"143":1}}],["0xdb2fa90498613fdf",{"2":{"192":1}}],["0xff800000>",{"2":{"147":1}}],["0x22a21880af5dc689",{"2":{"143":1,"192":1}}],["0x21617f7747d97206",{"2":{"143":1}}],["0e",{"2":{"96":7,"272":1,"274":2}}],["09",{"2":{"214":2,"265":1,"280":14}}],["0993808229258895e",{"2":{"296":1}}],["0993981f",{"2":{"294":1}}],["09932359",{"2":{"147":1}}],["099199",{"2":{"280":1}}],["099194",{"2":{"280":1}}],["099191",{"2":{"280":1}}],["099188",{"2":{"280":1}}],["099185",{"2":{"280":1}}],["099182",{"2":{"280":1}}],["099179",{"2":{"280":1}}],["099176",{"2":{"280":1}}],["099174",{"2":{"280":1}}],["099170",{"2":{"280":1}}],["099157",{"2":{"280":1}}],["0998",{"2":{"254":2}}],["099841796",{"2":{"147":1}}],["099759124",{"2":{"147":1}}],["09900095",{"2":{"147":1}}],["095091920202965e",{"2":{"296":1}}],["095",{"2":{"280":1}}],["095177f",{"2":{"294":1}}],["09517",{"2":{"147":1}}],["0952605",{"2":{"147":1}}],["09540328",{"2":{"147":1}}],["09596483",{"2":{"147":1}}],["09591715",{"2":{"147":1}}],["095548995",{"2":{"123":1}}],["09777648",{"2":{"270":1}}],["09701932",{"2":{"197":1}}],["09705801",{"2":{"147":1}}],["09799",{"2":{"147":1}}],["09757828",{"2":{"147":1}}],["09725678",{"2":{"147":1}}],["097253",{"2":{"143":1}}],["0973789",{"2":{"147":1}}],["091005509619314e",{"2":{"296":1}}],["09103",{"2":{"205":1}}],["091216f",{"2":{"294":1}}],["09126\\taccuracy",{"2":{"205":1}}],["0913593358445652",{"2":{"171":1}}],["091359335844565",{"2":{
"171":1}}],["0913238f",{"2":{"123":1}}],["0914631",{"2":{"166":1}}],["091481484",{"2":{"147":1}}],["0917963",{"2":{"147":1}}],["09189941",{"2":{"147":1}}],["0986705f",{"2":{"294":1}}],["09860833",{"2":{"147":1}}],["098568776053526e",{"2":{"296":1}}],["098543f",{"2":{"294":1}}],["09852",{"2":{"189":1}}],["0985227",{"2":{"189":2}}],["098",{"2":{"280":1}}],["0980411",{"2":{"147":1}}],["0982",{"2":{"147":1}}],["09885789",{"2":{"147":1}}],["09875466",{"2":{"123":1}}],["09878272",{"2":{"123":1}}],["093776\\ttrain",{"2":{"280":1}}],["093533",{"2":{"280":1}}],["093528",{"2":{"280":1}}],["093525",{"2":{"280":1}}],["093522",{"2":{"280":1}}],["093519",{"2":{"280":1}}],["093516",{"2":{"280":1}}],["09351656",{"2":{"195":1,"196":1}}],["093513",{"2":{"280":1}}],["093510",{"2":{"280":1}}],["0935077",{"2":{"292":2}}],["093507",{"2":{"280":1}}],["093502",{"2":{"280":1}}],["09359703",{"2":{"147":1}}],["093925126",{"2":{"197":1}}],["0939071",{"2":{"186":1}}],["093958974",{"2":{"147":1}}],["093378566",{"2":{"147":1}}],["093302496",{"2":{"123":1}}],["09321641",{"2":{"123":1}}],["093486",{"2":{"280":1}}],["09348221",{"2":{"147":1}}],["0934494",{"2":{"123":1}}],["093436174",{"2":{"123":1}}],["09222229",{"2":{"147":1}}],["092265144",{"2":{"147":1}}],["09230051",{"2":{"147":1}}],["09250315",{"2":{"147":1}}],["09255884",{"2":{"123":1}}],["09282472",{"2":{"147":1}}],["09287714",{"2":{"123":1}}],["0921624",{"2":{"123":1}}],["092198014",{"2":{"123":1}}],["092793465",{"2":{"123":1}}],["094",{"2":{"280":10}}],["0945",{"2":{"254":3}}],["094564",{"2":{"186":1}}],["094527975",{"2":{"96":1}}],["09461",{"2":{"147":1}}],["09469124",{"2":{"123":1}}],["09422591",{"2":{"147":1}}],["09430397",{"2":{"147":1}}],["09442957",{"2":{"147":1}}],["09441388",{"2":{"147":1}}],["094833545",{"2":{"123":1}}],["09485077",{"2":{"123":1}}],["096343",{"2":{"280":1}}],["096338",{"2":{"280":1}}],["096335",{"2":{"280":1}}],["096332",{"2":{"280":1}}],["096329",{"2":{"280":1}}],["096326",{"2":{"280":1}}],["096323",{"2":{"280":1}
}],["096321",{"2":{"280":1}}],["096317",{"2":{"280":1}}],["096314",{"2":{"280":1}}],["096300",{"2":{"280":1}}],["0964",{"2":{"280":1}}],["09646284",{"2":{"147":1}}],["0961363",{"2":{"147":1}}],["09668045",{"2":{"147":1}}],["09658958",{"2":{"147":1}}],["09650694",{"2":{"147":1}}],["09659223",{"2":{"123":1}}],["09699093",{"2":{"165":1}}],["096988946",{"2":{"143":1}}],["09693537",{"2":{"96":1}}],["0909078337657313e",{"2":{"296":1}}],["09096992",{"2":{"147":1}}],["0904702f",{"2":{"294":1}}],["0904621",{"2":{"147":1}}],["0908715f",{"2":{"294":1}}],["0908203",{"2":{"261":1}}],["090718",{"2":{"280":1}}],["090714",{"2":{"280":1}}],["090711",{"2":{"280":1}}],["090708",{"2":{"280":1}}],["090705",{"2":{"280":1}}],["090702",{"2":{"280":1}}],["0907582",{"2":{"147":1}}],["090699",{"2":{"280":1}}],["090696",{"2":{"280":1}}],["090693",{"2":{"280":1}}],["090690",{"2":{"280":1}}],["090676",{"2":{"280":1}}],["090154287275188e",{"2":{"296":1}}],["0901217f",{"2":{"294":1}}],["0901",{"2":{"97":1}}],["09021f",{"2":{"96":1}}],["09003057317038046",{"2":{"77":1}}],["0900306",{"2":{"50":10,"77":1}}],["07",{"2":{"246":1,"280":17}}],["078415f",{"2":{"294":1}}],["078872",{"2":{"280":1}}],["078852",{"2":{"280":1}}],["078846",{"2":{"280":1}}],["078839",{"2":{"280":1}}],["078832",{"2":{"280":1}}],["078825",{"2":{"280":1}}],["078818",{"2":{"280":1}}],["078812",{"2":{"280":1}}],["078805",{"2":{"280":1}}],["078796",{"2":{"280":1}}],["07874016",{"2":{"270":2}}],["0782",{"2":{"265":1}}],["07894",{"2":{"147":1}}],["078535f",{"2":{"294":1}}],["0785",{"2":{"147":1}}],["078624543414058e",{"2":{"296":1}}],["07865399",{"2":{"123":1}}],["078672916",{"2":{"123":1}}],["075255815069164e",{"2":{"296":1}}],["075948604206879e",{"2":{"296":1}}],["0753",{"2":{"287":1}}],["0753409",{"2":{"147":1}}],["07513529",{"2":{"165":1}}],["07513033",{"2":{"147":1}}],["07542",{"2":{"147":1}}],["0755849",{"2":{"147":1}}],["07552715",{"2":{"147":1}}],["07564111",{"2":{"147":1}}],["076808261076978e",{"2":{"296":1}}],["076973f",{"2":{
"294":1}}],["0769223",{"2":{"147":1}}],["07667785",{"2":{"273":1}}],["0764086",{"2":{"270":1}}],["0764",{"2":{"265":1}}],["0764233",{"2":{"147":1}}],["07621\\taccuracy",{"2":{"205":1}}],["076062605",{"2":{"147":1}}],["07600229",{"2":{"147":1}}],["07615535",{"2":{"147":1}}],["07638691",{"2":{"147":1}}],["077814001737008e",{"2":{"296":1}}],["0778448",{"2":{"147":1}}],["077662",{"2":{"280":1}}],["07765452",{"2":{"147":1}}],["07702",{"2":{"189":1}}],["0770206",{"2":{"189":2}}],["07701616",{"2":{"96":1}}],["0775962",{"2":{"147":1}}],["0772457",{"2":{"147":1}}],["0773392",{"2":{"147":1}}],["077724f",{"2":{"294":1}}],["07774367",{"2":{"147":1}}],["07778767",{"2":{"147":1}}],["07796077",{"2":{"147":1}}],["079246775971852e",{"2":{"296":1}}],["079204\\tval",{"2":{"280":1}}],["0792036",{"2":{"147":1}}],["079082f",{"2":{"294":1}}],["079042196",{"2":{"147":1}}],["0791126",{"2":{"186":1}}],["07915",{"2":{"147":1}}],["0799499",{"2":{"147":1}}],["07297467",{"2":{"147":1}}],["072645485",{"2":{"147":1}}],["07287f",{"2":{"96":1}}],["0728675615927385",{"2":{"50":1}}],["073",{"2":{"280":4}}],["0731",{"2":{"265":1}}],["07319629",{"2":{"147":1}}],["07376325",{"2":{"123":1}}],["07384164",{"2":{"123":1}}],["071570341989771e",{"2":{"296":1}}],["0716244f",{"2":{"294":1}}],["071",{"2":{"280":3}}],["071482725",{"2":{"197":1}}],["07133968",{"2":{"126":1}}],["07186686",{"2":{"123":1}}],["07125516",{"2":{"96":1}}],["074171f",{"2":{"294":1}}],["0740278",{"2":{"270":1}}],["07406641",{"2":{"147":1}}],["0742188",{"2":{"261":1}}],["07461\\taccuracy",{"2":{"205":1}}],["074720",{"2":{"261":1}}],["074723326",{"2":{"147":1}}],["07477753",{"2":{"123":1}}],["07489191",{"2":{"147":1}}],["074939f",{"2":{"123":1}}],["07444663",{"2":{"123":1}}],["07021",{"2":{"147":1}}],["0701",{"2":{"97":1}}],["070576",{"2":{"96":1}}],["03f0",{"2":{"272":1}}],["03",{"2":{"214":2,"272":1,"274":2,"280":8}}],["037785888",{"2":{"254":1}}],["03774s\\ttraining",{"2":{"246":1}}],["03722s\\ttraining",{"2":{"246":1}}],["037206527",{"2":
{"147":1}}],["0376s\\ttraining",{"2":{"235":1}}],["03704",{"2":{"235":2}}],["03704503",{"2":{"147":1}}],["03719\\taccuracy",{"2":{"205":1}}],["03731",{"2":{"205":1}}],["0373",{"2":{"147":1}}],["037487797",{"2":{"147":1}}],["037877575",{"2":{"147":1}}],["0329427f",{"2":{"294":1}}],["032",{"2":{"280":4}}],["032271e",{"2":{"220":1}}],["03229",{"2":{"205":1}}],["03245",{"2":{"205":1}}],["03258145",{"2":{"197":1}}],["03284671",{"2":{"165":1}}],["032847542",{"2":{"147":1}}],["0321158",{"2":{"147":1}}],["03278077",{"2":{"147":1}}],["03202845",{"2":{"96":1}}],["0340",{"2":{"265":1}}],["0340666",{"2":{"186":1}}],["034934171",{"2":{"254":1}}],["03446s\\ttraining",{"2":{"246":1}}],["03443\\taccuracy",{"2":{"205":1}}],["034259234",{"2":{"197":1}}],["034200985",{"2":{"147":1}}],["034135879720712e",{"2":{"296":1}}],["03413\\taccuracy",{"2":{"205":1}}],["0341252",{"2":{"147":1}}],["03415",{"2":{"147":1}}],["0346338",{"2":{"147":1}}],["03461915",{"2":{"147":1}}],["03475806",{"2":{"147":1}}],["033831614914848e",{"2":{"296":1}}],["0338566",{"2":{"165":1}}],["03339s\\ttraining",{"2":{"246":1}}],["033936515",{"2":{"147":1}}],["033446588",{"2":{"147":1}}],["033461835",{"2":{"147":1}}],["03308521",{"2":{"147":1}}],["03306083",{"2":{"96":1}}],["03374887",{"2":{"147":1}}],["031525195587328e",{"2":{"296":1}}],["03150s\\ttraining",{"2":{"246":1}}],["031061f",{"2":{"294":1}}],["031034712",{"2":{"147":1}}],["0312500",{"2":{"261":1}}],["03123\\taccuracy",{"2":{"205":1}}],["0311s\\ttraining",{"2":{"235":1}}],["031358e",{"2":{"220":1}}],["031386353",{"2":{"147":1}}],["031414255389526885",{"2":{"171":2}}],["031490605",{"2":{"147":1}}],["03194926",{"2":{"147":1}}],["031942792",{"2":{"147":1}}],["031756539",{"2":{"254":1}}],["03170667",{"2":{"147":1}}],["03178858",{"2":{"123":1}}],["031815775",{"2":{"123":1}}],["030222366123042e",{"2":{"296":1}}],["03001s\\ttraining",{"2":{"246":1}}],["03048s\\ttraining",{"2":{"246":1}}],["030407937",{"2":{"147":1}}],["03057s\\ttraining",{"2":{"246":1}}],["030724227
",{"2":{"147":1}}],["030336674",{"2":{"147":1}}],["03091s\\ttraining",{"2":{"246":1}}],["030987449",{"2":{"147":1}}],["030971587",{"2":{"123":1}}],["030937059",{"2":{"96":1}}],["03012\\taccuracy",{"2":{"205":1}}],["0301979",{"2":{"147":1}}],["0301",{"2":{"97":1}}],["0356342554694465e",{"2":{"296":1}}],["03569",{"2":{"147":1}}],["035584463386469e",{"2":{"296":1}}],["0355555",{"2":{"189":1}}],["0357873f",{"2":{"294":1}}],["035063371",{"2":{"254":1}}],["03504s\\ttraining",{"2":{"246":1}}],["03504307",{"2":{"96":1}}],["03584s\\ttraining",{"2":{"246":1}}],["0358925",{"2":{"96":1}}],["03532",{"2":{"205":1}}],["035161175",{"2":{"123":1}}],["035196126",{"2":{"123":1}}],["03547f",{"2":{"123":1}}],["03542879",{"2":{"123":1}}],["03657",{"2":{"147":1}}],["0365001",{"2":{"96":1}}],["036290962",{"2":{"147":1}}],["036468588",{"2":{"147":1}}],["03666526",{"2":{"96":1}}],["039640423",{"2":{"147":1}}],["039614968",{"2":{"147":1}}],["039650977",{"2":{"147":1}}],["039103575",{"2":{"147":1}}],["039153982",{"2":{"147":1}}],["039138064",{"2":{"96":1}}],["0390298",{"2":{"96":1}}],["03937737",{"2":{"96":1}}],["0388",{"2":{"265":1}}],["038802307",{"2":{"147":1}}],["0380859",{"2":{"261":1}}],["038037397",{"2":{"96":1}}],["03836s\\ttraining",{"2":{"246":1}}],["038353663",{"2":{"123":1}}],["03849",{"2":{"205":1}}],["0381317",{"2":{"166":1}}],["0381727",{"2":{"165":1}}],["0385186",{"2":{"147":1}}],["03828844",{"2":{"123":1}}],["038",{"2":{"46":3}}],["04",{"2":{"246":1,"280":20}}],["04589159336426e",{"2":{"296":1}}],["04583689",{"2":{"147":1}}],["045",{"2":{"280":2}}],["0454082635012388e",{"2":{"296":1}}],["0454",{"2":{"265":1}}],["0450307",{"2":{"186":1}}],["045083",{"2":{"96":1}}],["0453085",{"2":{"147":1}}],["0461865f",{"2":{"294":1}}],["04616411",{"2":{"147":1}}],["0467",{"2":{"265":1}}],["046786353",{"2":{"147":1}}],["04635",{"2":{"205":1}}],["046394978",{"2":{"147":1}}],["0469304",{"2":{"166":1}}],["0462949",{"2":{"166":1}}],["046468746",{"2":{"147":1}}],["047835210777135e",{"2":{"296":1}}]
,["04783\\taccuracy",{"2":{"205":1}}],["0476264f",{"2":{"294":1}}],["047",{"2":{"280":8}}],["047244094",{"2":{"270":2}}],["04728",{"2":{"147":1}}],["047050234",{"2":{"147":1}}],["047006823",{"2":{"147":1}}],["047496427",{"2":{"147":1}}],["04715905",{"2":{"147":1}}],["047317084",{"2":{"147":1}}],["0447",{"2":{"265":1}}],["044715x^3",{"2":{"67":1}}],["04460\\taccuracy",{"2":{"205":1}}],["04407",{"2":{"205":1}}],["044987272",{"2":{"147":1}}],["04447686",{"2":{"147":1}}],["044427596",{"2":{"147":1}}],["049668143568285e",{"2":{"296":1}}],["0491123",{"2":{"270":1}}],["049741872",{"2":{"254":1}}],["04971",{"2":{"147":1}}],["04903387",{"2":{"197":1}}],["0499631",{"2":{"186":1}}],["04993449",{"2":{"96":1}}],["04936\\taccuracy",{"2":{"205":1}}],["04933",{"2":{"147":1}}],["049328804",{"2":{"147":1}}],["0492191",{"2":{"147":1}}],["049217567",{"2":{"147":1}}],["0480",{"2":{"265":1}}],["04811",{"2":{"166":1}}],["048154727",{"2":{"147":1}}],["04823295",{"2":{"147":1}}],["048471738",{"2":{"147":1}}],["04899112",{"2":{"147":1}}],["048821628",{"2":{"147":1}}],["048522998",{"2":{"146":1}}],["0433457913626107e",{"2":{"296":1}}],["0435387f",{"2":{"294":1}}],["0439453",{"2":{"261":1}}],["043952383",{"2":{"147":1}}],["043883",{"2":{"189":1}}],["04343",{"2":{"147":1}}],["04368514",{"2":{"147":1}}],["043790642",{"2":{"147":1}}],["04319424",{"2":{"123":1}}],["043209497",{"2":{"123":1}}],["041705",{"2":{"261":1}}],["04173",{"2":{"147":1}}],["04199",{"2":{"205":1}}],["04199602",{"2":{"197":1}}],["041031003",{"2":{"163":1}}],["04119787",{"2":{"147":1}}],["041269857",{"2":{"123":1}}],["041253403",{"2":{"123":1}}],["041368797",{"2":{"123":1}}],["042415008",{"2":{"270":1}}],["0424012",{"2":{"186":1}}],["0428",{"2":{"265":1}}],["04265762",{"2":{"197":1}}],["042543",{"2":{"165":1}}],["04229615",{"2":{"147":1}}],["042247728",{"2":{"96":1}}],["042990997",{"2":{"147":1}}],["042965017",{"2":{"123":1}}],["042984415",{"2":{"123":1}}],["040009234571584e",{"2":{"296":1}}],["0400456f",{"2":{"294":1}}],["0400
8\\taccuracy",{"2":{"205":1}}],["04063\\taccuracy",{"2":{"205":1}}],["0402095",{"2":{"165":1}}],["040403437",{"2":{"147":1}}],["0404982",{"2":{"96":1}}],["040938716",{"2":{"147":1}}],["04019666",{"2":{"147":1}}],["0401",{"2":{"97":1}}],["04d",{"2":{"97":1}}],["0≤ω",{"2":{"89":1}}],["08",{"2":{"195":1,"246":1,"280":278}}],["08613788",{"2":{"273":1}}],["0866141",{"2":{"270":2}}],["08694684883050086",{"2":{"192":4}}],["0869468",{"2":{"189":1}}],["08694564",{"2":{"147":1}}],["086430185",{"2":{"165":1}}],["08603747",{"2":{"147":1}}],["082395f",{"2":{"294":1}}],["082013",{"2":{"280":1}}],["082002",{"2":{"280":1}}],["0829067",{"2":{"270":1}}],["0829282",{"2":{"163":1}}],["082812",{"2":{"261":1}}],["0824661",{"2":{"147":1}}],["0826899431040455e",{"2":{"296":1}}],["08268406",{"2":{"123":1}}],["08260414",{"2":{"147":1}}],["082605906",{"2":{"123":1}}],["08269734",{"2":{"147":1}}],["084990",{"2":{"280":1}}],["08437178",{"2":{"147":1}}],["08455851",{"2":{"147":1}}],["084573634",{"2":{"123":1}}],["084569596",{"2":{"96":1}}],["0838746f",{"2":{"294":1}}],["08387",{"2":{"280":2}}],["0836640431256135e",{"2":{"296":1}}],["083633",{"2":{"261":1}}],["0836404",{"2":{"147":1}}],["0839844",{"2":{"261":2}}],["08311",{"2":{"205":1}}],["08358303",{"2":{"147":1}}],["0833",{"2":{"147":1}}],["08338781",{"2":{"147":1}}],["08333",{"2":{"81":1}}],["088138267774102e",{"2":{"296":1}}],["088204",{"2":{"287":1}}],["088053f",{"2":{"294":1}}],["0880",{"2":{"265":1}}],["088512",{"2":{"261":1}}],["0884834705765853",{"2":{"171":2}}],["08846138",{"2":{"147":1}}],["08872778",{"2":{"147":1}}],["0883858638065282e",{"2":{"296":1}}],["0883609",{"2":{"147":1}}],["08832364",{"2":{"147":1}}],["0894328193482666e",{"2":{"296":1}}],["0899",{"2":{"265":1}}],["089975506",{"2":{"147":1}}],["0898651",{"2":{"147":1}}],["08981f",{"2":{"96":1}}],["08927501",{"2":{"147":1}}],["085032",{"2":{"280":1}}],["08502",{"2":{"280":2}}],["085028",{"2":{"280":1}}],["085025",{"2":{"280":1}}],["085022",{"2":{"280":1}}],["085019",{"2":{"280
":1}}],["085016",{"2":{"280":1}}],["085013",{"2":{"280":1}}],["085010",{"2":{"280":1}}],["085007",{"2":{"280":1}}],["085004",{"2":{"280":1}}],["0855",{"2":{"265":1}}],["08554761",{"2":{"147":1}}],["08517",{"2":{"189":1}}],["085170805",{"2":{"189":1}}],["0851708",{"2":{"189":2}}],["085806124",{"2":{"147":1}}],["08589938",{"2":{"147":1}}],["08527214",{"2":{"96":1}}],["081624649988187e",{"2":{"296":1}}],["08162795",{"2":{"123":1}}],["081672571066655e",{"2":{"296":1}}],["0818675f",{"2":{"294":1}}],["08185f",{"2":{"294":1}}],["0810268f",{"2":{"294":1}}],["081994",{"2":{"280":1}}],["081987",{"2":{"280":1}}],["081981",{"2":{"280":1}}],["081974",{"2":{"280":1}}],["081967",{"2":{"280":1}}],["081961",{"2":{"280":1}}],["08196s\\ttraining",{"2":{"246":1}}],["081954",{"2":{"280":1}}],["081947",{"2":{"280":1}}],["081918",{"2":{"280":1}}],["081932",{"2":{"123":1}}],["08193636",{"2":{"123":1}}],["081931",{"2":{"123":1}}],["0811164",{"2":{"147":1}}],["08115203",{"2":{"147":1}}],["08152549",{"2":{"147":1}}],["08178409",{"2":{"147":1}}],["0800394598880278e",{"2":{"296":1}}],["0803165f",{"2":{"294":1}}],["08034529",{"2":{"273":1}}],["080871",{"2":{"270":1}}],["080980673910808e",{"2":{"296":1}}],["08098776",{"2":{"147":1}}],["08096753",{"2":{"147":1}}],["08025096",{"2":{"147":1}}],["08022",{"2":{"46":1,"65":1}}],["0801",{"2":{"97":1}}],["08073235",{"2":{"97":1}}],["0872212146714704e",{"2":{"296":1}}],["0872986f",{"2":{"294":1}}],["087847",{"2":{"280":1}}],["087842",{"2":{"280":1}}],["087839",{"2":{"280":1}}],["087836",{"2":{"280":1}}],["087833",{"2":{"280":1}}],["087830",{"2":{"280":1}}],["087827",{"2":{"280":1}}],["087824",{"2":{"280":1}}],["087821",{"2":{"280":1}}],["087817",{"2":{"280":1}}],["087802",{"2":{"280":1}}],["087390475",{"2":{"273":1}}],["0877254",{"2":{"270":1}}],["087764904",{"2":{"147":1}}],["08713347",{"2":{"155":1}}],["08765691",{"2":{"147":1}}],["08746583",{"2":{"147":1}}],["08743f",{"2":{"96":1}}],["08797912",{"2":{"96":1}}],["02f0",{"2":{"255":2}}],["0229213f",{"2":
{"294":1}}],["0224993932037384e",{"2":{"296":1}}],["0224994f",{"2":{"294":1}}],["022407684",{"2":{"254":1}}],["022041641",{"2":{"254":1}}],["022836153601422e",{"2":{"296":1}}],["022884602",{"2":{"254":1}}],["0228109822209213",{"2":{"171":2}}],["0222",{"2":{"254":2}}],["02250\\taccuracy",{"2":{"205":1}}],["022579487",{"2":{"123":1}}],["022576151",{"2":{"123":1}}],["02279",{"2":{"205":1}}],["02529s\\ttraining",{"2":{"246":1}}],["02520s\\ttraining",{"2":{"246":1}}],["02528",{"2":{"205":1}}],["02517s\\ttraining",{"2":{"246":1}}],["02576s\\ttraining",{"2":{"246":2}}],["02533s\\ttraining",{"2":{"246":1}}],["02533\\taccuracy",{"2":{"205":1}}],["02535537",{"2":{"197":1}}],["025357665",{"2":{"165":1}}],["025493514",{"2":{"147":1}}],["0216",{"2":{"265":1}}],["0215",{"2":{"265":1}}],["021555956",{"2":{"254":1}}],["021532167",{"2":{"254":1}}],["02157\\taccuracy",{"2":{"205":1}}],["021726867",{"2":{"147":1}}],["02109389",{"2":{"147":1}}],["0213176",{"2":{"96":1}}],["027586516788974e",{"2":{"296":1}}],["027571f",{"2":{"294":1}}],["027509f",{"2":{"294":1}}],["027592428",{"2":{"147":1}}],["027456025",{"2":{"254":1}}],["0271588+0",{"2":{"186":1}}],["02767261",{"2":{"147":1}}],["027224839",{"2":{"147":1}}],["02772s\\ttraining",{"2":{"246":1}}],["027705256",{"2":{"96":1}}],["02778",{"2":{"81":1}}],["0288567103354e",{"2":{"296":1}}],["028814f",{"2":{"294":1}}],["028999f",{"2":{"294":1}}],["028",{"2":{"280":1}}],["02850357",{"2":{"273":1}}],["0285957",{"2":{"165":1}}],["028596466",{"2":{"147":1}}],["0283",{"2":{"265":1}}],["02835\\taccuracy",{"2":{"205":1}}],["028678153",{"2":{"254":1}}],["02879s\\ttraining",{"2":{"246":1}}],["028738922",{"2":{"165":1}}],["02828s\\ttraining",{"2":{"246":1}}],["028244259",{"2":{"96":1}}],["0281603",{"2":{"147":1}}],["02841s\\ttraining",{"2":{"246":1}}],["02846s\\ttraining",{"2":{"246":1}}],["028461456",{"2":{"143":1}}],["02840",{"2":{"205":1}}],["028482005",{"2":{"147":1}}],["02806s\\ttraining",{"2":{"246":1}}],["028036328",{"2":{"123":1}}],["028029898",
{"2":{"123":1}}],["020034342551356e",{"2":{"296":1}}],["0200804f",{"2":{"294":1}}],["020825051",{"2":{"254":1}}],["02055",{"2":{"205":1}}],["0207279",{"2":{"153":1}}],["020255674",{"2":{"147":1}}],["020219602",{"2":{"123":1}}],["020608762",{"2":{"147":1}}],["020335387",{"2":{"123":1}}],["02034735",{"2":{"123":1}}],["0201",{"2":{"97":1}}],["029209377926103e",{"2":{"296":1}}],["029283267",{"2":{"96":1}}],["0297",{"2":{"265":1}}],["02971s\\ttraining",{"2":{"246":1}}],["029342793",{"2":{"254":1}}],["02964765308691042",{"2":{"192":4}}],["0296477",{"2":{"189":1}}],["02998",{"2":{"147":1}}],["02995433",{"2":{"124":1}}],["02946",{"2":{"147":1,"205":1}}],["029835312",{"2":{"123":1}}],["023622",{"2":{"270":2}}],["02365s\\ttraining",{"2":{"246":1}}],["02367s\\ttraining",{"2":{"246":1}}],["02338s\\ttraining",{"2":{"246":1}}],["02334s\\ttraining",{"2":{"246":1}}],["02330s\\ttraining",{"2":{"246":1}}],["02337",{"2":{"205":1}}],["02345s\\ttraining",{"2":{"246":1}}],["02349s\\ttraining",{"2":{"246":1}}],["02343s\\ttraining",{"2":{"246":1}}],["02341s\\ttraining",{"2":{"246":1}}],["0234102",{"2":{"147":1}}],["02370s\\ttraining",{"2":{"246":1}}],["02379s\\ttraining",{"2":{"246":2}}],["02371s\\ttraining",{"2":{"246":1}}],["02376s\\ttraining",{"2":{"246":3}}],["02373s\\ttraining",{"2":{"246":1}}],["02377s\\ttraining",{"2":{"246":1}}],["02378s\\ttraining",{"2":{"246":2}}],["02374s\\ttraining",{"2":{"246":2}}],["0238676",{"2":{"270":1}}],["02386s\\ttraining",{"2":{"246":1}}],["02389s\\ttraining",{"2":{"246":2}}],["02383s\\ttraining",{"2":{"246":2}}],["02380s\\ttraining",{"2":{"246":2}}],["02385s\\ttraining",{"2":{"246":1}}],["023857513",{"2":{"147":1}}],["02382s\\ttraining",{"2":{"246":5}}],["02384s\\ttraining",{"2":{"246":1}}],["02388s\\ttraining",{"2":{"246":2}}],["02387s\\ttraining",{"2":{"246":3}}],["02396s\\ttraining",{"2":{"246":1}}],["02397s\\ttraining",{"2":{"246":1}}],["02394s\\ttraining",{"2":{"246":1}}],["02395s\\ttraining",{"2":{"246":2}}],["02391\\taccuracy",{"2":{"205":1}}],
["023275209512656e",{"2":{"296":1}}],["02325s\\ttraining",{"2":{"246":1}}],["02326s\\ttraining",{"2":{"246":2}}],["02324",{"2":{"147":1}}],["023283502",{"2":{"147":1}}],["0235466f",{"2":{"294":1}}],["0235",{"2":{"265":1}}],["02356s\\ttraining",{"2":{"246":2}}],["023569956",{"2":{"147":1}}],["02355s\\ttraining",{"2":{"246":1}}],["02357im",{"2":{"186":1}}],["0235815",{"2":{"123":1}}],["023097955",{"2":{"147":1}}],["023026714",{"2":{"96":1}}],["02316s\\ttraining",{"2":{"246":1}}],["02316084",{"2":{"123":1}}],["023150858",{"2":{"147":1}}],["023147244",{"2":{"123":1}}],["026",{"2":{"280":2}}],["0262",{"2":{"265":1}}],["0262358",{"2":{"147":1}}],["0260",{"2":{"265":1}}],["026029184",{"2":{"147":1}}],["0269",{"2":{"265":1}}],["02670\\taccuracy",{"2":{"205":1}}],["026193827",{"2":{"147":1}}],["026323957",{"2":{"123":1}}],["026362807",{"2":{"123":1}}],["02653",{"2":{"205":1}}],["026577514",{"2":{"123":1}}],["026588852",{"2":{"123":1}}],["026523968",{"2":{"96":1}}],["026677115",{"2":{"123":1}}],["02649f",{"2":{"96":1}}],["026844418",{"2":{"96":1}}],["0247411343498385e",{"2":{"296":1}}],["02470s\\ttraining",{"2":{"246":2}}],["02469948485016e",{"2":{"296":1}}],["02467s\\ttraining",{"2":{"246":1}}],["0245323f",{"2":{"294":1}}],["02458f",{"2":{"294":1}}],["02456s\\ttraining",{"2":{"246":1}}],["0245608",{"2":{"147":1}}],["02429s\\ttraining",{"2":{"246":1}}],["024228286",{"2":{"147":1}}],["024228822",{"2":{"123":1}}],["0242259",{"2":{"96":1}}],["02495s\\ttraining",{"2":{"246":1}}],["024442466",{"2":{"254":1}}],["02444s\\ttraining",{"2":{"246":1}}],["02445s\\ttraining",{"2":{"246":1}}],["02480s\\ttraining",{"2":{"246":1}}],["02482s\\ttraining",{"2":{"246":1}}],["0248332",{"2":{"96":1}}],["02408s\\ttraining",{"2":{"246":1}}],["02401s\\ttraining",{"2":{"246":2}}],["02400s\\ttraining",{"2":{"246":1}}],["02402s\\ttraining",{"2":{"246":1}}],["02407s\\ttraining",{"2":{"246":4}}],["024163416",{"2":{"123":1}}],["024",{"2":{"96":1}}],["02",{"2":{"67":1,"198":1,"207":1,"215":1,"220":20,"222":
1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"280":1,"281":1,"289":1,"298":1}}],["062348275420957e",{"2":{"296":1}}],["0627",{"2":{"265":1}}],["06272482",{"2":{"147":1}}],["0622",{"2":{"265":1}}],["0625000",{"2":{"261":1}}],["06208062",{"2":{"165":1}}],["0621394f",{"2":{"294":1}}],["062155924699545",{"2":{"171":1}}],["0621559246995447",{"2":{"171":1}}],["06217661",{"2":{"147":1}}],["062121924",{"2":{"147":1}}],["062463008",{"2":{"147":1}}],["064331025",{"2":{"147":1}}],["06445622",{"2":{"147":1}}],["06445969",{"2":{"147":1}}],["06450",{"2":{"65":1}}],["0612961799879316e",{"2":{"296":1}}],["061299384",{"2":{"147":1}}],["0612806f",{"2":{"294":1}}],["061570",{"2":{"261":1}}],["0619725",{"2":{"147":1}}],["061889123",{"2":{"147":1}}],["061646424",{"2":{"123":1}}],["066951980577018e",{"2":{"296":1}}],["06697127",{"2":{"123":1}}],["066380357968293e",{"2":{"296":1}}],["066327532155177e",{"2":{"296":1}}],["0663788f",{"2":{"294":1}}],["0661346f",{"2":{"294":1}}],["0661273",{"2":{"166":1}}],["0664062",{"2":{"261":1}}],["06680218",{"2":{"147":1}}],["06682308",{"2":{"147":1}}],["0667989",{"2":{"123":1}}],["068345387929354e",{"2":{"296":1}}],["068362f",{"2":{"294":1}}],["06807",{"2":{"205":1}}],["06884",{"2":{"205":1}}],["06885",{"2":{"147":1}}],["0688707",{"2":{"123":1}}],["0684376",{"2":{"147":1}}],["06879873",{"2":{"147":1}}],["06875219",{"2":{"123":1}}],["06877028",{"2":{"123":1}}],["067131119436574e",{"2":{"296":1}}],["0671675f",{"2":{"294":1}}],["06715196",{"2":{"147":1}}],["067",{"2":{"280":1}}],["0673828",{"2":{"261":1}}],["067008324",{"2":{"147":1}}],["06704105",{"2":{"123":1}}],["06777584",{"2":{"270":1}}],["06777759",{"2":{"147":1}}],["06776868",{"2":{"123":1}}],["06740383",{"2":{"147":1}}],["06782",{"2":{"280":1}}],["06781",{"2":{"147":1}}],["06784943",{"2":{"123":1}}],["06788022",{"2":{"96":1}}],["0635604339854782e",{"2":{"296":1}}],["06352646",{"2":{"147":1}}],["0633827f",{"2":{"294":1}}],["063393466",{"2":{"123":1}}],["06339173",{"2":{"123":1}}],["06315138",{"
2":{"147":1}}],["063480295",{"2":{"147":1}}],["06347208",{"2":{"96":1}}],["06360\\taccuracy",{"2":{"205":1}}],["063676804",{"2":{"147":1}}],["06368081",{"2":{"147":1}}],["0636671",{"2":{"123":1}}],["0694488240208985e",{"2":{"296":1}}],["06942153",{"2":{"96":1}}],["0692383f",{"2":{"294":1}}],["0692",{"2":{"265":1}}],["0692472",{"2":{"147":1}}],["06973958",{"2":{"147":1}}],["0698316",{"2":{"270":1}}],["069884226",{"2":{"147":1}}],["0698748",{"2":{"96":1}}],["06953073",{"2":{"123":1}}],["06066203",{"2":{"273":1}}],["06063513",{"2":{"147":1}}],["060633026",{"2":{"147":1}}],["06080796",{"2":{"147":1}}],["060551327",{"2":{"147":1}}],["06052852",{"2":{"96":1}}],["06003239",{"2":{"123":1}}],["0601",{"2":{"97":1}}],["06026435",{"2":{"96":1}}],["060276944",{"2":{"96":1}}],["0652513113287296e",{"2":{"296":1}}],["06528456",{"2":{"147":1}}],["065457616139575e",{"2":{"296":1}}],["065425f",{"2":{"294":1}}],["065813f",{"2":{"294":1}}],["06582927",{"2":{"123":1}}],["0656201f",{"2":{"294":1}}],["065",{"2":{"280":1}}],["06592303",{"2":{"147":1}}],["06574188",{"2":{"123":1}}],["065509446",{"2":{"96":1}}],["065325744",{"2":{"96":1}}],["06",{"2":{"50":1,"280":11,"287":20}}],["0549",{"2":{"265":1}}],["054846566",{"2":{"147":1}}],["05f0",{"2":{"254":2,"255":1}}],["0538062782561645e",{"2":{"296":1}}],["0538752f",{"2":{"294":1}}],["053120755",{"2":{"270":1}}],["05358",{"2":{"205":1}}],["053398557",{"2":{"147":1}}],["053759247",{"2":{"147":1}}],["05379219",{"2":{"147":1}}],["0581",{"2":{"265":1}}],["058104068",{"2":{"147":1}}],["0580",{"2":{"265":1}}],["058005",{"2":{"261":1}}],["0580112im",{"2":{"186":1}}],["05892",{"2":{"205":1}}],["05899\\taccuracy",{"2":{"205":1}}],["058901027",{"2":{"147":1}}],["0587707f",{"2":{"294":1}}],["05877",{"2":{"147":1}}],["0523686092035774e",{"2":{"296":1}}],["0523839",{"2":{"147":1}}],["0524585f",{"2":{"294":1}}],["0526",{"2":{"265":1}}],["05262428",{"2":{"147":1}}],["05218964",{"2":{"197":1}}],["05290228",{"2":{"147":1}}],["052803323",{"2":{"147":1}}],["05208
6722",{"2":{"123":1}}],["059065611064305e",{"2":{"296":1}}],["059055757",{"2":{"147":1}}],["0597239f",{"2":{"294":1}}],["05951088",{"2":{"270":1}}],["059521634",{"2":{"147":1}}],["0592",{"2":{"265":1}}],["059224278",{"2":{"147":1}}],["0594",{"2":{"254":3}}],["059475474",{"2":{"147":1}}],["059655108",{"2":{"147":1}}],["059671473",{"2":{"147":1}}],["0599498852892281e",{"2":{"296":1}}],["05998",{"2":{"147":1}}],["059937198",{"2":{"147":1}}],["059965573",{"2":{"123":1}}],["059382",{"2":{"96":1}}],["051804f",{"2":{"294":1}}],["051825043",{"2":{"147":1}}],["051",{"2":{"280":3}}],["0513",{"2":{"265":1}}],["05177",{"2":{"205":1}}],["0517578e",{"2":{"166":1}}],["051994912",{"2":{"147":1}}],["0515602995979942e",{"2":{"296":1}}],["051536214",{"2":{"123":1}}],["05158",{"2":{"81":1}}],["051605415",{"2":{"123":1}}],["057005208",{"2":{"195":1,"196":1}}],["0572769",{"2":{"147":1}}],["05721",{"2":{"147":1}}],["05742457",{"2":{"147":1}}],["0574222",{"2":{"123":1}}],["057571076",{"2":{"147":1}}],["057713665",{"2":{"147":1}}],["05779423",{"2":{"143":1}}],["057887085",{"2":{"123":1}}],["050334737",{"2":{"147":1}}],["05095075",{"2":{"147":1}}],["0501",{"2":{"97":1}}],["050780848",{"2":{"147":1}}],["0507623",{"2":{"96":1}}],["05070",{"2":{"67":1}}],["0567135893997024e",{"2":{"296":1}}],["0567677f",{"2":{"294":1}}],["0560465288091306e",{"2":{"296":1}}],["056034062",{"2":{"147":1}}],["05684s\\ttraining",{"2":{"246":1}}],["0568757",{"2":{"147":1}}],["05657739601024536",{"2":{"192":1}}],["05659819",{"2":{"147":1}}],["0564903986057956",{"2":{"171":2}}],["056",{"2":{"97":3,"294":1}}],["0559924f",{"2":{"294":1}}],["055999022",{"2":{"147":1}}],["055",{"2":{"280":3}}],["0551181",{"2":{"270":2}}],["0551505",{"2":{"147":1}}],["05522",{"2":{"147":1}}],["055663344",{"2":{"147":1}}],["05560146",{"2":{"124":1}}],["055411585",{"2":{"147":1}}],["055422388",{"2":{"147":1}}],["055362783",{"2":{"147":1}}],["05550\\taccuracy",{"2":{"205":1}}],["055573903",{"2":{"147":1}}],["05556",{"2":{"81":1}}],["055711076"
,{"2":{"96":1}}],["05",{"2":{"50":1,"97":2,"147":1,"246":2,"265":1,"280":13}}],["00",{"2":{"214":2,"246":48,"280":541}}],["0097656",{"2":{"261":1}}],["009721650",{"2":{"254":1}}],["009720063",{"2":{"147":1}}],["009044455",{"2":{"254":1}}],["0090858545",{"2":{"147":1}}],["009084041f0",{"2":{"50":1}}],["009156350",{"2":{"254":1}}],["00914141",{"2":{"197":1}}],["009686996",{"2":{"254":1}}],["009989802",{"2":{"254":1}}],["009982477",{"2":{"254":1}}],["00996",{"2":{"205":1}}],["009963844931984",{"2":{"171":2}}],["009353531",{"2":{"254":1}}],["009397319",{"2":{"123":1}}],["0094632419040616e",{"2":{"296":1}}],["00945\\taccuracy",{"2":{"205":1}}],["009416734",{"2":{"123":1}}],["009401777",{"2":{"123":1}}],["009405821",{"2":{"123":1}}],["00942f",{"2":{"123":1}}],["008",{"2":{"280":4}}],["008794412",{"2":{"254":1}}],["008573188",{"2":{"254":1}}],["008559611",{"2":{"254":1}}],["008988982",{"2":{"254":1}}],["008673832",{"2":{"254":1}}],["008210877",{"2":{"254":1}}],["00823197",{"2":{"189":1}}],["008269919",{"2":{"147":1}}],["00835054",{"2":{"147":1}}],["0081296405",{"2":{"147":1}}],["008840925",{"2":{"147":1}}],["008846691",{"2":{"147":1}}],["008825431",{"2":{"123":1}}],["008014115",{"2":{"123":1}}],["008014375",{"2":{"123":1}}],["00803444",{"2":{"96":1}}],["00707058939334e",{"2":{"296":1}}],["007158f",{"2":{"294":1}}],["0075345926442328e",{"2":{"296":1}}],["0075346f",{"2":{"294":1}}],["007559542",{"2":{"254":1}}],["007330933",{"2":{"254":1}}],["0073688403",{"2":{"147":1}}],["0079646f",{"2":{"294":1}}],["007961722",{"2":{"254":1}}],["007984848",{"2":{"165":1}}],["007904152",{"2":{"147":1}}],["0072900057",{"2":{"147":1}}],["007675517",{"2":{"254":1}}],["0076573063",{"2":{"147":1}}],["0076395273",{"2":{"96":1}}],["0078154",{"2":{"270":1}}],["0078206",{"2":{"147":1}}],["007873881",{"2":{"96":1}}],["00744",{"2":{"147":1}}],["0074208006",{"2":{"123":1}}],["007434898",{"2":{"123":1}}],["0077208612465763e",{"2":{"296":1}}],["007789471",{"2":{"254":1}}],["007744140",{"2":{"254":1}}],["
007705185",{"2":{"147":1}}],["00775477",{"2":{"124":1}}],["0077551096",{"2":{"123":1}}],["007762789",{"2":{"123":1}}],["0043",{"2":{"265":1}}],["004308250",{"2":{"254":1}}],["004308246",{"2":{"254":1}}],["004727511",{"2":{"254":1}}],["004753132",{"2":{"123":1}}],["004263383",{"2":{"254":1}}],["004263131",{"2":{"254":1}}],["0048576f",{"2":{"294":1}}],["004831538",{"2":{"254":1}}],["004818310",{"2":{"254":1}}],["004148528",{"2":{"254":1}}],["0041270256",{"2":{"96":1}}],["004580559059229e",{"2":{"296":1}}],["004589280",{"2":{"254":1}}],["004531536",{"2":{"254":1}}],["004514552",{"2":{"254":1}}],["0045406874",{"2":{"96":1}}],["00407581",{"2":{"197":1}}],["004908773",{"2":{"147":1}}],["004666437",{"2":{"254":1}}],["00461",{"2":{"147":1}}],["004629241",{"2":{"147":1}}],["0046932907",{"2":{"123":1}}],["004689035",{"2":{"123":1}}],["0044614216",{"2":{"123":1}}],["004457889",{"2":{"123":1}}],["00443555",{"2":{"96":1}}],["004425203",{"2":{"96":1}}],["006596491",{"2":{"254":1}}],["006546173",{"2":{"254":1}}],["006389363",{"2":{"254":1}}],["006852372",{"2":{"254":1}}],["0068602865",{"2":{"147":1}}],["006111934",{"2":{"254":1}}],["006038338",{"2":{"254":1}}],["006062332",{"2":{"96":1}}],["006657102",{"2":{"254":1}}],["006656272",{"2":{"96":1}}],["0066712513",{"2":{"197":1}}],["0067944888674725e",{"2":{"296":1}}],["006780343",{"2":{"147":1}}],["0067291846",{"2":{"123":1}}],["0069064857",{"2":{"147":1}}],["006228818",{"2":{"254":1}}],["006220151",{"2":{"254":1}}],["006241375",{"2":{"254":1}}],["006240552",{"2":{"123":1}}],["006276353",{"2":{"254":1}}],["0062883645",{"2":{"147":1}}],["006239144",{"2":{"123":1}}],["0064240773",{"2":{"96":1}}],["002668173",{"2":{"254":1}}],["0026271285",{"2":{"127":1}}],["002202400",{"2":{"254":1}}],["002219696",{"2":{"254":1}}],["0022123815",{"2":{"96":1}}],["002241082",{"2":{"254":1}}],["002317760",{"2":{"254":1}}],["002314395",{"2":{"254":1}}],["00234933",{"2":{"96":1}}],["002419390",{"2":{"254":1}}],["002460307",{"2":{"254":1}}],["002489067",{"2"
:{"254":1}}],["0024843565",{"2":{"147":1}}],["002483191",{"2":{"147":1}}],["002503448",{"2":{"254":1}}],["002532476",{"2":{"254":1}}],["002512253",{"2":{"254":1}}],["002584497",{"2":{"254":1}}],["00258",{"2":{"147":1}}],["002045441",{"2":{"254":1}}],["002052142",{"2":{"254":1}}],["002033974",{"2":{"254":1}}],["0020273982",{"2":{"147":1}}],["00201115524",{"2":{"97":1}}],["002898715",{"2":{"254":1}}],["0028994146",{"2":{"96":1}}],["002849674",{"2":{"254":1}}],["002823828",{"2":{"254":1}}],["0028807875",{"2":{"147":1}}],["002771034",{"2":{"254":1}}],["002704730",{"2":{"254":1}}],["002748450",{"2":{"254":1}}],["0027464977",{"2":{"96":1}}],["00271383",{"2":{"96":1}}],["0027350332",{"2":{"96":1}}],["002933637",{"2":{"254":1}}],["0029306945",{"2":{"96":1}}],["002955514",{"2":{"254":1}}],["002917211",{"2":{"254":1}}],["002969833",{"2":{"254":1}}],["002975470",{"2":{"254":1}}],["0029033197",{"2":{"96":1}}],["002143627",{"2":{"254":1}}],["0021180161",{"2":{"123":1}}],["0021197717",{"2":{"123":1}}],["0021157824",{"2":{"96":1}}],["0021741025",{"2":{"96":1}}],["002197485",{"2":{"96":1}}],["0039115f",{"2":{"294":1}}],["003963022",{"2":{"254":1}}],["0039605754",{"2":{"96":1}}],["003234812",{"2":{"254":1}}],["003274539",{"2":{"254":1}}],["00326585",{"2":{"96":1}}],["0031626f",{"2":{"294":1}}],["003183292",{"2":{"254":1}}],["003130429",{"2":{"254":1}}],["0031301186",{"2":{"96":1}}],["003135182",{"2":{"254":1}}],["0031371699",{"2":{"96":1}}],["003816272",{"2":{"254":1}}],["0038155357",{"2":{"96":1}}],["003847382",{"2":{"254":1}}],["003096203",{"2":{"254":1}}],["003098861",{"2":{"254":1}}],["003023159",{"2":{"254":1}}],["00303222",{"2":{"96":1}}],["0035893882",{"2":{"273":1}}],["003585879",{"2":{"96":1}}],["003575745",{"2":{"254":1}}],["0035986663",{"2":{"147":1}}],["0034529164089138e",{"2":{"296":1}}],["003485207",{"2":{"147":1}}],["0034407252",{"2":{"96":1}}],["0033476425243785e",{"2":{"296":1}}],["0033046f",{"2":{"294":1}}],["003336374",{"2":{"254":1}}],["003328452",{"2":{"254":1}}
],["003325696",{"2":{"123":1}}],["0033274868",{"2":{"123":1}}],["003383829",{"2":{"96":1}}],["003730775",{"2":{"254":1}}],["0037318026",{"2":{"96":1}}],["00376332",{"2":{"96":1}}],["003750000000000005",{"2":{"50":1}}],["001965113",{"2":{"254":1}}],["001906899",{"2":{"254":1}}],["001910425",{"2":{"254":1}}],["001986223",{"2":{"254":1}}],["00198415",{"2":{"197":1}}],["001348543",{"2":{"254":1}}],["001342141",{"2":{"254":1}}],["001363893",{"2":{"254":1}}],["001365761",{"2":{"254":1}}],["001390206",{"2":{"254":1}}],["001392871",{"2":{"254":1}}],["001394353",{"2":{"254":1}}],["001394610",{"2":{"254":1}}],["001306025",{"2":{"254":1}}],["001308756",{"2":{"254":1}}],["001301785",{"2":{"254":1}}],["001351097",{"2":{"254":1}}],["001203460",{"2":{"254":1}}],["001202732",{"2":{"254":1}}],["001224924",{"2":{"254":1}}],["001220055",{"2":{"254":1}}],["001242174",{"2":{"254":1}}],["001248004",{"2":{"254":1}}],["001258306",{"2":{"254":1}}],["001217105",{"2":{"254":1}}],["001213797",{"2":{"254":1}}],["001211297",{"2":{"254":1}}],["001270391",{"2":{"254":1}}],["001271797",{"2":{"254":1}}],["00127967",{"2":{"96":1}}],["001261572",{"2":{"254":1}}],["001231446",{"2":{"254":1}}],["001293942",{"2":{"254":1}}],["001",{"2":{"220":1,"280":1}}],["001886792",{"2":{"254":1}}],["001881554",{"2":{"254":1}}],["001840178",{"2":{"254":1}}],["001836546",{"2":{"254":1}}],["0018348376",{"2":{"147":1}}],["001818714",{"2":{"254":1}}],["001854344",{"2":{"254":1}}],["001857541",{"2":{"254":1}}],["001822183",{"2":{"254":1}}],["001803906",{"2":{"254":1}}],["0018621369",{"2":{"96":1}}],["001518319",{"2":{"254":1}}],["001575661",{"2":{"254":1}}],["001594038",{"2":{"254":1}}],["001524673",{"2":{"254":1}}],["0015270555",{"2":{"96":1}}],["001584558",{"2":{"254":1}}],["0015867107",{"2":{"147":1}}],["001559482",{"2":{"254":1}}],["001530622",{"2":{"254":1}}],["0015469915",{"2":{"147":1}}],["001f0",{"2":{"97":1,"124":1,"235":1,"246":1}}],["001465745",{"2":{"254":1}}],["00146704",{"2":{"96":1}}],["001444635",{"2":{"254
":1}}],["001451151",{"2":{"254":1}}],["001458327",{"2":{"254":1}}],["001458622",{"2":{"254":1}}],["001430126",{"2":{"254":1}}],["001479890",{"2":{"254":1}}],["001470311",{"2":{"254":1}}],["001489272",{"2":{"254":1}}],["001482069",{"2":{"254":1}}],["001429706",{"2":{"254":1}}],["001413772",{"2":{"254":1}}],["001491809",{"2":{"254":1}}],["001496360",{"2":{"254":1}}],["0014993",{"2":{"96":1}}],["001657575",{"2":{"254":1}}],["001625932",{"2":{"254":1}}],["001649175",{"2":{"254":1}}],["001640767",{"2":{"254":1}}],["001645100",{"2":{"254":1}}],["001697728",{"2":{"254":1}}],["00169459",{"2":{"96":1}}],["0016859076",{"2":{"147":1}}],["00163654",{"2":{"96":1}}],["00160158",{"2":{"96":1}}],["001789718",{"2":{"254":1}}],["001738792",{"2":{"254":1}}],["0017325052",{"2":{"147":1}}],["001700793",{"2":{"254":1}}],["001797066",{"2":{"254":1}}],["0017983615",{"2":{"96":1}}],["001751710",{"2":{"254":1}}],["00174489",{"2":{"96":1}}],["001056366",{"2":{"254":1}}],["001054667",{"2":{"254":1}}],["001096051",{"2":{"254":1}}],["001001803",{"2":{"254":1}}],["001007635",{"2":{"254":1}}],["001007139",{"2":{"254":1}}],["001022693",{"2":{"254":1}}],["001014963",{"2":{"254":1}}],["0010140468",{"2":{"123":1}}],["00101147",{"2":{"197":1}}],["0010442184",{"2":{"123":1}}],["001037403",{"2":{"254":1}}],["00103705",{"2":{"96":1}}],["001038662",{"2":{"254":1}}],["00103866",{"2":{"96":1}}],["00106235",{"2":{"96":1}}],["001149551",{"2":{"254":1}}],["001149394",{"2":{"254":1}}],["001148664",{"2":{"254":1}}],["001140883",{"2":{"254":1}}],["001145163",{"2":{"254":1}}],["001126388",{"2":{"254":1}}],["0011292367",{"2":{"96":1}}],["001197141",{"2":{"254":1}}],["001113268",{"2":{"254":1}}],["001112666",{"2":{"254":1}}],["0011154839",{"2":{"96":1}}],["001130134",{"2":{"254":1}}],["0011872598",{"2":{"147":1}}],["00118357129",{"2":{"97":1}}],["00110328",{"2":{"96":1}}],["005213017209725e",{"2":{"296":1}}],["0052819f",{"2":{"294":1}}],["0052928f",{"2":{"294":1}}],["005471f",{"2":{"294":1}}],["0057s",{"2":{"261":1}}
],["00571",{"2":{"88":5}}],["005886041",{"2":{"254":1}}],["0058369",{"2":{"123":1}}],["005661591405254e",{"2":{"296":1}}],["0056975360155791e",{"2":{"296":1}}],["00569",{"2":{"280":2}}],["005691605",{"2":{"254":1}}],["005651589",{"2":{"254":1}}],["005671175",{"2":{"254":1}}],["0056276177",{"2":{"96":1}}],["005f0",{"2":{"254":1}}],["005",{"2":{"220":1,"280":6}}],["0055776797",{"2":{"147":1}}],["005070512",{"2":{"254":1}}],["005017307",{"2":{"254":1}}],["005086349",{"2":{"147":1}}],["005000000000000009",{"2":{"50":1}}],["005381152002951e",{"2":{"296":1}}],["005350559",{"2":{"147":1}}],["005353872",{"2":{"123":1}}],["005372945",{"2":{"123":1}}],["0053392933",{"2":{"96":1}}],["005158901",{"2":{"273":1}}],["005170135",{"2":{"254":1}}],["005173992",{"2":{"96":1}}],["0051055951",{"2":{"97":1}}],["000",{"2":{"266":1,"283":1,"287":2,"288":1}}],["0000",{"2":{"280":11}}],["000058015",{"2":{"254":1}}],["000058747",{"2":{"254":1}}],["000072045",{"2":{"254":1}}],["00000",{"2":{"205":48,"235":17,"237":3}}],["000000e+00>",{"2":{"147":4}}],["00022516783217494898",{"2":{"296":1}}],["00022309657793136115",{"2":{"296":1}}],["00022309602",{"2":{"294":1}}],["00022899765948751778",{"2":{"296":1}}],["00022899467",{"2":{"294":1}}],["0002292761733439514",{"2":{"296":1}}],["00022927686",{"2":{"294":1}}],["00022118821105469543",{"2":{"296":1}}],["00022118838",{"2":{"294":1}}],["0002203426335448267",{"2":{"296":1}}],["0002203402",{"2":{"294":1}}],["00022402793",{"2":{"294":1}}],["00022763764589309998",{"2":{"296":1}}],["00022763766",{"2":{"294":1}}],["00022728855157284713",{"2":{"296":1}}],["00022728901",{"2":{"294":1}}],["000226719",{"2":{"254":1}}],["00024418851308823634",{"2":{"296":1}}],["00024444716656784975",{"2":{"296":1}}],["0002461702184744789",{"2":{"296":1}}],["00024034676115898234",{"2":{"296":1}}],["0002403455",{"2":{"294":1}}],["00024896805",{"2":{"294":1}}],["00024713810137711416",{"2":{"296":1}}],["00024713847",{"2":{"294":1}}],["0002477158559482912",{"2":{"296":1}}],["000247715
86",{"2":{"294":1}}],["00024750977172510844",{"2":{"296":1}}],["00024750977",{"2":{"294":1}}],["000249964",{"2":{"254":1}}],["000264422",{"2":{"254":1}}],["000261034",{"2":{"254":1}}],["0002875879872335591",{"2":{"296":1}}],["0002888079448299872",{"2":{"296":1}}],["000288372",{"2":{"254":1}}],["00028535946776719076",{"2":{"296":1}}],["0002852504187144746",{"2":{"296":1}}],["00028524833",{"2":{"294":1}}],["000281135",{"2":{"254":1}}],["000281832268",{"2":{"97":1}}],["00023656456266660993",{"2":{"296":1}}],["00023631081669807597",{"2":{"296":1}}],["00023630871",{"2":{"294":1}}],["00023875543193984836",{"2":{"296":1}}],["00023157069141912902",{"2":{"296":1}}],["00023942263937834753",{"2":{"296":1}}],["00023942554",{"2":{"294":1}}],["00023098065387673432",{"2":{"296":1}}],["00023097855",{"2":{"294":1}}],["00023513551266315306",{"2":{"296":1}}],["00023513274",{"2":{"294":1}}],["000233010",{"2":{"254":1}}],["000232967",{"2":{"254":1}}],["00027705075610947075",{"2":{"296":1}}],["000271820",{"2":{"254":1}}],["000279625",{"2":{"254":1}}],["000279479",{"2":{"254":1}}],["000278991",{"2":{"254":1}}],["000276357",{"2":{"123":1}}],["00029607197514000284",{"2":{"296":1}}],["0002960712",{"2":{"294":1}}],["000290614",{"2":{"254":1}}],["00029203",{"2":{"197":1}}],["00025591577145627056",{"2":{"296":1}}],["00025591633",{"2":{"294":1}}],["0002572190715636609",{"2":{"296":1}}],["00025722026",{"2":{"294":1}}],["0002548092919142067",{"2":{"296":1}}],["00025480852",{"2":{"294":1}}],["0002584676306602902",{"2":{"296":1}}],["0002584681",{"2":{"294":1}}],["00025979521651754906",{"2":{"296":1}}],["00025979505",{"2":{"294":1}}],["000252445",{"2":{"254":1}}],["00025326014",{"2":{"164":1}}],["00020783417733023503",{"2":{"296":1}}],["00020783316",{"2":{"294":1}}],["00020521891535947",{"2":{"296":1}}],["00020521648",{"2":{"294":1}}],["0002046598284286004",{"2":{"296":1}}],["00020465998",{"2":{"294":1}}],["00020458228816547466",{"2":{"296":1}}],["00020458306",{"2":{"294":1}}],["00020969985547375172"
,{"2":{"296":1}}],["00020969917",{"2":{"294":1}}],["00020987168923099534",{"2":{"296":1}}],["00020986874",{"2":{"294":1}}],["000209119",{"2":{"254":1}}],["00020880220427263064",{"2":{"296":1}}],["0002088031",{"2":{"294":1}}],["00020897041153150129",{"2":{"296":1}}],["00020894228539207352",{"2":{"296":1}}],["00020894472",{"2":{"294":1}}],["00020896742",{"2":{"294":1}}],["00020849705",{"2":{"164":1}}],["00020110630044896678",{"2":{"296":1}}],["00020110856",{"2":{"294":1}}],["000201709",{"2":{"254":1}}],["00020383631034325903",{"2":{"296":1}}],["00020383588",{"2":{"294":1}}],["000203011135",{"2":{"97":1}}],["000200418",{"2":{"254":1}}],["00020055473",{"2":{"123":1}}],["000202413",{"2":{"254":1}}],["0002195028717603664",{"2":{"296":1}}],["00021950372",{"2":{"294":1}}],["000219414",{"2":{"254":1}}],["00021877435427482487",{"2":{"296":1}}],["00021877282",{"2":{"294":1}}],["00021804248",{"2":{"123":1}}],["00021730853985387548",{"2":{"296":1}}],["00021730701",{"2":{"294":1}}],["00021798005",{"2":{"123":1}}],["00021043877369686858",{"2":{"296":1}}],["0002104392",{"2":{"294":1}}],["000210634",{"2":{"254":1}}],["00021668768022198756",{"2":{"296":1}}],["00021668768",{"2":{"294":1}}],["0007342948457122147",{"2":{"296":1}}],["000732422",{"2":{"147":1}}],["0007198111732472897",{"2":{"296":1}}],["0007936663818308618",{"2":{"296":1}}],["0007968502568538448",{"2":{"296":1}}],["0007663684670847433",{"2":{"296":1}}],["0007674425274839696",{"2":{"296":1}}],["0007675145653677455",{"2":{"296":1}}],["0007628348698585942",{"2":{"296":1}}],["0007096161531146074",{"2":{"296":1}}],["000709339",{"2":{"254":1}}],["000705937786320906",{"2":{"296":1}}],["0007039347168316969",{"2":{"296":1}}],["0007083392471198734",{"2":{"296":1}}],["000707322",{"2":{"254":1}}],["0007291652117940308",{"2":{"296":1}}],["000722710",{"2":{"254":1}}],["000725332",{"2":{"254":1}}],["000723396",{"2":{"254":1}}],["0007827243794498699",{"2":{"296":1}}],["000789952",{"2":{"254":1}}],["000786213",{"2":{"254":1}}],["000773623
7",{"2":{"197":1}}],["000773079",{"2":{"96":1}}],["0007416157470687397",{"2":{"296":1}}],["00074655426662429",{"2":{"295":1}}],["0007467641",{"2":{"123":1}}],["000740890",{"2":{"254":1}}],["00074701244",{"2":{"123":1}}],["000750440",{"2":{"254":1}}],["00075",{"2":{"88":2}}],["0006370621519656112",{"2":{"296":1}}],["000636658",{"2":{"254":1}}],["0006401482151674277",{"2":{"296":1}}],["000641221",{"2":{"254":1}}],["0006815718149522211",{"2":{"296":1}}],["0006873017693977915",{"2":{"296":1}}],["0006807522492192497",{"2":{"296":1}}],["00068473816",{"2":{"163":1}}],["000625095",{"2":{"254":1}}],["000622213",{"2":{"254":1}}],["0006998163852451662",{"2":{"296":1}}],["000694363",{"2":{"254":1}}],["000693607",{"2":{"254":1}}],["0006779767132701048",{"2":{"296":1}}],["000678301",{"2":{"254":1}}],["000676658",{"2":{"254":1}}],["000612216",{"2":{"254":1}}],["000618911",{"2":{"96":1}}],["0006008070585930647",{"2":{"296":1}}],["000604900",{"2":{"254":1}}],["00060117245",{"2":{"123":1}}],["000657654",{"2":{"254":1}}],["000657073",{"2":{"254":1}}],["000656006",{"2":{"254":1}}],["000659175",{"2":{"96":1}}],["0006680203486089178",{"2":{"296":1}}],["0006601656832833226",{"2":{"296":1}}],["000665410",{"2":{"254":1}}],["000665883",{"2":{"96":1}}],["000669667",{"2":{"254":1}}],["0003001225345201752",{"2":{"296":1}}],["00030787125091351374",{"2":{"296":1}}],["000307869",{"2":{"294":1}}],["0003476422657305056",{"2":{"296":1}}],["0003426428411950763",{"2":{"296":1}}],["0003480696592447589",{"2":{"296":1}}],["0003480712",{"2":{"294":1}}],["00034049777821961217",{"2":{"296":1}}],["00034049625",{"2":{"294":1}}],["000343106",{"2":{"254":1}}],["0003",{"2":{"265":1}}],["000339579",{"2":{"254":1}}],["000339070",{"2":{"254":1}}],["000336276",{"2":{"254":1}}],["000337431",{"2":{"254":1}}],["00031949468363295594",{"2":{"296":1}}],["0003191377717522412",{"2":{"296":1}}],["00031913762",{"2":{"294":1}}],["000319502",{"2":{"254":1}}],["000319554",{"2":{"254":1}}],["0003113416411456186",{"2":{"296":1}}],[
"0003113387",{"2":{"294":1}}],["000311622",{"2":{"254":1}}],["00031030029206790394",{"2":{"296":1}}],["00031020346477542545",{"2":{"296":1}}],["00031020257",{"2":{"294":1}}],["000310050",{"2":{"254":1}}],["000312800",{"2":{"254":1}}],["000317998",{"2":{"254":1}}],["000315701",{"2":{"254":1}}],["000318603",{"2":{"254":1}}],["00035162594564089306",{"2":{"296":1}}],["000353080",{"2":{"254":1}}],["000355202",{"2":{"254":1}}],["0003288542043758386",{"2":{"296":1}}],["0003234436096119211",{"2":{"296":1}}],["0003239590848124499",{"2":{"296":1}}],["00032395986",{"2":{"294":1}}],["00032742233636123233",{"2":{"296":1}}],["00032224463199342885",{"2":{"296":1}}],["00032224192",{"2":{"294":1}}],["00032174655631877743",{"2":{"296":1}}],["0003217462",{"2":{"294":1}}],["000321687",{"2":{"254":1}}],["000320880",{"2":{"254":1}}],["000366133",{"2":{"254":1}}],["000364929",{"2":{"254":1}}],["000363118",{"2":{"254":1}}],["000365587",{"2":{"96":1}}],["000370217",{"2":{"254":1}}],["000375299",{"2":{"254":1}}],["000373781",{"2":{"254":1}}],["000376609",{"2":{"254":1}}],["000377079",{"2":{"254":1}}],["000392687",{"2":{"254":1}}],["000395160",{"2":{"254":1}}],["000397676",{"2":{"254":1}}],["000391039",{"2":{"96":1}}],["000542178",{"2":{"254":1}}],["0005237149715754186",{"2":{"296":1}}],["0005239929431246982",{"2":{"296":1}}],["000529087",{"2":{"254":1}}],["000520651",{"2":{"96":1}}],["000559338",{"2":{"254":1}}],["000554989",{"2":{"254":1}}],["000551110",{"2":{"254":1}}],["000556399",{"2":{"254":1}}],["000550666",{"2":{"254":1}}],["000534889",{"2":{"254":1}}],["00053332",{"2":{"197":1}}],["0005f0",{"2":{"254":1}}],["0005732331186485315",{"2":{"296":1}}],["00057825993",{"2":{"147":1}}],["000574605",{"2":{"96":1}}],["000500438",{"2":{"254":1}}],["000508558",{"2":{"254":1}}],["000505303",{"2":{"254":1}}],["000503887",{"2":{"254":1}}],["000504208321",{"2":{"97":1}}],["000501535",{"2":{"96":1}}],["000584197",{"2":{"96":1}}],["0004",{"2":{"265":1,"287":1}}],["000441957",{"2":{"254":1}}],["00044935
6",{"2":{"254":1}}],["000449867",{"2":{"96":1}}],["00041967105742638807",{"2":{"296":1}}],["000418792",{"2":{"254":1}}],["000411240",{"2":{"254":1}}],["000412173",{"2":{"254":1}}],["0004855881439717459",{"2":{"296":1}}],["000485165",{"2":{"254":1}}],["000480712",{"2":{"96":1}}],["000480834",{"2":{"96":1}}],["000450692",{"2":{"254":1}}],["000458097",{"2":{"254":1}}],["000451727",{"2":{"254":1}}],["000438084",{"2":{"254":1}}],["000438617",{"2":{"96":1}}],["000439273",{"2":{"254":1}}],["000491625",{"2":{"254":1}}],["000495484",{"2":{"96":1}}],["0004606481010041707",{"2":{"296":1}}],["00046014786",{"2":{"163":1}}],["000466423",{"2":{"254":1}}],["000466388",{"2":{"254":1}}],["000461125",{"2":{"254":1}}],["000405950",{"2":{"254":1}}],["000401148",{"2":{"254":1}}],["00040024254",{"2":{"96":1}}],["000427971",{"2":{"254":1}}],["00047666396",{"2":{"147":1}}],["000470662",{"2":{"96":1}}],["000804500632374427",{"2":{"296":1}}],["0008493740251560428",{"2":{"296":1}}],["000846439777128357",{"2":{"296":1}}],["000844118",{"2":{"254":1}}],["000894072",{"2":{"254":1}}],["000834267",{"2":{"254":1}}],["000882140",{"2":{"254":1}}],["000884497",{"2":{"254":1}}],["000887432",{"2":{"254":1}}],["00088779256",{"2":{"123":1}}],["00088798604",{"2":{"123":1}}],["000818373",{"2":{"254":1}}],["000819583",{"2":{"254":1}}],["000813131",{"2":{"96":1}}],["000859504",{"2":{"254":1}}],["000857728",{"2":{"254":1}}],["000856754",{"2":{"254":1}}],["000868318",{"2":{"96":1}}],["000860063",{"2":{"96":1}}],["0008762945",{"2":{"96":1}}],["000964010",{"2":{"254":1}}],["000991343",{"2":{"254":1}}],["000994991",{"2":{"254":1}}],["000992391663909964",{"2":{"50":2}}],["000918692",{"2":{"254":1}}],["000912517",{"2":{"254":1}}],["000916688",{"2":{"254":1}}],["000959212",{"2":{"254":1}}],["000936380",{"2":{"254":1}}],["000983181",{"2":{"254":1}}],["000927515",{"2":{"254":1}}],["000926178",{"2":{"254":1}}],["000926728",{"2":{"96":1}}],["000900613",{"2":{"254":1}}],["000908301",{"2":{"96":1}}],["000977637",{"2":{"96":1
}}],["00014301574",{"2":{"294":1}}],["000143649",{"2":{"254":1}}],["0001496133966975353",{"2":{"296":1}}],["00014961041",{"2":{"294":1}}],["0001490025",{"2":{"294":1}}],["00014750205937833923",{"2":{"296":1}}],["00014750332",{"2":{"294":1}}],["00014776770788838128",{"2":{"296":1}}],["0001477714",{"2":{"294":1}}],["000147021",{"2":{"254":1}}],["00014831039840870855",{"2":{"296":1}}],["00014801379174594998",{"2":{"296":1}}],["00014801395",{"2":{"294":1}}],["00014899951444020555",{"2":{"296":1}}],["00014828101631398684",{"2":{"296":1}}],["00014828255",{"2":{"294":1}}],["00014860643411901837",{"2":{"296":1}}],["00014860366",{"2":{"294":1}}],["00014671273679659245",{"2":{"296":1}}],["00014670983",{"2":{"294":1}}],["00014633141595326291",{"2":{"296":1}}],["00014633437",{"2":{"294":1}}],["00014682859668029686",{"2":{"296":1}}],["00014682561",{"2":{"294":1}}],["000146818",{"2":{"96":1}}],["0001416534165333453",{"2":{"296":1}}],["00014165216",{"2":{"294":1}}],["00014175455732583236",{"2":{"296":1}}],["00014175575",{"2":{"294":1}}],["0001418841130822501",{"2":{"296":1}}],["00014188448",{"2":{"294":1}}],["00014111447660014205",{"2":{"296":1}}],["00014111411",{"2":{"294":1}}],["00014117833",{"2":{"294":1}}],["000141316",{"2":{"254":1}}],["00014254723552485873",{"2":{"296":1}}],["00014254355",{"2":{"294":1}}],["0001426472594428061",{"2":{"296":1}}],["00014263122524434918",{"2":{"296":1}}],["00014262945",{"2":{"294":1}}],["0001427783198318679",{"2":{"296":1}}],["00014278436225473204",{"2":{"296":1}}],["00014278505",{"2":{"294":1}}],["00014278025",{"2":{"294":1}}],["00014477276505536627",{"2":{"296":1}}],["00014477345",{"2":{"294":1}}],["00014413404597055185",{"2":{"296":1}}],["00014413495",{"2":{"294":1}}],["00014455265643757148",{"2":{"296":1}}],["00014455267",{"2":{"294":1}}],["0001444444098864041",{"2":{"296":1}}],["00014444215",{"2":{"294":1}}],["00014462121920117096",{"2":{"296":1}}],["00014462393",{"2":{"294":1}}],["00014461752875598408",{"2":{"296":1}}],["00014461648873656
717",{"2":{"296":1}}],["00014461616",{"2":{"294":1}}],["00014461808",{"2":{"294":1}}],["00014089126983557595",{"2":{"296":1}}],["00014089058",{"2":{"294":1}}],["00014016335115625707",{"2":{"296":1}}],["00014016182",{"2":{"294":1}}],["0001409013088987373",{"2":{"296":1}}],["00014090221",{"2":{"294":1}}],["00014076288487100295",{"2":{"296":1}}],["00014076366",{"2":{"294":1}}],["00014034815229512965",{"2":{"296":1}}],["00014034772",{"2":{"294":1}}],["00014002856061290895",{"2":{"296":1}}],["00014002899",{"2":{"294":1}}],["00014577488461778537",{"2":{"296":1}}],["00014577566",{"2":{"294":1}}],["00014546464913156064",{"2":{"296":1}}],["00014546272",{"2":{"294":1}}],["0001458835307596573",{"2":{"296":1}}],["0001458811",{"2":{"294":1}}],["0001458104816263927",{"2":{"296":1}}],["00014581048",{"2":{"294":1}}],["00015877615835295628",{"2":{"296":1}}],["0001587786",{"2":{"294":1}}],["0001570579517986553",{"2":{"296":1}}],["00015705569",{"2":{"294":1}}],["00015768438",{"2":{"294":1}}],["0001522184710703305",{"2":{"296":1}}],["00015221832",{"2":{"294":1}}],["00015260976961557342",{"2":{"296":1}}],["00015260768",{"2":{"294":1}}],["000152633",{"2":{"254":1}}],["00015326878107489213",{"2":{"296":1}}],["0001532651",{"2":{"294":1}}],["00015308497913644838",{"2":{"296":1}}],["00015308514",{"2":{"294":1}}],["00015345501195189177",{"2":{"296":1}}],["00015345518",{"2":{"294":1}}],["00015342065440732578",{"2":{"296":1}}],["0001534175",{"2":{"294":1}}],["00015550967757805854",{"2":{"296":1}}],["00015550757",{"2":{"294":1}}],["0001551919825577893",{"2":{"296":1}}],["00015519376",{"2":{"294":1}}],["0001552761470070882",{"2":{"296":1}}],["00015527422",{"2":{"294":1}}],["00015458567699466283",{"2":{"296":1}}],["00015458622",{"2":{"294":1}}],["00015436650810295014",{"2":{"296":1}}],["00015436667",{"2":{"294":1}}],["00015189987787526965",{"2":{"296":1}}],["00015148486257272604",{"2":{"296":1}}],["00015148571",{"2":{"294":1}}],["0001516227532532436",{"2":{"296":1}}],["0001516205",{"2":{"294":1}}]
,["00015190980763250898",{"2":{"296":1}}],["00015190005",{"2":{"294":1}}],["0001519071",{"2":{"294":1}}],["0001502353091927029",{"2":{"296":1}}],["00015023709",{"2":{"294":1}}],["00015052314217064189",{"2":{"296":1}}],["00015052347",{"2":{"294":1}}],["00015015776935036124",{"2":{"296":1}}],["0001501576",{"2":{"294":1}}],["0001501849377760926",{"2":{"296":1}}],["00015018477",{"2":{"294":1}}],["0001593343377194908",{"2":{"296":1}}],["00015950913063477265",{"2":{"296":1}}],["00015950877",{"2":{"294":1}}],["00015913601221722784",{"2":{"296":1}}],["00015913686",{"2":{"294":1}}],["000159133",{"2":{"254":1}}],["000159858",{"2":{"254":1}}],["0001888192708024324",{"2":{"296":1}}],["0001888233672889447",{"2":{"296":1}}],["00018882136",{"2":{"294":1}}],["00018882217",{"2":{"294":1}}],["00018907153329861083",{"2":{"296":1}}],["00018907207",{"2":{"294":1}}],["0001866333089674399",{"2":{"296":1}}],["00018663032",{"2":{"294":1}}],["000186986",{"2":{"254":1}}],["0001832126485499808",{"2":{"296":1}}],["00018321196",{"2":{"294":1}}],["00018369659185625887",{"2":{"296":1}}],["00018369449",{"2":{"294":1}}],["0001873437349485446",{"2":{"296":1}}],["00018734005",{"2":{"294":1}}],["00018738551286897687",{"2":{"296":1}}],["00018738274",{"2":{"294":1}}],["00018438571566663171",{"2":{"296":1}}],["00018438329",{"2":{"294":1}}],["00018453389466604935",{"2":{"296":1}}],["00018453374",{"2":{"294":1}}],["0001845576753279514",{"2":{"296":1}}],["0001845575",{"2":{"294":1}}],["00018230733837413447",{"2":{"296":1}}],["00018230491",{"2":{"294":1}}],["00018248076249123237",{"2":{"296":1}}],["00018247867",{"2":{"294":1}}],["00018585492679984496",{"2":{"296":1}}],["00018585508",{"2":{"294":1}}],["00018572575551174637",{"2":{"296":1}}],["00018572801",{"2":{"294":1}}],["00018590117",{"2":{"294":1}}],["00018567254426320238",{"2":{"296":1}}],["00018567356",{"2":{"294":1}}],["00018562307",{"2":{"294":1}}],["000180287",{"2":{"254":1}}],["00016772500603453095",{"2":{"296":1}}],["00016375791863294776",{"2":{"296
":1}}],["00016375548",{"2":{"294":1}}],["0001609140841522923",{"2":{"296":1}}],["00016091425",{"2":{"294":1}}],["00016270977093507758",{"2":{"296":1}}],["00016270993",{"2":{"294":1}}],["0001624573995566141",{"2":{"296":1}}],["00016245825",{"2":{"294":1}}],["0001624933825069277",{"2":{"296":1}}],["00016249428",{"2":{"294":1}}],["0001646288436580844",{"2":{"296":1}}],["00016462848",{"2":{"294":1}}],["00016422321670933018",{"2":{"296":1}}],["00016422565",{"2":{"294":1}}],["00016987823648622463",{"2":{"296":1}}],["00016988067",{"2":{"294":1}}],["000169103",{"2":{"254":1}}],["00016627398003714284",{"2":{"296":1}}],["00016627453",{"2":{"294":1}}],["00016691740713066828",{"2":{"296":1}}],["00016691723",{"2":{"294":1}}],["00016687043664294915",{"2":{"296":1}}],["00016687042",{"2":{"294":1}}],["00016638852086633133",{"2":{"296":1}}],["0001663885",{"2":{"294":1}}],["00016164971261830032",{"2":{"296":1}}],["00016164676",{"2":{"294":1}}],["00016184421098402112",{"2":{"296":1}}],["0001618415",{"2":{"294":1}}],["000161957",{"2":{"254":1}}],["00016878",{"2":{"197":1}}],["00017059183643381996",{"2":{"296":1}}],["00017059014",{"2":{"294":1}}],["0001705941",{"2":{"294":1}}],["00017832730514106884",{"2":{"296":1}}],["00017832675",{"2":{"294":1}}],["00017884592906976674",{"2":{"296":1}}],["00017884467",{"2":{"294":1}}],["0001781980100810144",{"2":{"296":1}}],["00017819837",{"2":{"294":1}}],["00017164245460661758",{"2":{"296":1}}],["00017164544",{"2":{"294":1}}],["00017150653242433716",{"2":{"296":1}}],["00017150475",{"2":{"294":1}}],["00017463043151873554",{"2":{"296":1}}],["00017463026",{"2":{"294":1}}],["0001744479",{"2":{"123":1}}],["00017334775608278201",{"2":{"296":1}}],["0001733446",{"2":{"294":1}}],["0001733818391925073",{"2":{"296":1}}],["00017337958",{"2":{"294":1}}],["00017520478984558427",{"2":{"296":1}}],["00017520235",{"2":{"294":1}}],["000175943447597476",{"2":{"296":1}}],["00017598764546901608",{"2":{"296":1}}],["0001759151224693883",{"2":{"296":1}}],["00017591458",{"2":
{"294":1}}],["0001759908",{"2":{"294":1}}],["00017636272939609954",{"2":{"296":1}}],["00017636029",{"2":{"294":1}}],["0001763010589663154",{"2":{"296":1}}],["0001763035",{"2":{"294":1}}],["00017633298215273782",{"2":{"296":1}}],["00017633072",{"2":{"294":1}}],["000176694",{"2":{"254":1}}],["00017225651296658113",{"2":{"296":1}}],["00017225282",{"2":{"294":1}}],["00017293241013490452",{"2":{"296":1}}],["00017293467",{"2":{"294":1}}],["00017256773414937995",{"2":{"296":1}}],["00017256496",{"2":{"294":1}}],["00017726282141184352",{"2":{"296":1}}],["00017726328",{"2":{"294":1}}],["000177574",{"2":{"254":1}}],["00019372796673514434",{"2":{"296":1}}],["00019372506",{"2":{"294":1}}],["00019205013955121797",{"2":{"296":1}}],["00019204959",{"2":{"294":1}}],["000192074",{"2":{"254":1}}],["00019889632671884686",{"2":{"296":1}}],["00019889923",{"2":{"294":1}}],["00019885768548366046",{"2":{"296":1}}],["00019885453",{"2":{"294":1}}],["00019746958517596915",{"2":{"296":1}}],["00019747327",{"2":{"294":1}}],["00019774817466985131",{"2":{"296":1}}],["00019774816",{"2":{"294":1}}],["00019784389104215682",{"2":{"296":1}}],["00019784343",{"2":{"294":1}}],["00019028710214204783",{"2":{"296":1}}],["00019028709",{"2":{"294":1}}],["0001902274840692754",{"2":{"296":1}}],["00019022694",{"2":{"294":1}}],["0001945096995201535",{"2":{"296":1}}],["00019450326",{"2":{"294":1}}],["00019428026151128424",{"2":{"296":1}}],["00019428028",{"2":{"294":1}}],["0001948726227831649",{"2":{"296":1}}],["00019487216",{"2":{"294":1}}],["00019194803871594836",{"2":{"296":1}}],["00019194714",{"2":{"294":1}}],["00019175104408953125",{"2":{"296":1}}],["00019175147",{"2":{"294":1}}],["000191524",{"2":{"254":1}}],["00019577764886468858",{"2":{"296":1}}],["00019577722",{"2":{"294":1}}],["00019562244",{"2":{"123":1}}],["000196180",{"2":{"254":1}}],["000196472",{"2":{"123":1}}],["00012808509435908975",{"2":{"296":1}}],["00012808383",{"2":{"294":1}}],["00012849907710582722",{"2":{"296":1}}],["00012849839",{"2":{"294":1}}
],["000128774",{"2":{"254":1}}],["00012510945583593439",{"2":{"296":1}}],["00012510877",{"2":{"294":1}}],["000125641",{"2":{"254":1}}],["000125642",{"2":{"254":1}}],["00012335951666220181",{"2":{"296":1}}],["00012335583",{"2":{"294":1}}],["00012327676807218543",{"2":{"296":1}}],["00012327557",{"2":{"294":1}}],["000123195",{"2":{"254":1}}],["00012705672265776708",{"2":{"296":1}}],["00012705963",{"2":{"294":1}}],["00012786824796594428",{"2":{"296":1}}],["00012786509",{"2":{"294":1}}],["00012743169742498184",{"2":{"296":1}}],["00012743202",{"2":{"294":1}}],["0001266979890756742",{"2":{"296":1}}],["0001266959",{"2":{"294":1}}],["00012671904799279023",{"2":{"296":1}}],["00012671694",{"2":{"294":1}}],["00012645814",{"2":{"294":1}}],["000126347542",{"2":{"97":1}}],["00012948764830723452",{"2":{"296":1}}],["00012992579044296065",{"2":{"296":1}}],["0001299228",{"2":{"294":1}}],["00012983844985640914",{"2":{"296":1}}],["0001298414",{"2":{"294":1}}],["00012977615303609452",{"2":{"296":1}}],["00012977826",{"2":{"294":1}}],["00012959120614904347",{"2":{"296":1}}],["00012959103",{"2":{"294":1}}],["00012959851434234343",{"2":{"296":1}}],["00012959536",{"2":{"294":1}}],["00012904078175776253",{"2":{"296":1}}],["00012904078",{"2":{"294":1}}],["00012090178401491052",{"2":{"296":1}}],["000120902325",{"2":{"294":1}}],["00012041697246549082",{"2":{"296":1}}],["00012041823",{"2":{"294":1}}],["00012045840898578325",{"2":{"296":1}}],["00012045895",{"2":{"294":1}}],["00012013815191213221",{"2":{"296":1}}],["000120134464",{"2":{"294":1}}],["00012018774703880979",{"2":{"296":1}}],["00012018732",{"2":{"294":1}}],["00012000166",{"2":{"294":1}}],["000120629",{"2":{"254":1}}],["00012167637249545816",{"2":{"296":1}}],["00012167484",{"2":{"294":1}}],["0001213918391295802",{"2":{"296":1}}],["00012139006",{"2":{"294":1}}],["00012147761543501332",{"2":{"296":1}}],["000121477155",{"2":{"294":1}}],["00012151897",{"2":{"123":1}}],["00012222706566718716",{"2":{"296":1}}],["00012222587",{"2":{"294":1}}],["
0001224543374792238",{"2":{"296":1}}],["00012245314",{"2":{"294":1}}],["00012247443",{"2":{"294":1}}],["0001221923111225926",{"2":{"296":1}}],["00012219216",{"2":{"294":1}}],["00012213365219284077",{"2":{"296":1}}],["00012213155",{"2":{"294":1}}],["00012267545209250727",{"2":{"296":1}}],["00012267528",{"2":{"294":1}}],["00012266928879546438",{"2":{"296":1}}],["00012266912",{"2":{"294":1}}],["0001228075003747053",{"2":{"296":1}}],["00012280572",{"2":{"294":1}}],["000122820",{"2":{"254":1}}],["00012465363992132786",{"2":{"296":1}}],["0001246568",{"2":{"294":1}}],["0001247906863943803",{"2":{"296":1}}],["00012478858",{"2":{"294":1}}],["00012441315244600152",{"2":{"296":1}}],["00012441541",{"2":{"294":1}}],["00012439271825754832",{"2":{"296":1}}],["00012439543",{"2":{"294":1}}],["0001245440724825835",{"2":{"296":1}}],["00012454136",{"2":{"294":1}}],["0001329010774958846",{"2":{"296":1}}],["00013289817",{"2":{"294":1}}],["0001377084021509263",{"2":{"296":1}}],["00013770966",{"2":{"294":1}}],["00013735471168170116",{"2":{"296":1}}],["0001373537",{"2":{"294":1}}],["00013101725941783716",{"2":{"296":1}}],["00013101641",{"2":{"294":1}}],["0001319178928957165",{"2":{"296":1}}],["00013191596",{"2":{"294":1}}],["0001393872275070107",{"2":{"296":1}}],["00013938849",{"2":{"294":1}}],["00013981320073982073",{"2":{"296":1}}],["00013981201",{"2":{"294":1}}],["00013963988532453915",{"2":{"296":1}}],["00013964142",{"2":{"294":1}}],["00013975805",{"2":{"294":1}}],["00013344438264936093",{"2":{"296":1}}],["00013344285",{"2":{"294":1}}],["00013332401402050718",{"2":{"296":1}}],["00013332158",{"2":{"294":1}}],["0001338799849098273",{"2":{"296":1}}],["0001338827",{"2":{"294":1}}],["00013398421146797074",{"2":{"296":1}}],["00013398477",{"2":{"294":1}}],["00013395053849656757",{"2":{"296":1}}],["00013394952",{"2":{"294":1}}],["00013636132962621653",{"2":{"296":1}}],["00013636223",{"2":{"294":1}}],["00013638294",{"2":{"294":1}}],["00013682363",{"2":{"294":1}}],["00013654782864252595",{"2":{"2
96":1}}],["00013654484",{"2":{"294":1}}],["00013650592888551523",{"2":{"296":1}}],["00013650712",{"2":{"294":1}}],["000136122",{"2":{"254":1}}],["00013870699206002676",{"2":{"296":1}}],["00013870597",{"2":{"294":1}}],["00013839253857431705",{"2":{"296":1}}],["00013839432",{"2":{"294":1}}],["00013800825702222952",{"2":{"296":1}}],["00013800748",{"2":{"294":1}}],["00013806564289793466",{"2":{"296":1}}],["0001380679",{"2":{"294":1}}],["00013484661938638023",{"2":{"296":1}}],["00013484679",{"2":{"294":1}}],["0001342673736856988",{"2":{"296":1}}],["00013426754",{"2":{"294":1}}],["00013455416915582251",{"2":{"296":1}}],["00013455361",{"2":{"294":1}}],["00013453081321220542",{"2":{"296":1}}],["0001345308",{"2":{"294":1}}],["00013431907",{"2":{"123":1}}],["00013005707308422635",{"2":{"296":1}}],["00013005744",{"2":{"294":1}}],["0001302307767612189",{"2":{"296":1}}],["00013023132",{"2":{"294":1}}],["00013050133553877422",{"2":{"296":1}}],["00013041991781830882",{"2":{"296":1}}],["00013042263",{"2":{"294":1}}],["00013049862",{"2":{"294":1}}],["0001308322",{"2":{"123":1}}],["00013572206228558613",{"2":{"296":1}}],["00013572449",{"2":{"294":1}}],["00013526839993425815",{"2":{"296":1}}],["00013526856",{"2":{"294":1}}],["0001358188455924383",{"2":{"296":1}}],["00013581885",{"2":{"294":1}}],["000135529",{"2":{"254":1}}],["00011531279471806082",{"2":{"296":1}}],["00011531400763243772",{"2":{"296":1}}],["00011531178",{"2":{"294":1}}],["00011531175",{"2":{"294":1}}],["00011074135053841844",{"2":{"296":1}}],["00011074009",{"2":{"294":1}}],["00011022104297012485",{"2":{"296":1}}],["000110218614",{"2":{"294":1}}],["00011791483356556496",{"2":{"296":1}}],["00011791357",{"2":{"294":1}}],["00011770217684051357",{"2":{"296":1}}],["000117475014",{"2":{"294":1}}],["000117699274",{"2":{"294":1}}],["00011947830617292601",{"2":{"296":1}}],["00011947815",{"2":{"294":1}}],["0001193008951867435",{"2":{"296":1}}],["00011999955792927964",{"2":{"296":1}}],["00011975622807880166",{"2":{"296":1}}],["000
11975833",{"2":{"294":1}}],["00011903363442847782",{"2":{"296":1}}],["000119031705",{"2":{"294":1}}],["000119297736",{"2":{"294":1}}],["00011106545076805646",{"2":{"296":1}}],["00011106614",{"2":{"294":1}}],["0001117432620497369",{"2":{"296":1}}],["000111746165",{"2":{"294":1}}],["00011134139659092425",{"2":{"296":1}}],["00011133931",{"2":{"294":1}}],["00011143940107219109",{"2":{"296":1}}],["000111441506",{"2":{"294":1}}],["00011150925577116165",{"2":{"296":1}}],["00011150848",{"2":{"294":1}}],["00011168235517345108",{"2":{"296":1}}],["00011168272",{"2":{"294":1}}],["00011163986947815657",{"2":{"296":1}}],["0001116403",{"2":{"294":1}}],["00011463618410322375",{"2":{"296":1}}],["00011463575",{"2":{"294":1}}],["0001140118839708692",{"2":{"296":1}}],["00011401225",{"2":{"294":1}}],["00011407547335841485",{"2":{"296":1}}],["00011407531",{"2":{"294":1}}],["0001163059676061016",{"2":{"296":1}}],["00011630823",{"2":{"294":1}}],["00011620705578570802",{"2":{"296":1}}],["00011620738",{"2":{"294":1}}],["00011640474161504216",{"2":{"296":1}}],["000116404415",{"2":{"294":1}}],["000116947986468257",{"2":{"296":1}}],["0001169443",{"2":{"294":1}}],["00011697976774439031",{"2":{"296":1}}],["0001169817",{"2":{"294":1}}],["00011838625769461324",{"2":{"296":1}}],["0001183861",{"2":{"294":1}}],["00011881545756345831",{"2":{"296":1}}],["00011881353",{"2":{"294":1}}],["00011844933",{"2":{"123":1}}],["00011378178663684763",{"2":{"296":1}}],["0001137811",{"2":{"294":1}}],["00011304630123531435",{"2":{"296":1}}],["00011304584",{"2":{"294":1}}],["00011321728621327554",{"2":{"296":1}}],["000113216745",{"2":{"294":1}}],["000113272",{"2":{"254":1}}],["00011229382078070643",{"2":{"296":1}}],["00011280686025029057",{"2":{"296":1}}],["00011287401",{"2":{"294":1}}],["00011281002",{"2":{"294":1}}],["00011267308686960662",{"2":{"296":1}}],["00011267116",{"2":{"294":1}}],["000112652",{"2":{"96":1}}],["000112148016341931",{"2":{"296":1}}],["000112148016",{"2":{"294":1}}],["00011254487617403629",{"2":{
"296":1}}],["000112544876",{"2":{"294":1}}],["00010767002558703952",{"2":{"296":1}}],["00010767018",{"2":{"294":1}}],["00010716452865383237",{"2":{"296":1}}],["00010716154",{"2":{"294":1}}],["0001074596487706212",{"2":{"296":1}}],["00010745666",{"2":{"294":1}}],["00010738169163565525",{"2":{"296":1}}],["00010738238",{"2":{"294":1}}],["00010729954953718456",{"2":{"296":1}}],["00010729712",{"2":{"294":1}}],["00010782323343311245",{"2":{"296":1}}],["00010782308",{"2":{"294":1}}],["00010786557667912822",{"2":{"296":1}}],["000107867665",{"2":{"294":1}}],["00010116513536558117",{"2":{"296":1}}],["00010116812",{"2":{"294":1}}],["00010130146261024565",{"2":{"296":1}}],["000101424093035117",{"2":{"296":1}}],["00010142652",{"2":{"294":1}}],["00010194839699580599",{"2":{"296":1}}],["000101947204",{"2":{"294":1}}],["00010164548345247386",{"2":{"296":1}}],["000101641795",{"2":{"294":1}}],["00010166785766825587",{"2":{"296":1}}],["00010166542",{"2":{"294":1}}],["0001012625654807624",{"2":{"296":1}}],["00010126048",{"2":{"294":1}}],["00010129856",{"2":{"294":1}}],["00010108948",{"2":{"123":1}}],["00010017867457697105",{"2":{"296":1}}],["00010017812",{"2":{"294":1}}],["00010046586728205614",{"2":{"296":1}}],["00010046797",{"2":{"294":1}}],["00010029477259646522",{"2":{"296":1}}],["00010029555",{"2":{"294":1}}],["00010844721785641021",{"2":{"296":1}}],["000108450906",{"2":{"294":1}}],["00010864825403629797",{"2":{"296":1}}],["000108645305",{"2":{"294":1}}],["00010817386327924342",{"2":{"296":1}}],["000108172964",{"2":{"294":1}}],["00010879812050827156",{"2":{"296":1}}],["000108798136",{"2":{"294":1}}],["00010829750921409024",{"2":{"296":1}}],["00010829573",{"2":{"294":1}}],["00010887833687539623",{"2":{"296":1}}],["0001088796",{"2":{"294":1}}],["000108809094",{"2":{"294":1}}],["00010804757043905769",{"2":{"296":1}}],["00010804467",{"2":{"294":1}}],["00010803140309250678",{"2":{"296":1}}],["00010802869",{"2":{"294":1}}],["00010350166027532305",{"2":{"296":1}}],["00010338270273009304"
,{"2":{"296":1}}],["00010338496",{"2":{"294":1}}],["0001036864",{"2":{"294":1}}],["00010395918813561029",{"2":{"296":1}}],["00010395834",{"2":{"294":1}}],["00010392377180598205",{"2":{"296":1}}],["00010392467",{"2":{"294":1}}],["00010340492742215",{"2":{"296":1}}],["00010340612",{"2":{"294":1}}],["00010345557517683067",{"2":{"296":1}}],["00010345286",{"2":{"294":1}}],["00010349797",{"2":{"294":1}}],["00010386997048291692",{"2":{"296":1}}],["00010384682111638278",{"2":{"296":1}}],["00010384605",{"2":{"294":1}}],["00010387241",{"2":{"294":1}}],["00010677312234526383",{"2":{"296":1}}],["00010677257",{"2":{"294":1}}],["0001064900208844284",{"2":{"296":1}}],["00010648793",{"2":{"294":1}}],["00010613065258350538",{"2":{"296":1}}],["00010613067",{"2":{"294":1}}],["00010655527880834928",{"2":{"296":1}}],["000106552376",{"2":{"294":1}}],["00010652114730872946",{"2":{"296":1}}],["00010652234",{"2":{"294":1}}],["00010658812874134512",{"2":{"296":1}}],["0001065878",{"2":{"294":1}}],["00010688291513358794",{"2":{"296":1}}],["000106882915",{"2":{"294":1}}],["00010480302585398664",{"2":{"296":1}}],["00010480247",{"2":{"294":1}}],["00010438933664945397",{"2":{"296":1}}],["00010438618",{"2":{"294":1}}],["00010464021632198549",{"2":{"296":1}}],["00010464023",{"2":{"294":1}}],["00010471301927846019",{"2":{"296":1}}],["00010471025",{"2":{"294":1}}],["00010449867604768573",{"2":{"296":1}}],["00010449851",{"2":{"294":1}}],["00010448388872069576",{"2":{"296":1}}],["00010448997741200304",{"2":{"296":1}}],["00010448458",{"2":{"294":1}}],["00010448805",{"2":{"294":1}}],["00010974786725387859",{"2":{"296":1}}],["000109747016",{"2":{"294":1}}],["00010994218505352448",{"2":{"296":1}}],["000109944114",{"2":{"294":1}}],["00010909062984874433",{"2":{"296":1}}],["00010909063",{"2":{"294":1}}],["0001090765",{"2":{"123":1}}],["00010272087074141474",{"2":{"296":1}}],["00010272296",{"2":{"294":1}}],["00010240284098869066",{"2":{"296":1}}],["00010240131",{"2":{"294":1}}],["00010294706829890184",{"2":{"2
96":1}}],["00010294529",{"2":{"294":1}}],["000102387274500376",{"2":{"296":1}}],["000102389204",{"2":{"294":1}}],["00010221995046833276",{"2":{"296":1}}],["00010223158565216602",{"2":{"296":1}}],["00010225453",{"2":{"294":1}}],["000102220314",{"2":{"294":1}}],["00010222966",{"2":{"294":1}}],["00010252862065736459",{"2":{"296":1}}],["00010252736",{"2":{"294":1}}],["00010259472945254828",{"2":{"296":1}}],["0001025949",{"2":{"294":1}}],["000102106123100063",{"2":{"296":1}}],["00010210765",{"2":{"294":1}}],["00010211646440444609",{"2":{"296":1}}],["000102116464",{"2":{"294":1}}],["000102632",{"2":{"254":1}}],["00010289252",{"2":{"123":1}}],["00010548676782079127",{"2":{"296":1}}],["000105488856",{"2":{"294":1}}],["0001054756513899967",{"2":{"296":1}}],["00010547372",{"2":{"294":1}}],["00010546855",{"2":{"197":1}}],["00010551",{"2":{"197":1}}],["0001",{"2":{"96":7,"97":1}}],["0001f0",{"2":{"96":1}}],["01f0",{"2":{"163":1,"164":1,"197":2,"205":1}}],["018874f",{"2":{"294":1}}],["018859928621433e",{"2":{"296":1}}],["018853f",{"2":{"294":1}}],["018854173",{"2":{"147":1}}],["018215429",{"2":{"254":1}}],["01861",{"2":{"205":1}}],["01832",{"2":{"147":1}}],["0183474",{"2":{"147":1}}],["018303769",{"2":{"147":1}}],["018772621303625e",{"2":{"296":1}}],["01877f",{"2":{"294":1}}],["018774498",{"2":{"123":1}}],["01872db4",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["018743665453639813",{"2":{"192":1}}],["018748894",{"2":{"123":1}}],["018755732",{"2":{"123":1}}],["018708104",{"2":{"96":1}}],["0169387f",{"2":{"294":1}}],["01699",{"2":{"205":1}}],["0160",{"2":{"265":1}}],["016400268",{"2":{"254":1}}],["016400939",{"2":{"254":1}}],["016156793",{"2":{"254":1}}],["01614\\taccuracy",{"2":{"205":1}}],["016353965",{"2":{"254":1}}],["016386721",{"2":{"147":1}}],["016709540",{"2":{"254":1}}],["016621085",{"2":{"254":1}}],["016625574",{"2":{"254":1}}],["016287515",{"2":{"147":1}}],["0168695",{"2":{"163":1}}],["016865179",{"2":
{"147":1}}],["016841749",{"2":{"96":1}}],["0173824693803676e",{"2":{"296":1}}],["017370628",{"2":{"254":1}}],["0172544f",{"2":{"294":1}}],["0172084",{"2":{"96":1}}],["017019382",{"2":{"254":1}}],["01774\\taccuracy",{"2":{"205":1}}],["017459199",{"2":{"147":1}}],["017608581",{"2":{"147":1}}],["017935842",{"2":{"147":1}}],["017933888",{"2":{"147":1}}],["017866217",{"2":{"147":1}}],["017829532",{"2":{"147":1}}],["0171815609258854e",{"2":{"296":1}}],["017135098772205e",{"2":{"296":1}}],["01717774",{"2":{"147":1}}],["0171592f",{"2":{"123":1}}],["010282698",{"2":{"254":1}}],["0109437",{"2":{"270":1}}],["010943229",{"2":{"254":1}}],["010950223",{"2":{"147":1}}],["010581179",{"2":{"254":1}}],["010573289",{"2":{"254":1}}],["010567766",{"2":{"147":1}}],["010564294",{"2":{"123":1}}],["01038",{"2":{"280":1}}],["01033\\taccuracy",{"2":{"205":1}}],["010303006",{"2":{"123":1}}],["010303013",{"2":{"123":1}}],["010891679",{"2":{"254":1}}],["01086",{"2":{"205":1}}],["0108668577150213",{"2":{"189":1}}],["0108781",{"2":{"147":1}}],["010494760",{"2":{"254":1}}],["010404",{"2":{"163":1}}],["010482817",{"2":{"147":1}}],["0106334835",{"2":{"147":1}}],["01003141",{"2":{"124":1}}],["0101",{"2":{"97":1}}],["0154193974899372e",{"2":{"296":1}}],["01549",{"2":{"205":1}}],["015020947304269e",{"2":{"296":1}}],["015039141",{"2":{"147":1}}],["0153177f",{"2":{"294":1}}],["01534216",{"2":{"147":1}}],["015625\\tval",{"2":{"280":1}}],["015748031",{"2":{"270":2}}],["015235063",{"2":{"254":1}}],["015896475",{"2":{"147":1}}],["015869793",{"2":{"123":1}}],["015937408",{"2":{"143":1,"146":1}}],["015173428",{"2":{"96":1}}],["014099698895911e",{"2":{"296":1}}],["014004502",{"2":{"147":1}}],["0148104f",{"2":{"294":1}}],["014774345",{"2":{"270":1}}],["014730594",{"2":{"254":1}}],["01472\\taccuracy",{"2":{"205":1}}],["014741955",{"2":{"147":1}}],["014984946",{"2":{"254":1}}],["014992779",{"2":{"147":1}}],["01490508",{"2":{"147":1}}],["014606951",{"2":{"147":1}}],["014689862",{"2":{"147":1}}],["01413",{"2":{"205":
1}}],["014139019",{"2":{"147":1}}],["014197593",{"2":{"123":1}}],["0141069945",{"2":{"96":1}}],["014333963",{"2":{"123":1}}],["014353451",{"2":{"123":1}}],["014224287",{"2":{"123":1}}],["014550619",{"2":{"254":1}}],["014522564",{"2":{"254":1}}],["014522501",{"2":{"123":1}}],["014524898",{"2":{"123":1}}],["014519578",{"2":{"123":1}}],["019283330847153e",{"2":{"296":1}}],["019230088",{"2":{"96":1}}],["019912010",{"2":{"254":1}}],["019320106",{"2":{"254":1}}],["0195218f",{"2":{"294":1}}],["019530077",{"2":{"254":1}}],["01954\\taccuracy",{"2":{"205":1}}],["019876348",{"2":{"147":1}}],["019838588",{"2":{"147":1}}],["019675586",{"2":{"147":1}}],["01965834",{"2":{"143":1}}],["019472213",{"2":{"147":1}}],["0191636f",{"2":{"294":1}}],["01910517",{"2":{"124":1}}],["0191175",{"2":{"96":1}}],["0125346567199934e",{"2":{"296":1}}],["012557827",{"2":{"254":1}}],["012997f",{"2":{"294":1}}],["01294",{"2":{"205":1}}],["0126981673657063e",{"2":{"296":1}}],["0126035f",{"2":{"294":1}}],["012625601",{"2":{"254":1}}],["012438955",{"2":{"254":1}}],["012419771",{"2":{"254":1}}],["0124978",{"2":{"96":1}}],["0128",{"2":{"265":1}}],["0128983",{"2":{"147":1}}],["012866791",{"2":{"147":1}}],["012322948",{"2":{"254":1}}],["012399685",{"2":{"254":1}}],["01233\\taccuracy",{"2":{"205":1}}],["012382892",{"2":{"147":1}}],["0123434",{"2":{"96":1}}],["012063608",{"2":{"147":1}}],["0122697f",{"2":{"123":1}}],["0121",{"2":{"265":1}}],["012151958",{"2":{"123":1}}],["012145694",{"2":{"123":1}}],["013817497363848e",{"2":{"296":1}}],["0137634f",{"2":{"294":1}}],["0130",{"2":{"265":1}}],["013016323",{"2":{"254":1}}],["013593317",{"2":{"254":1}}],["013275324",{"2":{"254":1}}],["01325738",{"2":{"124":1}}],["01346\\taccuracy",{"2":{"205":1}}],["0134848",{"2":{"147":1}}],["0139467f",{"2":{"294":1}}],["013927094",{"2":{"123":1}}],["013915376",{"2":{"123":1}}],["013969757",{"2":{"96":1}}],["013654154",{"2":{"123":1}}],["013659927",{"2":{"123":1}}],["0136483535",{"2":{"96":1}}],["011",{"2":{"280":4}}],["011476245",{"
2":{"254":1}}],["0114351",{"2":{"96":1}}],["011310317",{"2":{"254":1}}],["011975820",{"2":{"254":1}}],["011937321",{"2":{"197":1}}],["011818888",{"2":{"254":1}}],["011811271",{"2":{"254":1}}],["011849238",{"2":{"254":1}}],["01186",{"2":{"205":1}}],["01129\\taccuracy",{"2":{"205":1}}],["011206773",{"2":{"147":1}}],["01117275",{"2":{"147":1}}],["011100831f0",{"2":{"50":1}}],["011098074",{"2":{"147":1}}],["01100632",{"2":{"123":1}}],["011658882",{"2":{"254":1}}],["0116419",{"2":{"96":1}}],["0116051175",{"2":{"96":1}}],["01174226",{"2":{"96":1}}],["01",{"2":{"15":3,"50":2,"67":2,"97":1,"197":1,"198":1,"200":1,"205":1,"207":1,"214":1,"215":1,"220":5,"222":1,"239":1,"247":1,"254":86,"256":1,"261":1,"262":1,"267":1,"269":1,"275":1,"280":114,"281":1,"287":3,"289":1,"296":1,"298":1}}],["0",{"2":{"2":1,"4":1,"7":3,"11":1,"15":44,"25":2,"39":3,"41":3,"44":2,"45":10,"46":1,"50":122,"51":4,"52":5,"56":16,"65":3,"67":71,"70":2,"77":15,"78":1,"79":30,"80":22,"81":22,"82":143,"84":17,"85":20,"87":19,"88":43,"89":15,"90":1,"96":204,"97":11,"123":253,"124":10,"126":5,"127":37,"132":1,"133":1,"135":1,"137":1,"143":54,"146":25,"147":1153,"153":6,"154":1,"155":7,"163":18,"164":5,"165":35,"166":23,"170":7,"171":358,"173":1,"186":59,"188":1,"189":155,"191":3,"192":62,"194":12,"195":9,"196":4,"197":72,"198":3,"200":4,"201":1,"204":1,"205":108,"207":3,"210":1,"211":2,"212":2,"213":5,"214":9,"215":3,"218":4,"220":2,"222":3,"230":11,"231":1,"232":2,"233":1,"234":2,"235":26,"236":1,"237":8,"238":19,"239":11,"241":7,"242":2,"245":2,"246":105,"247":11,"253":7,"254":400,"255":6,"256":3,"258":1,"261":54,"262":3,"263":34,"264":4,"265":66,"267":3,"269":9,"270":138,"271":1,"272":4,"273":38,"274":10,"275":3,"280":232,"281":3,"283":1,"285":2,"287":34,"289":3,"291":43,"292":4,"293":9,"294":484,"295":1,"296":394,"298":3}}],["1\\ttrain",{"2":{"280":1}}],["1m",{"2":{"269":1}}],["1g",{"2":{"239":1,"247":1}}],["1e6",{"2":{"261":1,"280":1}}],["1e",{"2":{"220":1,"292":1}}],["1`",{"2":{"154":3}}],["1>",{"2":{"1
47":6}}],["1f",{"2":{"89":1}}],["1f0",{"2":{"46":2,"50":2,"56":1,"65":3,"67":1,"89":2,"97":1,"270":1}}],["1st",{"2":{"88":1,"89":1,"144":1}}],["17\\ttrain",{"2":{"280":1}}],["1794",{"2":{"280":6,"291":1}}],["1792",{"2":{"280":6}}],["179",{"2":{"280":3}}],["1793",{"2":{"265":1,"280":9}}],["17938598",{"2":{"147":1}}],["1798",{"2":{"261":1,"280":6}}],["1799",{"2":{"261":1,"280":6}}],["1790",{"2":{"261":1,"280":6}}],["1796",{"2":{"261":1,"280":25}}],["1797",{"2":{"261":1,"280":9}}],["17973809",{"2":{"147":1}}],["1791",{"2":{"200":1}}],["1795",{"2":{"280":11}}],["179584495291061",{"2":{"171":2}}],["179542",{"2":{"147":1}}],["177",{"2":{"280":3}}],["1776",{"2":{"263":2}}],["177697",{"2":{"147":1}}],["1778",{"2":{"261":2,"280":8}}],["17785463",{"2":{"147":1}}],["17743",{"2":{"291":1}}],["1774",{"2":{"200":1}}],["1770951f",{"2":{"294":1}}],["17706361",{"2":{"197":1}}],["17703351",{"2":{"147":1}}],["177110664353059e",{"2":{"296":1}}],["177116im",{"2":{"186":1}}],["1771",{"2":{"263":1}}],["17712",{"2":{"261":1}}],["17713173",{"2":{"147":1}}],["177779",{"2":{"147":1}}],["17755158",{"2":{"147":1}}],["1713142784319098e",{"2":{"296":1}}],["1719",{"2":{"291":1}}],["171969",{"2":{"147":1}}],["17196824",{"2":{"147":1}}],["171465",{"2":{"280":1}}],["171460",{"2":{"280":1}}],["171457",{"2":{"280":1}}],["171455",{"2":{"280":1}}],["171452",{"2":{"280":1}}],["171449",{"2":{"280":1}}],["171446",{"2":{"280":1}}],["171443",{"2":{"280":1}}],["171440",{"2":{"280":1}}],["171437",{"2":{"280":1}}],["171423",{"2":{"280":1}}],["1715",{"2":{"261":1,"291":1}}],["17161606",{"2":{"270":1}}],["1716",{"2":{"261":1}}],["1710",{"2":{"261":1}}],["1711",{"2":{"261":1}}],["17115343",{"2":{"147":1}}],["1718",{"2":{"241":1,"263":1}}],["17170",{"2":{"200":1}}],["171756",{"2":{"147":1}}],["1717901f",{"2":{"123":1}}],["1712",{"2":{"263":1}}],["171212",{"2":{"147":1}}],["171224",{"2":{"147":1}}],["17124604",{"2":{"147":1}}],["1709455f",{"2":{"294":1}}],["1709",{"2":{"277":1,"291":1}}],["1701",{"2":{"269":1}}],["17
00",{"2":{"263":1}}],["17001",{"2":{"254":1}}],["1704",{"2":{"261":1,"263":1}}],["1705",{"2":{"261":1}}],["170",{"2":{"247":1}}],["1703",{"2":{"291":2}}],["170342",{"2":{"189":1}}],["17039865",{"2":{"147":1}}],["17029831",{"2":{"147":1}}],["17029636",{"2":{"147":1}}],["17085029",{"2":{"147":1}}],["17083026",{"2":{"147":1}}],["17078549",{"2":{"147":1}}],["1780272f",{"2":{"294":1}}],["1785355f",{"2":{"294":1}}],["1786895",{"2":{"273":1}}],["1783259708183335e",{"2":{"296":1}}],["1783",{"2":{"261":1}}],["1784580750754484e",{"2":{"296":1}}],["1784",{"2":{"261":1}}],["17840558",{"2":{"147":1}}],["1781",{"2":{"261":1}}],["1789",{"2":{"261":2}}],["1788",{"2":{"261":2}}],["17886546",{"2":{"123":1}}],["17829275",{"2":{"147":1}}],["172",{"2":{"280":1}}],["1722",{"2":{"261":1}}],["17255",{"2":{"269":1}}],["1725",{"2":{"261":2,"280":2}}],["1726",{"2":{"261":2}}],["1723",{"2":{"261":2}}],["172313",{"2":{"147":1}}],["1724",{"2":{"261":1}}],["17242633",{"2":{"147":1}}],["172708",{"2":{"147":1}}],["17211406",{"2":{"147":1}}],["1732",{"2":{"291":1}}],["17322835",{"2":{"270":2}}],["17326018",{"2":{"147":1}}],["1737",{"2":{"265":1}}],["1733",{"2":{"263":1}}],["17337337",{"2":{"147":1}}],["1739",{"2":{"261":1}}],["1735",{"2":{"261":2,"263":1}}],["1730",{"2":{"230":1}}],["1730737",{"2":{"147":1}}],["17349",{"2":{"147":1}}],["17310219",{"2":{"147":1}}],["17385432",{"2":{"147":1}}],["1736",{"2":{"230":1,"291":1}}],["17361793",{"2":{"147":1}}],["17363501086534973873",{"2":{"97":1}}],["1745",{"2":{"280":2}}],["174513",{"2":{"147":1}}],["174298",{"2":{"270":1}}],["1742",{"2":{"261":1,"291":1}}],["1741",{"2":{"261":2}}],["1743",{"2":{"261":1}}],["174372",{"2":{"261":1}}],["17441",{"2":{"189":1}}],["17440075",{"2":{"147":1}}],["1747854",{"2":{"147":1}}],["17461234",{"2":{"96":1}}],["1762",{"2":{"269":1}}],["1765",{"2":{"263":1}}],["17657f",{"2":{"96":1}}],["1764",{"2":{"261":1,"280":12}}],["1764085",{"2":{"143":1}}],["176",{"2":{"220":1,"241":1}}],["1769388",{"2":{"165":1}}],["17696409",{"2":{"
147":1}}],["176032",{"2":{"280":1}}],["176027",{"2":{"280":1}}],["176024",{"2":{"280":1}}],["176022",{"2":{"280":1}}],["176021",{"2":{"147":1}}],["176019",{"2":{"280":1}}],["176016",{"2":{"280":1}}],["176013",{"2":{"280":1}}],["176010",{"2":{"280":1}}],["176007",{"2":{"280":1}}],["17600718",{"2":{"147":1}}],["176003",{"2":{"280":1}}],["1760",{"2":{"269":1}}],["17604505",{"2":{"147":1}}],["17616239",{"2":{"147":1}}],["17617925",{"2":{"147":1}}],["176348",{"2":{"147":1}}],["17569",{"2":{"291":1}}],["1756",{"2":{"280":6}}],["1753",{"2":{"261":1}}],["17538628",{"2":{"147":1}}],["1754",{"2":{"261":1}}],["17512",{"2":{"291":1}}],["1751",{"2":{"261":1,"263":1}}],["1751135",{"2":{"147":1}}],["1752",{"2":{"261":1}}],["1752539",{"2":{"147":1}}],["17558427",{"2":{"147":1}}],["175989",{"2":{"280":1}}],["17598884",{"2":{"147":1}}],["1759",{"2":{"261":2}}],["1759687",{"2":{"147":1}}],["17590503",{"2":{"123":1}}],["17570448",{"2":{"123":1}}],["17",{"2":{"79":2,"83":7,"127":1,"147":2,"166":1,"205":2,"214":1,"235":4,"246":2,"254":2,"261":2,"263":1,"265":4,"269":1,"271":1,"280":3,"287":2,"291":1,"296":18}}],["14\\ttrain",{"2":{"280":1}}],["1498674911780836e",{"2":{"296":1}}],["1496063",{"2":{"270":2}}],["14967",{"2":{"261":1}}],["14979f",{"2":{"294":1}}],["1497",{"2":{"269":1}}],["1492",{"2":{"263":2}}],["1493",{"2":{"230":1,"261":1,"265":1,"269":1}}],["149533300416954e",{"2":{"296":1}}],["1495",{"2":{"200":1}}],["14941788",{"2":{"96":1}}],["1471450170075303e",{"2":{"296":1}}],["1476",{"2":{"291":1}}],["1475",{"2":{"291":1}}],["1470",{"2":{"291":1}}],["14702773",{"2":{"123":1}}],["1473379f",{"2":{"294":1}}],["147394",{"2":{"280":1}}],["147389",{"2":{"280":1}}],["147384",{"2":{"280":1}}],["147381",{"2":{"280":1}}],["147378",{"2":{"280":1}}],["147375",{"2":{"280":1}}],["147372",{"2":{"280":1}}],["147369",{"2":{"280":1}}],["147366",{"2":{"280":1}}],["147362",{"2":{"280":1}}],["14736821",{"2":{"147":1}}],["147349",{"2":{"280":1}}],["147998",{"2":{"280":1}}],["147995",{"2":{"280":1}}],["1
47993",{"2":{"280":1}}],["147990",{"2":{"280":1}}],["147987",{"2":{"280":1}}],["147984",{"2":{"280":1}}],["147981",{"2":{"280":1}}],["147978",{"2":{"280":1}}],["147974",{"2":{"280":1}}],["147960",{"2":{"280":1}}],["1479",{"2":{"263":1}}],["1474936265601624e",{"2":{"296":1}}],["1474475f",{"2":{"294":1}}],["1474",{"2":{"263":1,"291":1}}],["1474609",{"2":{"261":1}}],["14783\\taccuracy",{"2":{"205":1}}],["147705370924899",{"2":{"171":1}}],["1477053709248992",{"2":{"171":1}}],["148348416838996e",{"2":{"296":1}}],["1483009379054432e",{"2":{"296":1}}],["1486343f",{"2":{"294":1}}],["148003",{"2":{"280":1}}],["14881086",{"2":{"273":1}}],["14884971",{"2":{"147":1}}],["1487",{"2":{"241":1}}],["148222f",{"2":{"294":1}}],["1482575",{"2":{"270":1}}],["14825",{"2":{"189":1}}],["148248",{"2":{"189":2}}],["148901",{"2":{"165":1}}],["14815",{"2":{"235":2}}],["148189",{"2":{"147":1}}],["148147",{"2":{"147":1}}],["1408",{"2":{"280":6}}],["140",{"2":{"277":1}}],["1405",{"2":{"269":1}}],["1409",{"2":{"263":1}}],["1401",{"2":{"261":1,"265":1}}],["14012684",{"2":{"147":1}}],["14026",{"2":{"261":1}}],["140299",{"2":{"147":1}}],["14001",{"2":{"254":1}}],["14077",{"2":{"261":1}}],["14079519",{"2":{"147":1}}],["14073242",{"2":{"147":1}}],["14033335",{"2":{"147":1}}],["14550832840387e",{"2":{"296":1}}],["14555879",{"2":{"147":1}}],["145881095545308e",{"2":{"296":1}}],["145826f",{"2":{"294":1}}],["14589407905380107767",{"2":{"214":1}}],["1450",{"2":{"280":3}}],["145000",{"2":{"280":1}}],["14531",{"2":{"291":1}}],["1453",{"2":{"280":6,"291":1}}],["14532",{"2":{"261":1}}],["1451",{"2":{"280":4}}],["14517665",{"2":{"147":1}}],["145454f",{"2":{"294":1}}],["1454",{"2":{"265":1}}],["1452",{"2":{"280":1}}],["145296",{"2":{"186":1}}],["14523235",{"2":{"147":1}}],["14562015",{"2":{"147":1}}],["14593515",{"2":{"147":1}}],["1412348022512357e",{"2":{"296":1}}],["141189319918315e",{"2":{"296":1}}],["141174704725275e",{"2":{"296":1}}],["141157f",{"2":{"294":1}}],["1411",{"2":{"291":1}}],["1414777f",{"2":{"294
":1}}],["1414",{"2":{"280":6,"287":1,"291":1}}],["14199",{"2":{"280":1}}],["141903",{"2":{"280":1}}],["1413676f",{"2":{"294":1}}],["1413",{"2":{"269":1,"280":3}}],["1413405",{"2":{"147":1}}],["141898",{"2":{"280":1}}],["141895",{"2":{"280":1}}],["141892",{"2":{"280":1}}],["141889",{"2":{"280":1}}],["141886",{"2":{"280":1}}],["141883",{"2":{"280":1}}],["141880",{"2":{"280":1}}],["141877",{"2":{"280":1}}],["141873",{"2":{"280":1}}],["141858",{"2":{"280":1}}],["1418",{"2":{"263":1,"280":4}}],["14173229",{"2":{"270":2}}],["1417",{"2":{"254":1}}],["141",{"2":{"235":1,"277":1,"291":1}}],["1410",{"2":{"230":1}}],["14102985",{"2":{"147":1}}],["14163494",{"2":{"147":1}}],["1415927",{"2":{"50":1}}],["1443",{"2":{"291":1}}],["14433007",{"2":{"123":1}}],["1440",{"2":{"280":6,"291":1}}],["144009",{"2":{"147":1}}],["1448",{"2":{"280":6}}],["144669",{"2":{"280":1}}],["144656",{"2":{"280":1}}],["144653",{"2":{"280":1}}],["144650",{"2":{"280":1}}],["144647",{"2":{"280":1}}],["144644",{"2":{"280":1}}],["144641",{"2":{"280":1}}],["144638",{"2":{"280":1}}],["144634",{"2":{"280":1}}],["144631",{"2":{"280":1}}],["144561",{"2":{"280":1}}],["14457",{"2":{"261":1}}],["144",{"2":{"280":3}}],["1441",{"2":{"263":1,"280":3}}],["1449035f",{"2":{"294":1}}],["144995",{"2":{"280":1}}],["144992",{"2":{"280":1}}],["14499082",{"2":{"147":1}}],["144989",{"2":{"280":1}}],["144986",{"2":{"280":1}}],["144983",{"2":{"280":1}}],["144980",{"2":{"280":1}}],["144977",{"2":{"280":1}}],["144974",{"2":{"280":1}}],["144971",{"2":{"280":1}}],["144958893179827e",{"2":{"296":1}}],["144958",{"2":{"280":1}}],["1449",{"2":{"230":1,"291":1}}],["14421012",{"2":{"147":1}}],["143599f",{"2":{"294":1}}],["14359581",{"2":{"147":1}}],["143889478827302e",{"2":{"296":1}}],["1438",{"2":{"265":1}}],["1434",{"2":{"241":1}}],["1439",{"2":{"230":1}}],["1437",{"2":{"230":1,"280":1}}],["1437841",{"2":{"197":1}}],["1436",{"2":{"263":1}}],["14365605",{"2":{"147":1}}],["14361875",{"2":{"147":1}}],["14366318",{"2":{"147":1}}],["14336014",{"
2":{"147":1}}],["1427",{"2":{"280":6}}],["142588352768827e",{"2":{"296":1}}],["142571f",{"2":{"294":1}}],["142574623",{"2":{"97":1}}],["1425",{"2":{"280":6,"291":1}}],["1429",{"2":{"280":5}}],["1429857",{"2":{"270":1}}],["1422",{"2":{"269":1}}],["1420",{"2":{"269":1}}],["14203\\taccuracy",{"2":{"205":1}}],["1424",{"2":{"254":3}}],["142470881033475",{"2":{"171":2}}],["14243387",{"2":{"147":1}}],["14238782",{"2":{"147":1}}],["146494149126811e",{"2":{"296":1}}],["14646342",{"2":{"123":1}}],["1462",{"2":{"291":1}}],["1468",{"2":{"280":1}}],["1460",{"2":{"263":1}}],["1465402f",{"2":{"294":1}}],["1465",{"2":{"263":1}}],["14696",{"2":{"261":1}}],["14692664",{"2":{"123":1}}],["1467",{"2":{"230":1}}],["14673097",{"2":{"147":1}}],["146",{"2":{"200":1,"269":1,"291":1}}],["1466",{"2":{"280":3}}],["14660794",{"2":{"147":1}}],["14663221",{"2":{"123":1}}],["1461747",{"2":{"147":1}}],["14",{"2":{"79":2,"84":1,"147":2,"163":1,"205":2,"230":1,"235":1,"246":2,"254":6,"261":2,"265":6,"280":8,"291":1}}],["11\\ttrain",{"2":{"280":1}}],["112829",{"2":{"291":1}}],["11285",{"2":{"261":1}}],["112",{"2":{"280":17}}],["1121",{"2":{"263":1,"280":7}}],["1122",{"2":{"263":2,"280":2}}],["11223",{"2":{"261":1}}],["1126353",{"2":{"166":1}}],["11269632",{"2":{"147":1}}],["1120",{"2":{"263":1,"280":2,"291":1}}],["11205",{"2":{"261":1}}],["112076",{"2":{"147":1}}],["112063006",{"2":{"147":1}}],["112344",{"2":{"147":1}}],["1150",{"2":{"291":1}}],["1155",{"2":{"280":4,"291":2}}],["115506485",{"2":{"273":1}}],["115507",{"2":{"270":1}}],["115",{"2":{"280":17,"291":1}}],["1151",{"2":{"269":1,"291":1}}],["1157",{"2":{"263":1}}],["1158",{"2":{"230":1}}],["11568",{"2":{"291":1}}],["11569",{"2":{"261":1}}],["11569\\taccuracy",{"2":{"205":1}}],["1156",{"2":{"230":1}}],["115316e",{"2":{"220":1}}],["11532391",{"2":{"147":1}}],["115444220054804e",{"2":{"296":1}}],["11549814",{"2":{"147":1}}],["11545f",{"2":{"96":1}}],["1114",{"2":{"280":6}}],["111452006",{"2":{"147":1}}],["1115",{"2":{"280":1}}],["11151",{"2":{"261
":1}}],["1113",{"2":{"280":2}}],["11135",{"2":{"261":1}}],["1112",{"2":{"280":19}}],["11125",{"2":{"261":1}}],["1110",{"2":{"280":17}}],["1118",{"2":{"280":2}}],["1119",{"2":{"280":22}}],["11194",{"2":{"241":1}}],["1117",{"2":{"280":13}}],["1111",{"2":{"263":1,"280":2}}],["11111",{"2":{"237":1}}],["111",{"2":{"241":2,"291":1}}],["11169",{"2":{"261":1}}],["11169241",{"2":{"147":1}}],["111676365",{"2":{"147":1}}],["1149034f",{"2":{"294":1}}],["1149755",{"2":{"165":1}}],["1148",{"2":{"291":1}}],["114184186",{"2":{"270":1}}],["114",{"2":{"263":1,"280":17,"291":1}}],["11454258",{"2":{"147":1}}],["114560924",{"2":{"147":1}}],["11476975",{"2":{"96":1}}],["113994725340338e",{"2":{"296":1}}],["113952",{"2":{"186":1}}],["113752f",{"2":{"294":1}}],["11373",{"2":{"165":1}}],["113468f",{"2":{"294":1}}],["1136",{"2":{"291":1}}],["11362071",{"2":{"147":1}}],["1131",{"2":{"291":1}}],["113199",{"2":{"280":1}}],["113197",{"2":{"280":1}}],["113194",{"2":{"280":1}}],["113191",{"2":{"280":1}}],["113188",{"2":{"280":1}}],["113185",{"2":{"280":1}}],["113182",{"2":{"280":1}}],["113168",{"2":{"280":1}}],["11315",{"2":{"261":1}}],["113",{"2":{"280":17}}],["113390590799612e",{"2":{"296":1}}],["11339",{"2":{"261":1}}],["11384",{"2":{"261":1}}],["1138737",{"2":{"147":1}}],["113006115",{"2":{"147":1}}],["113205",{"2":{"280":1}}],["113202",{"2":{"280":1}}],["113210",{"2":{"280":1}}],["11321",{"2":{"261":1}}],["1132",{"2":{"230":1}}],["11325522",{"2":{"147":1}}],["113252416",{"2":{"123":1}}],["11326964",{"2":{"123":1}}],["116136",{"2":{"280":1}}],["116131",{"2":{"280":1}}],["116128",{"2":{"280":1}}],["116125",{"2":{"280":1}}],["116122",{"2":{"280":1}}],["116119",{"2":{"280":1}}],["116116",{"2":{"280":1}}],["116113",{"2":{"280":1}}],["116110",{"2":{"280":1}}],["116107",{"2":{"280":1}}],["1160",{"2":{"291":1}}],["116093",{"2":{"280":1}}],["11604",{"2":{"261":1}}],["116",{"2":{"280":17,"291":1}}],["1165",{"2":{"291":1}}],["11651",{"2":{"261":1}}],["116574",{"2":{"147":1}}],["11695",{"2":{"261":2}}],[
"116989",{"2":{"189":1}}],["11640526",{"2":{"147":1}}],["11645464",{"2":{"147":1}}],["11673186",{"2":{"147":1}}],["116768986",{"2":{"147":1}}],["1163",{"2":{"230":1,"291":1}}],["1163871",{"2":{"147":1}}],["11631798",{"2":{"147":1}}],["1193",{"2":{"280":2}}],["11933",{"2":{"147":1}}],["1190",{"2":{"280":6}}],["119063",{"2":{"280":1}}],["119058",{"2":{"280":1}}],["119055",{"2":{"280":1}}],["119052",{"2":{"280":1}}],["119049",{"2":{"280":1}}],["119046",{"2":{"280":1}}],["119043",{"2":{"280":1}}],["119040",{"2":{"280":1}}],["119037",{"2":{"280":1}}],["119034",{"2":{"280":1}}],["119021",{"2":{"280":1}}],["1199",{"2":{"280":6}}],["11997",{"2":{"261":1}}],["1192",{"2":{"280":7}}],["119",{"2":{"263":1,"280":17}}],["11966",{"2":{"263":1}}],["119643",{"2":{"261":1}}],["119758",{"2":{"147":1}}],["11950846",{"2":{"147":1}}],["1198",{"2":{"280":25}}],["11987625",{"2":{"147":1}}],["1198f",{"2":{"96":1}}],["1184",{"2":{"280":9}}],["118",{"2":{"280":17}}],["1182",{"2":{"269":1}}],["1181102",{"2":{"270":2}}],["1181",{"2":{"269":1}}],["118148",{"2":{"147":1}}],["1186",{"2":{"280":4}}],["11867",{"2":{"263":1}}],["1186073",{"2":{"147":1}}],["1185",{"2":{"263":1,"269":1,"280":4}}],["1180353",{"2":{"270":1}}],["11800",{"2":{"261":1}}],["11808148",{"2":{"147":1}}],["1183",{"2":{"263":1}}],["11831",{"2":{"261":1}}],["118356064",{"2":{"147":1}}],["11895952635772526774",{"2":{"261":1}}],["11893",{"2":{"230":1}}],["1187",{"2":{"147":1,"230":1}}],["1188",{"2":{"269":1}}],["1188200246206075",{"2":{"171":2}}],["11883854",{"2":{"123":1}}],["118868664",{"2":{"123":1}}],["1176324f",{"2":{"294":1}}],["1176",{"2":{"291":1}}],["1177",{"2":{"280":7}}],["11770",{"2":{"261":1}}],["1174",{"2":{"263":1}}],["11740078",{"2":{"147":1}}],["1178",{"2":{"263":1}}],["117853135",{"2":{"147":1}}],["117",{"2":{"241":2,"280":17}}],["1173706",{"2":{"165":1}}],["1175306997164e",{"2":{"296":1}}],["1175",{"2":{"265":1,"280":2}}],["11750876",{"2":{"147":1}}],["11754602",{"2":{"123":1}}],["11728277",{"2":{"147":1}}],["1170
8575",{"2":{"147":1}}],["1107",{"2":{"280":16}}],["11070",{"2":{"230":1}}],["1101",{"2":{"280":24}}],["110407",{"2":{"280":1}}],["110402",{"2":{"280":1}}],["1104",{"2":{"280":15}}],["1106",{"2":{"265":1,"280":2}}],["1108",{"2":{"263":1}}],["1100",{"2":{"280":6}}],["11009",{"2":{"263":1}}],["11000",{"2":{"261":1}}],["11001",{"2":{"254":1}}],["1109",{"2":{"254":2}}],["11096",{"2":{"261":1}}],["110962",{"2":{"147":1}}],["11096934",{"2":{"123":1}}],["110",{"2":{"238":6,"291":1}}],["110374",{"2":{"287":1}}],["110379",{"2":{"280":1}}],["110399",{"2":{"280":1}}],["110396",{"2":{"280":1}}],["110394",{"2":{"280":1}}],["110391",{"2":{"280":1}}],["110388",{"2":{"280":1}}],["110385",{"2":{"280":1}}],["110382",{"2":{"280":1}}],["110365",{"2":{"280":1}}],["1103",{"2":{"230":1}}],["110330954",{"2":{"147":1}}],["11023622",{"2":{"270":2}}],["11023762",{"2":{"197":1}}],["1102",{"2":{"263":1,"280":7}}],["11028515",{"2":{"147":1}}],["1105",{"2":{"263":1}}],["11050306",{"2":{"123":1}}],["110567965",{"2":{"123":1}}],["11",{"2":{"79":2,"81":9,"84":3,"147":2,"163":2,"164":2,"165":2,"166":1,"198":1,"205":2,"207":1,"215":1,"222":1,"239":4,"246":2,"247":4,"254":13,"256":1,"261":53,"262":1,"265":3,"267":1,"269":1,"275":1,"280":12,"281":1,"287":19,"289":1,"291":2,"296":1,"298":1}}],["1×16",{"2":{"165":1,"166":1}}],["1×11",{"2":{"67":1}}],["1×5",{"2":{"88":2}}],["1×32",{"2":{"56":1,"97":1}}],["19\\ttrain",{"2":{"280":1}}],["19s",{"2":{"214":5}}],["19854593887767e",{"2":{"296":1}}],["1981361254741375e",{"2":{"296":1}}],["198789f",{"2":{"294":1}}],["1988",{"2":{"292":2}}],["198454328000425e",{"2":{"296":1}}],["19847682",{"2":{"270":1}}],["1984668",{"2":{"163":1}}],["198329f",{"2":{"294":1}}],["1983",{"2":{"254":1}}],["19899946",{"2":{"147":1}}],["1989196",{"2":{"147":1}}],["192",{"2":{"291":1}}],["19286752",{"2":{"273":1}}],["1928",{"2":{"263":1}}],["192836",{"2":{"186":1}}],["1921258",{"2":{"165":1}}],["19226",{"2":{"147":1}}],["19236",{"2":{"147":1}}],["19259712",{"2":{"147":1}}],["1929",{"2":{"
63":1}}],["1955",{"2":{"291":1}}],["1957",{"2":{"280":1}}],["1953009",{"2":{"195":1,"196":1}}],["19597391412112541",{"2":{"192":1}}],["19547431",{"2":{"147":1}}],["1958431",{"2":{"197":1}}],["1958",{"2":{"63":1,"265":1}}],["197971",{"2":{"280":1}}],["197966",{"2":{"280":1}}],["197964",{"2":{"280":1}}],["197961",{"2":{"280":1}}],["197957",{"2":{"280":1}}],["197954",{"2":{"280":1}}],["197950",{"2":{"280":1}}],["197947",{"2":{"280":1}}],["197944",{"2":{"280":1}}],["197941",{"2":{"280":1}}],["197920",{"2":{"280":1}}],["19792819",{"2":{"147":1}}],["1979",{"2":{"263":1}}],["1978",{"2":{"263":1}}],["197785",{"2":{"261":1}}],["1970",{"2":{"241":1}}],["197045",{"2":{"189":1}}],["1976",{"2":{"280":2}}],["19761",{"2":{"230":1}}],["19765",{"2":{"230":1}}],["19744",{"2":{"147":1}}],["1964",{"2":{"280":2}}],["19641913",{"2":{"147":1}}],["1960",{"2":{"269":1}}],["19601588",{"2":{"147":1}}],["1968",{"2":{"263":1}}],["1965",{"2":{"241":1}}],["196581",{"2":{"186":1}}],["19623",{"2":{"147":1}}],["1966",{"2":{"280":1}}],["19660722",{"2":{"147":1}}],["19666132",{"2":{"147":1}}],["19634563",{"2":{"147":1}}],["19615631573604e",{"2":{"296":1}}],["196156f",{"2":{"294":1}}],["1961907f",{"2":{"123":1}}],["1961357",{"2":{"123":1}}],["1910976f",{"2":{"294":1}}],["1915",{"2":{"263":1}}],["191346",{"2":{"261":1}}],["19138478",{"2":{"147":1}}],["1917",{"2":{"230":1}}],["19124654",{"2":{"163":1}}],["191424",{"2":{"147":1}}],["19183022",{"2":{"147":1}}],["1908",{"2":{"263":1}}],["1907",{"2":{"280":1}}],["19078",{"2":{"261":1}}],["19076\\taccuracy",{"2":{"205":1}}],["19001",{"2":{"254":1}}],["1903755662471326e",{"2":{"296":1}}],["190392f",{"2":{"294":1}}],["1903",{"2":{"263":1,"291":1}}],["19038",{"2":{"200":1}}],["19031122",{"2":{"123":1}}],["1904757",{"2":{"166":1}}],["19011",{"2":{"147":1}}],["19019358",{"2":{"147":1}}],["19096",{"2":{"280":3}}],["1909",{"2":{"263":1}}],["19099",{"2":{"147":1}}],["19094673",{"2":{"147":1}}],["19059215",{"2":{"147":1}}],["19065982",{"2":{"147":1}}],["19439165515628
77e",{"2":{"296":1}}],["194354",{"2":{"261":1}}],["1947217450797755e",{"2":{"296":1}}],["194286909998489e",{"2":{"296":1}}],["194287f",{"2":{"294":1}}],["1945178f",{"2":{"294":1}}],["194423f",{"2":{"294":1}}],["19444",{"2":{"81":1}}],["1946",{"2":{"291":2}}],["19483931571915e",{"2":{"296":1}}],["194856f",{"2":{"294":1}}],["1948",{"2":{"269":1}}],["19402",{"2":{"261":1}}],["19405301",{"2":{"147":1}}],["1941834",{"2":{"147":1}}],["19498321",{"2":{"147":1}}],["1999603f",{"2":{"294":1}}],["1999",{"2":{"291":1}}],["19995327",{"2":{"147":1}}],["199",{"2":{"280":3}}],["1998",{"2":{"280":1}}],["199431",{"2":{"291":1}}],["1994",{"2":{"280":5}}],["1993",{"2":{"230":1,"280":1}}],["199387",{"2":{"163":1}}],["1990",{"2":{"167":1,"280":5}}],["1990666",{"2":{"147":1}}],["19920883",{"2":{"147":1}}],["1995",{"2":{"280":1}}],["19959326",{"2":{"147":1}}],["19954802",{"2":{"147":1}}],["19969921",{"2":{"147":1}}],["1997513",{"2":{"96":1}}],["193599",{"2":{"280":1}}],["193594",{"2":{"280":1}}],["193591",{"2":{"280":1}}],["193589",{"2":{"280":1}}],["193586",{"2":{"280":1}}],["193583",{"2":{"280":1}}],["193580",{"2":{"280":1}}],["193577",{"2":{"280":1}}],["193574",{"2":{"280":1}}],["193559",{"2":{"280":1}}],["19355828",{"2":{"123":1}}],["1938",{"2":{"265":1}}],["19388",{"2":{"230":1}}],["1937",{"2":{"263":1}}],["19373",{"2":{"147":1}}],["193193396044492e",{"2":{"296":1}}],["1931",{"2":{"263":1}}],["19313364",{"2":{"147":1}}],["19397983",{"2":{"197":1}}],["193291",{"2":{"189":1}}],["193263",{"2":{"147":1}}],["19363",{"2":{"291":1}}],["193603",{"2":{"280":1}}],["19361",{"2":{"241":1}}],["19361475",{"2":{"147":1}}],["1936812",{"2":{"163":1}}],["19339",{"2":{"147":1}}],["19343676",{"2":{"123":1}}],["19",{"2":{"79":2,"133":1,"147":2,"205":2,"214":2,"246":4,"254":2,"261":2,"265":3,"269":1,"280":13,"287":1}}],["18\\ttrain",{"2":{"280":1}}],["1875199932126688e",{"2":{"296":1}}],["1875000",{"2":{"261":1}}],["187474f",{"2":{"294":1}}],["1872",{"2":{"280":5}}],["1878",{"2":{"280":2}}],["1879",{"2":{"
241":1}}],["18794",{"2":{"186":1}}],["1871622",{"2":{"194":4}}],["1876",{"2":{"291":1}}],["187646",{"2":{"186":1}}],["187688",{"2":{"147":1}}],["1850115f",{"2":{"294":1}}],["18506",{"2":{"205":1}}],["1852",{"2":{"280":2}}],["185209",{"2":{"280":1}}],["185204",{"2":{"280":1}}],["185202",{"2":{"280":1}}],["18538",{"2":{"261":1,"291":1}}],["18531604",{"2":{"147":1}}],["18568",{"2":{"241":1}}],["185187",{"2":{"280":1}}],["185184",{"2":{"280":1}}],["185181",{"2":{"280":1}}],["185164",{"2":{"280":1}}],["185199",{"2":{"280":1}}],["185196",{"2":{"280":1}}],["185193",{"2":{"280":1}}],["185190",{"2":{"280":1}}],["18519",{"2":{"235":2}}],["1851237",{"2":{"165":1}}],["1855",{"2":{"230":1}}],["1859",{"2":{"241":1}}],["185913",{"2":{"147":1}}],["18599004",{"2":{"147":1}}],["18574256",{"2":{"147":1}}],["1869",{"2":{"280":1}}],["18695377",{"2":{"197":1}}],["1862",{"2":{"280":1}}],["18629253",{"2":{"147":1}}],["1866",{"2":{"280":5}}],["186",{"2":{"230":1,"294":1}}],["1865234",{"2":{"261":1}}],["18652153",{"2":{"166":1}}],["1865",{"2":{"230":1}}],["18659353",{"2":{"147":1}}],["186449",{"2":{"166":1}}],["1804566045959986e",{"2":{"296":1}}],["1804",{"2":{"280":3}}],["180299689194884e",{"2":{"296":1}}],["180201559755303e",{"2":{"296":1}}],["1802308f",{"2":{"294":1}}],["1802838f",{"2":{"294":1}}],["1802",{"2":{"280":6}}],["1801867351985504e",{"2":{"296":1}}],["18018521",{"2":{"147":1}}],["180124f",{"2":{"294":1}}],["1801",{"2":{"280":6}}],["1807",{"2":{"261":2,"280":6}}],["1807569",{"2":{"147":1}}],["1808",{"2":{"261":2,"280":6}}],["1805",{"2":{"261":1}}],["18056087",{"2":{"197":1}}],["1806",{"2":{"261":1}}],["1809",{"2":{"261":1}}],["18095753",{"2":{"147":1}}],["1800",{"2":{"280":9}}],["18001",{"2":{"254":1}}],["18000072",{"2":{"147":1}}],["18033011",{"2":{"273":1}}],["1803",{"2":{"241":1,"280":13}}],["1844",{"2":{"280":2}}],["1840",{"2":{"263":1}}],["1848854346344296e",{"2":{"296":1}}],["1848",{"2":{"230":1}}],["18482",{"2":{"205":1}}],["1846486482317626",{"2":{"171":2}}],["18495357",{
"2":{"147":1}}],["18450394",{"2":{"147":1}}],["189376",{"2":{"280":1}}],["189371",{"2":{"280":1}}],["189368",{"2":{"280":1}}],["189366",{"2":{"280":1}}],["189363",{"2":{"280":1}}],["189360",{"2":{"280":1}}],["189357",{"2":{"280":1}}],["189354",{"2":{"280":1}}],["189351",{"2":{"280":1}}],["189348",{"2":{"280":1}}],["189333",{"2":{"280":1}}],["18958",{"2":{"269":1}}],["1895644",{"2":{"96":1}}],["1892",{"2":{"263":1}}],["189",{"2":{"230":1,"280":3}}],["18901739",{"2":{"147":1}}],["18949205",{"2":{"147":1}}],["189168384483676e",{"2":{"296":1}}],["18916532",{"2":{"147":1}}],["18919921",{"2":{"147":1}}],["18996632",{"2":{"147":1}}],["18960184",{"2":{"147":1}}],["183191888462663e",{"2":{"296":1}}],["183235f",{"2":{"294":1}}],["1835",{"2":{"291":1}}],["18356045",{"2":{"123":1}}],["183798f",{"2":{"294":1}}],["1837",{"2":{"261":1,"280":7}}],["18384086786765e",{"2":{"296":1}}],["1838",{"2":{"261":1}}],["183978e",{"2":{"220":1}}],["1833",{"2":{"280":2}}],["183381",{"2":{"186":1}}],["183308",{"2":{"165":1}}],["1836515766351254",{"2":{"171":2}}],["18308182",{"2":{"143":1}}],["188297f",{"2":{"294":1}}],["188713f",{"2":{"294":1}}],["188520194628911e",{"2":{"296":1}}],["1885",{"2":{"263":1}}],["1883",{"2":{"254":1}}],["18841726",{"2":{"197":1}}],["188839",{"2":{"147":1}}],["18816021",{"2":{"147":1}}],["18810266",{"2":{"96":1}}],["18898767",{"2":{"147":1}}],["18803856",{"2":{"147":1}}],["18860114",{"2":{"143":1}}],["1822",{"2":{"291":1}}],["1826",{"2":{"280":14,"291":1}}],["1821",{"2":{"280":12}}],["1820",{"2":{"280":6}}],["1828",{"2":{"265":1}}],["1824",{"2":{"261":3,"280":8}}],["1825",{"2":{"261":3}}],["1823",{"2":{"280":5}}],["182365",{"2":{"147":1}}],["182300",{"2":{"97":1}}],["1829",{"2":{"263":1}}],["18294385",{"2":{"147":1}}],["18292144",{"2":{"123":1}}],["18295963",{"2":{"123":1}}],["181",{"2":{"280":3}}],["1814",{"2":{"265":1}}],["1811024",{"2":{"270":2}}],["1811",{"2":{"261":1,"280":2,"291":1}}],["1813",{"2":{"261":1,"280":12,"291":1}}],["1812",{"2":{"261":4}}],["18120264",
{"2":{"96":1}}],["1816",{"2":{"261":1}}],["1817",{"2":{"261":1,"280":14}}],["1818",{"2":{"261":1,"263":1}}],["1819",{"2":{"261":1,"263":1}}],["1819797",{"2":{"147":1}}],["1810",{"2":{"261":1}}],["18101",{"2":{"230":1}}],["18102062",{"2":{"96":1}}],["18",{"2":{"56":1,"79":2,"147":2,"205":4,"246":2,"254":2,"261":3,"263":1,"265":5,"269":1,"280":13,"287":1,"291":3,"296":7}}],["15\\ttrain",{"2":{"280":1}}],["155610691517851e",{"2":{"296":1}}],["155574f",{"2":{"294":1}}],["1552",{"2":{"291":1}}],["1559",{"2":{"280":3}}],["1551",{"2":{"280":8}}],["15513715445056030835",{"2":{"205":1}}],["1558",{"2":{"280":11}}],["1554",{"2":{"280":3}}],["155453",{"2":{"280":1}}],["155448",{"2":{"280":1}}],["155445",{"2":{"280":1}}],["155442",{"2":{"280":1}}],["155439",{"2":{"280":1}}],["155436",{"2":{"280":1}}],["155434",{"2":{"280":1}}],["155431",{"2":{"280":1}}],["155428",{"2":{"280":1}}],["155425",{"2":{"280":1}}],["155412",{"2":{"280":1}}],["1550",{"2":{"263":1,"280":2,"291":1}}],["1557",{"2":{"263":1}}],["155757",{"2":{"261":1}}],["1557573",{"2":{"166":1}}],["1535",{"2":{"280":3}}],["1534",{"2":{"280":5,"291":1}}],["1533",{"2":{"280":5}}],["15332098",{"2":{"147":1}}],["1538",{"2":{"280":3,"291":2}}],["1537",{"2":{"280":3}}],["153721",{"2":{"269":1}}],["1537897",{"2":{"147":1}}],["158861f",{"2":{"294":1}}],["1585",{"2":{"280":6}}],["1583",{"2":{"280":1}}],["15830041",{"2":{"147":1}}],["158138",{"2":{"280":1}}],["158133",{"2":{"280":1}}],["158130",{"2":{"280":1}}],["158127",{"2":{"280":1}}],["158124",{"2":{"280":1}}],["158121",{"2":{"280":1}}],["158118",{"2":{"280":1}}],["158115",{"2":{"280":1}}],["158112",{"2":{"280":1}}],["158106",{"2":{"280":1}}],["1581",{"2":{"280":5}}],["1582031",{"2":{"261":1}}],["15829",{"2":{"261":1}}],["158992767",{"2":{"254":2}}],["158975",{"2":{"147":1}}],["158094",{"2":{"280":1}}],["1580",{"2":{"230":1}}],["158650426094909e",{"2":{"296":1}}],["1586047939092094",{"2":{"171":2}}],["15864038",{"2":{"147":1}}],["15877295",{"2":{"270":1}}],["15877534",{"2":{"147"
:1}}],["15875",{"2":{"147":1}}],["15672f",{"2":{"294":1}}],["1561",{"2":{"280":6}}],["1566",{"2":{"280":3}}],["1560",{"2":{"280":8}}],["15605855",{"2":{"147":1}}],["156421274780055e",{"2":{"296":1}}],["156423",{"2":{"280":1}}],["156418",{"2":{"280":1}}],["156415",{"2":{"280":1}}],["156412",{"2":{"280":1}}],["156409",{"2":{"280":1}}],["156406",{"2":{"280":1}}],["156403",{"2":{"280":1}}],["156400",{"2":{"280":1}}],["1565",{"2":{"280":8}}],["1562",{"2":{"265":1,"280":4}}],["156397",{"2":{"280":1}}],["156394",{"2":{"280":1}}],["156378",{"2":{"280":1}}],["1563",{"2":{"241":1,"280":5}}],["15681",{"2":{"238":6}}],["15681f",{"2":{"96":1}}],["15680",{"2":{"238":6}}],["156",{"2":{"211":2,"254":70,"280":1026,"287":16}}],["159095204734098e",{"2":{"296":1}}],["15905891",{"2":{"96":1}}],["159131000316127e",{"2":{"296":1}}],["159141636274673e",{"2":{"296":1}}],["1595777f",{"2":{"294":1}}],["1595",{"2":{"280":9}}],["15978265",{"2":{"270":1}}],["1592",{"2":{"241":1,"280":10,"291":1}}],["1593845f",{"2":{"294":1}}],["1593",{"2":{"230":1,"280":14}}],["1599543",{"2":{"147":1}}],["15967444",{"2":{"147":1}}],["152763",{"2":{"280":1}}],["15276335",{"2":{"153":1}}],["152758",{"2":{"280":1}}],["152755",{"2":{"280":1}}],["152752",{"2":{"280":1}}],["152749",{"2":{"280":1}}],["152746",{"2":{"280":1}}],["152744",{"2":{"280":1}}],["152741",{"2":{"280":1}}],["152738",{"2":{"280":1}}],["152734",{"2":{"280":1}}],["152723",{"2":{"280":1}}],["1529",{"2":{"263":1}}],["1528",{"2":{"263":1,"280":2}}],["1525574e",{"2":{"170":1}}],["15262935",{"2":{"147":1}}],["1521",{"2":{"269":1}}],["15215",{"2":{"261":1}}],["15218197",{"2":{"147":1}}],["152131",{"2":{"123":1}}],["15222053",{"2":{"147":1}}],["1549",{"2":{"291":1}}],["15499277",{"2":{"147":1}}],["1544",{"2":{"280":2}}],["15447",{"2":{"261":1}}],["1540",{"2":{"280":5}}],["154041",{"2":{"189":1}}],["154516",{"2":{"166":1}}],["15451026",{"2":{"96":1}}],["154",{"2":{"133":1,"211":2,"280":1}}],["1510",{"2":{"291":1}}],["15105338",{"2":{"147":1}}],["15105f",{"2
":{"96":1}}],["1519",{"2":{"291":1}}],["15199737",{"2":{"147":1}}],["151494",{"2":{"261":1}}],["1513",{"2":{"241":1}}],["1516",{"2":{"230":1}}],["151",{"2":{"220":1,"263":1,"280":6}}],["151792",{"2":{"147":1}}],["1515473946554614e",{"2":{"296":1}}],["1515904f",{"2":{"294":1}}],["1515998",{"2":{"123":1}}],["1515675",{"2":{"123":1}}],["157685712199914e",{"2":{"296":1}}],["1576396f",{"2":{"294":1}}],["1579",{"2":{"291":1}}],["1578",{"2":{"291":1}}],["1574",{"2":{"280":6}}],["15746",{"2":{"261":1}}],["1573",{"2":{"280":6}}],["157368",{"2":{"147":1}}],["1572",{"2":{"280":6}}],["1571",{"2":{"280":4}}],["1575",{"2":{"280":12}}],["157552",{"2":{"186":1}}],["1570",{"2":{"280":4}}],["15701",{"2":{"238":6}}],["15700",{"2":{"238":12}}],["15707001",{"2":{"165":1}}],["15708946",{"2":{"147":1}}],["15708117",{"2":{"123":1}}],["1577",{"2":{"230":1,"263":1,"291":1}}],["15771388",{"2":{"147":1}}],["1577647",{"2":{"96":1}}],["15",{"2":{"50":1,"63":1,"79":2,"80":1,"84":2,"147":2,"165":1,"166":3,"189":4,"205":2,"214":1,"235":2,"239":1,"246":3,"247":1,"261":2,"263":2,"265":7,"269":1,"280":36,"291":3}}],["150714212449048e",{"2":{"296":1}}],["15078",{"2":{"263":1}}],["1506977f",{"2":{"294":1}}],["15064",{"2":{"261":1}}],["1503",{"2":{"291":1}}],["15035425",{"2":{"147":1}}],["1508",{"2":{"241":1,"269":1}}],["150086",{"2":{"280":1}}],["150081",{"2":{"280":1}}],["150079",{"2":{"280":1}}],["150076",{"2":{"280":1}}],["150073",{"2":{"280":1}}],["150070",{"2":{"280":1}}],["150067",{"2":{"280":1}}],["150064",{"2":{"280":1}}],["150061",{"2":{"280":1}}],["150058",{"2":{"280":1}}],["150038",{"2":{"280":1}}],["15001",{"2":{"254":1}}],["1500",{"2":{"210":1,"231":2,"259":2,"266":1}}],["15026206",{"2":{"147":1}}],["15048876",{"2":{"123":1}}],["150",{"2":{"46":1}}],["1−cos⁡",{"2":{"89":1}}],["1−2∑yy^+α∑y2+∑y^2+α",{"2":{"50":1}}],["1−α",{"2":{"50":2}}],["1−y",{"2":{"50":1}}],["1−yy^",{"2":{"50":2}}],["1−y^+ϵ",{"2":{"50":1}}],["1−y~",{"2":{"50":2}}],["1−z",{"2":{"43":1}}],["1th",{"2":{"46":3}}],["13\\ttrain"
,{"2":{"280":1}}],["1349",{"2":{"291":1}}],["134",{"2":{"280":16}}],["1343",{"2":{"263":1}}],["1341",{"2":{"263":1}}],["134711",{"2":{"261":1}}],["1345",{"2":{"241":1,"291":1}}],["13420",{"2":{"205":1}}],["13485748",{"2":{"147":1}}],["13462374",{"2":{"123":1}}],["1367",{"2":{"291":1}}],["1367188",{"2":{"261":1}}],["136",{"2":{"280":16,"291":1}}],["1362",{"2":{"280":2}}],["13625823",{"2":{"147":1}}],["1360",{"2":{"269":1}}],["1366",{"2":{"263":1}}],["1363",{"2":{"263":1}}],["13632",{"2":{"261":1}}],["136356",{"2":{"186":1}}],["13684514",{"2":{"147":1}}],["1326",{"2":{"291":1}}],["1322",{"2":{"291":1}}],["132",{"2":{"280":18}}],["1324",{"2":{"265":2}}],["132384116490144e",{"2":{"296":1}}],["13235",{"2":{"261":1}}],["1323625",{"2":{"147":1}}],["13282",{"2":{"261":1}}],["1321127f",{"2":{"294":1}}],["1321",{"2":{"254":2}}],["132542743249991e",{"2":{"296":1}}],["132558f",{"2":{"294":1}}],["1325",{"2":{"241":1}}],["13276449",{"2":{"147":1}}],["1334",{"2":{"291":1}}],["133",{"2":{"280":18}}],["133099",{"2":{"280":1}}],["133095",{"2":{"280":1}}],["133092",{"2":{"280":1}}],["133089",{"2":{"280":1}}],["133086",{"2":{"280":1}}],["133083",{"2":{"280":1}}],["133080",{"2":{"280":1}}],["133077",{"2":{"280":1}}],["133074",{"2":{"280":1}}],["133071",{"2":{"280":1}}],["133056",{"2":{"280":1}}],["13380",{"2":{"261":1}}],["1336206",{"2":{"147":1}}],["13399862",{"2":{"147":1}}],["1337921f",{"2":{"123":1}}],["135",{"2":{"280":16}}],["135967",{"2":{"280":1}}],["135962",{"2":{"280":1}}],["135959",{"2":{"280":1}}],["135956",{"2":{"280":1}}],["135954",{"2":{"280":1}}],["135951",{"2":{"280":1}}],["135948",{"2":{"280":1}}],["135945",{"2":{"280":1}}],["135942",{"2":{"280":1}}],["135939",{"2":{"280":1}}],["135925",{"2":{"280":1}}],["13590635",{"2":{"147":1}}],["1358",{"2":{"265":1}}],["1357",{"2":{"263":1}}],["13518",{"2":{"261":1}}],["13513",{"2":{"147":1}}],["13559",{"2":{"189":1}}],["1350523",{"2":{"165":1}}],["135206",{"2":{"147":1}}],["13540733",{"2":{"147":1}}],["137945\\tval",{"2":{"280":1
}}],["137",{"2":{"280":16}}],["13759731",{"2":{"273":1}}],["13751146",{"2":{"147":1}}],["13717",{"2":{"263":1}}],["1377",{"2":{"263":1}}],["1376",{"2":{"291":1}}],["13767318",{"2":{"147":1}}],["13760304",{"2":{"147":1}}],["1373128",{"2":{"147":1}}],["13732003",{"2":{"147":1}}],["13873",{"2":{"280":1}}],["138851",{"2":{"280":1}}],["138846",{"2":{"280":1}}],["138843",{"2":{"280":1}}],["138841",{"2":{"280":1}}],["138838",{"2":{"280":1}}],["138835",{"2":{"280":1}}],["138832",{"2":{"280":1}}],["138829",{"2":{"280":1}}],["138826",{"2":{"280":1}}],["138823",{"2":{"280":1}}],["138807",{"2":{"280":1}}],["13889",{"2":{"81":1}}],["1380",{"2":{"200":1}}],["13826",{"2":{"269":1}}],["138228",{"2":{"147":1}}],["13825017",{"2":{"147":1}}],["13812806",{"2":{"147":1}}],["13819042",{"2":{"143":1}}],["13861585",{"2":{"124":1}}],["131708313969156e",{"2":{"296":1}}],["131726",{"2":{"165":1,"166":1}}],["1315",{"2":{"291":1}}],["1315112",{"2":{"123":1}}],["1311414f",{"2":{"294":1}}],["1311",{"2":{"291":1}}],["13110895",{"2":{"147":1}}],["131435",{"2":{"280":1}}],["13142236",{"2":{"147":1}}],["131",{"2":{"280":19}}],["1319",{"2":{"280":2}}],["1318613f",{"2":{"294":1}}],["1318",{"2":{"263":2}}],["13185",{"2":{"261":1}}],["13102",{"2":{"261":1}}],["13164552",{"2":{"123":1}}],["131294381895894e",{"2":{"296":1}}],["1312",{"2":{"15":1}}],["139397441054727e",{"2":{"296":1}}],["1393",{"2":{"280":6}}],["1392",{"2":{"280":4}}],["1396786f",{"2":{"294":1}}],["1396403f",{"2":{"294":1}}],["1396",{"2":{"263":1}}],["13993",{"2":{"263":1}}],["1399",{"2":{"261":1,"291":1}}],["13996856",{"2":{"147":1}}],["1398",{"2":{"280":2}}],["1398s\\ttraining",{"2":{"235":1}}],["1398194",{"2":{"123":1}}],["139",{"2":{"200":1,"269":1}}],["1394",{"2":{"269":1}}],["139433",{"2":{"166":1}}],["13940203",{"2":{"123":1}}],["139778",{"2":{"147":1}}],["13971032",{"2":{"123":1}}],["13950577",{"2":{"147":1}}],["13950492",{"2":{"123":1}}],["1305",{"2":{"291":1}}],["13051206",{"2":{"123":1}}],["1303",{"2":{"280":6}}],["1303978",{"2":
{"147":1}}],["1307",{"2":{"280":9,"291":1}}],["13073668",{"2":{"197":1}}],["130",{"2":{"265":1,"280":16,"291":2}}],["130117",{"2":{"261":1}}],["13001",{"2":{"254":1}}],["1306752",{"2":{"166":1}}],["1309025",{"2":{"147":1}}],["13093f",{"2":{"96":1}}],["13048346",{"2":{"123":1}}],["130295",{"2":{"280":1}}],["130290",{"2":{"280":1}}],["130285",{"2":{"280":1}}],["130282",{"2":{"280":1}}],["130279",{"2":{"280":1}}],["130276",{"2":{"280":1}}],["130273",{"2":{"280":1}}],["130270",{"2":{"280":1}}],["130267",{"2":{"280":1}}],["130264",{"2":{"280":1}}],["130250",{"2":{"280":1}}],["1302",{"2":{"45":1}}],["13",{"2":{"77":2,"79":2,"81":8,"84":2,"124":1,"147":2,"205":2,"214":9,"246":2,"254":3,"261":2,"265":3,"280":22}}],["1c",{"2":{"43":1}}],["1b",{"2":{"43":3}}],["1a",{"2":{"43":3}}],["16\\ttrain",{"2":{"280":1}}],["16228",{"2":{"291":1}}],["16220272",{"2":{"147":1}}],["162",{"2":{"280":6}}],["162393",{"2":{"280":1}}],["1626",{"2":{"269":1,"291":1}}],["1620",{"2":{"261":1}}],["162436",{"2":{"280":1}}],["162432",{"2":{"280":1}}],["162429",{"2":{"280":1}}],["162426",{"2":{"280":1}}],["162423",{"2":{"280":1}}],["162421",{"2":{"280":1}}],["162418",{"2":{"280":1}}],["162415",{"2":{"280":1}}],["162412",{"2":{"280":1}}],["16241",{"2":{"238":12}}],["162409",{"2":{"280":1}}],["16240",{"2":{"238":12}}],["1612",{"2":{"291":1}}],["16123",{"2":{"230":1}}],["1616",{"2":{"291":1}}],["1610",{"2":{"291":1}}],["16105938",{"2":{"147":1}}],["1619",{"2":{"261":1}}],["1614s\\ttraining",{"2":{"235":1}}],["161",{"2":{"200":1,"269":1}}],["1617228180412864",{"2":{"171":2}}],["16xf32>",{"2":{"147":2}}],["16x6x5x5xf32>",{"2":{"147":2}}],["167050",{"2":{"280":1}}],["167045",{"2":{"280":1}}],["167042",{"2":{"280":1}}],["167039",{"2":{"280":1}}],["167037",{"2":{"280":1}}],["167034",{"2":{"280":1}}],["167031",{"2":{"280":1}}],["167028",{"2":{"280":1}}],["167025",{"2":{"280":1}}],["167022",{"2":{"280":1}}],["167007",{"2":{"280":1}}],["16701514",{"2":{"147":1}}],["1672",{"2":{"261":1}}],["167399502371825e",{"2":
{"296":1}}],["1673",{"2":{"261":1}}],["1679688",{"2":{"261":1}}],["16778",{"2":{"261":1}}],["167",{"2":{"200":1,"241":1,"280":2}}],["167698f",{"2":{"294":1}}],["1676",{"2":{"200":1}}],["1678",{"2":{"147":1}}],["1678572",{"2":{"147":1}}],["16783f",{"2":{"96":1}}],["165",{"2":{"291":1}}],["1650",{"2":{"261":1}}],["16504566",{"2":{"147":1}}],["1654413f",{"2":{"294":1}}],["1654",{"2":{"241":1}}],["16566901",{"2":{"163":1}}],["165689",{"2":{"147":1}}],["16553",{"2":{"147":1}}],["16576298",{"2":{"147":1}}],["168061141840587e",{"2":{"296":1}}],["168044f",{"2":{"294":1}}],["168032f",{"2":{"294":1}}],["1684",{"2":{"291":1}}],["168600370192953e",{"2":{"296":1}}],["1686",{"2":{"265":1}}],["16869935",{"2":{"147":1}}],["1688",{"2":{"261":1}}],["1689",{"2":{"261":1,"291":1}}],["168109256737795e",{"2":{"296":1}}],["1681",{"2":{"261":1,"291":1}}],["16817337",{"2":{"147":1}}],["1685",{"2":{"261":1,"265":1}}],["1687434",{"2":{"147":1}}],["1644",{"2":{"291":1}}],["16447",{"2":{"147":1}}],["16447622",{"2":{"147":1}}],["164",{"2":{"280":2}}],["1649",{"2":{"261":1}}],["16468",{"2":{"261":1}}],["1646",{"2":{"261":1}}],["1647",{"2":{"261":1}}],["164786",{"2":{"189":1}}],["1642132",{"2":{"165":1}}],["16428338",{"2":{"147":1}}],["1645",{"2":{"241":1,"291":1}}],["16450",{"2":{"238":6}}],["16454",{"2":{"147":1}}],["1645548",{"2":{"147":1}}],["1664037178524215e",{"2":{"296":1}}],["16645215",{"2":{"147":1}}],["166371924580691e",{"2":{"296":1}}],["1665567f",{"2":{"294":1}}],["16652\\taccuracy",{"2":{"205":1}}],["166616f",{"2":{"294":1}}],["16667",{"2":{"81":1}}],["1661",{"2":{"261":1}}],["166224",{"2":{"280":1}}],["166221",{"2":{"280":1}}],["166218",{"2":{"280":1}}],["166215",{"2":{"280":1}}],["166212",{"2":{"280":1}}],["166200",{"2":{"280":1}}],["1662",{"2":{"261":1}}],["16625",{"2":{"261":1}}],["1660",{"2":{"261":2,"291":1}}],["1667",{"2":{"261":2}}],["1668",{"2":{"261":1,"263":1}}],["1669",{"2":{"261":1}}],["166",{"2":{"241":2}}],["163944f",{"2":{"294":1}}],["1639",{"2":{"291":1}}],["1631",{"2
":{"291":1}}],["163569",{"2":{"280":1}}],["163564",{"2":{"280":1}}],["163561",{"2":{"280":1}}],["163558",{"2":{"280":1}}],["163556",{"2":{"280":1}}],["163553",{"2":{"280":1}}],["163550",{"2":{"280":1}}],["163547",{"2":{"280":1}}],["163544",{"2":{"280":1}}],["163541",{"2":{"280":1}}],["163529",{"2":{"280":1}}],["1635",{"2":{"263":1}}],["16340032",{"2":{"147":1}}],["16363813",{"2":{"147":1}}],["1696",{"2":{"261":1,"291":1}}],["16969556",{"2":{"147":1}}],["169746",{"2":{"261":1}}],["1697",{"2":{"261":2}}],["1698",{"2":{"261":1}}],["1691734",{"2":{"270":1}}],["1691",{"2":{"261":1,"287":1}}],["1692",{"2":{"261":1}}],["169",{"2":{"241":1}}],["16954337",{"2":{"147":1}}],["16951457",{"2":{"123":1}}],["16955197",{"2":{"123":1}}],["16f0",{"2":{"67":1}}],["160647700850692e",{"2":{"296":1}}],["1606",{"2":{"291":1}}],["1600",{"2":{"280":6}}],["16001",{"2":{"254":1}}],["160392f",{"2":{"294":1}}],["160353f",{"2":{"294":1}}],["1603",{"2":{"261":1}}],["1601",{"2":{"241":1,"287":1}}],["16046",{"2":{"261":1}}],["1604",{"2":{"241":1,"261":1}}],["16043",{"2":{"205":1}}],["160",{"2":{"230":1,"291":1}}],["160808",{"2":{"280":1}}],["160804",{"2":{"280":1}}],["160801",{"2":{"280":1}}],["160876",{"2":{"186":1}}],["16083185f0",{"2":{"50":1}}],["160832f0",{"2":{"50":1}}],["16027175",{"2":{"147":1}}],["16094846",{"2":{"163":1}}],["160986",{"2":{"147":1}}],["1609",{"2":{"81":1}}],["160798",{"2":{"280":1}}],["160795",{"2":{"280":1}}],["160792",{"2":{"280":1}}],["160789",{"2":{"280":1}}],["160786",{"2":{"280":1}}],["160783",{"2":{"280":1}}],["160780",{"2":{"280":1}}],["160769",{"2":{"280":1}}],["1607",{"2":{"46":1,"65":2}}],["16",{"2":{"37":1,"56":3,"67":1,"79":2,"83":9,"126":13,"127":14,"132":1,"147":5,"165":1,"166":1,"189":1,"198":5,"205":2,"207":1,"211":3,"214":1,"215":1,"222":5,"235":1,"239":2,"246":2,"247":2,"253":1,"254":2,"256":1,"261":2,"262":1,"265":5,"267":1,"271":4,"274":4,"275":1,"280":14,"281":1,"287":1,"289":1,"296":6,"298":5}}],["10f",{"2":{"295":1}}],["10f0",{"2":{"67":11}}],["10\\
ttrain",{"2":{"280":1}}],["1066",{"2":{"263":1}}],["10663",{"2":{"261":1}}],["10672",{"2":{"261":1}}],["10676",{"2":{"147":1}}],["10696",{"2":{"261":1}}],["10657",{"2":{"261":1}}],["1065",{"2":{"254":3}}],["1063",{"2":{"254":3,"280":6}}],["10622",{"2":{"261":1}}],["10626",{"2":{"261":1}}],["1062",{"2":{"254":3}}],["1061",{"2":{"254":3,"280":6}}],["106062",{"2":{"263":1}}],["106069",{"2":{"163":1}}],["1060",{"2":{"254":3,"263":1}}],["106",{"2":{"241":1}}],["1064",{"2":{"230":1,"254":3,"291":1}}],["106482625",{"2":{"147":1}}],["108192f",{"2":{"294":1}}],["1089",{"2":{"280":11}}],["1086",{"2":{"280":1}}],["1087",{"2":{"280":1,"291":1}}],["10875653",{"2":{"147":1}}],["1084149777091826e",{"2":{"296":1}}],["1084924f",{"2":{"294":1}}],["1084",{"2":{"280":20}}],["10842768",{"2":{"147":1}}],["10882",{"2":{"291":1}}],["1088",{"2":{"280":20,"291":1}}],["10885",{"2":{"147":1}}],["1085",{"2":{"265":1}}],["10831120229798e",{"2":{"296":1}}],["1083",{"2":{"280":4}}],["10832",{"2":{"261":1}}],["10836",{"2":{"261":1}}],["1080",{"2":{"254":7,"280":118,"287":2}}],["108",{"2":{"241":1,"280":4,"291":1}}],["1022601369267256e",{"2":{"296":1}}],["1020",{"2":{"280":6}}],["10206564",{"2":{"147":1}}],["1023915f",{"2":{"294":1}}],["1023",{"2":{"280":6}}],["10217",{"2":{"261":1}}],["1029",{"2":{"254":1,"280":1}}],["1028",{"2":{"254":1}}],["1027",{"2":{"254":1}}],["1026",{"2":{"254":1}}],["1025844575904385e",{"2":{"296":1}}],["10252",{"2":{"261":1}}],["1025391",{"2":{"261":1}}],["1025",{"2":{"254":1}}],["102",{"2":{"200":1,"241":1,"291":1}}],["102486f",{"2":{"294":1}}],["1024",{"2":{"171":7,"242":1,"254":1,"263":1,"291":1}}],["10242631",{"2":{"147":1}}],["104797",{"2":{"280":1}}],["104794",{"2":{"280":1}}],["104791",{"2":{"280":1}}],["104788",{"2":{"280":1}}],["104785",{"2":{"280":1}}],["104768",{"2":{"280":1}}],["10473",{"2":{"261":1}}],["1047",{"2":{"254":1,"280":9}}],["104674499776066e",{"2":{"296":1}}],["1046",{"2":{"254":1}}],["10464323",{"2":{"147":1}}],["1045",{"2":{"254":1,"263":1}}],["10
44",{"2":{"254":1,"265":1,"280":12}}],["1043",{"2":{"254":1,"263":1}}],["1042",{"2":{"254":1}}],["10420467",{"2":{"147":1}}],["104191",{"2":{"270":1}}],["1041",{"2":{"254":1}}],["104",{"2":{"241":1}}],["10491",{"2":{"261":1}}],["1049",{"2":{"254":1,"280":7}}],["10494",{"2":{"147":1}}],["104936846",{"2":{"147":1}}],["1040",{"2":{"254":1}}],["104042254",{"2":{"147":1}}],["10401063",{"2":{"147":1}}],["104813",{"2":{"280":1}}],["104808",{"2":{"280":1}}],["104805",{"2":{"280":1}}],["104803",{"2":{"280":1}}],["1048006f",{"2":{"294":1}}],["104800",{"2":{"280":1}}],["1048",{"2":{"254":1,"263":1}}],["104844354",{"2":{"123":1}}],["104868725",{"2":{"123":1}}],["1074",{"2":{"291":1}}],["107406616",{"2":{"147":1}}],["1077",{"2":{"280":3}}],["1073",{"2":{"280":9}}],["107597",{"2":{"280":1}}],["107594",{"2":{"280":1}}],["107591",{"2":{"280":1}}],["107588",{"2":{"280":1}}],["107585",{"2":{"280":1}}],["107582",{"2":{"280":1}}],["107579",{"2":{"280":1}}],["107576",{"2":{"280":1}}],["107562",{"2":{"280":1}}],["1079",{"2":{"263":1,"280":3}}],["107933",{"2":{"165":1}}],["1076",{"2":{"280":3,"291":1}}],["107605",{"2":{"280":1}}],["107600",{"2":{"280":1}}],["10764",{"2":{"261":1}}],["10761",{"2":{"261":1}}],["10716",{"2":{"261":1}}],["10784",{"2":{"261":1}}],["10708179",{"2":{"147":1}}],["1094",{"2":{"280":16}}],["1099555f",{"2":{"294":1}}],["10995198",{"2":{"123":1}}],["10992",{"2":{"291":1}}],["1099",{"2":{"280":16}}],["1091",{"2":{"280":18}}],["10914792",{"2":{"147":1}}],["1090",{"2":{"280":7,"292":2}}],["10908",{"2":{"261":1}}],["1093",{"2":{"280":4}}],["1095306f",{"2":{"294":1}}],["1095",{"2":{"280":5}}],["109820893628288e",{"2":{"296":1}}],["1098",{"2":{"280":10}}],["10987",{"2":{"261":1}}],["109746655647504e",{"2":{"296":1}}],["1097",{"2":{"269":1,"280":2}}],["10973",{"2":{"261":1}}],["109790",{"2":{"261":1}}],["109",{"2":{"235":1,"241":2,"291":1}}],["1092",{"2":{"280":8}}],["10923161",{"2":{"147":1}}],["109261915",{"2":{"143":1}}],["1035",{"2":{"291":1}}],["1034",{"2":{"280":5}}],
["10346943",{"2":{"147":1}}],["103769868027601e",{"2":{"296":1}}],["1037864f",{"2":{"294":1}}],["1037",{"2":{"263":1}}],["10386",{"2":{"261":1}}],["10384",{"2":{"261":1}}],["10332",{"2":{"261":1}}],["1033",{"2":{"254":1,"280":4}}],["1032115f",{"2":{"294":1}}],["10321",{"2":{"261":1}}],["1032",{"2":{"254":1}}],["1031",{"2":{"254":1}}],["10316533",{"2":{"147":1}}],["10316813",{"2":{"147":1}}],["1030585133668373e",{"2":{"296":1}}],["1030",{"2":{"254":1}}],["1039",{"2":{"280":5}}],["10393",{"2":{"261":1}}],["10396\\taccuracy",{"2":{"205":1}}],["103913486",{"2":{"147":1}}],["10361175",{"2":{"147":1}}],["103",{"2":{"126":1,"241":1}}],["1052335604828207e",{"2":{"296":1}}],["10521463",{"2":{"123":1}}],["1050",{"2":{"280":4}}],["1051",{"2":{"280":5}}],["1055",{"2":{"280":6}}],["10543",{"2":{"261":1}}],["10542",{"2":{"205":1}}],["1058",{"2":{"254":3,"280":6}}],["10584956",{"2":{"123":1}}],["1057",{"2":{"254":3,"280":5}}],["10561",{"2":{"263":1}}],["10561423",{"2":{"147":1}}],["1056",{"2":{"254":3,"280":5}}],["1059",{"2":{"241":1,"254":3}}],["105389535",{"2":{"147":1}}],["105",{"2":{"123":1,"241":1,"291":2}}],["10^4",{"2":{"88":2}}],["10x4xf32>",{"2":{"147":4}}],["10xf32>",{"2":{"147":2}}],["10x",{"2":{"87":3}}],["101885932662914e",{"2":{"296":1}}],["1012",{"2":{"291":1}}],["10156251217167081480",{"2":{"287":1}}],["10155",{"2":{"261":1}}],["10194f",{"2":{"294":1}}],["101995",{"2":{"280":1}}],["101991",{"2":{"280":1}}],["101988",{"2":{"280":1}}],["101985",{"2":{"280":1}}],["101980",{"2":{"280":1}}],["101977",{"2":{"280":1}}],["101974",{"2":{"280":1}}],["101971",{"2":{"280":1}}],["1019723",{"2":{"123":1}}],["101968",{"2":{"280":1}}],["101965",{"2":{"280":1}}],["101952",{"2":{"280":1}}],["10101",{"2":{"261":1}}],["101199",{"2":{"261":1}}],["10116895",{"2":{"147":1}}],["101374922326844e",{"2":{"296":1}}],["10137743",{"2":{"147":1}}],["1013",{"2":{"230":1,"291":1}}],["1014304f",{"2":{"294":1}}],["10140238162746013",{"2":{"171":2}}],["10148246",{"2":{"147":1}}],["101",{"2":{"84":1,"
220":1,"230":1,"238":6}}],["10i",{"2":{"81":1}}],["10⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀⠀10⠀",{"2":{"67":1}}],["10",{"2":{"25":2,"37":2,"39":1,"46":12,"67":7,"70":1,"76":2,"79":2,"81":2,"84":10,"87":7,"88":1,"89":5,"90":2,"95":1,"96":5,"97":1,"134":2,"135":1,"147":5,"163":2,"164":2,"166":2,"189":4,"192":2,"197":3,"200":2,"205":2,"211":5,"213":1,"214":22,"233":5,"238":30,"239":2,"244":1,"246":2,"247":2,"254":26,"258":1,"261":2,"263":2,"265":3,"266":1,"269":1,"280":267,"283":1,"288":1,"291":3,"292":2,"296":13}}],["1002388f",{"2":{"294":1}}],["10022328",{"2":{"123":1}}],["1007",{"2":{"291":1}}],["100737736",{"2":{"147":1}}],["1005",{"2":{"280":3}}],["100514844",{"2":{"147":1}}],["1008",{"2":{"263":1}}],["10089",{"2":{"261":1}}],["10062",{"2":{"261":1}}],["1001",{"2":{"197":1,"241":1,"254":1}}],["100167036",{"2":{"147":1}}],["1003911",{"2":{"155":1}}],["10097251",{"2":{"147":1}}],["10044085",{"2":{"123":1}}],["100f0",{"2":{"67":1}}],["10001",{"2":{"254":1}}],["10000",{"2":{"197":2,"254":1,"287":12}}],["1000",{"2":{"56":1,"67":1,"84":3,"97":3,"124":14,"197":1,"254":1,"287":2}}],["100",{"2":{"15":1,"40":6,"70":2,"80":7,"84":5,"89":2,"97":1,"124":2,"198":1,"207":1,"213":2,"215":1,"222":1,"235":2,"238":6,"239":2,"246":38,"247":2,"256":1,"262":1,"263":1,"267":1,"275":1,"279":1,"281":1,"289":1,"291":1,"293":2,"298":1}}],["12\\ttrain",{"2":{"280":1}}],["12s",{"2":{"246":2}}],["12abac4f24f6",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["12x12x6x4xf32>",{"2":{"147":2}}],["12104",{"2":{"291":1}}],["1215",{"2":{"280":2}}],["121",{"2":{"280":17}}],["121308927130042e",{"2":{"296":1}}],["1213",{"2":{"280":24}}],["12138993",{"2":{"147":1}}],["121828",{"2":{"280":1}}],["121823",{"2":{"280":1}}],["121820",{"2":{"280":1}}],["121817",{"2":{"280":1}}],["121814",{"2":{"280":1}}],["121810",{"2":{"280":1}}],["121807",{"2":{"280":1}}],["121802",{"2":{"280":1}}],["12180024",{"2":{"147":1}}],["1218",{"2":{"265":1,"280":14}}],["1
216456f",{"2":{"294":1}}],["1216",{"2":{"263":1,"280":5}}],["1214",{"2":{"280":5}}],["121482",{"2":{"261":1}}],["12146",{"2":{"261":1}}],["1211828f",{"2":{"294":1}}],["12117",{"2":{"261":1}}],["12116281",{"2":{"147":1}}],["1217473064180395e",{"2":{"296":1}}],["121799",{"2":{"280":1}}],["121795",{"2":{"280":1}}],["121781",{"2":{"280":1}}],["1217",{"2":{"280":3}}],["12177",{"2":{"147":1}}],["121751495",{"2":{"147":1}}],["121998",{"2":{"147":1}}],["1256",{"2":{"291":1}}],["12565",{"2":{"261":1}}],["1259",{"2":{"280":12}}],["1252",{"2":{"280":2}}],["1251",{"2":{"280":6}}],["125055323315705e",{"2":{"296":1}}],["1250395f",{"2":{"294":1}}],["12508307",{"2":{"273":1}}],["12500",{"2":{"261":1}}],["12572",{"2":{"261":1}}],["125",{"2":{"230":1,"291":2}}],["1255",{"2":{"230":1,"263":1}}],["12559004",{"2":{"147":1}}],["12589",{"2":{"197":1}}],["125342",{"2":{"123":1}}],["126605172776816e",{"2":{"296":1}}],["126621f",{"2":{"294":1}}],["126616",{"2":{"261":1}}],["1260",{"2":{"280":2}}],["1264",{"2":{"280":5}}],["12648",{"2":{"147":1}}],["1267",{"2":{"280":6}}],["1262",{"2":{"280":6}}],["1269",{"2":{"265":1,"280":12}}],["1261",{"2":{"280":2}}],["12617",{"2":{"261":1}}],["1261990389976131",{"2":{"192":1}}],["1263",{"2":{"280":3}}],["12630",{"2":{"261":1}}],["126324",{"2":{"123":1}}],["1268",{"2":{"230":1}}],["126",{"2":{"200":1,"220":1,"269":1}}],["12650935",{"2":{"147":1}}],["1249",{"2":{"280":3}}],["12497909",{"2":{"147":1}}],["1241",{"2":{"280":2}}],["124123",{"2":{"147":1}}],["1244",{"2":{"280":6}}],["1247",{"2":{"280":2}}],["12477568",{"2":{"197":1}}],["1246",{"2":{"280":2}}],["124",{"2":{"280":6}}],["1248",{"2":{"241":1}}],["124594",{"2":{"280":1}}],["124589",{"2":{"280":1}}],["124586",{"2":{"280":1}}],["124583",{"2":{"280":1}}],["124581",{"2":{"280":1}}],["124578",{"2":{"280":1}}],["124575",{"2":{"280":1}}],["124572",{"2":{"280":1}}],["124569",{"2":{"280":1}}],["124566",{"2":{"280":1}}],["124551",{"2":{"280":1}}],["1245",{"2":{"241":1,"280":3}}],["1245009",{"2":{"165":1}}],["
1243",{"2":{"291":1}}],["12437135",{"2":{"147":1}}],["1243891",{"2":{"147":1}}],["12287",{"2":{"291":1}}],["1224",{"2":{"280":4}}],["122411",{"2":{"261":1}}],["12241076",{"2":{"147":1}}],["1222",{"2":{"280":2}}],["12229406",{"2":{"147":1}}],["1223",{"2":{"280":3}}],["1229",{"2":{"269":1,"280":1}}],["1221",{"2":{"263":1}}],["122150525",{"2":{"147":1}}],["12250",{"2":{"261":1}}],["122567",{"2":{"147":1}}],["122563",{"2":{"147":1}}],["122",{"2":{"230":1}}],["122653805",{"2":{"147":1}}],["12207",{"2":{"230":1}}],["12208768",{"2":{"147":1}}],["12209795",{"2":{"123":1}}],["1270938f",{"2":{"294":1}}],["1270",{"2":{"280":3}}],["1277",{"2":{"280":3}}],["1277556f0",{"2":{"50":1}}],["12767854634326e",{"2":{"296":1}}],["1276325f",{"2":{"294":1}}],["1276",{"2":{"280":4}}],["12766095",{"2":{"147":1}}],["127",{"2":{"280":1}}],["1278",{"2":{"280":3}}],["1275",{"2":{"280":5}}],["127103\\ttrain",{"2":{"280":1}}],["12715751",{"2":{"147":1}}],["1272",{"2":{"280":6}}],["1272793",{"2":{"147":1}}],["127462602324607e",{"2":{"296":1}}],["127464",{"2":{"280":1}}],["127459",{"2":{"280":1}}],["127456",{"2":{"280":1}}],["127454",{"2":{"280":1}}],["127451",{"2":{"280":1}}],["127448",{"2":{"280":1}}],["127445",{"2":{"280":1}}],["127442",{"2":{"280":1}}],["127439",{"2":{"280":1}}],["127436",{"2":{"280":1}}],["127422",{"2":{"280":1}}],["1279",{"2":{"280":20}}],["1279297",{"2":{"261":1}}],["127934",{"2":{"147":1}}],["12738f",{"2":{"96":1}}],["1202",{"2":{"280":21}}],["120217",{"2":{"147":1}}],["1208",{"2":{"280":8}}],["120820366",{"2":{"147":1}}],["1201",{"2":{"263":1}}],["120147206",{"2":{"147":1}}],["1205",{"2":{"263":2,"291":1}}],["12055",{"2":{"261":1}}],["1200",{"2":{"269":1,"280":6,"288":1}}],["12001",{"2":{"254":1}}],["12008221",{"2":{"96":1}}],["12032411168801079817",{"2":{"200":1}}],["1204217935313214",{"2":{"171":2}}],["120612435",{"2":{"147":1}}],["12079",{"2":{"261":1}}],["12077212",{"2":{"147":1}}],["12076889",{"2":{"147":1}}],["12078848",{"2":{"147":1}}],["1209",{"2":{"280":3}}],["1209
2318",{"2":{"147":1}}],["120917246",{"2":{"147":1}}],["120",{"2":{"80":1,"280":17}}],["1297",{"2":{"280":6,"291":1}}],["1294",{"2":{"269":1}}],["12953",{"2":{"261":1}}],["1295",{"2":{"147":1,"230":1}}],["12900914",{"2":{"147":1}}],["12936348",{"2":{"147":1}}],["1291",{"2":{"280":6}}],["12912875",{"2":{"124":1}}],["12918535",{"2":{"123":1}}],["12923913",{"2":{"123":1}}],["129",{"2":{"46":1,"56":1,"280":16,"291":1}}],["1286",{"2":{"280":3}}],["1289033",{"2":{"273":1}}],["128141423478875e",{"2":{"296":1}}],["1281",{"2":{"263":2}}],["12811285",{"2":{"147":1}}],["12835f",{"2":{"294":1}}],["1283",{"2":{"263":1,"280":9}}],["1285",{"2":{"263":1}}],["12875",{"2":{"261":1}}],["1282",{"2":{"280":5}}],["12820",{"2":{"205":1}}],["12825768",{"2":{"147":1}}],["128x4xi1>",{"2":{"147":2}}],["128x4xf32>",{"2":{"147":8}}],["128x84xf32>",{"2":{"147":2}}],["128xf32>",{"2":{"147":2}}],["128",{"2":{"37":2,"46":3,"56":12,"96":4,"147":5,"198":1,"211":6,"213":1,"222":1,"235":1,"263":1,"267":5,"270":4,"280":23,"298":1}}],["12",{"2":{"25":2,"79":2,"80":1,"81":8,"84":1,"147":3,"153":1,"165":6,"166":7,"170":9,"189":2,"198":1,"205":2,"207":1,"214":1,"215":1,"222":1,"239":6,"246":5,"247":6,"254":15,"256":1,"261":2,"262":1,"265":4,"267":1,"275":1,"281":1,"289":1,"292":1,"298":1}}],["12307",{"2":{"291":1}}],["1238",{"2":{"280":8}}],["12382",{"2":{"261":1}}],["1233",{"2":{"280":2}}],["12345",{"2":{"270":1}}],["1234",{"2":{"264":1,"280":5}}],["12348884",{"2":{"147":1}}],["12371",{"2":{"261":1}}],["1239",{"2":{"230":1,"280":2}}],["1239749674027375",{"2":{"171":2}}],["12397134",{"2":{"147":1}}],["123123",{"2":{"147":1}}],["123659",{"2":{"147":1}}],["123",{"2":{"15":3,"39":1,"171":1,"280":3,"291":1}}],["1=dense",{"2":{"23":1}}],["1d",{"2":{"15":1,"43":1,"80":5,"81":1,"86":1,"89":4}}],["1",{"2":{"2":1,"4":1,"5":2,"7":1,"11":4,"15":48,"23":6,"37":5,"39":27,"40":9,"41":1,"42":24,"44":7,"45":12,"46":13,"47":3,"49":1,"50":110,"51":1,"56":24,"63":8,"65":10,"67":44,"69":6,"70":1,"76":1,"77":9,"78":2,"79":52,"80
":34,"81":83,"82":33,"83":17,"84":22,"85":41,"86":1,"87":3,"88":26,"89":17,"96":55,"97":14,"105":1,"114":1,"119":2,"123":45,"124":6,"126":35,"127":47,"132":6,"134":1,"135":3,"143":4,"144":2,"145":2,"146":9,"147":191,"153":3,"154":11,"155":1,"158":2,"163":13,"165":23,"166":20,"168":3,"169":3,"170":3,"171":88,"186":1,"189":45,"191":4,"192":5,"194":4,"195":6,"196":1,"197":6,"198":1,"200":13,"201":11,"203":1,"204":1,"205":53,"207":1,"210":5,"211":8,"212":1,"213":2,"214":2,"215":1,"217":1,"218":12,"220":18,"221":7,"222":1,"230":28,"231":5,"232":3,"234":1,"235":6,"236":1,"237":1,"238":118,"239":5,"241":35,"242":4,"243":1,"244":1,"245":1,"246":7,"247":5,"249":1,"251":1,"252":9,"253":3,"254":6,"255":5,"256":1,"257":1,"258":4,"259":1,"260":19,"261":7,"262":1,"263":71,"264":6,"265":68,"266":9,"267":1,"269":32,"270":90,"271":5,"272":1,"273":13,"274":20,"275":1,"276":1,"277":3,"278":4,"279":1,"280":46,"281":1,"282":1,"283":10,"284":7,"285":8,"286":1,"287":2,"288":4,"289":1,"291":111,"292":37,"293":7,"294":101,"295":2,"296":98,"297":5,"298":1}}],["e^2",{"2":{"293":1}}],["err",{"2":{"255":3}}],["errs",{"2":{"255":3}}],["errored",{"2":{"125":1}}],["errors",{"2":{"24":3,"67":1,"128":1,"220":1}}],["error",{"2":{"2":3,"3":1,"22":1,"24":6,"46":1,"49":1,"50":3,"52":1,"54":2,"57":1,"67":2,"71":2,"77":1,"79":1,"107":2,"114":1,"126":3,"163":1,"197":1,"255":1,"263":1,"265":2,"287":1,"291":1}}],["e0120",{"2":{"254":70,"280":1026,"287":16}}],["ess",{"2":{"265":3}}],["essence",{"2":{"89":1}}],["essentially",{"2":{"39":1,"56":1,"154":1,"184":1,"274":1}}],["especially",{"2":{"121":1,"125":1}}],["estimated",{"2":{"167":1}}],["estimate",{"2":{"167":1,"170":1}}],["estimation",{"0":{"167":1,"282":1},"1":{"168":1,"169":1,"170":1,"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1},"2":{"89":1,"167":1,"220":4}}],["established",{"2":{"91":1,"192":1}}],["edata=gph",{"2":{"277":1}}],["editors",{"2":{"67":2}}],["edges",{"2":{"162":1}}],["edge",{"2":{"21":1,"89":1,"93":1,"121":1,"122":1,"128":1,"171":
1,"277":2}}],["evolved",{"2":{"266":1}}],["evalpoly",{"2":{"270":2,"274":1}}],["eval",{"2":{"171":4,"242":7}}],["evaluate",{"2":{"71":1}}],["evaluation",{"2":{"67":1}}],["eventually",{"2":{"171":1}}],["even",{"2":{"3":1,"7":1,"8":1,"52":1,"82":3,"89":2,"92":1,"93":1,"101":1,"125":1,"127":1,"174":1,"189":1,"193":1,"214":1}}],["everystep",{"2":{"238":13}}],["everystep=false",{"2":{"233":1}}],["everything",{"2":{"89":1,"93":2,"158":1}}],["every",{"2":{"41":1,"69":1,"79":5,"83":1,"88":2,"130":1,"131":1,"178":1,"189":1}}],["ever",{"2":{"3":1,"93":1}}],["eccv",{"2":{"65":1}}],["ecosystem",{"2":{"35":1,"97":1,"118":1}}],["european",{"2":{"65":1}}],["epyc",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["epochs=50",{"2":{"261":1}}],["epochs",{"2":{"220":2,"261":2,"274":3,"280":2}}],["epoch",{"2":{"56":1,"205":53,"213":2,"235":2,"246":2,"261":106,"274":5,"280":33}}],["eps",{"2":{"50":1,"65":8,"67":3}}],["epsilon=eps",{"2":{"65":1}}],["epsilon=0",{"2":{"50":2}}],["epsilon=nothing",{"2":{"50":1}}],["epsilon=1f",{"2":{"46":4}}],["epsilon",{"2":{"46":4,"50":9,"65":8}}],["emp",{"2":{"163":2,"164":2}}],["empirical",{"2":{"67":1}}],["empting",{"2":{"40":1}}],["empty",{"2":{"7":1,"8":2,"40":2,"42":9,"44":4,"45":5,"46":5,"47":1,"49":2,"50":1,"153":1,"197":1,"261":5,"294":1}}],["embed=chain",{"2":{"258":1}}],["embed",{"2":{"56":8,"97":8,"171":1,"258":1}}],["embeddings",{"2":{"44":2}}],["embedding",{"2":{"44":2,"116":1,"171":1,"244":1}}],["either",{"2":{"40":4,"42":6,"50":1,"76":2,"78":3,"79":2,"81":1,"83":2,"84":1,"88":2,"89":3,"292":1}}],["early",{"2":{"280":2}}],["earcut",{"2":{"263":1,"291":1}}],["eagerly",{"2":{"178":1}}],["easily",{"2":{"92":1,"163":1,"175":1}}],["easier",{"2":{"45":1,"49":1,"266":1}}],["easy",{"2":{"39":1,"72":1,"93":1,"125":1,"173":1}}],["eachindex",{"2":{"265":1}}],["eachslice",{"2":{"51":2,"202":1,"203":1,"259":1,"260":1}}],["each",{"2":{"5":1,"15":1,"25":4,"39":15,"40":8,"42":12,"43":4,"44":2,
"45":4,"46":2,"65":2,"77":1,"79":5,"83":3,"84":3,"85":1,"86":2,"88":2,"89":7,"164":3,"189":2,"219":1,"260":1,"265":1,"293":1}}],["effort",{"2":{"123":1}}],["efficient",{"2":{"50":1,"52":1,"78":1,"80":1,"90":1,"117":1,"156":1,"163":1,"164":1,"193":2,"250":1}}],["efficiently",{"2":{"18":2,"50":1}}],["effectively",{"2":{"220":2}}],["effect",{"2":{"1":1,"47":1,"57":1}}],["eta=lr",{"2":{"280":1}}],["eta=learning",{"2":{"261":1}}],["et",{"2":{"15":2,"50":2,"63":2,"81":1,"86":1,"290":1}}],["etc",{"2":{"3":2,"7":1,"66":1,"83":2,"89":6,"93":1,"153":1,"156":1,"157":1,"158":1,"160":1,"163":2,"192":1,"216":1,"261":1,"280":1}}],["equilibrium",{"2":{"93":1}}],["equivalent",{"2":{"8":1,"37":1,"45":1,"50":2,"52":4,"56":1,"77":2,"79":1,"81":1,"83":7,"86":1,"87":1,"89":5,"108":1,"189":1}}],["equations",{"2":{"90":1,"295":1}}],["equation",{"2":{"89":2}}],["equally",{"2":{"51":1,"65":1}}],["equals",{"2":{"45":1,"89":1}}],["equal",{"2":{"15":1,"25":1,"40":2,"42":3,"50":1,"79":1,"81":3}}],["ellipticalslicesampling",{"2":{"263":1}}],["elem",{"2":{"253":2,"255":2}}],["elemet",{"2":{"89":2}}],["elementwise",{"2":{"45":2,"46":5,"61":1,"65":1}}],["element",{"0":{"54":1},"2":{"15":1,"25":1,"39":1,"43":6,"46":1,"50":1,"52":1,"54":10,"56":2,"63":2,"77":1,"78":1,"80":1,"84":5,"88":2,"89":1,"171":4,"189":4,"194":1,"195":1,"197":1,"202":1,"206":1}}],["elements",{"2":{"8":1,"15":9,"23":1,"40":2,"42":3,"44":1,"76":1,"78":1,"84":2,"89":2,"91":1,"189":6,"202":1}}],["eliminate",{"2":{"89":1}}],["elu",{"2":{"67":6}}],["eltypes",{"2":{"173":1,"183":1}}],["eltype",{"0":{"183":1},"2":{"52":6,"53":4,"54":3,"65":9,"67":1,"79":6,"89":3,"132":1,"173":5,"183":2,"191":1,"258":1}}],["elman",{"2":{"43":1}}],["elseif",{"2":{"150":1,"155":1,"292":1}}],["elsewhere",{"2":{"83":2,"89":2}}],["else",{"2":{"8":1,"19":1,"35":1,"39":1,"41":3,"43":1,"45":1,"49":1,"51":1,"52":1,"63":1,"71":1,"83":1,"87":2,"88":1,"89":1,"150":1,"205":1,"210":1,"213":1,"231":1,"242":2,"260":1,"280":1,"292":3}}],["eg",{"2":{"8":1,"40":2,"42":3}}]
,["enc",{"2":{"258":4}}],["encode",{"2":{"258":1}}],["encoder=st",{"2":{"258":2}}],["encoder",{"2":{"258":15}}],["encountered",{"2":{"127":1,"287":1}}],["encounter",{"2":{"122":1,"123":1,"162":1}}],["encouraged",{"2":{"52":5}}],["enumx",{"2":{"230":1,"263":1}}],["enumerate",{"2":{"5":2,"124":1,"261":1,"285":3,"287":1,"288":1}}],["engine",{"2":{"191":1}}],["enforce",{"2":{"191":1}}],["enhance",{"2":{"121":1}}],["enhancing",{"2":{"77":1}}],["enough",{"2":{"89":1,"157":1,"178":1,"220":1}}],["energy",{"2":{"89":1,"265":2}}],["environment",{"2":{"124":1,"198":1,"207":1,"215":1,"222":1,"239":2,"247":2,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["env",{"2":{"83":1,"210":1,"231":1,"242":2,"259":1}}],["enable",{"2":{"70":2,"175":1}}],["enables",{"2":{"64":1,"93":1,"263":1}}],["enabled",{"2":{"7":1,"24":2,"114":1,"263":1}}],["ensuring",{"2":{"63":1}}],["ensures",{"2":{"137":1}}],["ensure",{"2":{"2":1,"8":1,"15":1,"93":1,"122":1,"137":1,"167":1,"202":1,"273":1}}],["enter",{"2":{"95":1}}],["entered",{"2":{"67":2}}],["entry",{"2":{"142":1}}],["entries",{"2":{"89":2,"164":1}}],["entropy",{"2":{"50":3,"77":1}}],["entire",{"2":{"41":2,"43":5,"46":1,"131":1,"220":1}}],["entirely",{"2":{"6":1,"40":2,"44":3,"236":1,"238":1}}],["enzymechainrulescoreext",{"2":{"200":1,"269":1,"291":1}}],["enzymecoreext",{"2":{"241":1}}],["enzymecore",{"2":{"200":2,"263":2}}],["enzymestaticarraysext",{"2":{"200":1,"269":1,"291":1}}],["enzymespecialfunctionsext",{"2":{"200":1,"269":1,"291":1}}],["enzymelogexpfunctionsext",{"2":{"200":1,"269":1,"291":1}}],["enzymegpuarrayscoreext",{"2":{"200":1,"269":1,"291":1}}],["enzymeext",{"2":{"200":1,"263":1,"269":1,"291":1}}],["enzymejax",{"0":{"147":1},"2":{"147":2}}],["enzymemlir",{"2":{"122":1,"123":1}}],["enzyme",{"2":{"49":1,"70":5,"87":1,"93":1,"97":1,"105":2,"119":2,"120":6,"122":3,"123":10,"124":1,"147":1,"162":1,"171":1,"191":1,"193":1,"200":6,"249":1,"252":6,"257":1,"269":7,"274":1,"276":1,"282":1,"291":7}}],["endpoints",{"2":{"292":2}}],["
end",{"2":{"5":2,"11":1,"15":2,"23":2,"39":4,"40":2,"42":3,"56":14,"84":2,"87":2,"89":5,"96":1,"97":7,"123":2,"124":4,"126":2,"127":4,"132":3,"133":1,"134":3,"143":1,"144":2,"145":2,"147":1,"150":1,"153":4,"154":2,"155":3,"158":3,"163":1,"164":1,"165":1,"166":1,"168":1,"169":1,"170":1,"171":1,"178":2,"190":1,"192":2,"197":3,"198":3,"199":1,"201":3,"202":4,"203":3,"204":1,"205":5,"207":3,"210":2,"212":2,"213":4,"215":3,"218":2,"219":1,"220":4,"221":1,"222":3,"231":2,"232":7,"233":1,"234":2,"235":3,"236":3,"239":3,"242":4,"243":3,"244":1,"245":2,"246":4,"247":3,"251":3,"252":3,"253":1,"254":3,"255":2,"256":3,"258":9,"259":3,"260":8,"261":6,"262":3,"264":1,"265":4,"266":3,"267":3,"270":2,"274":4,"275":3,"277":1,"278":8,"279":1,"280":4,"281":3,"283":3,"284":10,"285":8,"286":1,"287":3,"288":3,"289":3,"292":26,"293":2,"294":2,"295":2,"297":2,"298":3}}],["executor",{"2":{"280":11}}],["execution",{"2":{"192":1}}],["executables",{"2":{"123":1}}],["exhaustive",{"2":{"179":1}}],["existent",{"2":{"140":1}}],["exists",{"2":{"133":1,"139":1,"232":1}}],["exist",{"2":{"56":2,"80":1,"83":2,"89":3}}],["exactpredicates",{"2":{"263":1,"291":1}}],["exactly",{"2":{"47":1,"51":1,"62":1,"88":1,"108":1,"153":1,"154":1,"160":1}}],["exact",{"2":{"15":1,"86":1,"89":1,"154":1,"205":1}}],["examplex",{"2":{"149":1}}],["examples",{"0":{"187":1},"2":{"5":1,"15":2,"50":2,"56":4,"76":1,"77":1,"81":3,"82":1,"84":4,"85":1,"87":1,"88":1,"101":1,"187":1,"227":1}}],["example",{"0":{"131":1},"1":{"132":1,"133":1,"134":1,"135":1},"2":{"2":1,"3":2,"4":1,"8":1,"15":1,"19":1,"22":3,"23":2,"25":3,"35":1,"37":1,"39":3,"45":5,"46":3,"49":1,"50":14,"55":1,"65":1,"69":1,"70":1,"80":4,"81":1,"85":2,"86":1,"101":1,"108":1,"111":1,"125":1,"136":2,"143":1,"144":1,"147":1,"149":3,"155":1,"156":1,"157":1,"158":1,"164":4,"165":1,"166":1,"173":1,"184":1,"194":1,"195":1,"276":1,"294":1}}],["ext",{"2":{"133":2}}],["external",{"2":{"97":1,"200":1,"205":1,"214":1,"254":8,"261":1,"265":1,"269":1,"280":114,"287":3}}],["extents",
{"2":{"263":1,"291":1}}],["extensively",{"2":{"93":1,"121":2,"122":1,"193":1}}],["extensive",{"2":{"92":1,"93":1}}],["extensible",{"2":{"92":1}}],["extensions",{"0":{"71":1}}],["extending",{"2":{"191":1,"202":1}}],["extended",{"2":{"22":1,"23":1,"24":1,"39":6,"40":2,"42":3,"43":2,"45":1,"46":4,"47":1,"50":2,"54":1,"56":1}}],["extend",{"2":{"7":1,"22":1,"93":1}}],["extrema",{"2":{"67":1,"85":1,"253":2}}],["extremely",{"2":{"24":1,"49":1,"148":1,"158":1}}],["extract",{"2":{"265":2,"266":1}}],["extracted",{"2":{"89":1}}],["extra",{"2":{"15":1,"89":3}}],["exciting",{"2":{"102":1}}],["excellent",{"2":{"208":1}}],["exceed",{"2":{"79":1}}],["except",{"2":{"7":1,"43":2,"45":1,"87":1,"174":1}}],["exceptionunwrapping",{"2":{"230":1,"241":1}}],["exceptions",{"2":{"15":1}}],["exception",{"2":{"3":2,"71":1}}],["exclusively",{"2":{"23":1,"68":1}}],["excluding",{"2":{"22":1,"37":1,"45":1,"46":1}}],["excludes",{"2":{"79":4}}],["exclude",{"2":{"10":2,"104":1}}],["exclude=internal",{"2":{"10":1}}],["expat",{"2":{"263":1,"291":1}}],["exp⁡",{"2":{"89":1}}],["exploiting",{"2":{"178":1}}],["explore",{"2":{"162":2}}],["explicit",{"2":{"90":1,"93":1,"186":2,"263":1}}],["explicitly",{"2":{"7":1,"56":2,"67":3,"89":2,"200":1,"269":1}}],["explanation",{"2":{"89":5}}],["exposing",{"2":{"238":1}}],["exponent",{"2":{"89":1}}],["exponential",{"2":{"67":4}}],["exporting",{"0":{"147":1}}],["export",{"2":{"75":1,"147":1}}],["exported",{"2":{"3":1,"6":1,"21":1,"67":2,"147":6,"190":1}}],["exp",{"2":{"67":6,"77":3,"87":1,"89":1,"253":1,"258":1,"260":1,"284":2}}],["express",{"2":{"265":1}}],["expressing",{"2":{"89":1}}],["expression",{"2":{"56":1,"69":1,"84":2}}],["expronicon",{"2":{"263":1}}],["exprtools",{"2":{"263":1}}],["expr",{"2":{"56":1,"71":4}}],["expected",{"2":{"39":1,"40":2,"44":1,"50":4,"93":1,"127":1,"140":1,"192":1,"254":70,"280":1026,"287":16}}],["expect",{"2":{"21":1,"93":1,"123":1,"140":1,"171":1,"263":1}}],["expects",{"2":{"19":1,"37":1,"40":1,"42":3,"47":1,"69":1,"77":1,"78":3,"81":1,"
155":1}}],["experimental",{"0":{"21":1},"1":{"22":1,"23":1,"24":1,"25":1},"2":{"2":1,"4":2,"7":1,"21":2,"22":10,"23":2,"24":6,"25":2,"114":5,"115":5,"123":2,"125":1,"126":2,"127":2,"128":1,"141":1,"142":1,"143":3,"144":2,"145":2,"146":1,"147":2,"148":1,"171":2,"173":1}}],["e",{"2":{"2":1,"11":1,"15":1,"22":1,"35":1,"37":1,"40":2,"41":2,"42":9,"46":3,"50":3,"54":1,"55":1,"65":1,"67":3,"79":6,"84":2,"85":1,"86":2,"87":2,"88":1,"89":3,"107":2,"123":1,"126":4,"127":4,"133":2,"137":1,"147":1,"156":1,"167":1,"254":7,"280":102,"287":2,"292":4,"293":6,"294":3}}],["gw",{"2":{"293":1}}],["gph",{"2":{"277":5}}],["gpucompiler",{"2":{"269":1,"291":1}}],["gpuci",{"2":{"126":1,"133":3,"163":2,"205":2,"238":3,"261":1,"280":2}}],["gpubroadcastop",{"2":{"66":1}}],["gpuarray",{"2":{"51":1}}],["gpuarrayscore",{"2":{"174":2,"263":1}}],["gpuarrays",{"2":{"13":7,"51":1,"241":1,"291":1}}],["gpusintel",{"2":{"73":2}}],["gpusmetal",{"2":{"73":2}}],["gpusamd",{"2":{"73":2}}],["gpus",{"2":{"28":1,"59":1,"73":2,"120":2,"148":1,"173":1,"174":1,"178":1}}],["gpu",{"0":{"4":1,"73":1,"99":1,"100":1,"148":1,"171":1,"174":1,"181":1,"182":1},"1":{"149":1,"150":1},"2":{"1":8,"2":7,"3":10,"5":2,"42":3,"62":1,"66":1,"70":1,"73":4,"74":1,"96":2,"99":1,"100":1,"115":2,"119":1,"120":4,"123":3,"132":1,"140":4,"148":5,"149":11,"150":5,"158":1,"171":15,"178":4,"182":3,"186":1,"190":3,"217":1,"219":3,"235":3,"238":1,"246":1,"254":7,"280":102,"287":2}}],["gnngraph",{"2":{"277":3}}],["gnngraphs",{"2":{"276":1}}],["gnnlux",{"2":{"276":1}}],["gnu",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["gzip",{"2":{"230":1}}],["gcnlayer",{"2":{"278":3}}],["gcn",{"2":{"276":1,"278":5,"280":10}}],["gc",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["guess",{"2":{"296":1}}],["guide",{"0":{"137":1,"138":1},"1":{"139":1,"140":1},"2":{"138":1,"153":1}}],["guarantee",{"2":{"115":1,"184":1}}],["guaran
teed",{"2":{"62":1,"64":1,"163":2}}],["guarantees",{"2":{"21":1}}],["gdev",{"2":{"112":1,"149":1,"150":4,"171":2,"178":6,"217":1,"219":3,"220":3,"221":1}}],["glib",{"2":{"263":1,"291":1}}],["glu",{"2":{"87":1}}],["glob",{"2":{"230":1}}],["globals",{"2":{"294":1}}],["globally",{"2":{"263":1}}],["globallppool",{"2":{"42":1,"117":1}}],["globalmeanpool",{"2":{"42":1}}],["globalmaxpool",{"2":{"42":1}}],["global",{"2":{"42":3,"69":1,"171":1,"173":1,"263":1}}],["glorot",{"2":{"15":4,"153":2}}],["günter",{"2":{"63":1}}],["gs",{"2":{"56":2,"96":3}}],["giflib",{"2":{"263":1,"291":1}}],["gif",{"2":{"255":1,"266":2}}],["gib",{"2":{"239":2,"247":2}}],["gigantic",{"2":{"125":1}}],["giving",{"2":{"89":1}}],["give",{"2":{"87":1,"201":1,"296":2}}],["gives",{"2":{"51":1,"56":1,"153":1}}],["given",{"2":{"2":1,"4":2,"15":6,"16":24,"28":6,"29":2,"30":5,"35":1,"39":2,"44":3,"46":5,"50":3,"65":1,"66":1,"76":1,"79":5,"85":1,"87":2,"89":5,"92":1,"93":1,"200":1,"220":1,"269":1}}],["github",{"2":{"43":1,"72":2,"89":1,"101":2,"128":1,"162":1,"171":1}}],["g=tanh",{"2":{"43":1}}],["grisu",{"2":{"291":1}}],["gridlayoutbase",{"2":{"263":1,"291":1}}],["grid",{"2":{"85":27,"253":5,"255":8,"260":29}}],["green",{"2":{"274":1}}],["great",{"2":{"216":1}}],["ground",{"2":{"197":3}}],["group",{"2":{"46":1,"65":3,"147":4,"265":1}}],["groupnorm",{"2":{"46":7,"65":1,"160":1}}],["groups",{"2":{"40":8,"46":4,"65":4,"80":1}}],["groups=1",{"2":{"40":2}}],["graphs",{"2":{"277":1}}],["graph",{"0":{"276":1},"1":{"277":1,"278":1,"279":1,"280":1,"281":1},"2":{"264":1,"266":1}}],["graphics",{"2":{"263":1}}],["graphite2",{"2":{"263":1,"291":1}}],["gray",{"2":{"260":1}}],["gravitational",{"0":{"290":1},"1":{"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1,"298":1}}],["gravitate",{"2":{"52":1}}],["graves",{"2":{"86":2}}],["grad",{"2":{"87":2}}],["grads",{"2":{"49":11,"197":1}}],["gradient|jacobian",{"2":{"162":2}}],["gradient",{"0":{"70":1,"165":1},"2":{"18":2,"49":1,"68":1,"70":1,"81":4,"82":2,"85":1,"87":10,"89"
:7,"123":8,"127":4,"157":2,"158":2,"163":6,"164":3,"165":6,"166":6,"170":3,"194":9,"197":1,"216":1,"252":3,"261":1,"280":1}}],["gradients=val",{"2":{"261":1,"280":1,"287":1}}],["gradients=true",{"2":{"49":2}}],["gradients",{"0":{"194":1},"2":{"18":2,"31":1,"39":1,"49":34,"50":2,"51":1,"70":10,"85":1,"96":6,"139":1,"162":1,"163":1,"170":1,"194":1,"250":1}}],["grucell",{"2":{"43":6,"116":1}}],["gru",{"2":{"7":1,"43":1}}],["gaussadjoint",{"2":{"235":1}}],["gaussian",{"2":{"67":1,"265":1}}],["gathers",{"2":{"84":2}}],["gather",{"0":{"84":1},"2":{"84":12,"277":1}}],["gated",{"2":{"43":1,"67":3,"87":2}}],["gamma=0",{"2":{"50":1}}],["gamma",{"2":{"50":2}}],["ganguli",{"2":{"15":1}}],["gain=1",{"2":{"15":1,"186":2}}],["gain",{"2":{"15":15,"40":4,"44":2,"89":2,"116":2}}],["gt",{"2":{"5":1,"11":1,"15":1,"40":2,"42":3,"45":2,"46":2,"47":3,"50":1,"51":1,"56":2,"87":1,"98":1,"107":2,"112":1,"149":2,"154":2,"162":12,"202":1}}],["geoformattypes",{"2":{"291":1}}],["geoffrey",{"2":{"65":1}}],["geometrybasics",{"2":{"263":1,"291":1}}],["geointerface",{"2":{"263":1,"291":1}}],["gen",{"2":{"261":4}}],["genericbroadcastop",{"2":{"66":1}}],["generic",{"2":{"59":1,"62":2,"64":2,"83":2,"97":1,"114":1,"123":1,"127":1,"147":1,"168":1,"169":1,"170":1,"171":1,"201":1,"203":1,"204":1,"210":1,"212":1,"213":1,"217":1,"231":1,"232":2,"233":1,"234":1,"242":1,"243":1,"244":1,"245":1,"249":1,"252":2,"257":1,"258":2,"259":1,"260":1,"265":2,"266":1,"270":1,"276":1,"277":1,"278":1,"279":1,"282":1,"283":2,"284":1,"285":1,"286":1,"292":5,"294":1,"295":2}}],["genericlossfunction",{"2":{"50":2,"274":1}}],["generating",{"0":{"264":1},"2":{"47":1}}],["generation",{"2":{"43":3}}],["generator=lux",{"2":{"243":1}}],["generator",{"2":{"15":3,"41":3,"63":4,"88":1,"89":1,"153":1,"188":1,"192":2,"243":4,"244":2,"270":1}}],["generates",{"2":{"97":1,"264":1}}],["generated",{"2":{"15":1,"41":1,"56":1,"63":1,"162":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"266":1,"267":1,"268":1,"275":1,"281":1,"
289":1,"298":1}}],["generate",{"0":{"218":1,"253":1},"2":{"3":1,"9":1,"10":1,"43":1,"56":1,"80":1,"147":3,"156":1,"186":2,"197":2,"201":1,"243":1,"253":1,"258":1,"260":2,"261":1,"264":2,"270":4,"283":1}}],["generalization",{"2":{"112":1}}],["generalized",{"2":{"5":1,"89":1}}],["generalises",{"2":{"89":1}}],["general",{"0":{"8":1},"2":{"39":1,"72":1,"80":2,"89":1,"98":1,"147":3,"172":1,"294":1}}],["generally",{"2":{"5":1}}],["geared",{"2":{"216":1}}],["gemm",{"2":{"83":3,"89":9,"254":7,"280":113,"287":2}}],["gelu",{"2":{"67":5,"123":12,"124":2,"173":1}}],["gettext",{"2":{"263":1,"291":1}}],["getting",{"0":{"94":1},"1":{"95":1,"96":1,"97":1,"98":1,"99":1,"100":1},"2":{"8":1,"80":1}}],["getaxes",{"2":{"243":1}}],["getindex",{"2":{"219":1,"259":1}}],["getproperty",{"2":{"51":3,"155":2,"238":3}}],["getfield",{"2":{"7":1,"132":3,"155":3}}],["get",{"0":{"101":1},"2":{"3":5,"27":2,"28":2,"29":2,"51":1,"59":1,"60":1,"66":1,"89":1,"96":1,"137":2,"155":1,"157":1,"167":2,"171":2,"183":1,"187":1,"189":2,"192":2,"201":3,"202":1,"205":2,"210":1,"228":1,"231":1,"242":2,"259":1,"260":2,"265":2,"292":1}}],["g",{"2":{"2":1,"39":4,"41":2,"46":4,"67":3,"84":2,"86":1,"87":2,"89":2,"123":1,"292":1}}],["goal",{"2":{"264":1}}],["goes",{"2":{"87":1}}],["goodies",{"2":{"195":1}}],["good",{"2":{"87":5,"151":1,"163":1,"220":1,"265":1,"296":2}}],["goodfellow",{"2":{"45":1}}],["going",{"2":{"35":1,"165":1}}],["go",{"2":{"2":1,"67":1,"97":1,"101":2,"127":1,"131":1,"142":1,"147":1,"154":1,"155":1,"172":1,"199":1,"248":1}}],["nsteps",{"2":{"285":3}}],["nsamples",{"2":{"285":2}}],["nb",{"2":{"278":2,"280":2}}],["nparameters",{"2":{"265":3}}],["npz",{"2":{"230":1,"241":1}}],["nvml",{"2":{"239":1,"247":1}}],["nvidia",{"2":{"3":1,"5":1,"73":2,"120":2,"207":2,"215":2,"239":4,"247":4,"256":2,"262":2,"275":2,"281":2,"289":2}}],["nlsolversbase",{"2":{"230":1,"263":1,"291":1}}],["nlayers",{"2":{"56":2,"97":2}}],["ndata=gph",{"2":{"277":1}}],["ndrange=length",{"2":{"171":2}}],["ndim",{"2":{"78":6}}],["ndims",
{"2":{"11":1,"19":4,"40":4,"42":9,"45":4,"46":2,"47":2,"79":4,"81":1,"83":4,"89":2,"132":2,"168":2,"169":2,"260":1,"261":1,"284":1,"285":2,"287":1}}],["nicely",{"2":{"163":1}}],["nilarray",{"2":{"107":1}}],["nitish",{"2":{"63":1}}],["n=4",{"2":{"81":1}}],["n=tanh⁡",{"2":{"43":1}}],["nheads",{"2":{"76":7}}],["nnlibfftwext",{"2":{"263":1,"269":2,"291":2}}],["nnlibforwarddiffext",{"2":{"200":1,"263":1,"269":1,"291":1}}],["nnlibcudacudnnext",{"2":{"241":2}}],["nnlibcudaext",{"2":{"241":2}}],["nnlibenzymecoreext",{"2":{"200":1,"263":1,"269":1,"291":1}}],["nnlib",{"0":{"75":1,"89":1},"1":{"76":1,"77":1,"78":1,"79":1,"80":1,"81":1,"82":1,"83":1,"84":1,"85":1,"86":1,"87":1,"88":1,"89":1},"2":{"40":1,"47":6,"59":2,"60":2,"67":27,"75":3,"76":3,"77":2,"78":4,"79":6,"80":15,"81":9,"82":6,"83":6,"84":12,"85":2,"86":1,"87":8,"88":3,"89":50,"114":3,"157":2,"158":2,"176":3,"200":3,"241":2,"263":4,"269":4,"274":2,"291":4}}],["nn",{"2":{"40":2,"44":1,"171":7,"265":12,"266":11,"294":14,"295":1,"297":9}}],["ntuple",{"2":{"39":1,"40":2,"46":2,"78":5,"81":9,"89":1,"132":1,"238":13,"254":1,"284":1,"285":2}}],["n",{"2":{"39":23,"40":27,"42":54,"44":2,"45":11,"46":11,"47":8,"51":2,"56":5,"65":5,"78":3,"79":4,"81":3,"85":7,"89":52,"97":5,"124":1,"197":3,"205":2,"210":4,"213":1,"220":1,"231":4,"232":5,"235":1,"236":5,"242":14,"246":2,"254":1,"259":2,"261":3,"264":2,"265":3,"266":2,"274":1,"280":4,"283":5,"284":6,"285":11,"287":8,"288":1,"295":1}}],["nccl",{"2":{"27":1,"28":4,"139":1}}],["ncclbackend",{"2":{"27":2,"28":2,"137":4}}],["naturalsort",{"2":{"263":1}}],["native",{"2":{"161":1,"216":1}}],["nabla",{"2":{"194":1}}],["naive",{"2":{"89":2}}],["navab",{"2":{"50":1}}],["nassir",{"2":{"50":1}}],["naming",{"2":{"39":9,"45":2}}],["name>=",{"2":{"179":3}}],["namefreezing",{"2":{"144":1}}],["name=",{"2":{"39":1,"56":1}}],["name=nothing",{"2":{"39":11,"56":1}}],["names",{"2":{"22":1,"46":1,"56":1}}],["namedarrays",{"2":{"263":1}}],["nameddimsarray",{"2":{"89":1}}],["named",{"2":{"22":2,"50":1,"6
5":2,"155":1,"233":2,"235":1,"265":1}}],["namedtuples",{"2":{"155":1}}],["namedtuple",{"2":{"7":4,"8":1,"10":3,"22":5,"23":1,"39":10,"40":2,"42":9,"43":2,"44":4,"45":9,"46":5,"47":1,"49":1,"51":1,"52":1,"56":3,"96":24,"97":1,"107":1,"123":17,"126":3,"127":6,"130":3,"132":3,"133":7,"134":3,"143":3,"146":4,"147":11,"153":4,"154":3,"156":1,"171":2,"202":2,"238":147,"265":1,"273":2,"274":2,"285":2,"294":6}}],["name",{"0":{"144":1},"2":{"7":2,"8":5,"22":1,"23":1,"45":1,"56":3,"67":2,"98":1,"114":1,"132":5,"144":5,"145":6,"149":1,"155":1,"246":4}}],["nanmath",{"2":{"263":1}}],["nans",{"0":{"127":1},"2":{"24":3,"127":7,"128":1}}],["nan",{"2":{"24":5,"89":5,"127":31,"254":71,"287":1}}],["nₙ",{"2":{"19":2}}],["n₂",{"2":{"19":2}}],["n₁",{"2":{"19":2}}],["numer",{"2":{"293":3,"294":3}}],["numerically",{"2":{"67":3,"77":1,"87":1,"204":1}}],["numerical",{"2":{"46":4,"50":1,"65":4,"77":1,"173":1,"265":1}}],["numeric",{"2":{"15":2}}],["numpy",{"2":{"147":1}}],["num",{"2":{"76":1,"80":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"258":32,"260":5,"261":9,"262":1,"266":2,"267":1,"275":1,"277":1,"281":1,"289":1,"298":1}}],["number=0",{"2":{"15":2}}],["number=1",{"2":{"15":2}}],["numbers",{"2":{"15":6,"16":12,"77":1,"82":1,"147":2,"186":1,"189":3}}],["number",{"2":{"9":1,"10":1,"15":8,"29":1,"39":3,"40":10,"41":3,"42":3,"44":7,"45":2,"46":3,"47":3,"49":1,"50":4,"51":3,"52":1,"56":3,"63":4,"65":1,"67":1,"76":3,"77":1,"80":2,"81":1,"88":1,"89":7,"153":1,"188":1,"189":1,"192":2,"193":1,"260":1,"264":1,"265":4,"270":1}}],["nepochs",{"2":{"197":3,"213":3,"235":3,"246":3}}],["next",{"2":{"137":1,"153":1,"265":1,"266":1,"292":1,"294":1,"295":1}}],["neighboring",{"2":{"89":2}}],["neighbour",{"2":{"82":1,"89":1}}],["nest",{"2":{"163":1}}],["nested",{"0":{"20":1,"162":1,"180":1},"1":{"163":1,"164":1,"165":1,"166":1,"167":1,"168":1,"169":1,"170":1},"2":{"3":1,"20":2,"25":2,"52":5,"55":2,"89":2,"119":1,"162":1,"165":2,"180":3,"248":1,"250":1,"255":1}}],["ness",{"2":{"89":1}}],["negati
ve",{"2":{"50":1,"292":1}}],["netpbm",{"2":{"263":1,"269":1,"291":1}}],["net",{"2":{"50":1,"252":11,"265":1,"294":1,"297":2}}],["network",{"0":{"263":1,"265":1,"271":1,"294":1,"295":1,"296":1},"1":{"264":1,"265":1,"266":1,"267":1},"2":{"15":1,"65":1,"67":1,"75":1,"86":1,"91":2,"93":1,"97":1,"131":1,"163":1,"164":2,"171":2,"188":1,"199":2,"243":5,"244":3,"264":1,"265":6,"266":4,"271":1,"294":6}}],["networks",{"0":{"171":1,"251":1,"276":1},"1":{"277":1,"278":1,"279":1,"280":1,"281":1},"2":{"15":3,"37":1,"45":1,"50":2,"63":3,"67":3,"78":1,"81":1,"87":1,"90":1,"93":3,"170":1,"171":1,"208":1,"216":2,"251":2,"265":1}}],["nearest",{"2":{"47":7,"81":9,"82":4,"89":1}}],["necessary",{"2":{"42":3,"153":1}}],["never",{"2":{"21":1,"35":1,"45":1,"83":1,"294":1}}],["neuralpde",{"2":{"248":2}}],["neuralode",{"2":{"232":7,"235":4,"238":5}}],["neuralodecompact",{"2":{"232":2,"235":1,"238":1}}],["neural",{"0":{"171":1,"229":1,"232":1,"233":1,"237":1,"251":1,"263":1,"265":1,"271":1,"290":1,"294":1,"295":1,"296":1},"1":{"230":1,"231":1,"232":1,"233":1,"234":1,"235":1,"236":1,"237":1,"238":1,"239":1,"264":1,"265":1,"266":1,"267":1,"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1,"298":1},"2":{"15":4,"37":1,"50":1,"55":1,"63":4,"67":3,"75":1,"78":1,"86":1,"90":2,"91":2,"93":4,"97":1,"131":3,"163":3,"164":2,"170":1,"171":3,"188":1,"199":2,"208":1,"216":5,"229":2,"232":1,"233":1,"264":1,"265":7,"266":2,"271":1,"294":7,"295":1,"297":2}}],["needed",{"2":{"18":2,"19":1,"49":1,"59":2,"86":1,"87":2,"114":1,"139":1,"158":1,"162":1,"186":2}}],["need",{"2":{"6":1,"8":1,"22":1,"25":1,"35":1,"45":1,"56":2,"76":1,"87":1,"89":5,"96":1,"102":1,"114":1,"123":3,"126":1,"130":1,"137":1,"138":2,"153":3,"154":2,"158":1,"162":1,"171":1,"184":1,"187":1,"189":1,"201":1,"202":1,"211":2,"220":1,"232":1,"244":1,"252":1,"273":1,"274":1,"292":1,"294":1}}],["needs",{"2":{"3":1,"7":1,"10":1,"40":1,"41":1,"53":1,"59":2,"77":1,"132":1,"138":1,"166":1,"182":1}}],["newtonian",{"2":{"292":2,"294":1,"296":1}}],["new"
,{"0":{"105":1,"108":1,"112":1,"117":1,"237":1},"2":{"1":1,"23":9,"25":5,"41":1,"43":4,"45":5,"49":1,"56":1,"102":1,"104":1,"115":1,"123":1,"130":1,"138":1,"151":1,"153":1,"154":1,"162":1,"171":1,"243":2,"265":3}}],["nom",{"2":{"265":1}}],["noisy",{"2":{"201":2}}],["noise=0",{"2":{"283":1}}],["noise",{"2":{"197":1,"283":5,"287":2}}],["now",{"2":{"75":1,"105":1,"112":1,"115":1,"116":4,"117":2,"123":4,"124":1,"126":2,"127":2,"138":1,"140":3,"147":3,"153":1,"154":1,"158":1,"163":1,"164":2,"165":1,"166":1,"170":2,"171":2,"173":1,"188":2,"193":1,"197":1,"202":2,"204":1,"211":1,"214":1,"220":1,"263":1,"265":2,"266":1,"274":1,"292":1,"294":2,"295":1,"297":1}}],["norm",{"2":{"46":1,"78":2,"163":7,"164":7,"165":6,"166":6,"170":12}}],["normally",{"2":{"123":1}}],["normalizing",{"0":{"282":1},"1":{"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1},"2":{"63":1,"67":1}}],["normalize",{"2":{"46":2,"89":1}}],["normalized",{"2":{"46":7,"50":3,"65":8,"85":1,"89":9,"255":2}}],["normalization",{"0":{"46":1,"65":1},"2":{"15":1,"19":1,"46":11,"65":12,"164":1}}],["normalises",{"2":{"46":2,"65":2}}],["normal",{"2":{"13":2,"15":12,"16":6,"80":1,"89":3,"147":11,"186":6,"258":2,"265":1,"294":3}}],["nooplayer",{"2":{"39":6,"41":3,"45":3}}],["nonunitary",{"2":{"89":2}}],["nonzero",{"2":{"89":2}}],["nonlinear",{"2":{"15":1}}],["non",{"0":{"135":1},"2":{"15":2,"22":1,"24":1,"28":2,"44":1,"49":2,"50":1,"56":9,"62":1,"64":1,"67":3,"81":2,"82":1,"87":1,"107":1,"110":1,"120":2,"125":1,"140":1,"158":2,"228":1,"232":2,"292":1}}],["nonetheless",{"2":{"2":1}}],["none",{"2":{"2":1,"21":1,"22":1,"24":1,"54":1,"76":1,"89":1,"127":1}}],["nodes",{"2":{"277":1}}],["node",{"2":{"3":1,"277":3}}],["notice",{"2":{"84":2,"89":1,"147":1,"153":1,"154":1,"164":2,"235":1}}],["notion",{"2":{"52":1}}],["notangent",{"2":{"70":1,"127":1}}],["not",{"2":{"2":2,"3":4,"4":2,"6":1,"7":1,"8":1,"15":2,"19":1,"24":1,"25":1,"27":2,"37":3,"39":1,"43":11,"45":1,"46":1,"49":1,"51":1,"54":1,"55":1,"56":1,"61":1,"62":1,"63":1,"65
":3,"68":1,"71":1,"77":1,"79":8,"80":3,"81":1,"82":1,"83":3,"86":2,"87":5,"88":1,"89":12,"93":1,"96":1,"107":2,"114":1,"118":1,"120":1,"121":1,"122":4,"123":1,"126":1,"127":1,"137":1,"140":1,"149":1,"151":1,"158":3,"162":1,"163":5,"167":1,"170":1,"171":1,"178":1,"185":1,"187":1,"214":2,"220":1,"228":1,"232":1,"235":2,"237":1,"238":2,"254":7,"265":1,"266":1,"271":1,"280":102,"287":2,"292":1}}],["notes",{"2":{"62":1,"64":1,"70":1,"195":1,"229":1}}],["note",{"2":{"2":1,"3":4,"15":1,"22":1,"26":1,"45":1,"49":1,"50":1,"52":2,"55":1,"56":1,"59":2,"67":1,"77":1,"80":1,"89":6,"90":1,"122":2,"123":1,"124":1,"126":1,"137":1,"147":1,"153":1,"154":1,"161":1,"162":1,"165":1,"174":1,"191":1,"194":1,"206":1,"232":1,"238":1,"263":1}}],["nothing",{"2":{"2":4,"3":5,"4":1,"8":2,"22":5,"27":3,"28":1,"39":4,"40":4,"41":1,"43":11,"44":4,"45":10,"46":5,"49":2,"50":12,"51":1,"52":2,"55":1,"61":1,"62":1,"63":2,"64":1,"65":15,"70":1,"76":4,"89":7,"96":27,"123":24,"127":1,"133":4,"163":2,"165":2,"166":2,"209":1,"210":2,"218":1,"219":1,"220":2,"231":2,"238":117,"242":6,"260":1,"261":3,"265":1,"274":12,"283":3,"294":1}}],["no",{"2":{"2":2,"4":2,"5":1,"21":1,"22":3,"37":1,"45":1,"49":1,"50":2,"54":1,"57":1,"69":1,"76":1,"89":3,"93":1,"96":1,"109":1,"112":1,"114":1,"124":1,"126":1,"127":1,"133":2,"139":3,"149":2,"150":1,"153":1}}],["u=u",{"2":{"252":1}}],["uris",{"2":{"230":1}}],["url",{"2":{"90":1}}],["url=",{"2":{"72":1}}],["u0=res",{"2":{"220":1}}],["u0",{"2":{"218":2,"220":2,"221":1,"293":2,"294":2,"295":1,"297":2}}],["utc",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["utility",{"0":{"234":1,"245":1,"292":1}}],["utilities",{"0":{"48":1},"1":{"49":1,"50":1,"51":1,"52":1,"53":1,"54":1,"55":1,"56":1,"57":1},"2":{"68":1,"158":1}}],["utils",{"0":{"26":1},"1":{"27":1,"28":1,"29":1,"30":1,"31":1,"32":1},"2":{"22":2,"52":1,"87":1,"163":4,"261":2,"280":2}}],["uint8",{"2":{"89":2}}],["ulyanov",{"2":{"46":1,"65":1}}],["u",{"2":{"18":7,
"218":4,"220":12,"221":6,"232":7,"236":2,"251":4,"252":11,"254":4,"255":7,"293":3,"294":4,"296":1,"297":2}}],["upchain",{"2":{"258":1}}],["upchain=chain",{"2":{"258":1}}],["upto",{"2":{"250":1}}],["up",{"0":{"295":1},"2":{"51":1,"81":1,"84":2,"89":1,"167":1,"170":1,"228":2,"229":1}}],["upsamples",{"2":{"81":4}}],["upsampled",{"2":{"47":1,"81":4}}],["upsample",{"2":{"47":5,"81":22,"116":1,"258":3}}],["upsampling",{"0":{"47":1,"81":1},"2":{"47":2}}],["upscaling",{"2":{"47":1,"81":2}}],["upscale",{"2":{"47":2}}],["updating",{"0":{"53":1,"102":1},"1":{"103":1,"104":1,"105":1,"106":1,"107":1,"108":1,"109":1,"110":1,"111":1,"112":1,"113":1,"114":1,"115":1,"116":1,"117":1},"2":{"31":1,"65":2,"95":1,"197":1}}],["updated",{"2":{"25":1,"39":6,"41":3,"43":7,"45":1,"46":1,"49":5,"50":1,"61":1,"63":2,"65":2,"92":1,"93":1,"97":1,"116":1,"140":1,"153":1,"202":1,"228":1,"273":1}}],["updates",{"2":{"25":1,"46":1,"49":3,"110":1,"137":1}}],["update",{"2":{"10":2,"41":3,"43":5,"46":3,"49":2,"53":1,"56":1,"61":1,"63":2,"95":2,"102":1,"191":1}}],["upon",{"2":{"5":1,"89":5}}],["untrained",{"2":{"294":2,"297":1}}],["until",{"2":{"41":1}}],["uncertain",{"2":{"266":1}}],["unchanged",{"2":{"39":1}}],["unable",{"2":{"191":1}}],["unnecessary",{"2":{"164":1}}],["unnormalized",{"2":{"46":1}}],["un",{"2":{"114":1}}],["unexpected",{"2":{"93":1,"254":7,"280":102,"287":2}}],["unpadded",{"2":{"89":1}}],["unpack",{"2":{"8":1,"263":1}}],["unfold",{"2":{"80":14}}],["unfreezes",{"2":{"22":1}}],["unfreeze",{"2":{"22":2}}],["unwrapped",{"2":{"52":1}}],["unwrap",{"2":{"52":2}}],["unwraps",{"2":{"22":1}}],["unreleased",{"2":{"72":1}}],["unreasonably",{"2":{"39":1}}],["unroll",{"2":{"59":1}}],["unrolls",{"2":{"39":1}}],["undone",{"2":{"25":1}}],["undef",{"2":{"85":1,"192":1}}],["undefined",{"2":{"8":1,"43":1}}],["understood",{"2":{"83":2,"89":2}}],["understands",{"2":{"89":1}}],["understandable",{"2":{"89":2}}],["understand",{"2":{"56":1,"151":1,"202":1,"229":1,"232":1}}],["understanding",{"2":{"15":2,"243":1}
}],["underlying",{"2":{"83":1,"232":1}}],["under",{"2":{"67":1,"158":1}}],["undesirable",{"2":{"11":1}}],["unlike",{"2":{"16":1,"56":1,"83":1}}],["unless",{"2":{"2":1,"24":1,"67":1,"89":1,"243":1}}],["unified",{"2":{"216":1}}],["uniformly",{"2":{"137":1}}],["uniform",{"2":{"15":6,"16":6,"40":5,"43":6,"44":4,"67":1,"96":1,"153":3}}],["uninitiated",{"0":{"188":1},"1":{"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1,"198":1}}],["unicodefun",{"2":{"263":1,"291":1}}],["unicodeplots",{"2":{"67":2}}],["unicode",{"2":{"67":2}}],["universal",{"2":{"45":1}}],["unity",{"2":{"292":1}}],["unitfulext",{"2":{"230":1,"263":2,"291":2}}],["unitful",{"2":{"230":1,"263":3,"291":3}}],["unitrange",{"2":{"79":6,"191":1}}],["units",{"2":{"67":4,"89":1}}],["unit",{"2":{"43":1,"67":5,"87":1}}],["union",{"2":{"2":3,"8":1,"15":2,"24":2,"37":1,"43":2,"46":1,"50":4,"63":1,"78":1,"89":2,"242":2,"259":1,"283":2}}],["unsupported",{"2":{"8":1,"13":1}}],["unsafeatomicsllvm",{"2":{"200":2,"269":2,"291":1}}],["unsafeatomics",{"2":{"200":2,"263":1,"269":1,"291":1}}],["unsafe",{"2":{"5":1}}],["unknown",{"2":{"3":2}}],["unknowndevice",{"2":{"3":2}}],["usr",{"2":{"207":2,"215":2,"239":2,"247":2,"256":2,"262":2,"275":2,"281":2,"289":2}}],["usacases",{"2":{"120":3}}],["usage",{"2":{"22":1,"89":1,"101":1,"122":1}}],["usually",{"2":{"47":1,"50":2,"56":1,"67":1,"189":1,"220":1}}],["usual",{"2":{"46":3,"166":1,"171":1}}],["us",{"2":{"18":2,"21":1,"125":1,"126":1,"127":6,"154":2,"155":3,"158":1,"162":1,"188":1,"192":1,"195":1,"196":1,"197":3,"201":1,"237":1,"294":1,"295":1,"297":2}}],["usecases",{"2":{"52":1,"55":1,"171":1,"248":1}}],["uses",{"2":{"35":1,"46":1,"52":4,"53":1,"55":1,"62":1,"64":2,"80":3,"83":1,"85":1,"89":3,"107":1,"162":1,"165":1,"173":1,"176":2,"179":1,"192":1,"216":1,"292":2,"293":1,"294":1,"296":1}}],["userbase",{"2":{"35":1}}],["user",{"2":{"7":2,"8":2,"39":1,"55":1,"59":2,"89":3,"91":1,"130":1,"158":1,"160":2,"165":1}}],["users",{"2":{"6":1,"7":2,"22":1,"27":2,"28":1,
"49":1,"52":6,"61":1,"91":1,"93":1,"107":1,"110":1,"147":1,"173":1,"191":1,"229":2,"232":1,"238":1}}],["useful",{"2":{"3":1,"15":3,"24":2,"50":1,"51":1,"54":1,"56":2,"66":1,"80":1,"87":1,"89":1,"90":1,"124":2,"125":1,"126":1}}],["use",{"0":{"93":1,"161":1},"2":{"2":2,"4":2,"5":1,"15":1,"18":2,"19":1,"21":1,"22":2,"24":1,"27":4,"40":11,"43":18,"44":16,"46":3,"49":4,"52":5,"56":1,"59":3,"61":1,"63":1,"64":2,"66":1,"68":2,"72":1,"80":1,"85":4,"87":5,"88":1,"89":3,"90":1,"93":1,"96":2,"97":2,"101":2,"110":1,"114":2,"115":4,"116":1,"117":1,"120":6,"122":2,"123":3,"124":2,"125":1,"126":1,"137":2,"141":1,"143":1,"147":2,"148":2,"149":1,"161":1,"162":3,"163":1,"164":1,"165":2,"167":2,"171":4,"173":2,"176":1,"177":1,"181":1,"182":2,"185":2,"187":1,"188":1,"189":1,"192":1,"193":1,"194":1,"195":2,"197":3,"201":2,"202":3,"204":2,"208":2,"210":1,"229":1,"231":1,"232":1,"233":2,"235":2,"248":1,"252":1,"263":1,"264":1,"265":5,"266":1,"272":1,"273":1,"274":2,"277":2,"280":2,"282":1,"283":1,"292":1,"294":5}}],["used",{"2":{"2":4,"3":6,"8":2,"15":1,"24":2,"25":2,"32":1,"35":1,"37":2,"40":3,"41":3,"43":1,"44":1,"45":1,"46":7,"47":1,"49":4,"50":5,"52":1,"53":1,"55":1,"56":11,"63":3,"76":2,"77":2,"78":1,"80":1,"81":2,"83":1,"84":2,"86":1,"87":2,"88":1,"89":9,"107":1,"111":1,"114":1,"115":1,"117":1,"123":1,"140":1,"149":2,"162":1,"163":2,"173":1,"184":1,"185":1,"187":1,"189":1,"236":1,"261":1,"280":1,"290":1}}],["using",{"0":{"36":1,"69":1,"123":1,"124":1,"135":1,"164":1,"168":1,"169":1,"170":1,"203":1,"216":1,"229":1,"236":1,"257":1,"268":1},"1":{"37":1,"124":1,"217":1,"218":1,"219":1,"220":1,"221":1,"222":1,"230":1,"231":1,"232":1,"233":1,"234":1,"235":1,"236":1,"237":1,"238":1,"239":1,"258":1,"259":1,"260":1,"261":1,"262":1,"269":1,"270":1,"271":1,"272":1,"273":1,"274":1,"275":1},"2":{"2":1,"3":1,"4":2,"5":2,"7":2,"15":2,"18":2,"22":1,"23":1,"30":1,"31":1,"32":1,"35":3,"37":4,"39":11,"43":6,"44":1,"45":2,"46":1,"49":7,"50":4,"52":1,"53":1,"56":9,"66":3,"67":2,"69":2,"70":1,"72":2,"77"
:1,"80":1,"81":6,"83":2,"87":1,"89":6,"91":1,"92":1,"93":2,"95":1,"96":2,"97":5,"99":1,"107":1,"112":1,"117":1,"122":2,"123":8,"124":2,"126":1,"136":1,"140":1,"147":2,"148":2,"150":2,"151":2,"155":2,"157":2,"160":1,"161":1,"162":4,"163":6,"164":1,"165":1,"166":2,"167":5,"170":14,"171":3,"173":1,"174":1,"181":1,"188":1,"189":2,"192":1,"195":2,"196":1,"197":1,"198":1,"199":2,"201":1,"203":1,"206":1,"207":1,"208":1,"209":2,"214":1,"215":1,"216":1,"219":1,"220":1,"222":1,"230":2,"232":1,"236":1,"238":2,"239":1,"247":1,"248":3,"250":1,"251":1,"252":1,"256":1,"257":1,"261":2,"262":1,"263":2,"265":1,"266":1,"267":1,"271":1,"274":1,"275":1,"276":1,"280":2,"281":1,"289":1,"291":1,"293":1,"294":1,"298":1}}],["wu",{"2":{"65":1}}],["w3",{"2":{"56":3,"97":3}}],["w3=dense",{"2":{"56":1,"97":1}}],["w2",{"2":{"56":3,"97":3}}],["w2=",{"2":{"56":1,"97":1}}],["w1",{"2":{"56":3,"97":3}}],["w1=dense",{"2":{"56":1,"97":1}}],["w=w",{"2":{"252":1}}],["w=",{"2":{"56":1}}],["w=rand",{"2":{"56":1}}],["w=ones",{"2":{"56":3}}],["w=gv∥v∥weight",{"2":{"46":1}}],["w",{"2":{"42":6,"44":1,"47":4,"56":21,"80":11,"81":4,"85":7,"89":22,"97":4,"147":1,"155":2,"176":1,"189":1,"197":5,"210":1,"231":1,"251":4,"252":10}}],["wrote",{"0":{"91":1},"1":{"92":1,"93":1},"2":{"228":1}}],["wrong",{"2":{"67":1,"87":2}}],["write",{"2":{"147":1,"189":2,"191":1,"197":1,"243":1}}],["writes",{"2":{"84":3}}],["writing",{"0":{"134":1},"2":{"114":1,"135":1,"151":1}}],["written",{"2":{"35":1}}],["wrapping",{"2":{"79":1,"107":1}}],["wrapper",{"2":{"18":2,"24":1,"43":1,"55":1,"60":1,"83":1,"89":1,"154":2,"166":1,"251":1,"274":1}}],["wrappers",{"0":{"18":1},"2":{"83":2,"89":5}}],["wrappedlayer",{"2":{"41":1}}],["wrappedfunction",{"2":{"40":1,"45":2,"127":3,"238":9,"294":1}}],["wrapped",{"2":{"7":3,"24":1,"39":8,"45":2,"154":1}}],["wrap",{"2":{"31":1,"125":1,"137":1,"154":1,"219":1}}],["wraps",{"2":{"28":2,"37":1,"43":2,"45":1}}],["wrt",{"2":{"18":2,"49":3,"50":2,"51":1,"70":1,"138":1,"166":2,"252":1}}],["wikipedia",{"2":{"89":3
}}],["wihch",{"2":{"89":1}}],["wide",{"2":{"193":1}}],["wider",{"2":{"93":1,"105":1,"109":1,"195":1}}],["widest",{"2":{"87":1}}],["widely",{"2":{"83":2,"89":2}}],["width=30",{"2":{"89":2}}],["width",{"2":{"40":1,"260":6}}],["wise",{"2":{"67":1}}],["wio",{"2":{"43":1}}],["wio×x+who×hprev+bo",{"2":{"43":1}}],["wig",{"2":{"43":1}}],["wig×x+whg×hprev+bg",{"2":{"43":1}}],["wif",{"2":{"43":1}}],["wif×x+whf×hprev+bf",{"2":{"43":1}}],["wii",{"2":{"43":1}}],["wii×x+whi×hprev+bi",{"2":{"43":1}}],["wiz",{"2":{"43":1}}],["wiz×x+biz+whz×hprev+bhz",{"2":{"43":1}}],["wir",{"2":{"43":1}}],["wir×x+bir+whr×hprev+bhr",{"2":{"43":1}}],["win",{"2":{"43":1}}],["win×x+bin+r⋅",{"2":{"43":1}}],["windows",{"2":{"80":5,"89":3}}],["window",{"2":{"42":18,"78":4,"80":3,"89":53,"147":10}}],["will",{"2":{"3":2,"5":1,"11":1,"15":3,"19":4,"21":1,"24":2,"25":3,"30":2,"35":4,"37":1,"39":5,"41":2,"43":3,"45":1,"46":3,"49":1,"52":4,"54":4,"55":1,"56":8,"65":1,"69":2,"70":1,"71":4,"72":1,"76":1,"77":2,"80":1,"83":4,"84":5,"86":1,"87":3,"89":5,"96":1,"97":1,"107":1,"123":5,"124":2,"125":1,"126":1,"127":2,"131":1,"142":1,"147":2,"151":1,"154":1,"157":1,"158":2,"162":2,"163":3,"165":1,"167":2,"168":1,"171":8,"173":2,"175":1,"185":1,"187":1,"189":1,"190":1,"191":1,"192":2,"193":4,"194":1,"195":1,"197":2,"199":2,"200":1,"201":3,"202":6,"204":1,"208":2,"214":2,"216":2,"219":4,"220":1,"228":1,"232":1,"235":1,"248":3,"250":1,"251":1,"252":2,"253":3,"258":1,"263":2,"265":4,"268":1,"269":1,"272":1,"273":1,"274":2,"294":1}}],["withgradient",{"2":{"56":1,"87":2}}],["within",{"2":{"56":1,"63":2,"65":2,"87":6,"89":9,"163":2,"261":1,"280":1}}],["without",{"2":{"2":1,"6":1,"56":1,"87":1,"89":1,"93":2,"107":2,"120":2,"123":1,"124":1,"147":1,"151":1,"163":2,"191":1,"202":1,"220":2}}],["with",{"0":{"208":1},"1":{"209":1,"210":1,"211":1,"212":1,"213":1,"214":1,"215":1},"2":{"1":1,"2":1,"3":1,"5":2,"6":1,"7":2,"8":1,"10":2,"11":1,"15":5,"19":2,"22":5,"23":2,"24":3,"25":1,"28":2,"31":1,"32":2,"35":1,"37":1,"39":14,"40":11,"41
":6,"42":12,"43":10,"44":12,"45":5,"46":3,"47":5,"49":2,"50":6,"51":2,"52":2,"53":5,"54":1,"56":3,"59":7,"61":1,"62":2,"64":1,"66":1,"67":2,"69":2,"70":1,"75":1,"77":4,"78":6,"79":10,"80":3,"81":5,"82":3,"83":3,"86":1,"87":4,"88":3,"89":20,"91":1,"93":7,"97":4,"104":1,"105":1,"107":2,"114":3,"116":1,"117":1,"118":2,"120":1,"121":2,"122":2,"123":2,"124":3,"125":3,"126":1,"127":1,"135":2,"137":2,"139":2,"140":1,"141":1,"147":5,"149":1,"151":1,"153":1,"154":1,"155":6,"157":1,"162":3,"163":1,"164":4,"165":1,"168":1,"169":1,"170":1,"171":5,"173":1,"175":1,"176":6,"181":1,"185":2,"186":2,"187":2,"188":1,"189":5,"190":2,"191":1,"192":2,"193":1,"195":2,"197":4,"199":1,"200":1,"201":1,"203":1,"204":1,"205":2,"210":1,"212":1,"213":1,"214":1,"216":2,"217":1,"220":1,"228":1,"231":1,"232":3,"233":1,"234":1,"242":1,"243":2,"244":1,"245":1,"249":1,"250":1,"252":2,"254":1,"257":1,"258":2,"259":1,"260":1,"261":1,"263":2,"264":1,"265":6,"266":2,"269":1,"270":1,"276":1,"277":2,"278":1,"279":1,"280":1,"282":1,"283":2,"284":1,"285":1,"286":1,"287":1,"292":5,"294":2,"295":2}}],["why",{"0":{"91":1,"93":1},"1":{"92":1,"93":1},"2":{"153":1}}],["what",{"2":{"87":1,"89":2,"126":1,"127":1,"153":1,"155":1,"160":1,"166":1,"183":1,"189":2}}],["whatever",{"2":{"45":1,"49":1,"89":1}}],["whole",{"2":{"46":2,"77":1,"88":1}}],["whose",{"2":{"44":1,"46":1,"87":1}}],["who",{"2":{"43":1}}],["whg",{"2":{"43":1}}],["whf",{"2":{"43":1}}],["whn",{"2":{"43":1}}],["whn×hprev+bhn",{"2":{"43":1}}],["whz",{"2":{"43":1}}],["whr",{"2":{"43":1}}],["whcn",{"2":{"40":1,"41":2,"46":3}}],["whi",{"2":{"43":1}}],["while",{"2":{"3":1,"23":1,"37":1,"39":1,"47":1,"55":1,"89":1,"93":1,"124":1,"145":1,"147":1,"151":1,"153":1,"173":1,"195":1,"276":1}}],["which",{"0":{"130":1},"2":{"2":1,"3":1,"7":1,"15":1,"21":1,"22":14,"25":1,"39":3,"40":1,"42":3,"43":1,"45":3,"46":4,"49":1,"50":5,"55":1,"56":5,"65":1,"67":3,"77":1,"81":1,"82":1,"84":1,"85":1,"87":4,"88":1,"89":5,"93":2,"101":1,"116":1,"117":1,"125":2,"147":1,"151":1,"153":1,"
154":2,"155":1,"160":1,"167":1,"173":1,"188":1,"189":2,"191":1,"197":1,"203":1,"220":1,"232":1,"236":1,"263":1,"265":1,"274":1,"290":1,"293":1,"294":1}}],["whether",{"2":{"24":2,"37":1,"41":1,"89":6,"201":1}}],["whereas",{"2":{"40":1,"80":1,"89":1}}],["where",{"2":{"3":1,"15":7,"23":1,"24":1,"40":5,"42":6,"43":6,"44":4,"45":3,"46":3,"47":1,"50":4,"55":1,"59":2,"62":1,"64":1,"65":1,"66":1,"67":2,"78":1,"81":7,"83":1,"85":2,"86":1,"87":2,"89":15,"122":1,"127":2,"158":1,"177":1,"194":1,"202":1,"203":1,"242":1,"265":2,"266":2,"283":1,"284":2,"285":3,"292":3}}],["whenever",{"2":{"83":1}}],["when",{"2":{"3":2,"15":2,"37":2,"40":1,"44":1,"45":1,"47":1,"50":2,"51":1,"56":2,"62":1,"69":1,"77":1,"83":2,"87":5,"89":2,"120":1,"121":1,"122":2,"124":1,"139":1,"144":1,"153":2,"162":1,"174":1,"183":1,"193":2,"202":1,"214":1,"232":1,"292":1,"295":1}}],["woodburymatrices",{"2":{"263":1}}],["world",{"2":{"220":1}}],["worthwhile",{"2":{"93":1}}],["word",{"2":{"44":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["words",{"2":{"40":1}}],["workaround",{"2":{"171":1}}],["workerutilities",{"2":{"230":1}}],["worker",{"2":{"178":1}}],["workers",{"2":{"29":3,"30":5}}],["worked",{"2":{"105":1,"165":1}}],["working",{"2":{"24":1,"118":1,"120":2,"122":1,"123":1,"140":1,"141":1,"147":1,"149":1,"187":1,"264":1}}],["work",{"2":{"2":1,"5":1,"22":1,"45":1,"47":1,"87":2,"90":1,"121":1,"122":1,"128":1,"130":1,"137":1,"140":1,"151":1,"158":1,"162":1,"185":1,"189":2,"194":1,"205":2,"280":1}}],["workspace",{"2":{"89":1}}],["works",{"2":{"2":1,"24":1,"35":1,"47":2,"52":1,"55":1,"84":1,"89":1,"93":1,"112":1,"122":1,"135":1,"139":1,"153":2,"155":1,"162":1,"163":1,"171":1,"178":1,"189":1,"232":1}}],["would",{"2":{"7":2,"8":1,"40":2,"45":3,"82":2,"83":1,"89":4,"91":1,"93":1,"107":1,"123":2,"126":1,"144":1,"158":1}}],["wondered",{"2":{"93":1}}],["won",{"2":{"2":1,"8":1,"56":1,"110":1,"131":1,"202":1,"216":1}}],["wall",{"2":{"265":1}}],["walk",{"2":{"89
":1}}],["wan",{"2":{"162":1}}],["wanted",{"2":{"93":1}}],["wants",{"2":{"89":2,"160":1}}],["want",{"0":{"73":1,"74":1},"2":{"8":2,"49":2,"72":1,"97":1,"124":1,"131":1,"144":1,"151":1,"163":1,"166":1,"189":1,"195":1,"216":1,"219":1,"253":1}}],["waveforms",{"0":{"290":1},"1":{"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1,"298":1}}],["waveform",{"2":{"89":3,"292":5,"293":7,"294":9,"295":4,"297":12}}],["warmup",{"2":{"295":1}}],["warntype",{"2":{"238":3}}],["warn",{"2":{"54":3,"57":1}}],["warning",{"2":{"2":3,"3":1,"4":2,"8":1,"18":2,"35":3,"37":2,"40":1,"46":1,"49":1,"50":1,"52":5,"53":1,"54":5,"55":1,"62":1,"68":1,"87":1,"89":1,"142":1,"149":1,"162":1,"163":2,"171":1,"205":2,"227":1,"228":1,"261":1,"280":13}}],["warde",{"2":{"45":1}}],["way",{"2":{"35":3,"37":2,"39":1,"63":2,"67":3,"77":1,"87":1,"89":2,"93":1,"97":1,"125":1,"151":2,"168":1,"179":1,"203":2,"265":1}}],["wasteful",{"2":{"153":1}}],["was",{"2":{"2":1,"5":3,"7":1,"11":1,"15":1,"25":1,"35":1,"40":1,"81":1,"89":4,"107":3,"109":1,"110":1,"116":1,"123":1,"139":2,"173":2,"198":1,"200":2,"207":1,"215":1,"220":1,"222":1,"232":1,"239":1,"247":1,"256":1,"262":1,"267":1,"269":2,"275":1,"281":1,"289":1,"298":1}}],["webp",{"2":{"263":1,"269":1,"291":1}}],["welcome",{"2":{"162":1}}],["well",{"2":{"37":1,"50":1,"91":1,"118":1,"120":2,"123":1,"163":1,"164":1,"188":1}}],["weird",{"2":{"153":1,"155":2}}],["weight6",{"2":{"147":12}}],["weight3",{"2":{"147":4}}],["weight1",{"2":{"147":4}}],["weightnorm",{"2":{"46":4,"93":1,"153":1}}],["weighting",{"2":{"50":1}}],["weightinitializerschainrulescoreext",{"2":{"263":1,"269":1,"291":1}}],["weightinitializerscudaext",{"2":{"241":2}}],["weightinitializersgpuarraysext",{"2":{"241":2,"291":1}}],["weightinitializersreactantext",{"2":{"200":2,"269":2}}],["weightinitializers",{"0":{"12":1,"109":1},"1":{"13":1,"14":1,"15":1,"16":1},"2":{"3":1,"15":8,"16":24,"153":3,"186":2,"200":1,"241":2,"263":2,"269":2,"291":3,"294":1}}],["weightih×x+biasih+weighthh×hprev+biashh",{"2":{"43":1
}}],["weights",{"0":{"186":1},"1":{"187":1},"2":{"43":4,"50":1,"89":2,"147":1,"186":10,"187":5,"243":1,"265":2,"266":3}}],["weight=truncated",{"2":{"294":3}}],["weight=l",{"2":{"153":1}}],["weight=randn32",{"2":{"116":1}}],["weight=rand32",{"2":{"44":1,"116":1}}],["weight=ps",{"2":{"56":1}}],["weight=ones32",{"2":{"44":1}}],["weight=glorot",{"2":{"40":1,"153":1}}],["weight=nothing",{"2":{"40":1,"43":3,"44":3}}],["weight=zero",{"2":{"23":1}}],["weight",{"2":{"12":1,"15":3,"22":3,"23":4,"25":4,"40":6,"43":12,"44":24,"46":5,"62":4,"64":4,"87":1,"96":15,"123":15,"127":4,"133":2,"143":4,"144":2,"145":4,"147":5,"153":6,"155":4,"163":2,"165":4,"166":4,"186":1,"197":1,"238":30,"243":5,"244":2,"261":1,"265":1,"273":2,"280":2,"294":3,"296":3}}],["weakrefstrings",{"2":{"230":1,"241":1}}],["weak",{"2":{"68":1}}],["weren",{"2":{"114":1}}],["were",{"2":{"11":1,"102":1,"109":1,"114":2,"162":1}}],["we",{"0":{"91":1,"161":1},"1":{"92":1,"93":1},"2":{"2":4,"3":3,"7":2,"8":4,"21":2,"28":1,"35":1,"39":3,"40":4,"43":6,"44":4,"49":2,"50":2,"52":5,"54":1,"56":3,"59":2,"62":2,"64":3,"68":1,"89":13,"93":4,"96":4,"97":3,"102":2,"107":1,"109":1,"110":2,"114":1,"115":1,"116":1,"118":1,"120":2,"121":3,"122":4,"123":13,"124":1,"126":3,"127":6,"128":2,"130":1,"131":3,"135":1,"138":1,"140":3,"141":2,"142":1,"143":1,"145":1,"147":7,"151":1,"153":5,"154":4,"155":4,"156":1,"157":2,"158":2,"161":1,"162":3,"163":2,"164":3,"165":3,"166":3,"167":3,"170":2,"171":11,"173":3,"177":1,"178":1,"188":1,"189":11,"191":4,"192":9,"193":6,"195":2,"197":3,"199":1,"201":3,"202":11,"203":2,"204":1,"205":1,"206":3,"208":2,"211":2,"214":3,"216":4,"219":4,"220":5,"228":1,"229":1,"232":3,"235":1,"237":1,"238":2,"243":2,"248":3,"250":2,"251":2,"252":3,"253":3,"258":2,"263":3,"264":1,"265":8,"266":7,"268":1,"272":1,"273":2,"274":5,"276":2,"277":2,"283":2,"292":3,"294":9,"295":1}}],[">randn32",{"2":{"56":2}}],[">",{"2":{"2":3,"3":8,"8":2,"15":8,"16":24,"43":1,"50":3,"56":2,"67":3,"83":2,"84":1,"87":2,"89":3,"123":1,"127":2,"
132":1,"147":26,"157":2,"158":4,"163":2,"164":2,"165":2,"166":2,"170":1,"204":1,"254":2,"270":1,"274":1,"284":2,"292":1,"296":1}}],["cnt",{"2":{"280":4}}],["cnew",{"2":{"43":2}}],["cnew=f⋅cprev+i⋅ghnew=o⋅tanh",{"2":{"43":1}}],["cvae",{"2":{"257":2,"258":18,"261":9}}],["cvpr",{"2":{"50":1,"81":1}}],["csv",{"2":{"230":1,"241":1}}],["cst",{"2":{"147":15}}],["cycle",{"2":{"163":2,"254":2,"287":1}}],["cdev",{"2":{"149":2,"150":1,"205":3,"217":1,"221":1,"249":1,"254":2,"257":1,"273":1,"274":1,"276":1,"282":1}}],["cdims",{"2":{"62":3,"89":19}}],["cc",{"2":{"97":1,"200":1,"205":1,"214":1,"254":78,"261":1,"269":1,"280":1140,"287":19}}],["ctc",{"2":{"86":1}}],["cimg",{"2":{"260":2}}],["circle",{"2":{"293":1,"294":2,"297":4}}],["circ",{"2":{"283":8}}],["circbuff",{"2":{"254":1}}],["circumvented",{"2":{"150":1}}],["circularly",{"2":{"79":1}}],["circular",{"2":{"15":1,"40":1,"79":9}}],["ci",{"2":{"93":1,"210":1,"227":1,"231":1,"242":2,"259":2}}],["cite",{"2":{"90":2}}],["citation",{"0":{"90":1}}],["cifar",{"2":{"67":1}}],["c",{"2":{"23":3,"40":4,"42":21,"43":1,"47":5,"80":4,"81":2,"83":13,"85":4,"89":11,"202":2,"210":1,"231":1,"238":9,"266":2,"292":4}}],["cluster",{"2":{"266":1}}],["clamp",{"2":{"258":1}}],["classifying",{"2":{"266":1}}],["classify",{"2":{"199":1,"264":1}}],["classifier=st",{"2":{"202":1}}],["classifier",{"0":{"202":1},"2":{"202":11,"203":3,"265":1}}],["classifiers",{"2":{"50":2}}],["classified",{"2":{"50":1}}],["classification",{"0":{"208":1,"229":1},"1":{"209":1,"210":1,"211":1,"212":1,"213":1,"214":1,"215":1,"230":1,"231":1,"232":1,"233":1,"234":1,"235":1,"236":1,"237":1,"238":1,"239":1},"2":{"15":2,"50":2,"86":2,"263":1}}],["classic",{"2":{"67":2}}],["classes",{"2":{"50":1,"86":1,"277":1}}],["class",{"0":{"121":1},"2":{"50":1,"86":1,"119":1,"122":2,"212":5,"234":5,"245":5,"263":1}}],["clockwise",{"2":{"199":1,"201":4}}],["closeopenintervals",{"2":{"200":1,"263":1,"269":1,"291":1}}],["closest",{"2":{"133":1}}],["closures",{"2":{"56":1}}],["cl",{"2":{"154":3,"
186":4}}],["clear",{"2":{"25":1,"127":1}}],["client",{"2":{"2":3,"209":1}}],["client=missing",{"2":{"2":1}}],["cenum",{"2":{"263":1}}],["centered",{"2":{"89":2}}],["center=",{"2":{"82":1}}],["center=size",{"2":{"82":4}}],["center",{"2":{"82":7,"85":1,"89":12}}],["central",{"2":{"15":1,"89":1,"238":13}}],["celu",{"2":{"67":4}}],["cell=st",{"2":{"202":1}}],["cell`",{"2":{"202":1}}],["cells",{"2":{"43":1,"116":1}}],["cell",{"2":{"43":45,"202":10,"203":4}}],["certain",{"0":{"159":1},"1":{"160":1},"2":{"7":1,"8":2,"41":2,"51":1,"59":1,"89":1,"91":1,"107":1,"114":1,"123":1,"137":1,"141":1,"142":1,"173":1,"185":1,"232":1}}],["cassette",{"2":{"291":1}}],["case",{"2":{"22":1,"43":19,"54":1,"79":4,"83":1,"87":1,"89":2,"139":1,"153":1,"161":1,"163":1,"197":1,"220":1,"243":1,"266":1,"294":1}}],["cases",{"2":{"8":3,"15":1,"21":1,"35":1,"49":2,"52":6,"54":1,"55":1,"56":1,"72":1,"80":1,"89":5,"93":1,"121":1,"122":1,"128":2,"157":1,"164":1,"171":1,"177":1,"191":1}}],["cairo",{"2":{"263":2,"291":2}}],["cairomakie",{"2":{"217":1,"218":1,"221":1,"249":1,"255":2,"263":3,"264":1,"269":3,"270":1,"274":1,"282":1,"283":1,"291":3,"293":1,"294":1,"297":2}}],["ca",{"2":{"220":2,"243":3}}],["capabilities",{"2":{"190":1,"248":1}}],["capture",{"2":{"162":1}}],["capped",{"2":{"67":1}}],["cat",{"2":{"132":1,"201":1}}],["category",{"2":{"86":1}}],["catch",{"0":{"172":1},"1":{"173":1,"174":1,"175":1,"176":1,"177":1,"178":1},"2":{"49":1,"126":2,"127":2,"133":1,"175":2}}],["causing",{"2":{"173":1}}],["causal",{"2":{"76":4}}],["causes",{"2":{"141":1}}],["caused",{"2":{"83":1}}],["cause",{"2":{"8":1,"25":1,"59":1,"173":1}}],["caching",{"2":{"49":1}}],["cached",{"2":{"49":2}}],["cache",{"2":{"49":2,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"274":1,"275":1,"281":1,"289":1,"298":1}}],["care",{"2":{"89":1,"192":1}}],["cartesianindex",{"2":{"84":2,"89":2}}],["cartesianindices",{"2":{"84":2}}],["cartesian",{"2":{"44":1}}],["carry",{"2":{"43":4,"202":3,"203":3}}],["caveats",{"2":{
"15":1}}],["calc",{"2":{"89":1}}],["calculating",{"2":{"89":1}}],["calculation",{"2":{"80":1,"89":1,"163":1,"164":3}}],["calculations",{"2":{"37":1}}],["calculates",{"2":{"42":3,"87":1,"89":3}}],["calculate",{"2":{"40":4,"42":3,"44":1,"85":1,"89":5}}],["calculated",{"2":{"7":2,"50":7,"63":1,"89":2}}],["callback",{"2":{"220":4,"295":3,"296":1}}],["callable",{"2":{"39":2}}],["calls",{"2":{"7":1,"8":2,"67":1,"162":2}}],["call",{"2":{"7":1,"8":1,"23":1,"41":5,"46":3,"55":1,"63":1,"67":1,"69":6,"83":2,"84":1,"87":1,"89":5,"93":1,"96":1,"147":2,"162":1,"163":2,"171":1,"186":4,"192":2,"202":1,"261":1,"280":1}}],["calling",{"2":{"5":1,"7":1,"8":1,"28":1,"69":1,"83":1,"84":1,"124":1,"137":1,"140":1,"265":1}}],["called",{"2":{"3":1,"28":1,"39":2,"41":3,"63":2,"65":2,"77":1,"144":1,"147":1,"153":1,"158":2,"160":1,"187":1,"189":2,"202":1}}],["canvas",{"2":{"260":5}}],["candidates",{"2":{"133":1}}],["cannot",{"2":{"15":1,"56":1,"92":1,"153":1,"154":1,"158":1,"171":1,"227":1}}],["can",{"0":{"161":1},"2":{"2":2,"3":4,"4":2,"7":3,"8":3,"11":1,"22":1,"24":1,"28":3,"39":6,"40":6,"42":3,"43":7,"44":3,"45":3,"49":1,"50":2,"52":1,"55":1,"56":5,"57":1,"59":1,"61":1,"62":1,"63":2,"64":1,"65":14,"66":1,"67":6,"69":1,"70":1,"72":2,"76":1,"78":1,"79":5,"81":4,"82":1,"83":1,"84":5,"89":8,"95":2,"96":3,"97":2,"98":1,"115":1,"123":2,"124":1,"125":2,"126":1,"127":2,"135":1,"137":1,"143":1,"145":1,"148":3,"150":1,"154":1,"155":3,"158":1,"160":1,"161":1,"162":2,"163":2,"164":3,"166":1,"167":2,"171":1,"173":2,"174":1,"175":1,"178":1,"181":1,"184":1,"185":1,"186":1,"187":1,"189":11,"190":2,"192":1,"194":1,"197":1,"202":1,"203":1,"205":1,"206":1,"220":1,"232":2,"235":1,"236":1,"244":1,"250":1,"251":1,"265":2,"266":3,"274":1,"292":1,"294":1}}],["cover",{"2":{"157":1}}],["covered",{"2":{"128":1}}],["covariate",{"2":{"65":1}}],["cosθ",{"2":{"89":1}}],["cos",{"2":{"89":1,"165":1,"253":1,"283":2,"292":2,"293":3,"294":8}}],["cosh",{"2":{"67":1}}],["coordinates",{"2":{"85":1,"89":3,"266":1}}],["co",{"2":{"6
7":1}}],["course",{"2":{"135":1,"163":1}}],["courville",{"2":{"45":1}}],["coupled",{"2":{"91":1,"185":1}}],["could",{"2":{"79":1,"153":1,"154":1,"265":1}}],["counterpart",{"2":{"161":1}}],["count",{"2":{"56":1,"78":1,"147":4}}],["coefficient",{"2":{"50":2,"67":2,"89":2}}],["colon",{"2":{"284":1}}],["colormap=",{"2":{"266":3}}],["colorbrewer",{"2":{"263":1,"269":1,"291":1}}],["colorbar",{"2":{"255":1}}],["colors",{"2":{"263":1}}],["colorschemes",{"2":{"230":1,"263":1,"291":1}}],["colortypes",{"2":{"263":1}}],["colorview",{"2":{"260":2}}],["colorvectorspace",{"2":{"230":1,"263":2,"291":2}}],["color=",{"2":{"218":2,"221":4,"264":2,"270":2,"274":2}}],["col",{"2":{"89":6,"260":5}}],["col=similar",{"2":{"89":6}}],["col2im",{"2":{"80":1,"89":1}}],["collect",{"2":{"56":1,"85":1,"97":1,"201":2,"210":2,"231":2,"253":4,"255":1,"266":4,"270":1,"284":1,"285":2}}],["collects",{"2":{"39":1}}],["columns",{"2":{"84":2,"189":1}}],["column",{"2":{"44":1,"77":2,"84":2,"86":1,"89":2,"147":1,"189":4}}],["cols",{"2":{"15":1,"260":8}}],["cora",{"0":{"276":1,"277":1},"1":{"277":1,"278":1,"279":1,"280":1,"281":1},"2":{"277":1}}],["corner",{"2":{"47":1,"85":1}}],["corners",{"2":{"47":1,"81":9,"85":1,"116":1}}],["corners=false",{"2":{"47":1}}],["cores",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["core",{"2":{"35":1,"69":3,"93":1,"157":1,"191":1,"198":1,"207":1,"215":1,"222":1,"238":5,"239":1,"243":5,"244":3,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["correlation=true",{"2":{"40":2,"115":1,"117":1}}],["correlation=false",{"2":{"40":2}}],["correlation",{"2":{"40":7}}],["corresponding",{"2":{"4":3,"23":2,"25":2,"44":1,"47":2,"50":3,"111":1,"114":1,"124":1,"147":1,"149":1,"164":1}}],["corresponds",{"2":{"2":1,"220":1}}],["corrections",{"2":{"135":1}}],["correctness",{"0":{"70":1},"2":{"52":1,"68":1}}],["correct",{"0":{"134":1},"2":{"3":2,"70":1,"173":2,"212":3,"234":3,"245":3,"260":1}}],["correctly",{"2"
:{"2":1,"24":1,"89":1,"128":1,"137":1}}],["codebases",{"2":{"55":1}}],["code",{"0":{"137":1},"2":{"21":2,"35":1,"37":1,"52":4,"56":4,"80":1,"97":2,"102":1,"120":1,"123":1,"124":2,"137":1,"147":8,"156":1,"160":1,"161":1,"163":2,"173":1,"174":1,"175":1,"179":1,"188":1,"203":1,"205":1,"238":3,"264":1,"283":1,"290":2}}],["combinatorics",{"2":{"263":1}}],["combination",{"2":{"133":1,"162":1}}],["combined",{"2":{"43":2,"96":1}}],["combines",{"2":{"39":1}}],["come",{"2":{"126":1,"173":1,"195":1}}],["comes",{"2":{"39":1,"91":1,"93":2,"163":1,"165":1}}],["coming",{"2":{"97":1,"153":1,"173":1}}],["com",{"2":{"43":1,"72":1,"89":1}}],["community",{"2":{"228":1}}],["communication",{"0":{"30":1},"2":{"139":1,"140":1}}],["communications",{"2":{"28":2}}],["commit",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["command",{"2":{"72":2,"97":1,"179":2,"200":1,"205":1,"214":1,"254":1,"261":1,"269":1,"280":1,"287":1}}],["comm",{"2":{"27":2}}],["commonsolve",{"2":{"263":1}}],["commonsubexpressions",{"2":{"263":1}}],["commonworldinvalidations",{"2":{"263":1}}],["commonly",{"2":{"77":1}}],["common",{"2":{"12":1,"25":1,"52":4,"84":1,"154":1,"178":1,"184":1,"186":1}}],["comprises",{"2":{"154":1}}],["compilable",{"2":{"123":1}}],["compilation",{"2":{"99":1,"123":1,"147":1,"280":11}}],["compiling",{"0":{"123":1},"1":{"124":1},"2":{"93":1,"122":1,"147":1}}],["compiled=nothing",{"2":{"260":1}}],["compiled",{"2":{"120":1,"123":7,"205":2,"213":4,"260":2,"261":4,"280":6}}],["compiles",{"2":{"97":1}}],["compilersupportlibraries",{"2":{"263":1}}],["compiler",{"2":{"91":2,"123":1}}],["compile=true",{"2":{"49":1}}],["compile",{"2":{"3":1,"39":1,"49":1,"53":1,"69":1,"78":1,"123":8,"124":1,"205":1,"213":1,"261":2,"274":2,"280":3}}],["complicated",{"2":{"89":1}}],["completeness",{"2":{"170":1,"235":1}}],["completely",{"2":{"43":1,"87":1,"91":1,"114":1}}],["complete",{"2":{"42":3,"43":1,"186":1}}],["complexity",{"2":{"101":1,"193":1}}],["com
plexf64",{"2":{"16":8}}],["complexf32",{"2":{"16":8,"186":2}}],["complexf16",{"2":{"16":8}}],["complex",{"2":{"5":1,"49":1,"56":1,"66":1,"80":1,"83":1,"89":6,"186":1}}],["compatlinearalgebraext",{"2":{"269":1,"291":1}}],["compat",{"2":{"269":2,"291":2}}],["compatibility",{"2":{"35":1,"122":1,"151":1}}],["compatible",{"2":{"6":1,"24":2,"31":1,"32":2,"50":2,"62":1,"64":1,"151":1,"162":1,"171":2,"235":1}}],["comparator",{"2":{"254":70,"280":1026,"287":16}}],["compare",{"2":{"138":1,"147":4,"170":1}}],["comparison",{"2":{"39":1}}],["compactmacroimpl",{"2":{"238":7}}],["compactluxlayer",{"2":{"56":2,"238":3,"243":2}}],["compact",{"0":{"56":1,"203":1},"2":{"55":1,"56":20,"97":3,"115":2,"151":2,"162":2,"203":2,"205":1,"232":4,"236":1,"238":7,"243":1,"258":2,"278":2}}],["component",{"2":{"292":7}}],["componentvector",{"2":{"197":1,"238":6,"296":1}}],["components",{"2":{"89":1,"292":4}}],["componentarray",{"2":{"25":1,"163":2,"164":2,"165":2,"166":3,"170":4,"197":1,"220":2,"233":1,"235":1,"243":2,"294":1}}],["componentarraysreversediffext",{"2":{"291":2}}],["componentarraysrecursivearraytoolsext",{"2":{"230":2,"291":2}}],["componentarraystrackerext",{"2":{"291":2}}],["componentarrayszygoteext",{"2":{"241":2,"291":2}}],["componentarraysgpuarraysext",{"2":{"241":2,"291":1}}],["componentarrayskernelabstractionsext",{"2":{"241":1,"291":1}}],["componentarraysoptimisersext",{"2":{"241":1,"291":1}}],["componentarraysscimlbaseext",{"2":{"230":2,"291":2}}],["componentarrays",{"2":{"25":2,"155":1,"162":1,"193":1,"217":1,"230":3,"238":12,"241":7,"265":1,"291":11}}],["composability",{"2":{"263":1}}],["compositionsbaseinversefunctionsext",{"2":{"263":1}}],["compositionsbase",{"2":{"263":2}}],["composition",{"2":{"65":1,"154":1}}],["composes",{"2":{"118":1}}],["composedlinear",{"2":{"154":3}}],["composed",{"2":{"43":1,"49":1}}],["compose",{"2":{"37":1,"265":1}}],["computing",{"0":{"166":1,"168":1,"169":1,"170":1},"2":{"18":2,"19":1,"77":1,"164":1,"168":1,"194":1}}],["computed",{"2":{"39":
4,"40":2,"44":1,"45":1,"46":2,"49":3,"50":3,"63":1,"65":1,"67":3,"194":4,"273":1}}],["compute",{"2":{"18":6,"19":1,"49":9,"50":1,"59":2,"64":1,"67":1,"85":1,"96":3,"107":1,"116":2,"123":1,"127":1,"166":3,"167":5,"170":1,"193":1,"194":1,"195":1,"196":1,"204":1,"220":1,"250":1,"252":2,"265":1,"292":4,"293":1,"294":1,"295":1,"297":1}}],["computes",{"2":{"15":1,"19":1,"46":4,"49":2,"60":1,"62":1,"65":3,"77":1,"78":1,"86":1,"87":1,"89":2,"158":1,"167":1}}],["computer",{"2":{"5":1,"15":2,"50":4,"65":1}}],["computationally",{"2":{"193":1}}],["computational",{"2":{"193":1}}],["computation",{"0":{"163":1,"165":1},"1":{"164":1},"2":{"3":1,"39":2,"46":2,"50":1,"59":1,"70":1,"82":1,"85":4,"107":1,"178":2}}],["copying",{"2":{"202":1}}],["copyto",{"2":{"52":3}}],["copy",{"2":{"8":1,"49":2,"52":1,"83":2,"158":2,"191":5}}],["copied",{"2":{"5":1,"43":4,"83":1,"84":2}}],["conjunction",{"2":{"93":1}}],["conjugate",{"2":{"83":1}}],["concurrentutilities",{"2":{"230":1}}],["concise",{"2":{"151":1,"203":1}}],["concatenate",{"2":{"132":1}}],["concatenated",{"2":{"43":7}}],["conclusion",{"0":{"128":1}}],["concretestructs",{"2":{"257":1,"263":1,"276":1,"282":1}}],["concreterarray",{"2":{"123":13,"147":11,"273":4}}],["concrete",{"0":{"131":1},"1":{"132":1,"133":1,"134":1,"135":1},"2":{"80":2,"102":1,"155":1,"158":1,"258":1,"259":1,"284":2,"285":1}}],["connectionist",{"2":{"86":1}}],["connection",{"2":{"39":26,"43":4}}],["connected",{"0":{"64":1},"2":{"15":1,"44":3}}],["confusion",{"2":{"107":1}}],["confusing",{"2":{"89":1,"220":1}}],["conform",{"2":{"39":1}}],["conference",{"2":{"15":6,"50":4,"65":2}}],["convolve",{"2":{"89":2}}],["convolutions",{"2":{"40":5,"80":2,"89":4}}],["convolution",{"0":{"80":1},"2":{"15":1,"40":9,"80":6,"89":27,"147":2}}],["convolutional",{"0":{"40":1,"62":1,"257":1,"276":1},"1":{"258":1,"259":1,"260":1,"261":1,"262":1,"277":1,"278":1,"279":1,"280":1,"281":1},"2":{"15":2,"40":4,"50":1,"67":1,"87":1,"89":1,"257":1}}],["conv3d",{"2":{"80":1}}],["conv2d",{"2":{"80":1,"8
9":2}}],["convdims",{"2":{"62":2,"80":3,"89":5}}],["convtranspose",{"2":{"40":1,"116":1,"117":2}}],["conv",{"2":{"37":2,"40":4,"62":3,"77":1,"80":7,"89":10,"115":1,"116":1,"147":2,"157":1,"176":2,"211":6,"258":6}}],["convention",{"2":{"191":1}}],["conveniently",{"2":{"189":1}}],["convenience",{"0":{"16":1},"2":{"7":1,"55":1,"118":1,"274":1}}],["conveys",{"2":{"89":1}}],["conversely",{"2":{"193":1}}],["converse",{"2":{"40":1}}],["conversions",{"2":{"54":2}}],["conversion",{"0":{"183":1},"2":{"5":1}}],["convert2image",{"2":{"259":1}}],["converts",{"2":{"47":1,"53":4,"81":1,"89":2}}],["converted",{"2":{"37":2,"54":1}}],["converting",{"2":{"25":1,"37":3,"89":2}}],["convert",{"2":{"5":1,"8":1,"35":2,"37":4,"54":3,"84":1,"89":1,"158":1,"161":1,"171":1,"211":1,"221":1,"232":1}}],["cond",{"2":{"8":3}}],["conditioners=lux",{"2":{"285":1}}],["conditioners",{"2":{"285":19}}],["conditioner",{"2":{"284":2,"285":8}}],["conditions",{"2":{"214":1,"252":1,"293":1,"294":1}}],["conditionals",{"2":{"89":1}}],["condition",{"2":{"8":2,"15":1}}],["contour",{"2":{"255":1,"266":6,"291":1}}],["contents",{"2":{"87":1}}],["content",{"2":{"84":1}}],["contextvariablesx",{"2":{"241":1}}],["context`",{"2":{"63":1}}],["context",{"2":{"51":1,"63":1,"65":2}}],["continua",{"2":{"292":1}}],["continue",{"2":{"140":1}}],["continuously",{"2":{"67":1}}],["contiguous",{"2":{"83":1}}],["contracting",{"2":{"147":3}}],["contrastive",{"2":{"50":1}}],["contrast",{"2":{"2":1,"79":1}}],["contrib",{"2":{"126":1}}],["contributions",{"2":{"80":1}}],["controlled",{"2":{"54":1,"92":1,"117":1}}],["controlling",{"2":{"54":1,"183":1,"184":1}}],["control",{"2":{"47":2,"83":2,"89":4,"123":2,"173":1,"188":1}}],["controls",{"2":{"15":1,"40":4,"43":3,"46":8,"50":1}}],["contained",{"2":{"39":1}}],["containerlayer",{"2":{"232":1}}],["containers",{"0":{"39":1},"2":{"107":2,"238":3}}],["container",{"0":{"154":1},"2":{"7":2,"8":1,"24":1,"32":1,"80":2,"89":2,"108":1,"137":1,"154":2,"202":2}}],["contains",{"0":{"165":1},"2":{"8":1,"1
9":1,"41":1,"45":1,"51":2,"80":1,"85":2,"87":1}}],["contain",{"2":{"7":1,"54":1,"56":1,"84":1,"93":1,"153":2,"202":1}}],["containing",{"0":{"163":1},"1":{"164":1},"2":{"3":1,"7":3,"15":4,"16":24,"43":14,"44":1,"49":2,"50":2,"65":2,"84":2,"89":1,"92":1}}],["consoleprogressmonitor",{"2":{"263":1,"291":1}}],["consensus",{"2":{"165":1}}],["consecutive",{"2":{"39":1}}],["consequence",{"2":{"23":1}}],["consult",{"2":{"20":1}}],["considering",{"2":{"193":1}}],["consider",{"2":{"131":1,"153":1,"155":1,"173":1,"194":1,"197":1}}],["considered",{"2":{"3":1,"4":2,"49":1,"70":1,"85":1,"142":1,"148":1}}],["consistent",{"2":{"43":1}}],["consistency",{"2":{"15":1,"89":1}}],["consists",{"2":{"39":1,"164":1}}],["constrained",{"2":{"156":1}}],["constructured",{"2":{"158":1}}],["constructed",{"2":{"56":1,"63":1,"153":1}}],["constructing",{"2":{"49":1,"137":1}}],["constructionbaseintervalsetsext",{"2":{"263":1,"291":1}}],["constructionbasestaticarraysext",{"2":{"263":1,"291":1}}],["constructionbaselinearalgebraext",{"2":{"263":1}}],["constructionbase",{"2":{"263":4,"291":2}}],["constructionbaseunitfulext",{"2":{"230":1,"263":1,"291":1}}],["construction",{"2":{"7":1,"47":1,"114":1}}],["constructor",{"2":{"22":1,"49":1}}],["construct",{"2":{"15":1,"22":1,"24":2,"56":2,"92":1,"96":2,"124":1,"125":1,"154":2,"220":1,"233":1,"265":2}}],["constructs",{"2":{"7":1,"15":2,"22":1}}],["const",{"2":{"73":4,"74":3,"123":5,"127":1,"147":1,"217":2,"218":2,"238":5,"249":2,"257":2,"273":2,"276":2,"282":2,"285":1,"292":6,"293":1,"294":2}}],["constants",{"2":{"63":1,"293":1,"294":1}}],["constant",{"2":{"3":1,"79":14,"147":5}}],["cupti",{"2":{"239":1,"247":1}}],["cusparse",{"2":{"239":1,"247":1}}],["cusolver",{"2":{"239":1,"247":1}}],["customparamtype",{"2":{"22":2}}],["customabstractluxlayer",{"2":{"7":4}}],["customize",{"2":{"3":1}}],["custom",{"0":{"97":1,"129":1,"158":1},"1":{"130":1,"131":1,"132":1,"133":1,"134":1,"135":1},"2":{"3":1,"7":2,"8":1,"11":1,"22":1,"56":1,"66":1,"75":1,"93":1,"97":1,"127":2,
"134":1,"151":1,"155":1,"162":1,"181":1,"191":1,"199":1,"202":1,"203":1,"232":1,"248":1,"252":1}}],["cufft",{"2":{"239":1,"247":1}}],["cublas",{"2":{"239":1,"247":1}}],["cublaslt",{"2":{"64":1}}],["cu",{"2":{"190":4}}],["curand",{"2":{"13":1,"239":1,"247":1}}],["currently",{"2":{"4":4,"24":1,"42":3,"47":1,"64":2,"66":1,"87":1,"89":1,"120":1,"121":2,"122":1,"123":1,"141":1,"147":1,"148":1,"162":1,"171":2,"195":1}}],["current",{"2":{"2":1,"5":1,"7":1,"22":2,"41":1,"49":1,"97":1,"118":1,"131":1,"191":1,"265":1,"266":1}}],["cuarrays",{"2":{"89":1}}],["cuarray",{"2":{"5":3,"13":4,"89":3,"171":2,"186":2,"238":18}}],["cuiterator",{"2":{"5":2,"112":1,"178":1}}],["cudevice",{"2":{"4":1}}],["cudnn",{"2":{"2":2,"3":1,"62":1,"73":1,"241":2}}],["cudadevice",{"2":{"2":2,"4":2,"5":1,"111":1,"140":1,"150":2,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["cuda",{"0":{"190":1},"2":{"2":6,"3":2,"4":1,"5":5,"13":2,"28":7,"62":1,"64":1,"73":1,"82":1,"89":1,"100":1,"105":2,"112":1,"123":1,"139":1,"140":1,"141":1,"148":1,"171":4,"178":1,"181":3,"182":1,"186":7,"190":5,"198":3,"207":3,"215":3,"222":3,"230":1,"238":36,"239":10,"241":6,"247":10,"256":3,"262":3,"267":3,"275":3,"280":11,"281":3,"289":3,"298":3}}],["ch",{"2":{"265":2,"266":1}}],["chemfiles",{"2":{"230":1,"241":1}}],["cheat",{"2":{"165":1}}],["cheaper",{"2":{"67":1}}],["checking",{"2":{"153":1,"238":1}}],["check=",{"2":{"127":2}}],["check",{"2":{"8":4,"24":11,"28":1,"41":3,"46":3,"56":1,"70":1,"89":3,"123":1,"127":3,"170":1,"175":2,"184":3,"260":1}}],["checked",{"2":{"3":1}}],["checks",{"2":{"3":2,"22":2,"24":1,"56":2,"89":1}}],["christian",{"2":{"65":1}}],["chunks",{"2":{"51":1}}],["chosen",{"2":{"187":1}}],["choice",{"2":{"88":1,"206":1}}],["chopra",{"2":{"50":1}}],["choose",{"2":{"2":1}}],["chs",{"2":{"40":17,"46":19}}],["chaotic",{"2":{"89":1}}],["characterization",{"2":{"80":1}}],["channel",{"2":{"42":3,"46":12,"47":3,"78":1,"79":4,"80":3,"81":5,"89":4}}],["channe
ls",{"2":{"40":6,"41":2,"46":1,"47":1,"65":1,"80":1,"81":3,"89":2}}],["changed",{"2":{"104":1,"138":1,"160":1}}],["changesofvariablestestext",{"2":{"263":1}}],["changesofvariablesinversefunctionsext",{"2":{"263":1}}],["changesofvariables",{"2":{"263":3}}],["changes",{"0":{"104":1,"107":1,"111":1,"114":1,"115":1,"116":2},"2":{"7":1,"11":1,"39":9,"45":2,"57":1,"91":1,"102":1,"107":1,"109":1,"126":1,"138":1,"157":1,"162":1}}],["change",{"2":{"1":1,"2":1,"55":1,"89":2,"107":1,"110":1,"137":1,"147":1,"153":1,"235":1,"292":1}}],["chains",{"0":{"37":1},"2":{"37":4,"211":1,"214":1,"265":2}}],["chainrulescoresparsearraysext",{"2":{"263":1,"269":2,"291":2}}],["chainrulescoreext",{"2":{"241":1}}],["chainrulescore",{"2":{"82":1,"127":2,"263":1,"269":2,"291":2}}],["chainrules",{"2":{"24":2,"50":1,"62":1,"64":1,"87":1,"119":1,"122":1,"162":1,"241":1,"263":1,"291":1}}],["chain=chain",{"2":{"23":1}}],["chain",{"0":{"132":1,"146":1},"2":{"23":10,"25":3,"35":1,"37":2,"39":7,"43":2,"45":4,"46":6,"56":1,"77":1,"96":9,"114":1,"123":4,"124":1,"126":8,"127":6,"131":1,"132":2,"135":3,"143":2,"144":1,"146":2,"147":2,"157":2,"163":1,"164":1,"165":1,"166":1,"170":1,"171":1,"211":6,"220":1,"233":2,"238":18,"244":2,"251":1,"258":6,"265":5,"266":2,"271":2,"274":2,"285":1,"294":2}}],["crude",{"2":{"292":1}}],["crlibm",{"2":{"263":1,"291":1}}],["crayons",{"2":{"263":1}}],["crc32c",{"2":{"291":1}}],["crc",{"2":{"127":2}}],["critical",{"2":{"80":1,"121":1}}],["criteria",{"2":{"2":1,"24":1}}],["creating",{"0":{"202":1},"2":{"56":1,"76":1,"232":1}}],["created",{"2":{"56":1}}],["create",{"0":{"233":1,"244":1},"2":{"2":1,"5":1,"27":2,"35":1,"39":2,"44":3,"49":1,"52":2,"56":1,"79":1,"89":3,"123":2,"124":1,"127":1,"134":1,"189":1,"192":1,"199":1,"201":3,"202":2,"205":1,"233":2,"235":1,"238":3,"244":2,"246":1,"251":4,"260":6,"265":2,"274":1}}],["creates",{"2":{"1":1,"8":1,"15":1,"43":5,"56":1,"89":1,"192":1,"265":1}}],["crossentropyloss",{"2":{"50":6,"212":1,"234":1,"245":1,"279":1}}],["crosscor",{"2":{"40
":1,"115":1}}],["cross",{"2":{"40":11,"50":3,"77":1,"115":1,"117":1,"151":1}}],["cpuid",{"2":{"291":1}}],["cpu=true",{"2":{"235":2}}],["cpu`",{"2":{"149":1}}],["cpusummary",{"2":{"263":1,"291":1}}],["cpus",{"2":{"37":1,"59":1,"60":2,"173":1,"177":1}}],["cpudevice",{"2":{"2":4,"4":2,"149":2,"150":1,"217":1,"219":2,"249":1,"257":1,"276":1,"282":1}}],["cpu",{"0":{"99":1},"2":{"2":4,"5":1,"42":3,"64":1,"66":1,"70":1,"74":2,"93":1,"99":1,"115":2,"119":1,"120":3,"122":1,"123":2,"149":5,"150":7,"158":1,"171":3,"185":1,"198":2,"205":2,"206":1,"207":2,"209":1,"214":1,"215":2,"217":1,"222":2,"235":4,"239":2,"247":2,"249":1,"256":2,"257":1,"260":2,"262":2,"267":2,"273":1,"275":2,"276":1,"281":2,"282":1,"289":2,"298":2}}],["rk4",{"2":{"293":1,"294":1,"295":1,"297":1}}],["r₂",{"2":{"292":2}}],["r₁",{"2":{"292":2}}],["r2",{"2":{"292":1}}],["r=r1−r2",{"2":{"292":1}}],["r=σ",{"2":{"43":1}}],["rhat",{"2":{"265":1}}],["rhs",{"2":{"147":2}}],["rmath",{"2":{"263":2,"291":2}}],["risk",{"2":{"141":1}}],["right",{"2":{"45":1,"76":1,"79":5,"85":1}}],["rrules",{"2":{"122":1}}],["rrule",{"2":{"89":1,"127":1}}],["rrelu",{"2":{"67":5}}],["r^d",{"2":{"81":1}}],["r^2",{"2":{"81":1}}],["r1",{"2":{"79":5,"292":1}}],["ryan",{"2":{"65":1}}],["rᴰ",{"2":{"47":1}}],["r²",{"2":{"47":1}}],["r",{"2":{"47":9,"56":9,"79":14,"81":4,"89":1,"147":1,"292":3}}],["rn",{"2":{"79":5}}],["rnns",{"2":{"43":1}}],["rnn",{"2":{"43":3,"51":1}}],["rnncell",{"2":{"43":9,"116":1,"117":1}}],["rng=random",{"2":{"213":1,"258":2}}],["rngs",{"2":{"92":1}}],["rng",{"0":{"13":1},"2":{"3":3,"7":2,"8":6,"9":1,"10":1,"11":2,"13":11,"15":11,"16":24,"23":3,"35":1,"37":1,"39":4,"41":6,"43":3,"44":4,"45":18,"56":5,"63":6,"88":1,"89":4,"96":7,"97":4,"109":2,"114":1,"123":1,"124":1,"126":3,"127":2,"133":2,"135":1,"143":4,"146":6,"147":2,"153":9,"154":2,"155":5,"157":7,"158":9,"173":4,"186":13,"187":6,"188":2,"189":1,"192":5,"194":1,"195":1,"197":8,"205":1,"213":2,"220":1,"233":4,"243":3,"246":2,"254":4,"258":7,"261":4,"264":11,"265":1,"270
":4,"273":1,"280":4,"283":5,"285":4,"287":5,"288":1,"294":1}}],["rgb",{"2":{"40":1,"260":1}}],["rough",{"2":{"162":1}}],["route",{"2":{"154":1}}],["routines",{"2":{"162":1}}],["routine",{"2":{"89":1}}],["roundingemulator",{"2":{"291":1}}],["rounded",{"2":{"40":2,"42":3}}],["round",{"2":{"28":1,"89":2,"246":4,"264":1}}],["row",{"2":{"86":1,"88":2,"147":1,"189":3,"260":5,"261":5,"266":1}}],["rows",{"2":{"15":1,"77":1,"89":1,"189":1,"260":6}}],["rotated",{"2":{"82":2}}],["rotates",{"2":{"82":2,"89":1}}],["rotate",{"2":{"82":1,"89":2}}],["rotations",{"2":{"89":3}}],["rotation",{"0":{"82":1},"2":{"82":10,"89":7}}],["rootsforwarddiffext",{"2":{"263":1}}],["rootschainrulescoreext",{"2":{"263":1}}],["roots",{"2":{"263":3}}],["root",{"2":{"30":7,"207":1,"215":1,"239":1,"247":1,"256":1,"262":1,"275":1,"281":1,"289":1}}],["robin",{"2":{"28":1}}],["rocarray",{"2":{"13":4}}],["rocrand",{"2":{"13":1}}],["rocm",{"2":{"3":1,"73":2,"140":1,"141":1,"181":3,"182":1}}],["rademacher",{"2":{"170":1}}],["radians",{"2":{"82":2}}],["ra",{"2":{"123":18,"124":9,"213":2}}],["ratio≥0",{"2":{"292":1}}],["ratio≤1",{"2":{"292":1}}],["ratio",{"2":{"292":6,"293":2,"294":1,"295":1,"297":1}}],["ratiosfixedpointnumbersext",{"2":{"263":1,"291":1}}],["ratios",{"2":{"263":2,"291":2}}],["rationale",{"2":{"107":1}}],["rate=1e",{"2":{"261":1}}],["rate=16000",{"2":{"89":1}}],["rate",{"2":{"89":4,"197":1,"261":1,"265":1}}],["rather",{"2":{"3":1,"43":1,"62":1,"80":1,"88":1,"89":2,"92":1,"202":1}}],["raw",{"2":{"86":1,"89":1,"210":3,"231":3}}],["raia",{"2":{"50":1}}],["ran",{"2":{"126":1}}],["randc64",{"2":{"16":1}}],["randc32",{"2":{"16":1}}],["randc16",{"2":{"16":1}}],["rand64",{"2":{"16":1}}],["rand32",{"2":{"16":1}}],["rand16",{"2":{"16":1}}],["randnc64",{"2":{"16":1}}],["randnc32",{"2":{"16":1}}],["randnc16",{"2":{"16":1}}],["randn64",{"2":{"16":1}}],["randn32",{"2":{"16":1}}],["randn16",{"2":{"16":1}}],["randn",{"2":{"15":1,"35":1,"37":1,"39":2,"45":2,"56":1,"77":1,"83":13,"123":1,"124":1,"126":1,"133":1,"
143":1,"146":1,"147":1,"149":1,"150":1,"153":1,"154":1,"155":1,"157":2,"158":6,"163":2,"164":2,"189":2,"192":1,"194":1,"195":1,"197":4,"258":1,"260":1,"261":2,"270":1,"283":1,"285":1}}],["random123",{"2":{"263":1}}],["randomnumbers",{"2":{"263":1}}],["randomness",{"0":{"192":1},"2":{"43":3,"92":1,"96":1,"167":1,"188":1,"192":1}}],["randomized",{"2":{"67":1}}],["randomly",{"2":{"41":2,"67":1}}],["random",{"2":{"13":1,"15":8,"16":12,"23":2,"35":2,"37":2,"39":2,"41":3,"45":8,"56":3,"63":4,"88":1,"89":2,"96":4,"97":3,"123":3,"124":1,"126":1,"132":1,"143":2,"146":4,"147":25,"153":4,"155":3,"157":4,"158":4,"162":1,"167":1,"171":1,"173":2,"186":3,"188":5,"189":1,"192":8,"197":4,"200":1,"205":1,"209":1,"217":1,"219":1,"220":1,"230":1,"233":2,"241":1,"243":1,"249":1,"253":1,"254":2,"257":1,"258":1,"261":1,"263":1,"264":2,"269":1,"270":2,"276":1,"280":2,"282":1,"283":1,"287":2,"288":1,"291":1,"294":1}}],["rand",{"2":{"5":1,"56":1,"76":3,"89":6,"96":3,"97":1,"165":1,"166":1,"170":2,"171":1,"173":2,"189":5,"190":1,"192":3,"264":8,"283":2}}],["rank",{"2":{"4":3,"29":3,"137":1,"138":1}}],["rangearrays",{"2":{"263":1,"291":1}}],["ranges",{"2":{"39":1}}],["range",{"2":{"3":2,"85":1,"109":1,"195":1,"218":1,"253":4,"266":23,"270":1,"293":1}}],["rule",{"2":{"47":1,"87":1,"274":2}}],["rules",{"0":{"134":1},"2":{"0":1,"52":1,"121":1}}],["runs",{"2":{"82":1}}],["runtimegeneratedfunctions",{"2":{"230":1,"263":1}}],["runtime",{"2":{"70":2,"239":2,"241":1,"247":2}}],["running",{"0":{"133":1},"2":{"46":16,"63":2,"65":18,"89":1,"97":1,"107":1,"123":1,"126":5,"127":11,"143":2,"146":2,"160":1,"174":1,"202":1}}],["run",{"2":{"2":1,"3":1,"5":1,"41":1,"69":1,"72":2,"73":1,"74":1,"84":2,"89":1,"96":1,"114":1,"123":3,"127":3,"134":1,"141":1,"147":5,"148":1,"153":1,"158":1,"171":3,"173":1,"179":3,"192":1,"202":1,"227":1,"266":1}}],["red",{"2":{"218":1,"221":2,"264":1}}],["reduction",{"2":{"42":3,"50":1,"67":1,"84":2}}],["reducing",{"2":{"39":1,"65":1}}],["reduces",{"2":{"89":1}}],["reduced",{"2":{"81
":1}}],["reduce",{"2":{"30":3,"45":1,"68":1,"80":1,"147":2,"265":1}}],["reiterate",{"2":{"158":1}}],["renamed",{"2":{"107":2,"110":1,"111":2}}],["renormalize",{"2":{"46":2}}],["requested",{"2":{"200":1,"269":1}}],["request",{"2":{"101":1,"107":1}}],["requisites",{"2":{"96":1}}],["required",{"2":{"89":2}}],["requirements",{"2":{"39":1,"274":1}}],["require",{"2":{"35":1,"37":1,"56":1,"65":1,"140":1,"220":1}}],["requires",{"2":{"8":1,"10":1,"28":4,"51":1,"100":1,"130":1,"147":1,"220":1,"243":1,"269":1,"291":1}}],["rev=true",{"2":{"254":1}}],["revising",{"2":{"93":1}}],["reversing",{"2":{"89":1}}],["reverses",{"2":{"45":1}}],["reversesequence",{"2":{"45":3}}],["reversed",{"2":{"45":1,"89":1}}],["reversediffadjoint",{"2":{"232":1}}],["reversediff",{"2":{"8":2,"49":1,"52":1,"62":1,"64":1,"66":1,"70":1,"119":1,"120":1,"133":1,"235":1,"291":1}}],["reverse",{"2":{"18":2,"24":2,"45":2,"56":1,"70":3,"84":2,"89":2,"119":7,"123":1,"127":1,"147":2,"193":2,"194":2,"252":3,"285":1}}],["refresher",{"2":{"153":1}}],["ref",{"2":{"89":7}}],["referred",{"2":{"189":1}}],["referring",{"2":{"85":1}}],["refer",{"2":{"97":1,"216":1,"228":1,"248":1}}],["references",{"2":{"15":6,"45":1,"46":1,"50":4,"63":2,"65":4}}],["reference",{"0":{"14":1},"1":{"15":1,"16":1},"2":{"81":1,"124":1,"208":1,"254":7,"280":102,"287":2}}],["reflecting",{"2":{"79":2}}],["reflect",{"2":{"79":9,"89":2}}],["re",{"2":{"75":1,"89":1,"97":1,"200":1,"205":1,"214":1,"254":1,"261":1,"269":1,"280":1,"287":1}}],["reexport",{"2":{"75":1,"114":2,"263":1}}],["remake",{"2":{"220":1}}],["remark",{"2":{"164":1}}],["remains",{"2":{"63":1}}],["remember",{"2":{"124":1,"125":1,"128":1,"153":1,"162":1,"171":1,"202":1,"235":1}}],["remove",{"2":{"192":1}}],["removes",{"2":{"89":1}}],["removed",{"0":{"114":1,"139":1},"2":{"40":1,"89":1,"104":1,"107":2,"111":1,"114":10,"115":4,"123":1,"139":1,"163":2,"164":1}}],["reuse",{"2":{"62":1,"64":1}}],["reusing",{"2":{"62":1,"64":1}}],["relocatablefolders",{"2":{"263":1,"291":1}}],["reltol",{"2":{"2
38":13}}],["reltol=1",{"2":{"233":1}}],["reliance",{"2":{"163":2}}],["reliable",{"2":{"120":4}}],["reliability",{"2":{"93":1}}],["relies",{"2":{"158":2,"179":1,"192":1}}],["relevant",{"2":{"123":1,"263":1,"292":1}}],["release",{"2":{"102":2,"109":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["released",{"2":{"72":1}}],["relativisticorbitmodel",{"2":{"293":3}}],["relatively",{"2":{"162":1,"171":1}}],["relation",{"2":{"43":1}}],["related",{"2":{"101":1}}],["rely",{"2":{"56":1,"61":1,"107":1,"192":1}}],["relu6",{"2":{"67":3}}],["relu",{"2":{"35":1,"37":4,"39":4,"43":1,"45":3,"46":8,"56":3,"62":1,"67":6,"87":1,"97":1,"126":5,"127":5,"147":4,"157":2,"171":1,"211":12,"244":2,"271":2,"274":4,"278":1}}],["retcode",{"2":{"296":1}}],["retrieve",{"2":{"44":1}}],["retained",{"2":{"41":1}}],["retuened",{"2":{"3":2}}],["returned",{"2":{"13":1,"15":1,"25":1,"32":1,"35":1,"37":1,"41":4,"43":3,"44":1,"45":1,"49":10,"51":1,"63":1,"65":1,"83":2,"89":4}}],["returning",{"2":{"8":1,"153":1,"205":2,"280":1}}],["returns",{"2":{"3":3,"7":2,"8":3,"15":6,"18":2,"19":1,"25":1,"39":7,"40":2,"41":3,"42":9,"43":8,"44":5,"45":6,"46":5,"47":2,"49":3,"50":6,"51":2,"54":1,"59":2,"63":2,"65":4,"66":1,"71":1,"76":1,"85":2,"87":3,"88":1,"89":16,"112":1,"149":3,"164":1,"165":1,"166":1,"220":1,"238":6,"284":1,"294":1}}],["return",{"2":{"2":3,"3":5,"7":1,"9":1,"10":1,"11":2,"15":6,"16":24,"22":1,"23":2,"39":3,"43":4,"45":1,"49":8,"50":9,"51":2,"52":2,"56":19,"67":3,"76":2,"89":13,"92":1,"97":3,"107":1,"123":2,"124":1,"127":2,"132":1,"134":2,"143":1,"144":3,"145":4,"147":4,"153":5,"154":2,"155":3,"158":1,"163":1,"164":1,"165":1,"166":1,"168":1,"169":1,"170":1,"187":1,"197":1,"201":1,"202":2,"203":2,"204":1,"205":1,"210":1,"212":1,"213":1,"218":1,"220":3,"231":1,"232":5,"233":1,"234":1,"236":2,"238":3,"242":2,"243":3,"244":1,"245":1,"246":1,"251":2,"252":3,"254":1,"255":1,"258":8,"259":2,"260":6,"261":2,"264":1,"265":2,"266":2,"270":1,"273":1,"2
74":1,"277":1,"278":4,"279":1,"280":2,"283":2,"284":6,"285":5,"286":1,"287":2,"292":11,"293":1,"294":1,"295":2}}],["regression",{"0":{"197":1},"2":{"173":1,"197":1}}],["regressions",{"2":{"8":1,"214":1}}],["reg",{"2":{"163":2,"164":2}}],["region",{"2":{"89":2}}],["regions",{"2":{"89":3,"266":1}}],["registers",{"2":{"280":11}}],["registered",{"2":{"56":1,"72":1}}],["registry",{"2":{"72":1,"98":1}}],["regarding",{"2":{"158":1,"193":1}}],["regard",{"2":{"83":2}}],["regularization",{"2":{"163":1,"170":1,"265":1}}],["regularized",{"2":{"67":1}}],["regularisation",{"2":{"87":1}}],["regular",{"2":{"37":2,"56":2,"171":1}}],["rewrite",{"2":{"35":1,"59":1,"250":1}}],["recipesbase",{"2":{"263":1,"291":1}}],["rec",{"2":{"258":4}}],["rectifier",{"2":{"67":1}}],["rectifiers",{"2":{"15":2}}],["rectified",{"2":{"67":6}}],["recon",{"2":{"260":4,"261":2}}],["reconstruct",{"2":{"260":2,"261":1}}],["reconstruction",{"2":{"153":1,"260":3}}],["record",{"2":{"71":1,"255":1,"266":1}}],["recorded",{"2":{"70":1,"71":4}}],["recomputing",{"2":{"63":1}}],["recommendations",{"0":{"120":1},"2":{"172":1,"173":1}}],["recommendation",{"2":{"93":1}}],["recommend",{"2":{"7":1,"93":1,"140":1,"151":1,"153":1,"191":1,"195":1,"203":1,"206":1,"229":1,"232":1,"238":1,"243":1,"276":1}}],["recommended",{"0":{"149":1},"2":{"3":1,"7":2,"8":1,"22":1,"35":1,"52":1,"56":2,"61":1,"68":1,"75":1,"87":1,"93":1,"97":1,"151":1,"153":2,"158":1,"162":1,"163":2,"168":1,"170":1,"171":1,"174":1,"182":1,"204":1,"294":1}}],["recognition",{"2":{"50":1,"86":2}}],["receives",{"2":{"45":1}}],["recvbuf",{"2":{"30":6}}],["recur",{"2":{"43":3}}],["recurrence",{"2":{"43":4,"202":1}}],["recurrent",{"0":{"43":1},"2":{"15":1,"43":10,"78":1,"116":1,"199":2}}],["recurse",{"2":{"52":3}}],["recurses",{"2":{"24":1}}],["recursion",{"2":{"52":1,"53":4}}],["recursivefactorization",{"2":{"230":1,"291":1}}],["recursivearraytoolsreversediffext",{"2":{"230":1,"291":1}}],["recursivearraytoolszygoteext",{"2":{"230":1,"291":1}}],["recursivearraytoolsfo
rwarddiffext",{"2":{"230":1,"263":1,"291":1}}],["recursivearraytoolsfastbroadcastext",{"2":{"230":1,"291":1}}],["recursivearraytoolstrackerext",{"2":{"230":1,"263":1,"291":1}}],["recursivearraytoolssparsearraysext",{"2":{"230":1,"263":1,"291":1}}],["recursivearraytoolsstructarraysext",{"2":{"230":1,"263":1,"291":1}}],["recursivearraytools",{"2":{"230":8,"263":5,"291":8}}],["recursive",{"0":{"52":1},"2":{"52":9,"173":1}}],["recursively",{"2":{"10":1,"52":5}}],["repository",{"2":{"162":1}}],["report",{"2":{"21":1,"101":1,"122":1}}],["reproducer",{"2":{"141":1}}],["represents",{"2":{"86":2,"265":1}}],["representation",{"2":{"56":1}}],["representing",{"2":{"47":1,"81":1}}],["represent",{"2":{"15":1,"83":1}}],["reparameterized",{"2":{"46":2}}],["reparameterization",{"2":{"46":1,"258":1}}],["repack",{"2":{"8":1}}],["repeated",{"2":{"84":2}}],["repeatedly",{"2":{"39":1}}],["repeatedlayer",{"2":{"39":1}}],["repeating",{"2":{"79":1}}],["repeat",{"2":{"79":8,"201":2}}],["repeats",{"2":{"39":6,"43":5}}],["replacing",{"2":{"87":1,"157":1,"189":1}}],["replacement",{"2":{"89":1,"107":1}}],["replaced",{"2":{"59":1,"88":1}}],["replace",{"2":{"59":2,"135":1,"176":5}}],["replaces",{"2":{"24":1,"42":3,"89":1}}],["repl",{"2":{"69":1,"72":1,"95":1,"174":2,"179":1}}],["replicated",{"2":{"192":1}}],["replicate",{"2":{"7":1,"8":1,"39":1,"192":3,"258":1}}],["readme",{"2":{"123":1}}],["read",{"2":{"101":1,"147":1,"191":1}}],["reason",{"2":{"120":1,"173":1}}],["reasonable",{"2":{"39":1}}],["reasons",{"2":{"8":1,"89":1,"232":1}}],["realnvp",{"2":{"282":2,"285":5,"287":1}}],["realdot",{"2":{"263":1}}],["reallocations",{"2":{"62":1,"64":1}}],["really",{"2":{"37":1,"56":1,"89":2,"122":2,"153":1,"189":1}}],["real",{"2":{"15":1,"41":1,"50":4,"78":1,"80":1,"81":4,"82":2,"83":1,"89":11,"134":1,"255":2}}],["reactantbackend",{"2":{"274":1}}],["reactantabstractfftsext",{"2":{"269":2}}],["reactantarrayinterfaceext",{"2":{"200":1,"269":1}}],["reactantoffsetarraysext",{"2":{"269":2}}],["reactantspecialfunc
tionsext",{"2":{"200":1,"269":1}}],["reactantstatisticsext",{"2":{"200":1,"269":1}}],["reactantnnlibext",{"2":{"200":1,"269":1}}],["reactantdevice",{"2":{"2":2,"97":1,"147":1,"205":2,"213":2}}],["reactant",{"0":{"74":1,"123":1,"147":1,"257":1},"1":{"124":1,"258":1,"259":1,"260":1,"261":1,"262":1},"2":{"2":1,"49":2,"53":1,"74":11,"93":1,"97":5,"99":1,"119":1,"120":6,"122":4,"123":27,"124":5,"147":16,"148":6,"200":10,"205":1,"209":3,"214":1,"249":2,"257":2,"269":12,"273":5,"274":2,"276":2,"277":2,"282":2}}],["resolve",{"2":{"166":1}}],["resolution",{"2":{"47":1,"81":3}}],["resources",{"0":{"101":1}}],["research",{"2":{"63":1}}],["reserved",{"2":{"56":1}}],["resettablestacks",{"2":{"230":1,"291":1}}],["resets",{"2":{"3":1}}],["reset",{"2":{"3":1}}],["rescale",{"2":{"46":4}}],["reshape",{"2":{"50":4,"79":12,"80":3,"81":1,"83":1,"84":1,"85":1,"89":2,"97":1,"147":1,"168":2,"169":2,"191":2,"201":2,"210":1,"231":1,"232":4,"242":2,"253":2,"255":2,"258":1,"265":1,"270":1}}],["reshaped",{"2":{"45":1,"62":1}}],["reshapes",{"2":{"45":1}}],["reshapelayer",{"2":{"45":3}}],["reshaping",{"2":{"15":1,"83":1,"89":1}}],["res",{"2":{"39":4,"56":2,"220":5,"296":1,"297":1}}],["resnet",{"2":{"39":1}}],["respective",{"2":{"91":1,"137":2,"158":1,"191":1}}],["respectively",{"2":{"80":2}}],["respect",{"2":{"19":2,"70":1,"82":1,"164":2}}],["responsibility",{"2":{"8":1,"155":1}}],["results",{"0":{"221":1,"255":1,"288":1,"297":1},"2":{"89":1,"123":1,"170":2,"254":7,"261":1,"266":2,"274":1,"280":103,"287":2,"293":1,"294":1,"296":1,"297":1}}],["resulting",{"2":{"81":3}}],["result",{"2":{"15":1,"19":2,"30":4,"46":1,"61":1,"71":4,"77":1,"83":3,"89":8,"92":1,"155":4}}],["restoring",{"2":{"89":1}}],["restricted",{"2":{"52":1,"78":1,"83":1}}],["rest",{"2":{"15":1,"202":2,"203":2}}],["restarted",{"2":{"1":1,"57":1}}],["mtensor",{"2":{"292":4}}],["mtlarray",{"2":{"13":3}}],["mvnormal",{"2":{"265":1}}],["mcse",{"2":{"265":1}}],["mcmc",{"2":{"265":1,"266":1}}],["mcmcchains",{"2":{"263":1,"265":1}}],["mcmcdi
agnostictools",{"2":{"263":1}}],["mcclelland",{"2":{"15":1}}],["mnist\\ttraining",{"2":{"246":1}}],["mnist\\ttime",{"2":{"246":50}}],["mnist",{"0":{"208":1,"210":1,"229":1,"231":1,"240":1,"257":1,"259":1},"1":{"209":1,"210":1,"211":1,"212":1,"213":1,"214":1,"215":1,"230":1,"231":1,"232":1,"233":1,"234":1,"235":1,"236":1,"237":1,"238":1,"239":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1,"247":1,"258":1,"259":1,"260":1,"261":1,"262":1},"2":{"209":1,"210":2,"230":1,"231":2,"242":1,"246":2,"257":1,"259":2}}],["m×hop",{"2":{"89":1}}],["mkl",{"2":{"64":1,"263":1,"269":2,"291":1}}],["mse",{"2":{"252":3}}],["mseloss",{"2":{"50":4,"96":2,"97":1,"123":1,"124":1,"197":1,"220":1,"252":1,"260":1,"273":1,"295":3}}],["ms",{"2":{"200":59,"230":128,"241":76,"263":448,"269":105,"291":403}}],["msleloss",{"2":{"50":2}}],["my",{"2":{"214":1}}],["mybias",{"2":{"155":3}}],["myinputtype",{"2":{"130":3}}],["myfancychain",{"2":{"39":2}}],["myweight",{"2":{"22":1,"155":4}}],["m2",{"2":{"35":3}}],["m",{"2":{"35":3,"39":2,"53":8,"65":2,"73":2,"76":2,"78":2,"89":5,"232":2,"236":2,"261":2,"264":15,"280":2,"285":2,"292":4,"293":5,"294":3}}],["mpitrampoline",{"2":{"241":1}}],["mpich",{"2":{"230":1,"241":1}}],["mpi",{"0":{"181":1},"2":{"27":2,"28":8,"140":2,"141":1,"181":5}}],["mpibackend",{"2":{"27":2,"28":2}}],["mₘ",{"2":{"19":2}}],["m₂",{"2":{"19":2,"292":7}}],["m₁",{"2":{"19":2,"292":6}}],["mlstyle",{"2":{"263":1}}],["mljmodelinterface",{"2":{"263":1}}],["mlx",{"2":{"257":2,"276":1,"282":1}}],["mldatasets",{"2":{"209":1,"230":3,"241":3,"257":1,"276":1}}],["mldatadevicestrackerext",{"2":{"263":2,"291":2}}],["mldatadevicesfillarraysext",{"2":{"263":2,"269":2,"291":2}}],["mldatadevicessparsearraysext",{"2":{"263":2,"269":2,"291":2}}],["mldatadeviceszygoteext",{"2":{"241":2,"291":2}}],["mldatadevicesonehotarraysext",{"2":{"241":2}}],["mldatadeviceschainrulescoreext",{"2":{"263":1,"269":1,"291":1}}],["mldatadeviceschainrulesext",{"2":{"241":2,"263":2,"291":2}}],["mldatadevicescudnnext",{"2":{"24
1":2}}],["mldatadevicescudaext",{"2":{"241":2}}],["mldatadevicescomponentarraysext",{"2":{"241":2,"291":2}}],["mldatadevicesgpuarraysext",{"2":{"241":2,"291":1}}],["mldatadevicesreversediffext",{"2":{"291":2}}],["mldatadevicesrecursivearraytoolsext",{"2":{"230":2,"263":2,"291":2}}],["mldatadevicesreactantext",{"2":{"200":2,"269":2}}],["mldatadevicesmlutilsext",{"2":{"200":2,"241":2}}],["mldatadevices",{"0":{"0":1,"110":1},"1":{"1":1,"2":1,"3":1,"4":1,"5":1,"111":1,"112":1},"2":{"0":1,"1":1,"2":3,"3":12,"4":1,"5":2,"110":2,"150":2,"182":2,"198":3,"200":2,"207":3,"215":3,"217":1,"219":5,"222":3,"230":1,"239":3,"241":8,"247":3,"249":1,"256":3,"257":1,"262":3,"263":7,"267":3,"269":5,"275":3,"276":1,"281":3,"282":1,"289":3,"291":11,"298":3}}],["ml",{"2":{"189":1,"220":2}}],["mlir",{"2":{"97":1,"123":1,"147":3}}],["mlp",{"0":{"268":1},"1":{"269":1,"270":1,"271":1,"272":1,"273":1,"274":1,"275":1},"2":{"56":1,"97":1,"123":1,"244":1,"251":4,"268":1,"285":2}}],["mlutils",{"0":{"32":1},"2":{"5":3,"32":4,"112":1,"137":1,"178":2,"200":3,"201":3,"209":1,"217":1,"219":1,"230":1,"241":2,"249":1,"257":1,"282":1}}],["motion",{"2":{"292":1,"293":1,"294":1}}],["motivating",{"2":{"8":1,"55":1}}],["moons",{"0":{"283":1},"2":{"283":10,"287":1}}],["mooncake",{"2":{"119":1}}],["mosaicviews",{"2":{"263":1,"291":1}}],["mostly",{"2":{"60":1,"93":1,"102":1,"114":1,"141":1,"232":1}}],["most",{"2":{"7":1,"8":1,"15":1,"24":1,"49":2,"52":1,"53":1,"66":1,"72":1,"89":1,"93":1,"110":1,"118":1,"120":4,"124":1,"151":1,"157":1,"166":1,"168":1,"171":1,"228":1,"294":1}}],["moment",{"2":{"165":1,"292":1}}],["momentum",{"2":{"46":2,"65":6}}],["momentum=0",{"2":{"46":2}}],["monolithic",{"2":{"93":1}}],["monotonic",{"2":{"67":1}}],["month",{"2":{"90":1}}],["mobilenetv3",{"2":{"67":1}}],["mouthful",{"2":{"25":1}}],["moved",{"0":{"115":1},"2":{"107":1,"115":2,"123":1}}],["move",{"2":{"21":1,"123":1,"124":2,"190":1,"191":1,"219":3}}],["movement",{"2":{"3":2}}],["mod",{"2":{"260":1}}],["modivations",{"2":{"114":1}
}],["modified",{"2":{"52":2,"132":1}}],["modify",{"2":{"46":1}}],["modulating",{"2":{"50":1}}],["modules=",{"2":{"69":2}}],["modules",{"2":{"69":8}}],["module",{"0":{"51":1},"2":{"21":1,"26":1,"51":1,"69":1,"114":1,"115":1,"147":1}}],["mode=",{"2":{"85":2}}],["modes",{"2":{"47":1,"57":1,"66":1}}],["mode",{"0":{"160":1},"2":{"18":2,"24":7,"41":6,"43":2,"46":3,"47":5,"57":2,"63":2,"65":2,"66":5,"70":3,"85":6,"115":1,"119":1,"120":1,"124":1,"125":3,"126":2,"127":2,"160":4,"163":2,"173":1,"174":1,"179":1,"193":1,"194":4,"195":2,"261":1,"280":1,"292":1}}],["model`",{"2":{"232":1}}],["modeling",{"2":{"87":1}}],["models",{"0":{"35":2,"37":1,"123":1,"125":1,"135":1,"147":1,"216":1,"227":1},"1":{"124":1,"126":1,"127":1,"128":1,"217":1,"218":1,"219":1,"220":1,"221":1,"222":1},"2":{"8":1,"12":1,"35":2,"37":2,"49":2,"56":2,"68":1,"93":6,"96":1,"97":1,"123":1,"125":2,"128":1,"131":1,"147":3,"171":1,"186":1,"189":1,"195":1,"199":1,"203":1,"216":1,"227":2}}],["model",{"0":{"126":1,"142":1,"205":1,"206":1,"211":1,"214":1,"220":1,"258":1,"261":1,"278":1,"280":1,"285":1,"287":1,"290":1,"293":1,"294":1},"1":{"143":1,"144":1,"145":1,"146":1,"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1,"298":1},"2":{"8":8,"23":1,"24":2,"25":2,"35":6,"37":11,"39":16,"43":3,"45":12,"46":4,"49":10,"50":26,"55":3,"56":14,"67":1,"77":2,"91":1,"96":6,"97":11,"123":18,"124":8,"125":3,"126":17,"127":25,"128":1,"131":1,"132":1,"133":2,"134":2,"135":2,"137":2,"142":1,"143":6,"146":7,"147":12,"153":1,"154":5,"157":9,"158":11,"160":1,"161":1,"163":10,"164":7,"165":7,"166":7,"168":2,"169":2,"170":9,"171":4,"173":6,"197":11,"202":2,"203":1,"204":2,"205":13,"206":6,"208":1,"211":4,"212":2,"213":10,"214":4,"220":9,"221":2,"227":1,"232":11,"233":8,"234":2,"235":10,"236":8,"238":17,"244":4,"245":2,"246":9,"252":4,"253":1,"254":7,"260":7,"261":5,"265":7,"266":1,"273":2,"274":6,"279":2,"280":5,"285":6,"286":2,"287":5,"288":3,"292":7,"293":4,"294":11,"295":1,"296":1,"297":2}}],["more",{"2":{"5":1,"7":1,"8":1,"11"
:1,"15":2,"35":1,"37":2,"39":1,"47":1,"51":1,"52":1,"55":1,"56":3,"60":1,"70":1,"76":1,"77":1,"83":3,"89":7,"93":1,"97":1,"101":1,"104":1,"107":1,"109":1,"112":1,"114":1,"116":2,"140":1,"163":1,"164":1,"173":1,"175":1,"180":1,"183":1,"184":1,"189":3,"193":1,"202":1,"203":1,"204":1,"216":1,"266":1}}],["mechanics",{"2":{"292":1}}],["mechanism",{"2":{"87":1,"93":1}}],["message",{"2":{"162":1}}],["messages",{"2":{"83":1}}],["mersennetwister",{"2":{"133":1,"186":2,"270":1}}],["merged",{"2":{"43":1}}],["merge",{"2":{"22":1,"23":1,"43":2,"132":1,"202":1,"285":1}}],["mentioned",{"2":{"98":1,"153":1}}],["mel",{"2":{"89":3}}],["mels",{"2":{"89":6}}],["melscale",{"2":{"89":2}}],["medium",{"2":{"64":1}}],["medical",{"2":{"50":1}}],["measure",{"2":{"50":1}}],["meaningless",{"2":{"63":1}}],["meanpool",{"2":{"42":1,"78":3}}],["meant",{"2":{"24":1,"49":1,"53":1,"55":1,"187":1}}],["mean",{"2":{"15":2,"42":7,"46":13,"50":17,"63":2,"65":13,"78":1,"84":1,"88":2,"126":1,"127":2,"143":1,"146":1,"155":1,"197":1,"252":3,"254":12,"265":1,"266":1,"279":1,"286":1}}],["means",{"2":{"15":1,"21":1,"56":1,"82":1,"83":1,"135":1,"153":1,"155":1,"173":1,"189":1,"191":1,"220":2}}],["metadata",{"2":{"277":1}}],["metaldevice",{"2":{"4":2}}],["metal",{"2":{"2":1,"3":2,"13":1,"73":2,"96":1,"100":2,"148":4,"182":1}}],["met",{"2":{"15":1,"24":1}}],["methodinstance",{"2":{"238":3}}],["methoderror",{"2":{"133":1}}],["method=",{"2":{"82":7}}],["methods",{"2":{"47":1,"87":1,"89":1,"167":1,"171":1,"187":2,"213":1,"216":2,"220":1,"232":1,"233":1,"242":1,"258":1,"265":1,"283":1,"284":1,"285":1,"292":2}}],["method",{"2":{"15":3,"82":1,"87":1,"89":4,"97":1,"123":1,"127":1,"130":1,"133":2,"147":1,"168":1,"169":1,"170":1,"201":1,"203":1,"204":1,"210":1,"212":1,"217":1,"231":1,"232":1,"234":1,"243":1,"244":1,"245":1,"249":1,"252":2,"257":1,"258":1,"259":1,"260":1,"265":1,"266":2,"270":1,"276":1,"277":1,"278":1,"279":1,"282":1,"283":1,"286":1,"292":3,"294":1,"295":2}}],["memory=false",{"2":{"43":1}}],["memory=zeros32",
{"2":{"43":1}}],["memory",{"2":{"5":1,"43":20,"62":1,"64":1,"89":2,"198":1,"202":1,"207":1,"215":1,"222":1,"239":2,"247":2,"256":1,"262":1,"267":1,"275":1,"280":11,"281":1,"289":1,"298":1}}],["mu",{"2":{"258":1}}],["mu=dense",{"2":{"258":1}}],["mutability",{"0":{"191":1}}],["mutables",{"2":{"158":1}}],["mutable",{"2":{"153":1,"158":1}}],["mutations",{"2":{"91":1,"93":2,"120":1}}],["mutation",{"2":{"62":1,"64":1,"89":1,"120":1,"191":2}}],["mutating",{"2":{"62":1,"64":1}}],["mutate",{"2":{"191":1}}],["mutated",{"2":{"52":1,"61":1,"84":1,"191":2,"192":1}}],["mutates",{"2":{"8":1}}],["muladdmacro",{"2":{"291":1}}],["mult",{"2":{"80":1,"89":2}}],["multilayer",{"2":{"268":1}}],["multithreaded",{"2":{"89":1}}],["multihead",{"2":{"76":1}}],["multiply",{"2":{"89":2,"189":1}}],["multiplied",{"2":{"63":1,"88":1,"89":1}}],["multiplication",{"2":{"60":3,"83":4}}],["multiples",{"2":{"81":1}}],["multiple",{"0":{"164":1},"2":{"39":1,"43":1,"46":1,"62":1,"64":1,"84":2,"89":2,"148":1,"171":2,"266":1}}],["multigate",{"2":{"51":1}}],["multi",{"0":{"4":1},"2":{"44":1,"50":1,"147":1}}],["mul",{"2":{"60":2,"80":2,"83":18,"89":4,"176":1}}],["much",{"2":{"50":2,"67":1,"93":1,"138":1,"266":1}}],["must",{"2":{"1":1,"3":2,"4":1,"7":1,"15":4,"19":2,"23":2,"25":2,"28":1,"32":2,"35":1,"37":2,"39":3,"40":2,"41":3,"43":7,"44":4,"46":1,"47":1,"49":3,"57":1,"65":1,"76":2,"77":1,"79":2,"83":1,"86":3,"89":6,"92":4,"93":1,"137":1,"147":2,"153":1,"156":1,"273":1,"292":4}}],["major",{"0":{"105":1,"108":1,"112":1},"2":{"102":1,"109":1,"147":2}}],["masses",{"2":{"292":1}}],["mass2",{"2":{"292":5}}],["mass1",{"2":{"292":6}}],["mass",{"2":{"292":14,"293":2,"294":1,"295":1,"297":1}}],["mass=1",{"2":{"292":3}}],["massachusetts",{"2":{"90":1}}],["master",{"2":{"137":1}}],["masked",{"2":{"284":2}}],["maskedcoupling",{"2":{"284":4,"285":2}}],["masks",{"2":{"76":1}}],["mask",{"2":{"41":10,"63":9,"76":8,"278":2,"279":3,"284":7,"285":8}}],["madness",{"2":{"89":2}}],["made",{"2":{"49":1,"52":5,"83":1,"88":1,"102":1,"1
07":1,"110":1,"153":1,"162":1}}],["macrotools",{"2":{"263":1}}],["macro",{"2":{"56":4,"69":1,"70":1,"97":2,"123":1,"151":2,"203":1,"232":2}}],["machines",{"2":{"50":2}}],["machine",{"2":{"15":1,"63":1,"65":1,"67":1,"167":1,"189":1,"214":1,"292":1}}],["maeloss",{"2":{"50":2}}],["makie",{"2":{"263":1,"269":1,"291":1}}],["makiecore",{"2":{"263":1,"291":1}}],["making",{"2":{"49":1,"91":1,"93":1,"110":1,"123":1,"171":1}}],["makes",{"2":{"23":1,"43":1,"50":1,"93":1,"117":1,"151":1}}],["make",{"2":{"10":2,"25":1,"43":1,"49":1,"52":6,"76":2,"83":1,"89":2,"91":2,"93":1,"126":1,"138":1,"153":1,"155":1,"157":1,"163":1,"164":1,"171":1,"179":1,"189":1,"193":1,"201":1,"265":1,"283":4,"294":2}}],["magnitude",{"2":{"46":2,"89":2}}],["maxiters",{"2":{"254":4,"287":4}}],["maxiters=1000",{"2":{"296":1}}],["maxiters=500",{"2":{"220":1}}],["maxiters=epochs",{"2":{"220":1}}],["maximally",{"2":{"89":2}}],["maximum",{"2":{"45":1,"62":1,"64":1,"67":1,"89":5,"147":2,"253":2,"255":1}}],["maxout",{"2":{"45":5}}],["max",{"2":{"42":4,"45":1,"50":2,"67":7,"78":1,"84":1,"89":1,"253":8,"255":1,"258":24,"261":2,"266":1,"283":3}}],["maxpool",{"2":{"37":2,"42":1,"78":1,"147":2,"211":6}}],["mathtexengine",{"2":{"263":1,"269":1,"291":1}}],["mathematical",{"2":{"86":1}}],["mat",{"2":{"230":1,"241":1}}],["matmul",{"2":{"60":2,"168":1,"169":1,"176":1}}],["matrices",{"2":{"44":1,"64":1,"84":1,"86":1}}],["matrix",{"2":{"5":3,"15":13,"44":6,"45":1,"50":6,"56":4,"60":3,"61":1,"64":2,"76":1,"77":3,"79":6,"81":2,"83":16,"84":5,"88":5,"89":12,"96":9,"97":1,"123":2,"126":5,"127":13,"133":3,"134":1,"135":1,"164":1,"165":1,"166":1,"167":5,"171":1,"186":4,"189":13,"193":3,"219":2,"277":3,"288":1}}],["matches",{"2":{"204":2,"260":1}}],["matched",{"2":{"133":5}}],["matching",{"0":{"54":1},"2":{"89":4,"133":1}}],["match",{"2":{"23":1,"43":5,"54":4,"80":1,"89":1,"156":2,"173":1,"183":1,"254":7,"280":102,"287":2}}],["marker=",{"2":{"293":1,"294":2,"297":4}}],["markersize=2",{"2":{"283":1,"288":1}}],["markersize=12",{"2":{
"270":1,"274":2,"293":1,"294":2,"297":4}}],["markersize=16",{"2":{"264":2}}],["marked",{"2":{"5":1,"56":1}}],["marks",{"2":{"69":2,"110":1}}],["mark",{"2":{"53":4,"56":2,"219":1,"294":1}}],["margin=2",{"2":{"50":1}}],["margin−y^",{"2":{"50":1}}],["margin",{"2":{"50":2}}],["martens",{"2":{"15":1}}],["mainly",{"2":{"87":1}}],["maintaining",{"2":{"65":1}}],["main",{"0":{"15":1},"2":{"68":1,"134":1,"135":1,"147":1,"153":1,"178":1,"202":1,"205":3,"219":2,"232":1,"238":18,"251":1,"261":3,"274":2,"280":2,"287":2}}],["mappedarrays",{"2":{"263":1,"291":1}}],["mapping",{"2":{"15":7,"50":1,"84":1}}],["maps",{"2":{"15":2,"42":3,"43":2,"50":1,"258":1}}],["map",{"0":{"23":1},"2":{"7":1,"23":4,"43":4,"52":2,"114":2,"115":1,"143":2,"144":1,"260":1,"266":1}}],["managing",{"0":{"192":1},"2":{"192":1}}],["management",{"0":{"148":1,"149":1,"150":1},"1":{"149":1,"150":1},"2":{"110":1,"149":4}}],["manager",{"2":{"95":1}}],["manage",{"2":{"56":1}}],["mandatorily",{"2":{"153":1}}],["many",{"2":{"67":2,"164":1,"189":1,"193":1}}],["manualmemory",{"2":{"200":1,"263":1}}],["manual",{"0":{"150":1},"2":{"8":1,"20":1,"22":1,"55":1,"97":1,"128":1,"142":1,"147":2,"162":1,"180":1,"190":1}}],["manually",{"2":{"2":1,"35":1,"37":1,"137":1,"151":1,"190":1,"276":1}}],["manipulations",{"2":{"77":1}}],["manipulation",{"2":{"7":2,"93":1}}],["maybe",{"2":{"54":1,"193":1}}],["may",{"2":{"3":1,"47":2,"56":3,"67":1,"80":2,"83":3,"84":2,"89":2,"214":1,"228":1}}],["microcollections",{"2":{"263":1}}],["mig",{"2":{"239":1,"247":1}}],["migrating",{"0":{"157":1},"1":{"158":1,"159":1,"160":1,"161":1}}],["migration",{"0":{"138":1},"1":{"139":1,"140":1},"2":{"153":1}}],["might",{"2":{"2":1,"8":1,"35":1,"37":1,"39":1,"45":1,"49":3,"55":2,"56":1,"59":1,"61":1,"89":1,"122":1,"123":1,"124":1,"128":1,"153":2,"162":1,"163":2,"173":1,"177":1,"185":2,"189":1,"191":1,"214":1,"220":1,"237":1,"261":1,"280":1}}],["miopen",{"2":{"117":1}}],["midpoints",{"2":{"89":1}}],["middle",{"2":{"89":1}}],["minibatching",{"2":{"216":1,"220":1}}
],["minibatch",{"2":{"201":1,"210":1,"231":1}}],["mini",{"2":{"195":1}}],["minimally",{"2":{"290":1}}],["minimal",{"2":{"141":1}}],["minimum",{"2":{"89":2,"253":4,"255":2}}],["minimized",{"2":{"295":1}}],["minimize",{"2":{"64":1}}],["minimizes",{"2":{"62":1,"197":1}}],["minor",{"2":{"162":1,"171":1}}],["minded",{"2":{"89":1}}],["min",{"2":{"67":4,"84":1,"253":10,"255":2,"283":5}}],["milletari",{"2":{"50":1}}],["mirza",{"2":{"45":1}}],["mistakes",{"2":{"93":1}}],["mish",{"2":{"67":4}}],["mismatch",{"0":{"126":1},"2":{"54":3,"140":1,"173":1,"183":1}}],["misc",{"0":{"45":1}}],["miscellaneous",{"0":{"3":1,"57":1,"87":1},"2":{"39":1}}],["missings",{"2":{"263":1}}],["missing",{"2":{"28":1,"46":1,"65":1,"97":2,"147":2}}],["mixing",{"2":{"19":1,"89":4}}],["mixed",{"2":{"18":2,"62":1}}],["s0025",{"2":{"292":2}}],["ssmproblems",{"2":{"263":1}}],["sgd",{"2":{"197":1}}],["sz",{"2":{"132":2}}],["szegedy",{"2":{"65":1}}],["sneak",{"2":{"89":1}}],["swap",{"2":{"89":1}}],["swapbatch",{"2":{"89":2}}],["swapping",{"2":{"89":1}}],["swish",{"2":{"67":9}}],["switch",{"2":{"41":3,"110":1,"162":1,"193":1}}],["switching",{"0":{"34":1},"1":{"35":1},"2":{"162":4,"180":2}}],["src",{"2":{"84":25,"126":1,"133":1,"163":2,"205":2,"238":3,"261":1,"280":2}}],["srivastava",{"2":{"63":1}}],["s5",{"2":{"81":2}}],["s4",{"2":{"81":4}}],["s3",{"2":{"81":6,"297":2}}],["s2",{"2":{"81":6,"274":2,"294":2,"297":2}}],["s1",{"2":{"81":6,"274":2,"294":2,"297":2}}],["slow",{"2":{"163":2}}],["slower",{"2":{"125":1}}],["sleefpirates",{"2":{"59":2,"291":1}}],["sliding",{"2":{"80":5,"89":3}}],["slighly",{"2":{"67":1}}],["slight",{"2":{"59":1,"193":1}}],["slightly",{"2":{"39":1,"67":1,"89":1}}],["slices",{"2":{"51":1,"83":2,"89":2}}],["slice",{"2":{"15":1,"43":2,"46":2,"65":2}}],["squares",{"2":{"89":1}}],["square",{"2":{"76":1,"189":2,"253":1}}],["squaredhingeloss",{"2":{"50":2}}],["squared",{"2":{"50":3,"89":1,"197":1}}],["sqrt",{"2":{"15":4,"40":2,"43":6,"44":3,"265":1,"293":2}}],["sm",{"2":{"239":1,"247":1}}],["sm
arter",{"2":{"253":1}}],["smatrix",{"2":{"171":4}}],["smaller",{"2":{"67":1,"93":1}}],["small",{"2":{"37":1,"64":1,"93":1,"123":1,"173":1,"189":1,"208":1,"220":1}}],["smodels",{"2":{"285":5}}],["smodel",{"2":{"163":3,"164":4,"165":2,"166":2,"168":2,"169":2,"170":2,"220":3,"285":2,"286":3}}],["smooth",{"2":{"50":2}}],["smoothing=0",{"2":{"50":3}}],["smoothing",{"2":{"50":16}}],["skipped",{"2":{"39":1,"292":1}}],["skip",{"2":{"39":1,"70":3,"123":1,"253":1}}],["skipconnection",{"2":{"39":4}}],["spill",{"2":{"280":22}}],["spilled",{"2":{"280":11}}],["spiralclassifiercompact",{"2":{"203":2,"205":1}}],["spiralclassifier",{"2":{"202":5,"205":1}}],["spiral",{"2":{"201":1}}],["spirals",{"2":{"199":1,"201":6}}],["spiritual",{"2":{"136":1}}],["spurious",{"0":{"173":1}}],["spectralnorm",{"2":{"93":1}}],["spectrum",{"2":{"89":1}}],["spectrograms",{"2":{"89":1}}],["spectrogram",{"2":{"89":5}}],["specialfunctionschainrulescoreext",{"2":{"263":1,"269":1,"291":1}}],["specialfunctions",{"2":{"263":2,"269":1,"291":2}}],["specialfunctionsext",{"2":{"241":1,"263":1,"291":2}}],["specialized",{"2":{"56":1,"59":1,"66":1,"171":1,"191":1}}],["special",{"2":{"3":2,"30":2,"43":1,"50":1,"56":1,"64":2,"121":1,"238":3}}],["specifies",{"2":{"40":4,"42":3,"45":1,"79":5,"195":1}}],["specified",{"2":{"2":2,"15":2,"39":8,"44":1,"45":3,"47":1,"70":1,"77":1,"81":4,"82":1,"149":1,"187":1}}],["specifically",{"2":{"160":1,"189":1}}],["specifications",{"2":{"128":1}}],["specification",{"0":{"126":1},"2":{"265":1}}],["specific",{"2":{"4":2,"8":1,"44":1,"52":1,"121":1,"182":1}}],["specifying",{"2":{"15":1,"40":2,"42":3,"46":2,"56":1,"69":1}}],["specify",{"2":{"2":1,"41":2,"47":1,"50":1,"56":1,"67":3,"81":2,"85":1,"145":1,"265":1}}],["speech",{"2":{"86":1}}],["speedup",{"2":{"67":1,"214":2}}],["splatted",{"2":{"56":1}}],["splatting",{"2":{"56":1}}],["splittablesbase",{"2":{"241":1,"263":1}}],["split=",{"2":{"210":1,"231":1,"259":1}}],["splitobs",{"2":{"201":1,"210":1,"230":1,"231":1}}],["split",{"2":{"51":1,"7
6":1,"87":1,"137":2,"201":1,"202":1,"210":2,"231":2}}],["spliced",{"2":{"43":2}}],["space",{"2":{"43":6,"89":1,"258":1}}],["spatial",{"2":{"40":5,"42":6,"80":2,"89":5,"253":1}}],["sparspak",{"2":{"230":1,"291":1}}],["sparsity=0",{"2":{"15":1}}],["sparsity",{"2":{"15":4}}],["sparseconnectivitytracernnlibext",{"2":{"263":1,"291":2}}],["sparseconnectivitytracernanmathext",{"2":{"263":1,"291":2}}],["sparseconnectivitytracerspecialfunctionsext",{"2":{"263":1,"291":2}}],["sparseconnectivitytracerlogexpfunctionsext",{"2":{"263":1,"291":1}}],["sparseconnectivitytracer",{"2":{"263":5,"291":5}}],["sparseinversesubset",{"2":{"263":1}}],["sparsematrixcoloringscolorsext",{"2":{"263":2,"291":2}}],["sparsematrixcolorings",{"2":{"263":2,"291":2}}],["sparsearrays",{"2":{"263":1}}],["sparsearraysext",{"2":{"200":1,"263":2,"269":2,"291":2}}],["sparsevector",{"2":{"89":1}}],["sparsely",{"2":{"15":2,"44":1}}],["sparse",{"2":{"15":2,"67":1,"89":1}}],["scopedvalues",{"2":{"291":1}}],["scores",{"2":{"76":10}}],["score",{"2":{"50":1}}],["scientific",{"2":{"292":1}}],["scientificmachinelearning",{"2":{"290":1}}],["scientifictypesbase",{"2":{"263":1}}],["scimljacobianoperators",{"2":{"230":1,"291":1}}],["scimlbasemakieext",{"2":{"263":2,"291":2}}],["scimlbasezygoteext",{"2":{"230":1,"291":1}}],["scimlbasechainrulescoreext",{"2":{"230":1,"263":1,"291":1}}],["scimlbase",{"2":{"230":3,"263":3,"291":4}}],["scimloperatorssparsearraysext",{"2":{"230":1,"263":1,"291":1}}],["scimloperatorsstaticarrayscoreext",{"2":{"230":1,"263":1,"291":1}}],["scimloperators",{"2":{"230":3,"263":3,"291":3}}],["scimlstructures",{"2":{"230":1,"263":1}}],["scimlsensitivity",{"2":{"217":1,"230":3,"238":26,"291":3}}],["sciml",{"2":{"55":1,"56":1,"93":2,"97":1,"195":1}}],["scratch",{"2":{"197":1,"202":1,"229":1,"263":1,"291":1}}],["script",{"2":{"147":1}}],["schwarzschild",{"2":{"293":1}}],["school",{"2":{"90":1}}],["schemes",{"2":{"12":1,"186":1}}],["scattered",{"2":{"89":1}}],["scatter",{"0":{"84":1},"2":{"84":17,"89":3,
"264":2,"270":1,"274":2,"283":1,"288":1,"293":1,"294":2,"297":4}}],["scalable",{"2":{"168":1}}],["scalars",{"2":{"67":1,"89":1}}],["scalar",{"0":{"174":1},"2":{"3":2,"66":1,"87":2,"89":1,"174":1,"194":1,"251":1}}],["scalekeepaspect",{"2":{"259":1}}],["scales",{"2":{"81":3}}],["scale=ones32",{"2":{"46":4}}],["scale",{"2":{"44":2,"46":20,"47":6,"65":15,"67":1,"81":17,"89":2,"127":1,"146":1,"163":1,"284":9}}],["scaled",{"2":{"15":3,"67":2,"89":1}}],["scaling",{"2":{"15":2,"63":2,"80":1,"81":2,"292":6}}],["s",{"2":{"10":1,"39":1,"41":2,"44":1,"46":3,"47":1,"54":3,"55":1,"56":1,"66":1,"67":1,"73":1,"81":4,"83":4,"85":1,"87":1,"88":1,"89":17,"93":1,"96":1,"97":1,"120":1,"123":2,"124":1,"126":1,"127":1,"138":2,"143":1,"153":1,"154":1,"158":1,"162":2,"163":3,"165":1,"166":1,"167":1,"170":3,"171":1,"173":1,"188":4,"189":5,"194":1,"197":2,"202":5,"204":1,"206":1,"216":1,"220":2,"235":1,"261":102,"265":2,"270":3,"271":1,"274":1,"283":1,"287":12,"293":3}}],["symmetrically",{"2":{"79":1}}],["symmetric",{"2":{"79":9,"89":2}}],["symbolicindexinginterface",{"2":{"230":1,"263":1,"291":1}}],["symbol",{"2":{"10":1,"24":1,"46":1,"56":1,"123":1,"124":1,"155":1,"206":1,"238":29,"265":2,"285":2}}],["syntax",{"2":{"39":1,"56":1,"189":2,"190":1}}],["synchronized",{"2":{"137":2}}],["synchronize",{"2":{"30":2,"137":6,"138":2,"140":2,"171":2}}],["systems",{"2":{"63":1,"93":2,"96":1,"185":1}}],["system",{"2":{"2":1,"3":1,"80":1,"123":1,"155":1,"193":1,"248":1,"293":1,"294":1}}],["saving",{"0":{"206":1},"2":{"206":1}}],["saveat=tsteps",{"2":{"293":1,"294":1,"295":1,"297":1}}],["saveat=t",{"2":{"218":1,"220":1,"221":1}}],["save",{"2":{"147":1,"206":4,"233":2,"238":26}}],["saves",{"2":{"89":1}}],["sake",{"2":{"202":1,"204":1,"253":1}}],["sarray",{"2":{"171":7}}],["sarrays",{"2":{"171":1}}],["sanity",{"2":{"170":1}}],["saw",{"2":{"124":1}}],["say",{"2":{"93":1,"143":1}}],["safely",{"2":{"294":1}}],["safe",{"2":{"87":2,"89":1}}],["sampler",{"2":{"265":1}}],["samples=batchsize",{"2":{"261":1}}],["sam
ples",{"2":{"197":15,"205":7,"259":2,"260":3,"261":6,"265":2,"266":2,"283":2,"287":17}}],["sample",{"2":{"85":6,"89":5,"167":3,"170":1,"258":1,"265":3,"266":1,"285":2,"288":1}}],["sampled",{"2":{"67":1,"85":1,"265":2}}],["sampling",{"0":{"85":1},"2":{"85":3,"253":1,"258":1,"263":1}}],["samepad",{"2":{"40":2,"42":3}}],["same",{"0":{"135":1},"2":{"3":1,"5":2,"7":3,"15":1,"18":2,"23":1,"24":1,"25":1,"39":1,"40":2,"42":3,"43":3,"45":2,"46":1,"49":2,"51":2,"54":2,"59":2,"61":1,"63":1,"65":3,"72":1,"76":1,"78":3,"79":1,"80":3,"83":1,"85":4,"88":1,"89":2,"92":2,"97":1,"123":1,"137":1,"154":1,"164":1,"170":2,"171":1,"190":1,"192":1,"196":1,"205":3,"208":1,"232":1,"260":1,"280":1}}],["satisfactory",{"2":{"266":1}}],["satisfying",{"2":{"40":2,"42":6,"153":1}}],["satisfies",{"2":{"37":2,"45":1}}],["satisfied",{"2":{"8":1,"273":1}}],["said",{"2":{"21":1}}],["saxe",{"2":{"15":1}}],["sortingalgorithms",{"2":{"263":1,"269":1,"291":1}}],["sorted",{"2":{"101":1}}],["soln",{"2":{"292":13,"293":2,"294":2,"297":2}}],["soln2orbit",{"2":{"292":3}}],["solution",{"2":{"253":5,"255":1}}],["solutions",{"2":{"15":1}}],["sol",{"2":{"220":2,"221":2}}],["solver",{"2":{"232":6,"236":3,"238":7}}],["solver=tsit5",{"2":{"232":2,"236":1}}],["solvers",{"2":{"93":1}}],["solves",{"2":{"220":1}}],["solve",{"2":{"86":1,"218":1,"220":4,"221":1,"232":2,"236":1,"248":1,"293":1,"294":1,"295":1,"296":1,"297":1}}],["software",{"2":{"90":2}}],["softfail",{"2":{"71":2}}],["soft",{"2":{"70":4}}],["softsign",{"2":{"67":6}}],["softshrink",{"2":{"67":6}}],["softplus",{"2":{"67":6}}],["softmax",{"0":{"77":1},"2":{"50":5,"76":3,"77":13,"86":1}}],["so",{"2":{"59":1,"67":1,"77":1,"83":1,"86":2,"89":9,"96":1,"158":1,"162":3,"163":1,"164":1,"219":1,"232":2,"236":2,"250":1,"251":1,"273":1}}],["society",{"2":{"50":1}}],["sooner",{"2":{"21":1}}],["sometimes",{"2":{"189":2}}],["something",{"2":{"122":1}}],["somewhere",{"2":{"35":1,"37":1}}],["some",{"0":{"218":1,"292":1},"2":{"8":1,"24":1,"35":2,"43":1,"45":1,"56":1,"79":5,"80
":1,"84":1,"87":1,"89":5,"93":1,"102":1,"114":1,"118":1,"120":1,"121":1,"122":1,"125":1,"127":1,"136":1,"140":1,"145":1,"147":1,"154":1,"158":1,"162":7,"163":1,"173":1,"176":1,"178":2,"189":1,"235":1,"243":1,"253":1}}],["source=code",{"2":{"147":1}}],["source",{"2":{"1":1,"2":3,"3":8,"4":2,"5":1,"7":3,"8":7,"9":2,"10":5,"11":1,"15":8,"16":24,"18":2,"19":1,"22":5,"23":1,"24":2,"25":1,"27":2,"28":3,"29":2,"30":4,"31":1,"32":1,"35":3,"37":3,"39":6,"40":2,"41":3,"42":9,"43":6,"44":4,"45":7,"46":5,"47":2,"49":7,"50":15,"51":8,"52":6,"53":4,"54":1,"55":1,"56":3,"57":1,"59":2,"60":1,"61":2,"62":1,"63":2,"64":1,"65":4,"66":1,"67":27,"69":2,"70":2,"71":1,"76":3,"77":2,"78":4,"79":6,"80":7,"81":9,"82":2,"83":6,"84":9,"85":2,"86":1,"87":4,"88":2,"89":50,"107":1,"188":2}}],["sixel",{"2":{"263":1,"269":1,"291":1}}],["situations",{"2":{"141":1}}],["sided",{"2":{"292":2}}],["sides",{"2":{"79":5,"89":4}}],["side",{"2":{"56":1,"62":1,"76":1,"79":2}}],["siamese",{"2":{"50":1}}],["siamesecontrastiveloss",{"2":{"50":3}}],["sig",{"2":{"265":2}}],["sigma",{"2":{"67":2}}],["sigmoid",{"2":{"50":3,"67":20,"87":3,"202":1,"203":1,"258":1,"265":2}}],["signeddistancefields",{"2":{"263":1}}],["signify",{"2":{"102":1,"109":1}}],["significantly",{"2":{"151":1,"235":1}}],["significant",{"2":{"8":1,"52":5,"214":1,"237":1}}],["signal",{"2":{"89":4}}],["signature",{"2":{"23":1,"63":1}}],["si+1",{"2":{"40":1,"42":3}}],["silently",{"2":{"25":1,"87":1}}],["size=1000",{"2":{"201":1}}],["size=",{"2":{"81":4,"261":1,"288":1}}],["sized",{"2":{"51":1,"64":1,"65":1,"79":1,"82":3}}],["sizes",{"2":{"15":2,"64":1,"89":1,"107":1,"278":4}}],["size",{"0":{"11":1},"2":{"11":2,"15":24,"16":72,"19":2,"35":1,"37":1,"39":1,"40":16,"42":48,"43":11,"44":16,"45":8,"46":7,"47":8,"56":6,"59":2,"65":3,"76":16,"77":1,"78":7,"79":6,"80":14,"81":27,"82":2,"83":11,"84":1,"86":1,"89":16,"117":1,"132":1,"168":2,"169":2,"171":2,"189":1,"197":2,"198":1,"201":7,"207":1,"210":3,"215":1,"222":1,"231":3,"232":2,"235":1,"239":1,"247":1,"25
6":1,"259":2,"260":5,"261":6,"262":1,"265":4,"266":1,"267":1,"275":1,"280":2,"281":1,"283":1,"284":1,"285":1,"287":1,"289":1,"292":4,"298":1}}],["singular",{"0":{"153":1},"2":{"153":1,"154":2}}],["singleton",{"2":{"7":3,"40":2,"42":3,"89":4,"107":2,"108":1}}],["single",{"2":{"7":2,"8":1,"40":6,"42":9,"43":13,"49":6,"50":1,"56":1,"61":1,"62":1,"81":1,"84":2,"89":3,"96":2,"97":1,"107":1,"124":3,"154":1,"158":1,"164":1,"167":1,"171":4,"189":1,"197":1,"205":1,"213":1,"235":1,"246":1,"254":1,"260":1,"261":1,"265":1,"274":1,"280":1,"287":1}}],["sinθ",{"2":{"89":1}}],["sin",{"2":{"89":1,"283":2,"292":1}}],["since",{"2":{"8":1,"15":1,"23":1,"56":1,"72":1,"83":1,"93":1,"107":1,"130":1,"137":1,"155":1,"195":1,"202":1,"204":1,"220":2,"250":1,"274":1,"277":1}}],["simultaneously",{"2":{"155":1}}],["simulate",{"2":{"127":1,"292":1,"293":1,"294":2}}],["simulating",{"0":{"293":1},"2":{"15":1}}],["simdtypes",{"2":{"200":1,"263":1}}],["simd",{"2":{"66":1,"89":1,"263":1,"291":1}}],["simplicity",{"2":{"167":1,"204":1,"253":1}}],["simplified",{"2":{"153":1}}],["simplifies",{"2":{"84":2}}],["simpletraits",{"2":{"263":1,"291":1}}],["simpleunpack",{"2":{"263":1}}],["simplebufferstream",{"2":{"230":1}}],["simplest",{"2":{"39":1}}],["simplechainslayer",{"2":{"37":2,"211":1}}],["simplechains",{"0":{"208":1},"1":{"209":1,"210":1,"211":1,"212":1,"213":1,"214":1,"215":1},"2":{"37":14,"93":1,"208":3,"209":2,"211":1,"214":2}}],["simple",{"0":{"37":1,"199":1},"1":{"200":1,"201":1,"202":1,"203":1,"204":1,"205":1,"206":1,"207":1},"2":{"37":4,"43":1,"49":1,"56":1,"63":2,"66":1,"123":1,"149":2,"158":1,"160":1,"188":1,"194":1,"211":1,"214":1,"216":1}}],["simply",{"2":{"8":1,"49":1,"53":1,"54":1,"56":1,"59":1,"72":1,"89":2,"124":1,"125":1,"135":1,"145":1,"147":1,"155":1,"243":1,"265":1}}],["similarly",{"2":{"83":1,"154":1,"258":1}}],["similarity",{"2":{"5":1}}],["similar",{"2":{"2":1,"3":1,"7":1,"43":1,"44":1,"50":1,"51":1,"52":1,"65":1,"123":1,"124":1,"155":1,"157":1,"178":1,"193":1,"260":1,"261":1,"294
":1}}],["ship",{"2":{"125":1}}],["shi",{"2":{"81":1}}],["shifted",{"2":{"15":1}}],["shift",{"2":{"15":4,"46":4,"65":1,"284":7}}],["shuffle=true",{"2":{"201":2,"210":1,"231":1,"242":1,"254":2,"259":1,"283":1}}],["shuffle=false",{"2":{"5":1,"201":1,"210":1,"231":1,"242":1}}],["shuffle",{"2":{"47":1,"81":4,"201":2,"210":2,"231":2}}],["shuffling",{"2":{"47":1,"81":1}}],["shaderabstractions",{"2":{"263":1,"291":1}}],["shape=",{"2":{"261":1}}],["shapedaxis",{"2":{"238":30}}],["shaped",{"2":{"42":9}}],["shape",{"2":{"40":1,"43":16,"45":1,"46":17,"85":8,"89":7,"126":4,"147":2,"189":1,"197":4,"258":11}}],["shate",{"2":{"25":1}}],["sharing",{"2":{"25":9}}],["sharedarrays",{"2":{"263":1,"291":1}}],["shared",{"2":{"25":1,"65":2,"110":1}}],["share",{"2":{"25":4}}],["shooting",{"2":{"220":1}}],["shortcomings",{"0":{"141":1}}],["shortcut",{"2":{"8":1,"39":1}}],["shorter",{"2":{"89":2}}],["short",{"2":{"43":1,"89":3,"193":1}}],["shorthand",{"2":{"8":1}}],["showoff",{"2":{"263":1,"291":1}}],["showcasing",{"2":{"228":1}}],["shown",{"2":{"200":2,"265":1,"269":2}}],["showing",{"2":{"170":1}}],["shows",{"2":{"155":1,"166":1,"167":1,"170":1,"266":1}}],["showerror",{"2":{"133":1}}],["show",{"2":{"5":2,"87":2,"96":1,"163":1,"190":1,"216":1,"263":1}}],["shouldn",{"2":{"49":1,"107":1,"155":1,"165":1}}],["should",{"0":{"130":1},"2":{"4":2,"22":1,"25":1,"27":2,"37":1,"40":5,"42":6,"43":1,"46":1,"47":1,"49":2,"50":1,"52":1,"56":1,"61":1,"66":1,"80":1,"85":2,"87":2,"89":4,"110":1,"111":1,"142":1,"148":1,"153":2,"154":2,"155":1,"157":1,"158":2,"173":1,"191":1,"192":1,"229":1,"271":1,"294":1}}],["stubs",{"2":{"263":1,"291":1}}],["stime",{"2":{"213":2,"235":2,"246":2}}],["stick",{"2":{"193":1}}],["still",{"0":{"161":1},"2":{"2":1,"8":1,"37":1,"87":1,"126":1,"135":1,"153":1,"154":1,"202":1,"238":1,"266":1,"271":1}}],["stopping",{"2":{"280":2}}],["stop=6",{"2":{"266":4}}],["stochastic",{"2":{"92":1,"93":1,"197":1}}],["storage",{"2":{"89":6}}],["storing",{"2":{"65":1,"89":2,"134":1,"158":1}}],["stores
",{"2":{"41":1,"44":1,"55":1,"89":2,"140":1,"158":1,"189":1,"232":1,"280":11}}],["stored",{"2":{"40":1,"49":3,"81":2,"147":2}}],["store",{"2":{"30":2,"44":1,"89":2,"92":2,"158":1,"220":1,"264":1,"294":1,"295":1}}],["stencils",{"2":{"292":2}}],["stencil",{"2":{"89":1}}],["stepnorm=0",{"2":{"296":1}}],["steprangelen",{"2":{"219":2}}],["steps",{"2":{"49":1,"96":1,"124":1,"265":1}}],["step",{"2":{"49":11,"86":1,"96":1,"97":1,"124":3,"197":1,"205":1,"213":1,"235":1,"246":1,"254":1,"261":1,"265":3,"274":3,"280":1,"287":1}}],["stft",{"2":{"89":10}}],["style",{"2":{"87":1}}],["stylization",{"2":{"46":1,"65":1}}],["std=1e",{"2":{"294":3}}],["std=0",{"2":{"15":1}}],["stdlib",{"2":{"192":1}}],["stdout",{"2":{"133":1}}],["std",{"2":{"15":5,"65":1,"265":1}}],["stages",{"2":{"288":3}}],["stage",{"2":{"162":1}}],["start=false",{"2":{"233":1}}],["started",{"0":{"94":1,"101":1},"1":{"95":1,"96":1,"97":1,"98":1,"99":1,"100":1},"2":{"157":1}}],["start",{"2":{"89":1,"123":1,"188":1,"238":13,"260":6,"261":3,"263":1,"287":2}}],["starting",{"2":{"52":5,"105":1,"124":1,"189":1,"236":1}}],["stackviews",{"2":{"263":1}}],["stacktraces",{"2":{"125":1}}],["stacktrace",{"2":{"79":1,"114":2,"126":1}}],["stack",{"2":{"43":2,"220":1,"253":6,"255":2,"259":1}}],["stackedrnncells",{"2":{"43":1}}],["statsapi",{"2":{"263":1}}],["statsfunschainrulescoreext",{"2":{"230":1,"263":1,"269":1,"291":1}}],["statsfunsinversefunctionsext",{"2":{"230":1,"263":1,"291":1}}],["statsfuns",{"2":{"230":3,"263":3,"269":1,"291":3}}],["statsbase",{"2":{"200":1,"263":1,"269":1,"291":1}}],["stats",{"2":{"49":2,"96":2,"254":7}}],["stats=false",{"2":{"46":5}}],["stats=true",{"2":{"39":2,"46":7,"126":1,"127":2}}],["statisticaltraits",{"2":{"263":1}}],["statistics",{"2":{"15":2,"46":7,"49":2,"65":2,"117":1,"209":1,"230":1,"249":1,"257":1,"263":2,"265":1,"269":1,"273":1,"276":1,"282":1}}],["staticint",{"2":{"211":2}}],["staticarrayinterfaceoffsetarraysext",{"2":{"263":2,"269":2,"291":1}}],["staticarrayinterfacestaticarraysext",{"2
":{"200":1,"263":1,"269":1,"291":1}}],["staticarrayinterface",{"2":{"200":2,"263":3,"269":3,"291":3}}],["staticarraysext",{"2":{"263":1,"291":2}}],["staticarrayschainrulescoreext",{"2":{"263":1,"269":1,"291":1}}],["staticarrayscore",{"2":{"171":4,"263":1}}],["staticarraysstatisticsext",{"2":{"263":1,"291":1}}],["staticarrays",{"2":{"171":1,"263":3,"269":1,"291":3}}],["static",{"2":{"37":1,"51":1,"55":1,"66":1,"96":8,"123":9,"133":2,"171":7,"211":4,"238":58,"263":1,"274":6,"291":1}}],["staticbool",{"2":{"24":1,"51":1,"63":1}}],["staticsymbol",{"2":{"24":1,"51":2}}],["stated",{"2":{"67":1}}],["statements",{"2":{"56":1}}],["statefulrealnvp",{"2":{"285":3}}],["statefulrecurrentcell",{"2":{"43":2}}],["statefulneuralode",{"2":{"236":4,"237":1,"238":5}}],["statefulluxlayers",{"2":{"162":1}}],["statefulluxlayer",{"2":{"55":1,"115":4,"162":7,"163":1,"164":1,"165":1,"166":1,"168":1,"169":1,"170":1,"220":2,"236":2,"252":7,"254":2,"265":2,"285":3,"286":1,"287":1,"294":2}}],["stateful",{"0":{"55":1,"236":1,"237":1},"2":{"43":2,"163":1,"164":1,"166":2,"238":6}}],["state=false",{"2":{"43":4}}],["state=zeros32",{"2":{"43":3}}],["stateless",{"2":{"8":1,"45":1}}],["statelength",{"2":{"7":1,"10":1,"153":4,"154":1}}],["state",{"0":{"156":1},"2":{"7":1,"8":3,"10":5,"22":1,"35":1,"39":6,"41":4,"43":64,"45":1,"46":4,"49":7,"55":4,"56":7,"63":2,"65":1,"92":3,"93":3,"96":8,"97":4,"124":4,"137":5,"153":5,"154":3,"156":4,"158":1,"202":3,"205":7,"213":7,"220":6,"246":11,"254":6,"261":7,"265":2,"273":1,"280":13,"287":5,"294":1}}],["states",{"0":{"10":1},"2":{"3":1,"7":4,"8":1,"10":2,"22":4,"23":6,"24":1,"25":1,"35":1,"37":1,"39":16,"41":3,"43":5,"45":2,"46":7,"49":3,"50":2,"53":2,"54":4,"55":1,"56":5,"96":2,"97":2,"107":2,"124":1,"126":2,"127":1,"132":1,"137":2,"143":3,"146":1,"151":1,"153":5,"154":3,"156":1,"158":3,"197":2,"202":2,"205":2,"206":1,"211":2,"213":2,"232":1,"235":2,"246":4,"254":1,"261":2,"265":1,"271":1,"273":1,"274":6,"280":5,"287":1,"294":1}}],["standard",{"2":{"15":3,"16":6,"4
0":2,"46":1,"96":1,"163":1,"294":1}}],["stabilities",{"2":{"237":1}}],["stability",{"0":{"238":1},"2":{"8":1,"46":4,"50":1,"56":1,"65":4,"77":1,"102":1,"109":1,"184":1}}],["stablerng",{"2":{"163":3,"164":3,"165":2,"166":2,"170":3}}],["stablerngs",{"2":{"13":1,"162":1,"291":1}}],["stablehlo",{"2":{"147":44}}],["stable",{"2":{"49":1,"52":5,"56":2,"67":3,"77":1,"87":1,"184":1,"204":1,"232":1,"238":1}}],["st",{"2":{"7":2,"8":4,"10":6,"22":2,"23":6,"25":1,"35":4,"37":2,"39":6,"43":1,"45":14,"49":3,"50":2,"51":2,"54":1,"55":4,"56":19,"96":4,"97":6,"115":2,"123":14,"124":6,"126":5,"127":7,"130":3,"132":7,"133":2,"134":10,"135":2,"137":2,"143":9,"144":7,"145":6,"146":2,"147":2,"153":7,"154":11,"155":3,"156":2,"157":3,"158":6,"163":7,"164":7,"165":6,"166":6,"168":2,"169":2,"170":9,"171":8,"173":5,"197":6,"202":10,"204":4,"205":8,"206":3,"212":4,"213":3,"220":5,"232":4,"233":4,"234":4,"235":2,"236":7,"238":16,"245":4,"246":2,"252":7,"254":3,"258":13,"260":10,"261":4,"265":2,"273":1,"274":1,"279":4,"280":5,"285":7,"286":3,"287":2,"294":2}}],["strain",{"2":{"292":5}}],["strange",{"2":{"89":4}}],["stream",{"2":{"280":11}}],["strength",{"2":{"50":1}}],["strokewidth=2",{"2":{"264":2,"270":1,"274":2,"294":2,"297":4}}],["strokecolor=",{"2":{"264":2,"270":1,"274":2}}],["strongly",{"2":{"191":1}}],["structio",{"2":{"291":1}}],["structarrayssparsearraysext",{"2":{"263":1}}],["structarraysstaticarraysext",{"2":{"263":1,"291":1}}],["structarraysext",{"2":{"263":1,"291":1}}],["structarrayslinearalgebraext",{"2":{"263":1}}],["structarraysadaptext",{"2":{"263":1}}],["structarraysgpuarrayscoreext",{"2":{"241":1,"263":1,"269":2,"291":1}}],["structarrays",{"2":{"241":1,"263":6,"269":1,"291":2}}],["structtypes",{"2":{"241":1}}],["struct",{"2":{"132":1,"151":1,"153":1,"158":3,"206":1,"219":1,"258":1,"259":1,"284":2,"285":1}}],["structs",{"2":{"53":4}}],["structures",{"2":{"52":2,"93":1,"173":1}}],["structured",{"2":{"43":3,"155":1,"243":1}}],["structure",{"2":{"3":5,"7":2,"8":3,"18":2,"23":4,"25
":1,"30":1,"44":1,"52":3,"126":5,"127":13,"153":2,"154":2,"156":1,"158":1,"235":1}}],["stridedviews",{"2":{"230":1}}],["stridedarray",{"2":{"87":1,"89":2}}],["strided",{"2":{"83":1,"89":5}}],["strides",{"2":{"52":5,"147":2}}],["stride=2",{"2":{"80":1,"258":3}}],["stride=k",{"2":{"78":4}}],["stride=window",{"2":{"42":3}}],["stride=1",{"2":{"40":2,"80":1,"258":3}}],["stride",{"2":{"40":5,"42":6,"78":4,"80":3,"83":5,"89":1,"147":2}}],["stridearray",{"2":{"37":1}}],["stridearrayscore",{"2":{"37":1,"200":1,"263":1,"269":1,"291":1}}],["stringmanipulation",{"2":{"263":1}}],["strings",{"2":{"115":1}}],["string=",{"2":{"57":2}}],["string",{"2":{"1":2,"3":1,"57":1,"69":1,"147":1}}],["suitesparse",{"2":{"263":2}}],["surprise",{"2":{"173":1}}],["surpassing",{"2":{"15":2}}],["sure",{"2":{"155":1,"294":1}}],["supertype",{"2":{"154":1}}],["super",{"2":{"81":2,"292":1}}],["suppose",{"2":{"266":1}}],["supposed",{"2":{"93":1,"114":1,"184":1}}],["supporting",{"2":{"120":1}}],["support",{"0":{"4":1,"73":1,"74":1,"99":1,"100":1,"121":1,"181":1},"2":{"3":4,"4":4,"7":2,"22":1,"42":3,"50":2,"52":1,"53":1,"55":1,"62":2,"64":2,"89":2,"93":3,"96":1,"100":5,"105":2,"109":1,"118":1,"119":1,"120":2,"122":2,"141":1,"148":3,"155":1,"160":1,"277":1}}],["supports",{"2":{"3":1,"56":1,"99":1,"117":1,"120":1,"160":1,"181":1,"189":1,"190":1,"250":1}}],["supported",{"0":{"13":1},"2":{"2":1,"3":2,"18":2,"19":1,"35":1,"42":3,"47":1,"49":6,"54":1,"66":1,"82":1,"83":2,"89":3,"92":2,"96":1,"109":1,"120":2,"121":2,"122":1,"148":2,"178":1,"186":2}}],["supplied",{"2":{"28":1,"39":1,"79":1}}],["supply",{"2":{"28":1}}],["subprocess",{"2":{"280":11}}],["subtracts",{"2":{"189":1}}],["subtract",{"2":{"189":1}}],["subtyping",{"2":{"154":1}}],["subtype",{"2":{"153":1,"154":2}}],["subtlety",{"2":{"89":1}}],["subject",{"2":{"89":1,"147":1}}],["subsequent",{"2":{"81":1}}],["subclass",{"2":{"80":2}}],["subarray",{"2":{"51":1}}],["suboptimal",{"2":{"49":1,"171":1}}],["sum",{"2":{"56":5,"69":3,"70":2,"77":3,"80":1,"87":1,"12
7":2,"157":2,"158":2,"163":1,"164":1,"165":2,"166":1,"168":1,"169":1,"204":1,"212":1,"234":1,"245":1,"252":3,"260":1,"286":1,"292":1}}],["sumit",{"2":{"50":1}}],["summary",{"2":{"5":8,"158":1,"265":1}}],["suggests",{"2":{"45":1}}],["success",{"2":{"296":1}}],["successor",{"2":{"136":1}}],["successfully",{"2":{"1":1,"200":14,"230":10,"241":28,"263":20,"269":21,"291":59}}],["such",{"2":{"4":2,"6":1,"8":1,"40":2,"42":6,"56":1,"76":1,"82":1,"83":2,"86":1,"87":1,"89":7,"93":1,"123":1,"128":1,"187":1,"203":1,"243":1}}],["seq",{"2":{"201":1}}],["sequentially",{"2":{"39":2,"43":5}}],["sequences",{"2":{"201":1}}],["sequence=true",{"2":{"43":2}}],["sequence",{"2":{"39":1,"43":9,"76":2,"86":2,"89":2,"201":7,"202":3}}],["several",{"2":{"179":1,"193":1}}],["seven",{"2":{"89":1}}],["separation",{"2":{"93":1}}],["separating",{"2":{"93":1}}],["separately",{"2":{"190":1}}],["separate",{"2":{"80":1,"89":2}}],["seaborn",{"2":{"266":3}}],["searching",{"2":{"67":1}}],["seamlessly",{"2":{"49":1}}],["segment",{"2":{"67":1}}],["segmentation",{"2":{"50":2}}],["server",{"2":{"198":1,"207":1,"214":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["serves",{"2":{"35":1}}],["service",{"2":{"97":1,"200":1,"205":1,"214":1,"254":8,"261":1,"269":1,"280":103,"287":3}}],["serious",{"2":{"248":1}}],["seriously",{"2":{"96":1}}],["serializes",{"2":{"137":1}}],["serializationext",{"2":{"263":1}}],["serialization",{"2":{"137":1,"206":1,"263":1}}],["series",{"2":{"73":2}}],["sergey",{"2":{"65":1}}],["self",{"2":{"63":1,"67":4}}],["selu",{"2":{"63":1,"67":4}}],["selecting",{"2":{"172":1}}],["selection",{"0":{"182":1},"2":{"2":1,"3":1,"149":1,"150":1,"182":1}}],["selectdim",{"2":{"45":1}}],["select",{"2":{"2":4,"147":4,"148":1}}],["selected",{"0":{"228":1},"2":{"2":3,"3":1,"114":1}}],["selects",{"2":{"2":1,"74":1}}],["sensealg=reversediffadjoint",{"2":{"235":1}}],["sensealg=gaussadjoint",{"2":{"235":1}}],["sensealg=interpolatingadjoint",{"2":{"220":1,"233":1,"235":
1}}],["sensealg",{"2":{"233":1,"235":1,"238":13}}],["sensitivities",{"2":{"232":1,"235":1}}],["sensible",{"2":{"93":1}}],["sensibly",{"2":{"51":1}}],["sensical",{"2":{"56":1}}],["send",{"2":{"101":1}}],["sendbuf",{"2":{"30":6}}],["sendrecvbuf",{"2":{"30":6}}],["seyed",{"2":{"50":1}}],["sec",{"2":{"265":1}}],["seconds",{"2":{"200":14,"230":10,"241":28,"263":20,"265":2,"269":21,"291":59}}],["second",{"2":{"39":1,"63":1,"89":1,"189":1,"195":1,"292":2}}],["section",{"2":{"37":1,"39":2,"95":1,"128":1,"164":1,"180":1,"190":1,"292":1}}],["semvar",{"2":{"115":1}}],["semver",{"2":{"21":1}}],["semantic",{"2":{"163":2}}],["semantically",{"2":{"39":1,"77":2}}],["semantics",{"2":{"5":2,"173":1}}],["semi",{"2":{"15":2}}],["seems",{"2":{"296":2}}],["seen",{"2":{"83":1}}],["seed=0",{"2":{"261":1}}],["seeding",{"2":{"96":1}}],["seed",{"2":{"39":1,"45":4,"93":1,"96":1,"97":1,"146":1,"153":1,"155":1,"188":1,"192":2,"197":1,"233":1,"254":3,"261":2,"264":1,"270":1,"280":1,"287":1}}],["see",{"2":{"3":1,"7":3,"8":3,"11":1,"22":1,"24":2,"25":1,"35":1,"37":2,"39":4,"41":3,"43":5,"45":2,"46":4,"47":1,"49":1,"50":1,"52":2,"54":1,"55":2,"56":2,"57":1,"60":1,"61":2,"63":2,"65":4,"67":17,"70":1,"76":3,"77":3,"78":3,"79":5,"80":3,"81":1,"83":1,"84":4,"86":1,"87":1,"89":7,"93":2,"95":1,"97":2,"104":1,"107":3,"109":1,"114":2,"116":2,"123":1,"124":1,"125":1,"126":3,"127":2,"136":1,"140":1,"153":1,"154":1,"173":2,"177":1,"180":1,"183":1,"184":1,"186":1,"189":2,"190":1,"202":1,"214":1,"237":1,"266":2,"292":2}}],["session",{"2":{"1":1,"57":1,"124":1}}],["setprogress",{"2":{"263":1}}],["sets",{"2":{"69":1,"123":2,"184":2}}],["setindexing",{"2":{"62":1,"64":1}}],["setups",{"2":{"166":1}}],["setup",{"2":{"8":1,"23":1,"25":1,"35":1,"37":1,"39":1,"45":4,"56":6,"96":1,"97":1,"123":1,"124":1,"126":2,"127":2,"133":1,"135":1,"137":1,"143":1,"146":1,"147":2,"153":2,"154":1,"155":1,"157":1,"158":1,"163":1,"164":1,"165":1,"166":1,"170":1,"171":1,"173":1,"197":1,"205":1,"213":1,"220":1,"233":1,"246":1,"254":1,"261"
:1,"265":1,"273":1,"280":1,"287":1,"294":1}}],["setfield",{"2":{"7":1,"107":1,"263":1}}],["setting",{"0":{"295":1},"2":{"4":2,"40":2,"43":4,"44":3,"83":1,"89":4,"127":1,"162":1,"175":1,"184":2,"185":1}}],["set",{"2":{"1":2,"2":1,"4":6,"15":1,"22":2,"25":1,"28":2,"35":2,"37":1,"40":2,"41":1,"43":18,"47":2,"50":2,"52":1,"55":1,"56":3,"57":5,"63":4,"65":4,"66":1,"69":1,"70":1,"74":3,"76":1,"89":2,"102":1,"123":2,"124":2,"127":2,"140":1,"148":1,"149":1,"162":1,"163":4,"175":2,"179":2,"180":1,"181":3,"182":3,"184":2,"197":3,"209":1,"261":2,"263":1,"280":2}}],["ogg",{"2":{"263":1,"291":1}}],["os",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["oops",{"2":{"173":1}}],["ok",{"2":{"165":1}}],["oh",{"2":{"127":1}}],["own",{"2":{"89":1,"141":1,"155":1,"189":1}}],["our",{"2":{"89":2,"93":1,"96":1,"97":1,"121":1,"127":3,"147":2,"160":1,"163":1,"167":1,"173":2,"188":1,"189":1,"191":1,"192":1,"194":1,"197":2,"201":1,"202":1,"216":1,"220":3,"248":1,"264":1,"265":3,"266":4,"273":1,"274":1}}],["ouput",{"2":{"81":2}}],["outer",{"2":{"283":7}}],["outermost",{"2":{"83":1}}],["outside",{"2":{"52":1,"89":1,"114":2}}],["outperforms",{"2":{"122":1}}],["outpad",{"2":{"40":3,"117":1}}],["outpad=0",{"2":{"40":1}}],["outputs",{"2":{"11":1,"24":1,"37":1,"39":2,"43":6,"45":2,"55":1,"81":4,"92":1,"164":1,"165":1,"192":1,"194":1,"294":1}}],["outputsize",{"2":{"11":3,"107":1}}],["output",{"2":{"11":1,"15":6,"24":1,"37":2,"39":7,"40":8,"42":33,"43":13,"44":7,"45":4,"46":1,"47":4,"50":6,"55":1,"56":2,"59":2,"62":1,"63":3,"64":1,"76":1,"78":1,"80":3,"81":5,"85":3,"86":1,"87":2,"89":11,"97":1,"117":1,"123":2,"126":4,"127":14,"153":1,"156":2,"171":13,"173":1,"192":1,"200":3,"202":1,"232":1,"251":1,"265":2,"269":4}}],["out",{"2":{"15":2,"21":1,"39":1,"40":8,"41":2,"42":3,"43":15,"44":19,"52":1,"56":10,"63":2,"80":1,"85":18,"88":1,"89":2,"97":6,"102":2,"125":1,"127":1,"151":1,"153":8,"156":1,"175":1,"202":3,"203":2,"235":1,"263":1,"278":4,"
285":2}}],["occurs",{"2":{"127":1}}],["occurred",{"2":{"127":3}}],["occurrences",{"2":{"10":3}}],["octavian",{"0":{"185":1},"2":{"64":2,"177":1,"185":1}}],["odefunction",{"2":{"232":2,"236":1}}],["odeproblem",{"2":{"218":1,"220":1,"221":1,"232":2,"236":1,"293":1,"294":1,"297":1}}],["odesolution",{"2":{"232":1}}],["odes",{"0":{"229":1},"1":{"230":1,"231":1,"232":1,"233":1,"234":1,"235":1,"236":1,"237":1,"238":1,"239":1},"2":{"93":1,"131":1,"229":2,"293":1,"294":1}}],["ode",{"0":{"232":1,"233":1,"237":1,"290":1},"1":{"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1,"298":1},"2":{"55":1,"131":1,"216":3,"218":3,"219":1,"220":2,"221":2,"232":2,"233":1,"293":3,"294":6,"295":1,"297":2}}],["odd",{"2":{"15":1,"82":3,"89":1}}],["old",{"2":{"49":1,"104":1,"108":2}}],["older",{"0":{"34":1},"1":{"35":1},"2":{"114":1,"140":1}}],["o=σ",{"2":{"43":1}}],["oi=",{"2":{"40":1,"42":3}}],["o",{"2":{"40":4,"42":6,"147":2}}],["observables",{"2":{"291":1}}],["observations",{"2":{"40":1}}],["observe",{"2":{"265":1}}],["obtain",{"2":{"137":1}}],["obtained",{"2":{"66":1,"80":1}}],["obviously",{"2":{"125":1}}],["obj",{"2":{"49":2,"154":2,"155":1}}],["objectfile",{"2":{"291":1}}],["objects",{"2":{"50":1,"89":1,"111":3,"123":1,"150":1,"173":1,"219":1}}],["objective",{"2":{"49":8,"274":1,"295":1}}],["object",{"2":{"2":3,"3":4,"18":2,"49":7,"50":2,"62":1,"137":1,"149":3}}],["other",{"0":{"16":1,"33":1},"1":{"34":1,"35":1,"36":1,"37":1},"2":{"5":1,"8":1,"15":1,"16":1,"30":2,"37":1,"39":1,"40":1,"45":2,"47":1,"50":1,"52":2,"62":1,"67":1,"70":1,"83":1,"87":1,"89":4,"105":1,"114":1,"122":1,"124":1,"151":1,"153":1,"154":1,"189":1,"193":1,"206":1,"216":1}}],["otherwisewhere",{"2":{"50":1}}],["otherwise",{"2":{"1":1,"2":2,"3":1,"44":1,"57":1,"67":1,"84":2,"89":1}}],["opus",{"2":{"263":1,"291":1}}],["openexr",{"2":{"263":2,"269":1,"291":2}}],["openspecfun",{"2":{"263":1,"291":1}}],["openssl",{"2":{"230":1,"263":1,"291":1}}],["openlibm",{"2":{"263":1}}],["openmpi",{"2":{"241":1}}],["opening",{"2":{"1
24":1,"162":1}}],["open",{"2":{"120":1,"123":1,"128":1,"141":1,"147":1,"171":1,"227":1,"228":2}}],["operands",{"2":{"89":3}}],["operator",{"2":{"52":1,"78":1,"80":2,"84":2}}],["operators",{"2":{"45":1,"84":1}}],["operating",{"2":{"43":1}}],["operation",{"2":{"30":4,"42":3,"47":1,"62":2,"64":1,"66":4,"78":4,"80":1,"81":2,"84":4,"87":1,"89":1,"193":1}}],["operations",{"0":{"52":1,"60":1,"83":1},"2":{"2":1,"19":1,"37":1,"51":2,"52":1,"53":1,"62":3,"64":1,"84":1,"89":2,"105":1,"162":2,"171":2,"185":1,"190":1}}],["operates",{"2":{"83":2,"89":2,"123":1}}],["operate",{"2":{"43":1,"67":1,"158":1}}],["opposite",{"2":{"79":1}}],["optprob",{"2":{"296":2}}],["optf",{"2":{"296":2}}],["opt",{"2":{"69":6,"124":1,"137":5,"197":2,"220":10,"261":2,"274":1,"280":2,"287":2}}],["optimal",{"2":{"220":1}}],["optim",{"2":{"56":2,"230":1,"263":1,"291":1}}],["optimiser",{"0":{"204":1}}],["optimisersadaptext",{"2":{"263":1,"269":1,"291":1}}],["optimisersenzymecoreext",{"2":{"200":1,"263":1,"269":1,"291":1}}],["optimisers",{"2":{"31":1,"35":2,"49":4,"56":3,"96":10,"97":1,"123":1,"137":1,"158":2,"161":1,"197":2,"199":1,"200":2,"209":1,"216":1,"220":2,"230":1,"241":1,"249":1,"254":1,"257":1,"263":3,"269":4,"272":1,"276":1,"282":1,"291":3}}],["optimize",{"2":{"89":2,"124":2}}],["optimized",{"2":{"53":1,"66":1,"89":1,"177":1,"185":1,"220":1}}],["optimizer",{"0":{"272":1},"2":{"31":4,"49":7,"137":3,"274":3}}],["optimizers",{"0":{"31":1},"2":{"296":1}}],["optimizationzygoteext",{"2":{"291":2}}],["optimizationreversediffext",{"2":{"291":2}}],["optimizationenzymeext",{"2":{"291":2}}],["optimizationmldatadevicesext",{"2":{"263":2,"291":2}}],["optimizationforwarddiffext",{"2":{"263":1,"291":2}}],["optimizationfinitediffext",{"2":{"263":1}}],["optimizationfunction",{"2":{"220":1,"296":1}}],["optimizationbase",{"2":{"263":4,"291":6}}],["optimizationproblem",{"2":{"220":2,"296":1}}],["optimizationoptimjl",{"2":{"217":1,"263":1,"291":3}}],["optimizationoptimisers",{"2":{"217":1}}],["optimizations",{"2":{"97
":1,"114":1,"123":1}}],["optimization",{"0":{"216":1},"1":{"217":1,"218":1,"219":1,"220":1,"221":1,"222":1},"2":{"15":1,"96":1,"97":3,"123":1,"216":4,"217":1,"220":8,"263":1,"291":3,"296":4}}],["options",{"2":{"54":1,"97":2,"182":1,"200":2,"205":2,"214":2,"254":2,"261":2,"269":2,"280":2,"287":2}}],["option",{"2":{"47":2,"54":2,"66":1,"120":5}}],["optional",{"0":{"177":1},"2":{"15":1,"22":3,"43":1,"50":1,"55":1,"56":2,"62":1,"64":1,"88":1,"89":11,"96":1,"107":1,"185":1,"187":1}}],["optionally",{"2":{"7":1,"15":1,"46":1,"153":1,"187":1}}],["op",{"2":{"4":2,"5":1,"30":6,"51":3,"54":1,"65":1,"69":1,"84":8,"89":5,"112":1}}],["orbit₂",{"2":{"292":2}}],["orbit₁",{"2":{"292":2}}],["orbits",{"2":{"292":1}}],["orbit2",{"2":{"292":4}}],["orbit2tensor",{"2":{"292":2}}],["orbit1",{"2":{"292":4}}],["orbit",{"2":{"292":17}}],["orange",{"2":{"270":1,"274":1}}],["ordinarydiffeqloworderrk",{"2":{"291":3}}],["ordinarydiffeq",{"2":{"232":1,"293":1}}],["ordinarydiffeqcoreenzymecoreext",{"2":{"230":1,"291":1}}],["ordinarydiffeqcore",{"2":{"230":2,"238":26,"291":2}}],["ordinarydiffeqtsit5",{"2":{"217":1,"230":3,"238":13}}],["orderedcollections",{"2":{"263":1}}],["ordering",{"2":{"43":6}}],["order",{"0":{"20":1},"2":{"2":1,"22":1,"40":1,"89":1,"119":1,"147":2,"163":1,"165":1,"195":1,"250":3,"252":1,"292":2}}],["orcjit",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["originates",{"2":{"24":1}}],["originally",{"2":{"290":1}}],["original",{"2":{"15":1,"22":1,"25":1,"35":1,"89":3,"123":1,"126":1,"191":2}}],["org",{"2":{"15":1,"45":1,"81":1,"90":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"292":2,"298":1}}],["orthogonal",{"2":{"13":4,"15":6}}],["or",{"2":{"2":3,"3":2,"4":2,"8":1,"15":2,"22":1,"24":3,"30":3,"35":1,"37":2,"39":7,"40":6,"42":9,"43":3,"44":7,"47":4,"49":1,"50":5,"51":2,"56":3,"57":1,"62":2,"63":3,"64":1,"65":2,"66":2,"73":1,"76":4,"78":3,"79":1,"80":1,"81":1,"82"
:2,"83":7,"84":5,"86":1,"87":1,"88":3,"89":19,"93":2,"95":1,"96":1,"101":1,"120":1,"124":2,"127":2,"140":2,"141":1,"153":2,"154":1,"158":1,"160":1,"162":1,"165":1,"173":1,"176":2,"181":1,"187":1,"189":4,"190":1,"201":1,"227":1,"228":2,"292":1}}],["overview",{"0":{"119":1}}],["overlapping",{"2":{"89":1}}],["overloaded",{"2":{"114":1}}],["overload",{"2":{"87":1}}],["overloads",{"2":{"87":1}}],["overloading",{"2":{"52":1}}],["overwrite",{"2":{"87":2}}],["overridden",{"2":{"64":1}}],["overrides",{"2":{"70":1}}],["override",{"2":{"11":1,"35":1,"89":1}}],["overfitting",{"2":{"63":2}}],["overcome",{"2":{"35":1}}],["over",{"0":{"23":1,"93":1},"2":{"2":1,"5":2,"23":1,"24":1,"43":1,"45":1,"46":3,"55":1,"77":1,"84":2,"89":1,"96":1,"142":1,"147":1,"199":1,"219":1,"248":1,"251":1,"274":1,"295":1,"297":1}}],["onlinestats",{"2":{"249":1,"254":5}}],["online",{"2":{"228":1}}],["only",{"2":{"2":1,"8":1,"24":2,"40":1,"41":1,"42":3,"43":9,"44":1,"47":1,"49":3,"50":3,"54":2,"56":1,"70":3,"76":1,"82":1,"83":1,"87":4,"89":1,"120":2,"123":1,"137":1,"145":1,"154":1,"155":1,"162":2,"165":1,"166":1,"171":2,"194":2,"206":1,"231":1,"259":1}}],["onto",{"2":{"78":1,"89":4}}],["once",{"2":{"11":1,"43":2,"54":1,"80":1,"89":2,"127":1,"153":1,"190":1,"191":1,"193":1}}],["onwards",{"2":{"7":1,"114":1}}],["on",{"0":{"129":1,"174":1,"240":1,"248":1,"276":1},"1":{"130":1,"131":1,"132":1,"133":1,"134":1,"135":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1,"247":1,"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1,"256":1,"277":1,"278":1,"279":1,"280":1,"281":1},"2":{"2":2,"3":6,"5":1,"6":2,"7":1,"8":2,"15":11,"20":2,"21":1,"22":1,"23":2,"30":2,"37":1,"39":2,"40":2,"41":2,"42":6,"43":2,"44":1,"50":5,"52":3,"55":2,"56":2,"57":1,"59":2,"60":3,"61":1,"62":3,"64":3,"65":4,"67":2,"69":1,"72":1,"76":1,"77":1,"78":3,"79":12,"80":1,"83":2,"84":4,"85":1,"89":9,"90":1,"93":1,"95":1,"96":1,"97":1,"101":2,"105":1,"107":2,"115":1,"116":2,"117":1,"120":3,"122":1,"123":3,"128":1,"130":2,"134":1,"137":1,"141":1,"1
47":1,"151":2,"153":2,"156":1,"158":4,"162":4,"163":4,"164":1,"165":2,"171":3,"173":2,"174":1,"175":1,"177":1,"178":2,"179":2,"180":1,"183":1,"184":1,"186":1,"188":1,"189":1,"190":1,"192":3,"195":2,"198":1,"207":1,"214":2,"215":1,"216":1,"219":1,"220":2,"222":1,"227":1,"235":2,"239":1,"243":1,"247":1,"253":2,"256":1,"257":1,"259":1,"260":1,"262":1,"265":1,"267":1,"268":1,"275":1,"276":1,"281":1,"282":1,"289":1,"298":1}}],["one2two",{"2":{"292":3}}],["onetbb",{"2":{"263":1,"291":1}}],["onecold",{"2":{"212":2,"234":2,"245":2,"279":2}}],["onehotbatch",{"2":{"210":1,"231":1,"242":2,"277":1}}],["onehotarrays",{"2":{"209":1,"230":1,"241":3,"257":1,"276":1}}],["onearray",{"2":{"13":3}}],["oneapidevice",{"2":{"4":2}}],["oneapi",{"2":{"2":1,"3":2,"13":1,"73":2,"96":1,"100":2,"148":2,"182":1}}],["onesc64",{"2":{"16":1}}],["onesc32",{"2":{"16":1}}],["onesc16",{"2":{"16":1}}],["ones64",{"2":{"16":1}}],["ones32",{"2":{"16":1}}],["ones16",{"2":{"16":1}}],["ones",{"2":{"8":1,"16":7,"56":1,"80":1,"84":1,"88":4,"89":1,"132":1,"189":2,"195":1,"238":1,"253":2,"264":1,"265":1}}],["one",{"2":{"1":1,"24":1,"39":2,"43":1,"46":2,"47":2,"50":1,"53":1,"67":1,"78":3,"83":1,"84":2,"89":3,"137":1,"162":1,"176":2,"187":1,"189":1,"227":1,"228":1,"265":1,"292":4}}],["offsetarraysadaptext",{"2":{"263":1}}],["offsetarrays",{"2":{"263":2}}],["official",{"2":{"172":1,"198":1,"207":1,"215":1,"216":1,"222":1,"228":1,"239":1,"247":1,"256":1,"262":1,"263":2,"267":1,"275":1,"281":1,"289":1,"298":1}}],["off",{"2":{"167":1}}],["offending",{"2":{"127":9}}],["ofcourse",{"2":{"126":1}}],["often",{"2":{"3":1,"44":1,"121":2,"131":1,"156":1,"167":1,"170":1,"173":1,"176":2,"189":2,"216":1,"265":1}}],["of",{"0":{"143":1,"145":1,"146":1,"166":1},"2":{"1":1,"3":7,"4":3,"6":1,"7":2,"8":17,"9":3,"10":6,"11":4,"15":31,"16":36,"18":6,"19":6,"21":2,"22":6,"23":11,"24":2,"25":9,"28":2,"29":1,"35":7,"37":3,"39":32,"40":38,"41":3,"42":42,"43":37,"44":32,"45":23,"46":30,"47":13,"49":13,"50":13,"51":1,"52":20,"53":5,"54":10,"55
":8,"56":17,"60":2,"61":2,"62":1,"63":4,"64":1,"65":8,"67":9,"68":1,"69":1,"70":7,"72":1,"75":2,"76":10,"77":2,"78":2,"79":20,"80":18,"81":18,"82":4,"83":7,"84":8,"85":12,"86":2,"87":7,"88":6,"89":90,"90":3,"93":5,"95":1,"97":2,"102":3,"107":2,"108":1,"109":2,"110":2,"112":2,"114":6,"116":2,"117":1,"118":2,"122":3,"123":6,"124":1,"126":1,"127":6,"131":2,"133":1,"135":1,"137":4,"140":2,"143":2,"144":1,"145":3,"147":4,"151":3,"153":3,"154":4,"155":1,"156":3,"158":1,"163":1,"164":6,"165":1,"166":1,"167":6,"171":1,"173":1,"176":4,"177":1,"179":1,"180":1,"181":1,"184":4,"185":1,"186":1,"189":9,"190":1,"191":1,"193":4,"195":1,"197":3,"199":1,"201":1,"202":3,"203":2,"204":1,"206":1,"214":1,"216":1,"227":2,"228":2,"229":1,"232":5,"236":1,"237":1,"238":1,"243":2,"248":3,"250":1,"253":1,"254":7,"258":1,"260":3,"263":2,"264":2,"265":6,"266":3,"274":4,"280":102,"287":2,"292":5,"293":2,"294":3}}],["bj",{"2":{"284":19,"285":2}}],["bzip2",{"2":{"263":1,"291":1}}],["bs",{"2":{"210":1,"231":1}}],["b=f",{"2":{"158":1}}],["b=layer",{"2":{"158":1}}],["b=",{"2":{"56":1}}],["b=zeros",{"2":{"56":1}}],["bfgs",{"2":{"216":1,"220":1,"263":1,"291":1,"296":2}}],["bfloat16sext",{"2":{"241":1}}],["bfloat16s",{"2":{"53":1}}],["bfloat16",{"2":{"53":3}}],["bf16",{"2":{"53":1}}],["bc",{"2":{"252":7,"253":21,"254":70}}],["bce",{"2":{"50":5}}],["bcast",{"2":{"30":2}}],["bh2",{"2":{"292":1}}],["bh",{"2":{"292":1}}],["bhn",{"2":{"43":2}}],["bhz",{"2":{"43":2}}],["bhr",{"2":{"43":2}}],["bijector",{"2":{"284":5}}],["bijectorsenzymecoreext",{"2":{"263":2}}],["bijectorstrackerext",{"2":{"263":1}}],["bijectorsdistributionsadext",{"2":{"263":1}}],["bijectorsforwarddiffext",{"2":{"263":1}}],["bijectors",{"0":{"284":1},"2":{"263":5}}],["bigger",{"2":{"189":1}}],["bigfloat",{"2":{"189":3}}],["bibtex",{"2":{"90":2}}],["bittwiddlingconveniencefunctions",{"2":{"263":1,"291":1}}],["bitflags",{"2":{"230":1}}],["bit",{"2":{"89":1,"165":1,"220":1}}],["bilinear",{"2":{"44":3,"47":6,"81":10,"82":5,"85":1,"89":3,"116":1}}
],["bidirectional",{"2":{"43":1}}],["bidirectionalrnn",{"2":{"43":1}}],["bindings",{"2":{"75":1}}],["binarycrossentropy",{"2":{"204":2}}],["binarycrossentropyloss",{"2":{"50":7,"204":1}}],["binaryfocalloss",{"2":{"50":3}}],["binary",{"2":{"50":3,"84":1,"201":1}}],["bin",{"2":{"43":1,"50":7}}],["biz",{"2":{"43":1}}],["bir",{"2":{"43":1}}],["bias6",{"2":{"147":12}}],["bias3",{"2":{"147":4}}],["bias1",{"2":{"147":4}}],["bias=l",{"2":{"153":1}}],["bias=ps",{"2":{"56":1}}],["bias=false",{"2":{"43":6,"44":3}}],["bias=true",{"2":{"40":4,"43":2,"44":6}}],["bias=nothing",{"2":{"40":1,"43":3,"44":3}}],["bias=zeros32",{"2":{"40":1,"44":1,"46":4,"153":1,"258":2,"294":3}}],["bias=zero",{"2":{"23":1}}],["bias",{"0":{"61":1},"2":{"22":1,"23":4,"25":4,"40":12,"43":25,"44":22,"46":20,"56":1,"61":12,"62":3,"64":3,"65":15,"67":1,"76":3,"87":6,"96":15,"117":4,"123":15,"127":3,"133":2,"143":4,"144":2,"145":2,"146":1,"147":5,"153":6,"155":4,"163":3,"165":4,"166":4,"176":5,"184":1,"197":1,"238":30,"265":1,"273":2,"280":2,"294":3,"296":3}}],["biases",{"2":{"15":1,"147":1,"265":1}}],["bright",{"2":{"266":3}}],["brackpropagate",{"2":{"258":1}}],["bradbury",{"2":{"89":1}}],["branched",{"2":{"39":1}}],["branchlayer",{"2":{"39":5}}],["breaking",{"0":{"104":1,"107":1,"111":1,"114":1,"115":1,"116":1},"2":{"109":1}}],["break",{"2":{"89":1,"254":1,"280":1,"285":1,"287":1}}],["broadcastfunction",{"2":{"123":1}}],["broadcasting",{"2":{"59":1,"66":1,"89":1,"189":1}}],["broadcastable",{"2":{"46":1,"76":2}}],["broadcasted",{"2":{"30":2,"50":1,"61":1,"62":2,"77":1}}],["broadcast",{"2":{"30":1,"41":1,"45":1,"89":2,"123":1,"147":5,"265":1}}],["broken=true",{"2":{"69":2}}],["broken=false",{"2":{"69":2}}],["broken",{"2":{"21":1,"69":5,"70":4,"71":1}}],["blue",{"2":{"218":1,"221":2,"264":1,"270":1}}],["black",{"2":{"264":2,"270":1,"274":2}}],["blanks",{"2":{"89":3}}],["blank",{"2":{"86":1,"260":1}}],["blasfloat",{"2":{"89":1}}],["blas",{"2":{"8":1,"64":1,"83":3,"89":1}}],["blisblas",{"2":{"64":1}}],["blocks",
{"2":{"43":1,"49":1,"202":1}}],["block",{"2":{"39":1,"42":3,"56":3,"202":1}}],["blog",{"2":{"37":1}}],["bn=batchnorm",{"2":{"23":1}}],["b",{"2":{"19":3,"42":6,"44":4,"56":6,"62":4,"63":2,"64":3,"83":35,"87":4,"88":2,"89":12,"126":2,"147":4,"149":1,"155":2,"158":9,"164":8,"176":2,"197":8,"263":1,"291":1,"292":4}}],["bulk",{"2":{"265":1}}],["bunch",{"2":{"89":1}}],["builds",{"2":{"126":1,"133":3,"163":2,"205":2,"238":3,"261":1,"280":2}}],["buildkite",{"2":{"126":1,"133":3,"163":2,"198":1,"205":2,"207":1,"215":1,"222":1,"238":3,"239":1,"247":1,"256":1,"261":1,"262":1,"267":1,"275":1,"280":2,"281":1,"289":1,"298":1}}],["build",{"2":{"96":1,"181":3,"188":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["building",{"0":{"265":1},"2":{"49":1}}],["built",{"0":{"38":1},"1":{"39":1,"40":1,"41":1,"42":1,"43":1,"44":1,"45":1,"46":1,"47":1},"2":{"91":1,"151":2,"191":1,"214":1}}],["bufferedstreams",{"2":{"230":1}}],["buffer",{"2":{"30":3,"62":1,"64":1,"89":2,"254":70,"280":1026,"287":16}}],["bugs",{"2":{"101":1,"110":1,"155":1}}],["bug",{"2":{"11":1,"56":1,"171":1,"227":1,"254":7,"280":102,"287":2}}],["but",{"2":{"3":2,"7":2,"8":1,"15":1,"18":2,"35":1,"37":3,"45":1,"47":2,"51":3,"52":5,"54":2,"56":4,"59":1,"60":1,"61":1,"65":1,"67":1,"77":2,"83":2,"87":1,"89":3,"92":1,"93":2,"97":1,"107":1,"118":1,"120":1,"121":3,"122":3,"123":1,"126":3,"127":2,"133":1,"136":1,"153":3,"154":1,"155":1,"163":3,"164":1,"165":1,"170":1,"171":1,"173":2,"178":1,"185":1,"197":1,"202":2,"204":1,"219":1,"235":1,"237":1,"253":1,"261":1,"265":1,"271":1,"280":1,"294":1}}],["book",{"2":{"195":1}}],["boom",{"2":{"165":1}}],["bool=true",{"2":{"43":1,"280":1}}],["bool=false",{"2":{"2":2,"35":2,"37":1,"43":4,"69":1,"233":1,"235":1}}],["boolean",{"2":{"8":1,"37":1,"71":1,"76":2}}],["bool",{"2":{"2":1,"3":5,"8":3,"24":1,"37":1,"50":3,"51":1,"52":1,"81":9,"87":1,"89":19,"210":1,"231":1,"238":26,"242":2,"259":1,"285":1}}],["boilerplate",{"2":{"123":1,"124":
1,"203":1}}],["boils",{"2":{"89":1}}],["borrow",{"2":{"263":1}}],["borrowed",{"2":{"89":1}}],["bore",{"2":{"89":1}}],["border",{"2":{"79":5,"85":5}}],["borders",{"2":{"40":2,"42":3}}],["box",{"2":{"264":1}}],["boxtimes",{"2":{"80":1,"83":1}}],["boxing",{"2":{"55":1,"236":1,"238":1}}],["bottom",{"2":{"56":1,"85":1,"123":1}}],["both",{"2":{"2":1,"24":2,"50":2,"79":5,"83":2,"84":1,"89":4,"91":1,"96":1,"117":1,"127":3,"158":1,"181":1,"184":1,"228":1,"265":1}}],["body",{"2":{"56":2,"238":3,"292":9}}],["boundaries",{"2":{"266":1}}],["boundary",{"2":{"252":1}}],["bound",{"2":{"40":6,"43":18,"44":9,"67":1,"85":8}}],["bounds",{"2":{"40":2,"43":6,"44":3}}],["bytes",{"2":{"280":22}}],["bypasses",{"2":{"56":2}}],["bypass",{"2":{"11":1,"154":1,"182":1}}],["by",{"0":{"144":1},"2":{"2":1,"7":1,"8":2,"11":1,"15":5,"37":1,"39":2,"40":5,"42":3,"43":5,"44":5,"45":1,"46":1,"47":1,"49":3,"50":2,"51":1,"53":1,"56":2,"59":2,"62":1,"64":1,"65":1,"67":3,"74":1,"76":1,"77":2,"79":1,"80":4,"81":7,"83":10,"84":1,"85":2,"86":1,"87":1,"88":2,"89":18,"93":2,"95":1,"96":1,"107":1,"114":1,"123":2,"127":1,"137":3,"144":3,"145":2,"149":1,"150":1,"151":1,"162":3,"163":2,"166":1,"167":1,"173":1,"174":1,"175":1,"181":1,"185":1,"189":2,"190":2,"191":1,"193":1,"199":1,"211":1,"219":1,"228":1,"238":1,"263":1,"265":1,"266":1,"273":2}}],["bayes",{"2":{"265":3}}],["bayesian",{"0":{"263":1},"1":{"264":1,"265":1,"266":1,"267":1},"2":{"264":1,"266":1}}],["bangbangstaticarraysext",{"2":{"263":1}}],["bangbangstructarraysext",{"2":{"263":1}}],["bangbangtablesext",{"2":{"263":1}}],["bangbangdataframesext",{"2":{"241":1}}],["bangbangchainrulescoreext",{"2":{"241":1,"263":1}}],["bangbang",{"2":{"241":2,"263":5}}],["banks",{"2":{"89":2}}],["battery",{"2":{"93":1}}],["batches",{"2":{"210":1,"231":1}}],["batchedtranspose",{"2":{"83":2,"89":3}}],["batchedadjoint",{"2":{"83":3,"89":3}}],["batched",{"0":{"19":1,"60":1,"83":1,"164":1},"2":{"11":1,"19":1,"60":6,"80":2,"83":40,"89":10,"162":1,"164":8,"168":1,"169":1,"176":2}}]
,["batching",{"2":{"171":3}}],["batchsize=32",{"2":{"254":2}}],["batchsize=256",{"2":{"242":1}}],["batchsize=min",{"2":{"242":2}}],["batchsize=8",{"2":{"219":2}}],["batchsize=128",{"2":{"201":2,"261":1}}],["batchsize=12",{"2":{"178":2}}],["batchsize=13",{"2":{"5":1}}],["batchsize",{"2":{"80":1,"210":3,"231":3,"242":4,"259":2,"261":2,"283":2,"287":2}}],["batchlastindex",{"2":{"43":2,"51":1}}],["batchnorm",{"2":{"7":1,"39":4,"46":8,"65":1,"126":4,"127":4,"143":1,"146":1,"153":1,"156":1,"157":2,"160":1,"163":2,"164":1,"258":5}}],["batch",{"2":{"5":1,"19":3,"37":1,"40":3,"42":3,"43":13,"45":1,"46":6,"47":3,"65":6,"76":9,"77":1,"79":4,"80":2,"81":3,"83":6,"89":6,"104":1,"147":2,"164":1,"171":6,"201":1,"219":1,"220":9,"254":8}}],["ba",{"2":{"65":1}}],["basically",{"2":{"154":1}}],["basic",{"2":{"49":2,"89":1,"188":1}}],["basis",{"2":{"40":2,"44":1}}],["baselet",{"2":{"263":1}}],["baseline",{"2":{"50":1}}],["base",{"2":{"41":1,"45":1,"51":3,"69":3,"89":2,"114":1,"133":1,"155":1,"162":3,"165":1,"166":2,"216":1,"219":2,"233":1,"238":25,"259":3,"285":2,"294":2}}],["based",{"2":{"2":1,"3":2,"39":1,"52":1,"64":1,"65":1,"87":1,"89":1,"101":1,"116":2,"117":1,"163":1,"167":1,"188":1,"216":1,"257":1,"276":1,"282":1}}],["bad",{"2":{"24":1,"87":4,"173":1,"266":1}}],["backtracking",{"2":{"296":1}}],["background",{"2":{"153":1,"173":1,"293":1}}],["backpropagated",{"2":{"81":4}}],["backward",{"2":{"24":2,"43":11,"51":1,"89":5,"127":3,"128":1,"191":1}}],["back",{"2":{"2":1,"3":1,"59":1,"89":1,"155":1}}],["backendtpu",{"2":{"74":1}}],["backendgpu",{"2":{"74":1}}],["backends=",{"2":{"70":2}}],["backends",{"0":{"27":1},"2":{"1":1,"2":4,"3":5,"5":1,"18":6,"19":2,"49":5,"62":3,"64":4,"66":1,"70":7,"75":1,"82":1,"92":2,"112":1,"121":3,"148":3,"171":1,"178":1,"180":1,"190":1,"195":1,"235":1}}],["backend",{"0":{"36":1,"149":1,"150":1,"182":1},"1":{"37":1},"2":{"1":17,"2":3,"3":1,"18":6,"19":3,"27":5,"28":20,"29":4,"30":11,"31":1,"32":1,"49":6,"58":1,"70":1,"74":5,"80":2,"105":1,"123":4,"137":14,
"138":2,"140":1,"148":2,"149":5,"162":1,"171":6,"182":4,"193":1,"209":1,"216":1}}],["bernoulli",{"2":{"265":1}}],["became",{"2":{"173":1}}],["because",{"2":{"89":3,"147":1,"155":1,"296":1}}],["become",{"2":{"153":1,"199":1}}],["behind",{"2":{"107":1}}],["behaving",{"2":{"187":1}}],["behaviour",{"2":{"107":1}}],["behavior",{"2":{"3":2,"7":2,"8":1,"11":2,"15":1,"43":1,"54":3,"89":2,"163":2,"173":1,"202":1}}],["behave",{"2":{"51":1,"87":1}}],["behaves",{"2":{"39":1,"83":2,"89":2,"108":1,"178":1}}],["beautiful",{"2":{"89":1}}],["benchmarking",{"2":{"214":1}}],["benefits",{"2":{"67":1,"123":1}}],["bengio",{"2":{"15":2,"45":1}}],["beta",{"2":{"89":10}}],["beta=0",{"2":{"89":13}}],["better",{"2":{"67":1,"93":1,"101":1}}],["between",{"0":{"33":1},"1":{"34":1,"35":1,"36":1,"37":1},"2":{"15":1,"25":1,"35":1,"44":1,"50":1,"67":1,"86":1,"89":4,"93":1,"123":1,"140":1,"158":3,"266":3}}],["belief",{"2":{"67":1}}],["below",{"2":{"23":1,"39":1,"54":2,"80":1,"85":1,"89":1,"90":1,"232":1,"264":1,"265":2,"266":1}}],["being",{"2":{"46":4,"52":1,"61":1,"63":2,"65":2,"84":2,"86":1,"105":1,"112":1,"120":2,"141":1,"162":1,"163":2,"173":1,"214":1,"220":1,"261":1,"280":1}}],["beginner",{"0":{"224":1}}],["begin",{"2":{"40":2,"42":3,"218":1,"221":1,"253":1,"255":1,"288":1,"293":1,"294":1,"297":1}}],["beyond",{"2":{"39":1}}],["best",{"2":{"35":1,"56":2,"59":2,"62":1,"64":1,"74":1,"120":4,"121":1,"280":3}}],["been",{"2":{"35":2,"52":5,"89":1,"93":1,"104":2,"107":6,"111":3,"114":10,"115":5,"117":1,"121":1,"139":1,"166":1,"188":1,"290":1}}],["beware",{"2":{"16":1,"155":1}}],["before",{"2":{"8":2,"15":2,"28":1,"31":1,"32":1,"39":1,"53":1,"76":2,"97":1,"124":1,"137":1,"153":1,"192":1,"206":1,"274":1}}],["be",{"2":{"1":2,"2":1,"3":10,"4":5,"6":1,"7":4,"8":2,"10":1,"15":8,"19":5,"22":2,"24":3,"25":5,"28":7,"30":2,"32":2,"35":5,"37":5,"39":13,"40":16,"41":4,"42":9,"43":17,"44":8,"45":8,"46":4,"47":3,"49":9,"50":5,"52":3,"53":2,"55":2,"56":16,"57":2,"59":2,"60":1,"61":3,"62":1,"63":4,"64":1,"65":20,"66":
1,"67":3,"69":3,"70":3,"71":4,"72":1,"76":3,"77":1,"79":2,"80":3,"81":4,"82":3,"83":6,"84":8,"85":1,"86":5,"87":2,"89":24,"90":1,"91":1,"92":3,"93":2,"96":1,"101":1,"102":1,"110":1,"111":1,"114":1,"115":1,"123":3,"124":1,"125":1,"128":1,"132":1,"137":2,"138":1,"142":1,"144":1,"147":2,"148":1,"150":1,"151":1,"153":1,"155":2,"156":1,"157":1,"158":6,"160":1,"162":2,"163":5,"164":2,"166":1,"168":1,"173":1,"177":1,"182":1,"184":3,"185":2,"187":3,"189":1,"191":1,"192":1,"193":1,"194":1,"199":1,"200":1,"202":1,"214":2,"220":1,"227":1,"228":1,"236":1,"244":1,"248":2,"253":1,"264":1,"265":1,"269":1,"271":1,"292":4,"295":1}}],["ixy",{"2":{"292":2}}],["ixx",{"2":{"292":3}}],["iyy",{"2":{"292":3}}],["i∈",{"2":{"197":1}}],["io",{"2":{"147":2}}],["ioffe",{"2":{"65":1}}],["i64",{"2":{"147":10}}],["iid",{"2":{"265":1}}],["iii",{"2":{"119":2,"121":1}}],["ii",{"2":{"119":2,"121":1}}],["ii+pi+p",{"2":{"40":1,"42":3}}],["irtools",{"2":{"291":1}}],["irrationalconstants",{"2":{"263":1}}],["ir",{"2":{"97":1,"200":1,"205":1,"214":1,"254":1,"261":1,"269":1,"280":1,"287":1}}],["ijk",{"2":{"84":3}}],["i=σ",{"2":{"43":1}}],["ih",{"2":{"43":6,"117":1}}],["i+n",{"2":{"40":1,"42":3}}],["ignore",{"2":{"243":1,"294":1}}],["ignores",{"2":{"35":1,"87":1}}],["ignored",{"2":{"2":1,"15":1,"44":3,"87":1}}],["iclr",{"2":{"15":1}}],["ieee",{"2":{"15":2,"50":5}}],["imath",{"2":{"263":1,"291":1}}],["imageio",{"2":{"263":1,"269":1,"291":1}}],["imagemetadata",{"2":{"263":1,"291":1}}],["imageaxes",{"2":{"263":1,"291":1}}],["imagetotensor",{"2":{"259":1}}],["imagebase",{"2":{"230":1,"263":1,"291":1}}],["imagecore",{"2":{"230":1,"263":1,"291":1}}],["imageshow",{"2":{"230":1,"241":1,"257":1}}],["images",{"2":{"46":3,"47":2,"81":1,"82":1,"89":1,"210":1,"231":1,"257":1,"260":21,"261":8}}],["image",{"2":{"40":2,"50":1,"67":1,"81":9,"89":4,"258":11,"259":3,"260":8,"261":6}}],["imagenet",{"2":{"15":2,"136":1}}],["img",{"2":{"258":2,"259":2,"260":12,"261":5}}],["imgs",{"2":{"210":6,"231":6,"242":6,"260":4}}],["im",{"0":
{"191":1},"2":{"261":102}}],["immutability",{"2":{"192":1}}],["immutable",{"2":{"52":1,"59":1,"62":1,"64":1,"92":1,"93":1,"191":1}}],["immediately",{"2":{"123":1}}],["imrotate",{"2":{"82":6,"89":2}}],["im2col",{"2":{"80":2,"89":35}}],["imbalanced",{"2":{"50":1}}],["imply",{"2":{"156":1}}],["implements",{"2":{"68":1}}],["implemented",{"2":{"35":1,"83":1,"89":1,"93":1,"158":1,"162":1}}],["implementations",{"2":{"49":1,"52":1,"59":1,"60":1,"64":1,"66":1,"78":1,"89":2,"140":1,"185":1}}],["implementation",{"0":{"132":1,"159":1,"236":1,"284":1},"1":{"160":1},"2":{"11":4,"22":1,"39":1,"43":1,"49":1,"52":5,"55":2,"59":3,"62":4,"64":6,"85":1,"88":1,"89":7,"107":1,"140":1,"167":1,"232":1,"257":2,"282":1}}],["implement",{"0":{"243":1},"2":{"7":1,"22":1,"153":2,"158":1,"263":1}}],["implementing",{"0":{"158":1},"2":{"7":2,"93":1,"131":1,"229":1,"263":1}}],["imposed",{"2":{"89":5}}],["imports",{"0":{"200":1,"209":1,"217":1,"230":1,"241":1,"249":1,"269":1,"291":1}}],["importing",{"2":{"153":1,"188":1,"263":1}}],["important",{"0":{"159":1},"1":{"160":1},"2":{"137":2,"148":1,"153":1,"158":1,"163":1,"164":1}}],["importantly",{"2":{"124":1,"189":1,"266":1}}],["imported",{"2":{"56":1}}],["import",{"2":{"35":1,"37":1,"98":1,"147":3,"263":1}}],["improving",{"2":{"52":5,"122":1,"147":1}}],["i",{"2":{"5":10,"11":1,"15":1,"22":1,"35":1,"37":1,"39":9,"40":6,"42":21,"44":4,"45":8,"47":2,"49":2,"50":3,"54":1,"55":1,"56":1,"76":2,"79":6,"81":3,"85":1,"86":1,"88":1,"89":5,"97":2,"107":2,"119":5,"121":1,"124":2,"132":3,"137":1,"147":3,"156":1,"165":1,"171":6,"192":4,"195":1,"197":4,"200":1,"205":1,"214":1,"219":2,"254":4,"255":6,"261":5,"264":12,"265":7,"266":10,"269":1,"280":12,"285":10,"287":1,"288":4}}],["ith",{"2":{"39":1}}],["its",{"2":{"8":1,"42":3,"44":1,"45":1,"46":1,"72":1,"76":1,"79":2,"80":1,"87":1,"89":2,"93":1,"266":1}}],["itself",{"2":{"3":1,"45":1,"126":1}}],["itertools",{"2":{"263":1,"291":1}}],["iter",{"2":{"97":4,"124":12,"220":2,"254":7,"261":51,"287":20,"295":1}}],["iterativel
y",{"2":{"39":1}}],["iterations",{"2":{"197":12,"265":2}}],["iteration",{"0":{"5":1},"2":{"5":2,"97":12,"124":4,"131":1,"178":1,"192":8,"219":1,"220":26,"254":51,"266":1,"295":1,"297":1}}],["iterate",{"2":{"5":1,"21":1}}],["iterates",{"2":{"5":1}}],["iteratorinterfaceextensions",{"2":{"263":1}}],["iterators",{"2":{"5":1,"202":1,"203":1,"253":1,"254":2,"255":1,"285":1,"287":1}}],["iterator",{"2":{"5":3,"124":1}}],["itemdata",{"2":{"259":1}}],["items",{"2":{"89":1}}],["item",{"2":{"5":1}}],["it",{"2":{"2":1,"3":2,"4":5,"7":3,"8":6,"11":2,"15":3,"21":1,"22":1,"23":1,"25":1,"35":3,"39":2,"40":4,"42":3,"43":8,"45":1,"46":7,"47":3,"49":1,"50":5,"51":1,"52":3,"54":3,"55":1,"56":5,"59":3,"61":2,"62":2,"63":1,"67":1,"68":2,"71":2,"76":1,"77":5,"79":13,"80":1,"81":1,"82":2,"83":2,"84":7,"86":1,"87":5,"88":1,"89":10,"90":1,"91":1,"93":4,"95":1,"97":3,"107":1,"115":1,"117":1,"118":1,"122":3,"123":5,"124":1,"125":2,"126":1,"127":3,"130":1,"137":2,"138":1,"149":1,"151":2,"153":2,"154":5,"155":4,"162":3,"163":6,"164":2,"170":1,"171":2,"173":1,"174":2,"175":1,"178":1,"182":1,"185":1,"188":1,"189":2,"190":1,"191":2,"192":2,"195":1,"197":1,"202":4,"204":2,"216":1,"219":3,"220":2,"227":1,"228":1,"232":1,"243":2,"258":1,"261":1,"265":2,"274":1,"280":1,"292":1,"294":2}}],["inlinestrings",{"2":{"241":1}}],["inbounds",{"2":{"89":1}}],["inplaceops",{"2":{"263":1}}],["inplace",{"2":{"49":2}}],["input",{"0":{"129":1,"135":1},"1":{"130":1,"131":1,"132":1,"133":1,"134":1,"135":1},"2":{"8":2,"15":4,"18":2,"19":1,"24":2,"37":3,"39":30,"40":7,"41":2,"42":6,"43":20,"44":12,"45":7,"46":14,"47":6,"50":4,"55":1,"56":1,"59":2,"61":1,"62":1,"63":3,"64":2,"65":9,"76":3,"77":2,"78":4,"79":1,"80":3,"81":1,"82":2,"85":16,"87":2,"89":15,"96":1,"123":1,"126":6,"127":14,"131":1,"137":1,"138":1,"140":1,"147":4,"153":1,"156":2,"164":3,"166":1,"171":8,"173":2,"183":1,"184":1,"187":1,"202":1,"211":1,"232":1,"251":1,"258":1,"294":1}}],["inputsize",{"2":{"43":1,"107":1}}],["inputs",{"0":{"164":1},"2":{"8":2,"11":1,
"19":1,"23":1,"24":1,"39":9,"40":2,"41":3,"42":9,"43":11,"44":1,"45":6,"46":5,"47":2,"49":1,"50":4,"52":1,"55":1,"62":3,"64":1,"92":1,"115":1,"123":1,"140":1,"147":1,"162":1,"164":2,"165":1,"166":1,"196":1,"232":2,"265":1,"273":1,"294":1}}],["ingredient",{"2":{"46":1,"65":1}}],["in2",{"2":{"44":6}}],["in12",{"2":{"44":3}}],["in1",{"2":{"44":8}}],["invertedindices",{"2":{"263":1}}],["inversefunctionsunitfulext",{"2":{"263":1,"291":1}}],["inversefunctionstestext",{"2":{"263":1,"269":1,"291":1}}],["inversefunctionsdatesext",{"2":{"263":1,"291":1}}],["inversefunctions",{"2":{"263":3,"269":1,"291":2}}],["inverse",{"2":{"63":1,"80":3,"89":3,"284":4,"285":1}}],["inversability",{"2":{"40":1}}],["investigate",{"2":{"237":1}}],["investigated",{"2":{"141":1}}],["invp",{"2":{"63":4}}],["involving",{"2":{"55":1}}],["invokes",{"2":{"23":1}}],["invoked",{"2":{"2":1,"202":1}}],["invariant",{"2":{"50":1}}],["inv",{"2":{"40":2,"43":6,"44":3,"292":1}}],["injection",{"2":{"39":5}}],["independently",{"2":{"77":1,"88":1,"89":1}}],["independent",{"2":{"77":1,"164":1}}],["indexing",{"0":{"174":1},"2":{"39":1,"45":1,"66":1,"89":1,"174":1}}],["index",{"2":{"2":1,"39":1,"44":1,"45":2,"83":2,"84":4,"88":1,"89":2,"171":1,"266":1,"277":1}}],["indexed",{"2":{"2":2,"4":2,"45":1,"101":1}}],["indirectarrays",{"2":{"291":1}}],["individual",{"2":{"45":1,"77":1,"220":1}}],["individually",{"2":{"39":1,"47":2,"164":1}}],["indices",{"2":{"44":2,"83":1,"89":5}}],["inflate",{"2":{"291":1}}],["informed",{"2":{"252":3}}],["informs",{"2":{"154":1}}],["information",{"2":{"8":1,"11":1,"20":1,"24":1,"54":1,"63":1,"70":1,"80":1,"89":1,"92":1,"93":1,"124":1,"140":1,"175":1,"216":1,"228":1}}],["info",{"2":{"126":8,"127":20,"150":1,"198":2,"200":1,"207":2,"215":2,"222":2,"239":2,"247":2,"256":2,"262":2,"263":2,"267":2,"269":1,"275":2,"281":2,"289":2,"298":2}}],["inferred",{"2":{"65":1,"84":2}}],["inference",{"0":{"160":1},"2":{"24":1,"41":3,"46":6,"87":1,"90":1,"122":1,"160":1,"163":2,"171":2,"265":2}}],["infinity",{
"2":{"63":1}}],["inf",{"2":{"50":1,"78":1,"163":2,"164":2,"165":2,"166":2,"170":4,"280":1}}],["inner",{"2":{"22":2,"24":1,"144":1,"283":7}}],["incase",{"2":{"294":1}}],["including",{"2":{"89":1,"110":1,"122":1,"123":1}}],["included",{"2":{"15":1,"93":1}}],["include",{"2":{"7":1,"8":1,"15":1}}],["incompatibility",{"2":{"277":1}}],["incoming",{"2":{"81":4}}],["inconvenient",{"2":{"130":1}}],["inconsistent",{"2":{"107":1}}],["incorrectly",{"2":{"173":1}}],["incorrect",{"0":{"126":1},"2":{"19":2,"56":1,"87":1,"125":1,"128":1,"261":1,"280":1}}],["increase",{"2":{"40":1,"117":1}}],["initilly",{"2":{"41":1}}],["initialvalues",{"2":{"263":1}}],["initializing",{"0":{"186":1},"1":{"187":1},"2":{"56":1,"97":1,"137":1,"200":1,"205":1,"214":1,"254":1,"261":1,"269":1,"280":1,"287":1}}],["initialize",{"0":{"233":1,"244":1},"2":{"28":5,"56":3,"84":1,"96":1,"97":1,"137":3,"138":1,"140":1,"171":1,"197":1,"265":1,"270":1}}],["initialized",{"2":{"15":4,"28":2,"46":8}}],["initializers",{"2":{"15":1}}],["initializer",{"2":{"15":1,"43":10,"44":7,"56":3}}],["initializations",{"2":{"116":2}}],["initialization",{"0":{"28":1},"2":{"12":1,"15":2,"28":1,"37":1,"40":4,"116":1,"158":1,"186":1,"202":1,"243":1,"294":1}}],["initial",{"2":{"9":1,"10":1,"39":1,"43":11,"56":2,"93":1,"107":1,"158":1,"202":1,"293":1,"294":1,"296":2}}],["initialstates",{"2":{"7":3,"8":1,"10":1,"153":2,"158":2,"202":1,"285":2}}],["initialparameters",{"2":{"7":3,"8":1,"9":1,"153":2,"158":2,"202":1,"243":3}}],["init",{"2":{"15":7,"40":10,"43":26,"44":21,"46":16,"51":6,"56":6,"84":5,"116":2,"117":1,"138":1,"153":10,"158":4,"187":8,"202":2,"203":2,"258":2,"294":6}}],["int=4",{"2":{"287":1}}],["int=6",{"2":{"287":1}}],["int=64",{"2":{"280":1}}],["int=length",{"2":{"285":1}}],["int=16",{"2":{"287":1}}],["int=10",{"2":{"287":1}}],["int=100",{"2":{"283":1,"287":1}}],["int=128",{"2":{"260":1,"287":1}}],["int=200",{"2":{"280":1}}],["int=20",{"2":{"280":1}}],["int=2",{"2":{"280":1}}],["int=50000",{"2":{"254":1}}],["int=32",{"2":{"251
":1,"254":1}}],["int=0",{"2":{"30":5,"254":1}}],["intro",{"2":{"188":1}}],["introduction",{"2":{"108":1}}],["introductory",{"2":{"101":1}}],["introducing",{"2":{"35":1}}],["introduces",{"2":{"188":1}}],["introduced",{"2":{"15":1}}],["introduce",{"2":{"11":1,"55":1,"127":2}}],["int64",{"2":{"50":3,"79":19,"80":5,"81":2,"84":7,"89":1,"96":16,"123":18,"133":4,"189":3,"191":2,"219":2,"238":99,"274":8}}],["int",{"2":{"40":2,"43":2,"45":1,"46":1,"47":1,"65":1,"78":2,"79":6,"81":2,"89":27,"153":4,"197":1,"232":2,"242":3,"258":6,"259":1,"260":5,"264":1,"283":1,"285":11}}],["into",{"0":{"137":1},"2":{"8":1,"15":2,"24":1,"30":1,"37":1,"39":2,"40":2,"42":3,"43":4,"45":1,"47":1,"50":1,"51":1,"53":5,"56":2,"62":1,"76":1,"77":1,"80":2,"84":3,"89":6,"96":1,"107":1,"123":1,"147":1,"151":1,"153":1,"171":2,"189":2,"202":1,"210":1,"231":1}}],["intentionally",{"2":{"157":1}}],["intended",{"2":{"87":1,"89":3}}],["internedstrings",{"2":{"230":1}}],["internals",{"2":{"243":1,"265":1}}],["internally",{"2":{"35":1,"55":1,"89":2,"107":1,"162":1,"171":1,"238":1}}],["internal",{"0":{"89":1},"2":{"22":1,"37":1,"39":1,"45":2,"49":1,"55":2,"65":2,"66":3,"91":1,"93":1,"114":1,"137":1}}],["international",{"2":{"15":6,"50":3,"65":1}}],["intermediate",{"0":{"225":1}}],["interactiveutils",{"2":{"198":2,"207":2,"215":2,"222":2,"230":1,"239":2,"247":2,"256":2,"262":2,"267":2,"275":2,"281":2,"289":2,"298":2}}],["interactive",{"2":{"179":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["interacting",{"2":{"147":1}}],["interested",{"2":{"164":1,"266":1}}],["interest",{"2":{"87":1}}],["interpolatingadjoint",{"2":{"238":13}}],["interpolationsunitfulext",{"2":{"263":1,"269":1,"291":1}}],["interpolations",{"2":{"263":2,"269":2,"291":2}}],["interpolation",{"2":{"81":3,"82":1,"85":1,"89":1}}],["interpreted",{"2":{"82":1}}],["interoperability",{"0":{"33":1},"1":{"34":1,"35":1,"36":1,"37":1}}],["inter",{"2":{"19":1}}],["intervalarithmeticrecipesbaseext",
{"2":{"263":2,"291":2}}],["intervalarithmeticdiffrulesext",{"2":{"263":1,"291":1}}],["intervalarithmeticforwarddiffext",{"2":{"263":2,"291":2}}],["intervalarithmeticintervalsetsext",{"2":{"263":1,"291":1}}],["intervalarithmetic",{"2":{"263":5,"291":5}}],["intervalsetsext",{"2":{"263":1,"291":2}}],["intervalsetsrecipesbaseext",{"2":{"263":1,"291":2}}],["intervalsetsrandomext",{"2":{"263":1,"291":1}}],["intervalsetsstatisticsext",{"2":{"263":1,"291":1}}],["intervalsets",{"2":{"263":4,"291":4}}],["interval",{"2":{"15":2}}],["interfacem",{"2":{"37":1}}],["interface",{"0":{"151":1,"152":1,"155":1,"156":1},"1":{"152":1,"153":2,"154":2,"155":1,"156":1},"2":{"7":1,"32":2,"37":2,"151":2,"153":1,"216":1,"232":1,"265":1}}],["integral",{"2":{"50":1}}],["integrate",{"2":{"147":1}}],["integrated",{"2":{"8":1,"175":1}}],["integrating",{"0":{"137":1}}],["integration",{"0":{"31":1,"32":1},"2":{"138":1}}],["integers",{"2":{"28":1,"40":11,"42":15,"44":5,"79":5,"84":5,"89":1}}],["integer",{"2":{"2":3,"4":3,"15":11,"40":12,"42":9,"44":1,"45":1,"46":5,"54":3,"78":6,"79":5,"81":11,"82":2,"84":3,"89":1,"259":1}}],["intelopenmp",{"2":{"263":1,"269":2,"291":1}}],["intelligence",{"2":{"15":2}}],["intel",{"2":{"3":1}}],["insert",{"2":{"89":2}}],["instructions",{"2":{"95":1}}],["institute",{"2":{"90":1}}],["installation",{"0":{"95":1},"2":{"239":1,"247":1}}],["install",{"0":{"72":1},"2":{"72":1,"73":1,"74":1,"95":1,"96":1,"98":1}}],["installed",{"2":{"28":4,"32":1,"137":1,"190":1}}],["instability",{"2":{"55":1,"175":1,"184":1}}],["instabilities",{"0":{"175":1},"2":{"35":1,"123":1,"175":2}}],["instancenorm",{"2":{"46":7,"65":2,"116":1,"117":1}}],["instance",{"2":{"43":1,"46":3,"65":3,"83":3,"265":1}}],["instead",{"2":{"3":2,"4":2,"8":1,"22":2,"27":2,"35":1,"40":3,"49":1,"52":5,"53":1,"54":1,"56":1,"67":1,"69":1,"77":1,"79":1,"81":2,"83":3,"87":1,"88":1,"89":2,"110":1,"111":1,"112":1,"114":1,"115":4,"116":2,"130":1,"131":1,"137":1,"145":1,"147":1,"151":1,"154":1,"155":1,"158":2,"165":1,"171":2,"1
73":1,"176":1,"178":1,"185":1,"189":1,"202":1,"229":1,"263":1}}],["inside",{"0":{"171":1},"2":{"23":1,"54":1,"56":1,"87":1,"144":1,"158":3,"162":3,"171":3}}],["inspiration",{"2":{"5":1}}],["in",{"0":{"38":1,"116":1,"130":1},"1":{"39":1,"40":1,"41":1,"42":1,"43":1,"44":1,"45":1,"46":1,"47":1},"2":{"2":2,"3":1,"4":2,"5":2,"7":4,"8":5,"10":3,"15":15,"18":2,"21":1,"22":2,"23":3,"24":3,"25":6,"28":1,"30":2,"31":1,"35":5,"37":1,"39":20,"40":22,"41":2,"42":3,"43":23,"44":13,"45":6,"46":10,"49":7,"50":3,"51":4,"52":12,"54":1,"55":4,"56":18,"59":3,"61":2,"63":3,"65":2,"67":5,"68":1,"69":1,"72":3,"75":1,"76":2,"77":2,"78":2,"79":18,"80":8,"81":14,"82":5,"83":4,"84":8,"85":12,"86":1,"87":5,"88":1,"89":41,"90":2,"91":2,"92":1,"93":3,"95":1,"97":8,"98":1,"101":2,"102":1,"104":1,"107":1,"116":1,"117":1,"118":2,"120":1,"122":1,"124":4,"125":1,"126":1,"127":11,"128":3,"131":1,"132":1,"137":4,"140":2,"141":1,"142":2,"143":1,"144":1,"147":12,"149":1,"153":10,"156":1,"157":2,"158":1,"161":2,"162":3,"163":3,"164":8,"165":1,"166":1,"167":2,"170":1,"171":5,"173":2,"174":2,"175":1,"178":5,"179":2,"185":1,"186":1,"187":1,"188":2,"189":2,"191":1,"192":3,"193":1,"197":2,"199":1,"200":14,"201":3,"202":6,"203":3,"205":3,"208":1,"212":1,"213":2,"216":2,"219":1,"220":4,"227":1,"230":10,"234":1,"235":2,"237":1,"241":28,"243":1,"245":1,"246":4,"248":1,"253":1,"254":1,"255":2,"257":2,"258":1,"260":2,"261":2,"263":21,"264":6,"265":2,"266":13,"268":1,"269":21,"274":1,"278":5,"280":12,"282":1,"285":10,"287":2,"288":2,"291":59,"293":1,"294":3}}],["idxs",{"2":{"259":2,"284":3}}],["idx",{"2":{"84":27,"89":5,"245":2,"246":12,"260":4,"280":15,"288":4}}],["ideal",{"2":{"214":1}}],["ideally",{"2":{"66":1,"228":1}}],["identity",{"2":{"15":13,"39":1,"62":1,"65":5,"76":1,"89":1,"96":3,"123":3,"133":2,"238":9,"274":2}}],["id",{"2":{"2":7,"4":3}}],["ifelse",{"2":{"263":1,"284":1}}],["if",{"2":{"1":2,"2":13,"3":9,"4":3,"6":1,"7":1,"8":5,"11":2,"19":2,"22":3,"23":1,"24":5,"25":2,"28":2,"35":2,"37":1,"39":17,"40":8,
"41":10,"43":28,"44":12,"45":5,"46":14,"47":3,"49":6,"50":8,"51":4,"52":3,"54":1,"55":3,"56":6,"57":1,"59":4,"61":2,"62":2,"63":7,"64":2,"65":6,"69":1,"70":3,"71":3,"72":1,"76":1,"79":11,"80":1,"82":1,"83":3,"84":4,"87":4,"89":14,"90":2,"93":1,"95":1,"96":1,"97":2,"107":2,"114":1,"120":2,"121":1,"122":1,"123":3,"124":2,"126":1,"127":1,"128":1,"140":1,"141":1,"143":1,"144":1,"147":1,"148":2,"149":4,"150":1,"151":1,"153":4,"154":3,"155":3,"160":1,"161":1,"162":3,"163":2,"166":1,"171":1,"177":1,"181":3,"185":1,"186":2,"187":2,"190":2,"192":1,"193":1,"197":1,"198":3,"205":1,"207":3,"210":1,"213":1,"215":3,"220":2,"222":3,"227":1,"228":2,"231":1,"239":3,"242":2,"247":3,"254":1,"256":3,"260":2,"261":4,"262":3,"267":3,"274":1,"275":3,"280":3,"281":3,"287":1,"289":3,"292":3,"298":3}}],["isoband",{"2":{"263":2,"291":2}}],["isdefined",{"2":{"198":3,"207":3,"215":3,"222":3,"239":3,"247":3,"256":3,"261":1,"262":3,"267":3,"275":3,"281":3,"289":3,"298":3}}],["isnan",{"2":{"254":1,"287":1}}],["isn",{"2":{"107":1,"161":1,"292":1}}],["istft",{"2":{"89":1}}],["istraining",{"2":{"51":4}}],["issuing",{"2":{"54":1}}],["issue",{"2":{"54":1,"89":1,"120":1,"123":1,"124":1,"128":1,"141":1,"162":1,"163":2,"166":1,"177":1,"227":1,"228":2}}],["issues",{"2":{"43":1,"49":1,"56":1,"80":1,"89":1,"93":1,"101":1,"121":2,"122":1,"162":1,"171":1}}],["isbitstype",{"2":{"52":1}}],["iszero",{"2":{"23":1,"280":1}}],["isa",{"2":{"15":1,"23":1,"56":1,"87":1,"89":1,"205":2,"213":2}}],["isleaf",{"2":{"3":5,"10":1,"219":2}}],["is",{"2":{"0":1,"1":4,"2":19,"3":18,"4":10,"5":3,"6":1,"7":6,"8":12,"10":2,"11":4,"12":1,"15":9,"18":4,"22":5,"23":4,"24":6,"28":2,"32":2,"35":3,"37":7,"39":23,"40":3,"41":7,"42":3,"43":38,"44":9,"45":10,"46":18,"47":3,"49":6,"50":36,"51":5,"52":14,"53":1,"54":5,"55":3,"56":9,"57":2,"59":2,"60":1,"61":3,"62":4,"63":6,"64":2,"65":7,"66":4,"67":9,"68":2,"71":2,"72":1,"75":1,"76":1,"77":3,"78":4,"79":16,"80":2,"81":3,"82":7,"83":13,"84":8,"85":5,"86":3,"87":13,"88":5,"89":50,"93":6,"95":1,"
97":2,"102":1,"105":1,"107":1,"108":1,"110":1,"114":3,"116":1,"117":1,"118":1,"120":5,"122":3,"123":4,"124":2,"125":2,"126":4,"127":5,"128":1,"130":1,"133":1,"136":1,"137":4,"138":1,"139":1,"140":2,"141":2,"144":2,"147":3,"148":1,"149":9,"150":1,"151":4,"153":3,"154":6,"155":2,"158":6,"160":3,"162":3,"163":12,"164":2,"167":1,"168":1,"170":1,"171":3,"173":2,"174":2,"175":1,"176":2,"178":3,"179":1,"181":2,"182":1,"187":3,"188":1,"189":4,"190":3,"191":4,"193":3,"197":1,"201":1,"202":1,"203":1,"204":2,"208":1,"214":2,"216":2,"220":4,"232":1,"235":2,"238":1,"243":1,"248":1,"254":7,"257":1,"261":5,"263":3,"264":1,"265":8,"266":2,"273":1,"274":1,"276":1,"280":104,"282":1,"287":2,"290":1,"292":1,"294":2}}],["fθ",{"2":{"266":1}}],["ffmpeg",{"2":{"263":1,"291":1}}],["fftw",{"2":{"263":2,"291":2}}],["fft−0",{"2":{"89":1}}],["fft",{"2":{"89":14}}],["fw",{"2":{"197":1}}],["fwiw",{"2":{"154":1}}],["fd",{"2":{"163":8,"164":8,"165":8,"166":8}}],["fdrop",{"2":{"76":3}}],["f2",{"2":{"153":2}}],["fb",{"2":{"89":3}}],["fmax",{"2":{"89":2}}],["fmaps",{"2":{"8":1}}],["fmap",{"2":{"7":3,"8":2,"10":1,"23":1,"52":22,"171":2,"265":1}}],["fmin",{"2":{"89":2}}],["f=identity",{"2":{"65":1}}],["f=σ",{"2":{"43":1}}],["fp32",{"2":{"59":1}}],["ft",{"2":{"55":3}}],["f64",{"2":{"53":1,"294":1}}],["f32>",{"2":{"147":11}}],["f32",{"2":{"53":1}}],["f16",{"2":{"53":1}}],["f1",{"2":{"50":1,"153":2}}],["fn=neuralode",{"2":{"233":1}}],["fn",{"2":{"49":2,"50":2,"56":8,"233":1,"284":2}}],["f",{"2":{"18":5,"19":7,"23":3,"39":4,"45":6,"49":2,"52":4,"56":1,"59":4,"62":2,"64":2,"67":34,"69":2,"70":6,"87":9,"89":9,"147":4,"158":1,"187":1,"194":3,"195":1,"196":1,"197":1,"220":1,"284":2}}],["func",{"2":{"147":2,"220":3}}],["functor",{"2":{"52":2,"158":2}}],["functors",{"2":{"3":1,"5":1,"7":2,"10":3,"23":2,"52":17,"53":4,"107":1,"112":1,"115":1,"123":1,"143":1,"158":1,"171":1,"263":2,"269":1,"291":1}}],["functionwrapperswrappers",{"2":{"263":1}}],["functionwrappers",{"2":{"263":1}}],["functionproperties",{"2":{"230":
1,"291":1}}],["function3",{"2":{"166":4}}],["function2",{"2":{"165":4}}],["function1",{"2":{"163":5}}],["functions",{"0":{"15":1,"16":1,"29":1,"50":1,"66":1,"67":1,"89":1,"212":1,"234":1,"245":1,"252":1,"260":1,"279":1,"286":1,"292":1},"2":{"7":1,"13":1,"16":1,"35":1,"37":1,"39":1,"43":4,"49":1,"50":2,"51":1,"53":1,"56":1,"59":1,"67":1,"75":1,"89":1,"91":1,"92":2,"93":1,"107":1,"114":2,"118":1,"122":1,"123":1,"124":1,"140":1,"149":1,"153":3,"158":1,"162":1,"166":1,"184":1,"185":1,"189":1,"191":1,"195":2,"243":1,"273":1,"292":1}}],["function",{"0":{"130":1,"163":1,"165":1,"166":1,"273":1},"1":{"164":1},"2":{"3":1,"4":4,"8":4,"10":2,"11":3,"15":2,"18":4,"19":3,"22":1,"23":4,"27":2,"28":1,"37":1,"39":2,"40":4,"43":3,"44":4,"45":5,"47":1,"49":15,"50":2,"52":11,"53":1,"54":3,"55":1,"56":11,"59":4,"60":1,"61":2,"62":1,"63":2,"64":1,"65":7,"67":19,"69":1,"70":1,"76":1,"77":1,"86":2,"87":3,"89":15,"93":1,"97":3,"107":1,"114":2,"116":3,"123":9,"124":1,"127":2,"132":1,"133":1,"134":1,"137":1,"143":1,"144":3,"145":2,"147":3,"148":1,"149":2,"153":3,"154":1,"155":1,"158":2,"161":1,"162":5,"163":1,"164":6,"166":2,"168":1,"169":1,"170":1,"171":2,"187":1,"190":2,"192":1,"194":1,"196":1,"197":1,"201":1,"203":1,"204":2,"210":1,"212":2,"213":1,"217":1,"220":4,"231":1,"232":4,"233":1,"234":2,"235":2,"236":2,"242":2,"243":1,"244":1,"245":2,"248":1,"249":1,"251":2,"252":14,"254":1,"257":1,"258":6,"259":3,"260":6,"261":1,"264":1,"265":5,"266":3,"270":2,"273":3,"274":3,"276":1,"277":1,"278":2,"279":2,"280":14,"282":1,"283":3,"284":7,"285":5,"286":4,"287":1,"292":16,"294":2,"295":7}}],["functionalities",{"2":{"24":1,"26":1}}],["functionality",{"0":{"24":1,"114":1,"115":1,"139":1},"2":{"6":1,"43":1,"49":1,"98":1,"107":2,"114":1,"151":2,"190":1}}],["functional",{"2":{"2":5,"3":5,"28":1,"136":1,"150":2,"184":1,"190":1,"198":2,"207":2,"215":2,"222":2,"228":1,"239":2,"247":2,"256":2,"262":2,"267":2,"275":2,"281":2,"289":2,"298":2}}],["future",{"2":{"89":1,"162":1,"263":1}}],["full",{"0":{"170":1
},"2":{"84":1,"89":1,"164":1,"170":16,"261":4}}],["fully",{"0":{"64":1},"2":{"15":1,"44":2,"50":1,"52":1,"121":1,"136":1}}],["fusion",{"2":{"62":1,"64":1,"254":7,"280":113,"287":2}}],["fuse",{"2":{"62":1}}],["fuses",{"2":{"62":1}}],["fused",{"2":{"50":1,"62":2,"64":2,"176":3,"184":1}}],["further",{"2":{"49":1}}],["fetch",{"2":{"155":1,"270":1}}],["feel",{"2":{"153":1}}],["feedforward",{"2":{"15":2,"78":1,"265":1}}],["few",{"2":{"91":1,"189":1,"193":1}}],["fed",{"2":{"43":4}}],["features",{"0":{"21":1,"105":1,"108":1,"112":1,"117":1},"1":{"22":1,"23":1,"24":1,"25":1},"2":{"21":3,"67":2,"68":1,"76":2,"81":1,"92":1,"102":1,"114":1,"141":2,"162":1,"210":2,"231":2,"277":1,"280":11}}],["feature",{"2":{"2":1,"15":2,"40":1,"42":6,"46":3,"55":1,"122":1,"123":1,"128":1,"147":4,"162":2,"171":2,"189":1}}],["flexibility",{"2":{"155":1}}],["flexible",{"2":{"7":1}}],["floops",{"2":{"241":1}}],["floopsbase",{"2":{"241":1}}],["floor",{"2":{"89":2}}],["flows",{"0":{"282":1},"1":{"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1}}],["flow",{"2":{"123":2}}],["floating",{"0":{"53":1},"2":{"53":4,"59":1}}],["float16",{"2":{"16":8,"53":1,"54":6,"89":2}}],["float32",{"2":{"5":3,"15":6,"16":8,"35":1,"37":1,"39":2,"45":2,"50":6,"53":2,"54":5,"67":2,"77":1,"81":4,"89":14,"96":57,"97":1,"123":41,"124":1,"126":9,"127":27,"133":6,"134":3,"135":3,"143":10,"146":6,"147":22,"149":1,"150":1,"153":2,"154":2,"155":3,"157":2,"163":10,"164":2,"165":11,"166":11,"170":2,"173":5,"186":5,"189":8,"194":5,"195":4,"196":1,"197":8,"201":1,"210":1,"219":4,"231":1,"238":77,"253":5,"254":5,"260":1,"261":2,"264":8,"270":3,"273":8,"283":1,"287":1,"288":2,"294":7}}],["float64=0",{"2":{"280":3,"287":2}}],["float64",{"2":{"5":4,"16":8,"50":1,"53":1,"54":4,"56":4,"67":3,"77":3,"80":2,"81":4,"82":6,"84":2,"85":4,"87":2,"88":4,"89":3,"171":7,"173":3,"189":5,"192":1,"219":4,"265":13,"293":1,"294":1,"295":1,"296":2}}],["flipkernel=true",{"2":{"117":1}}],["flips",{"2":{"80":1}}],["flipped=true",{"2":{"80":2}}],["flippe
d=false",{"2":{"80":2}}],["flipped",{"2":{"80":3}}],["flat",{"2":{"235":1}}],["flattening",{"2":{"114":1}}],["flatten",{"2":{"45":1,"77":1}}],["flattens",{"2":{"45":1}}],["flattenlayer",{"2":{"37":1,"45":3,"147":1,"211":3,"233":1,"238":9,"244":1,"258":1}}],["flattened",{"2":{"35":1,"45":3,"258":5,"263":1}}],["flaky",{"2":{"139":1}}],["flag",{"2":{"37":1}}],["fluxlinear",{"2":{"158":5}}],["fluxlayer",{"2":{"35":2}}],["fluxmpifluxmodel",{"2":{"139":1}}],["fluxmpi",{"0":{"138":1},"1":{"139":1,"140":1},"2":{"136":1,"138":3,"139":1,"140":2}}],["flux",{"0":{"35":1,"93":1,"157":1,"161":1},"1":{"158":1,"159":1,"160":1,"161":1},"2":{"35":14,"43":3,"77":3,"87":2,"91":1,"93":4,"139":1,"153":2,"157":3,"158":5,"160":4,"161":1,"188":1,"263":1,"290":1}}],["fribidi",{"2":{"263":1,"291":1}}],["friendly",{"2":{"91":2}}],["frequently",{"2":{"227":1}}],["frequencies",{"2":{"89":1}}],["frequency",{"2":{"89":6}}],["freqs=200",{"2":{"89":1}}],["freqs",{"2":{"89":7}}],["freq",{"2":{"89":2}}],["freetypeabstraction",{"2":{"263":1,"291":1}}],["freetype",{"2":{"263":1,"291":1}}],["freetype2",{"2":{"263":1,"291":1}}],["frees",{"2":{"178":1}}],["freeze",{"2":{"22":5,"142":1,"143":6,"144":6,"145":5,"146":1}}],["freezing",{"0":{"22":1,"142":1,"143":1,"144":1,"145":1,"146":1},"1":{"143":1,"144":1,"145":1,"146":1},"2":{"142":1,"144":1,"145":2}}],["free",{"2":{"5":1,"15":1,"49":1,"292":1}}],["freeable",{"2":{"5":1}}],["framerate=10",{"2":{"255":1}}],["framework",{"2":{"91":1,"93":1,"151":1,"188":1,"191":1,"208":1}}],["frameworks",{"0":{"34":1},"1":{"35":1},"2":{"40":1,"43":1,"91":3,"151":1,"189":1,"294":1}}],["frame",{"2":{"89":2}}],["frames",{"2":{"89":4}}],["frontend",{"2":{"97":1}}],["frozen",{"2":{"22":8,"143":9,"146":4}}],["frozenlayer",{"2":{"22":8}}],["fromfluxadaptor",{"2":{"35":5,"161":1}}],["from",{"0":{"34":1,"138":1,"157":1},"1":{"35":1,"139":1,"140":1,"158":1,"159":1,"160":1,"161":1},"2":{"4":1,"5":3,"7":2,"11":1,"15":6,"16":12,"23":1,"35":1,"37":1,"39":1,"40":2,"41":1,"43":10,"44":1,"46
":2,"49":6,"50":2,"51":1,"56":1,"59":1,"63":1,"66":1,"67":6,"78":1,"79":1,"80":2,"81":4,"82":1,"84":4,"85":4,"86":1,"87":1,"89":5,"93":1,"96":1,"97":1,"114":2,"115":1,"116":2,"147":1,"153":1,"154":1,"155":1,"163":2,"164":2,"165":1,"166":1,"173":1,"176":1,"178":3,"189":1,"197":2,"202":1,"208":1,"216":1,"220":1,"229":1,"238":3,"248":1,"258":2,"260":1,"263":1,"265":3,"266":4,"268":1,"270":1,"272":1,"283":2,"290":2,"292":3,"294":1}}],["far",{"2":{"153":1}}],["farley",{"2":{"45":1}}],["familiar",{"2":{"147":1,"199":1,"243":1}}],["fake",{"2":{"127":1}}],["facusapienza",{"2":{"163":1}}],["fact",{"2":{"89":1,"140":1}}],["factors",{"2":{"81":5,"89":1}}],["factor",{"2":{"15":2,"47":2,"50":2,"63":2,"65":8,"81":1,"89":2}}],["facilitates",{"2":{"55":1}}],["fausto",{"2":{"50":1}}],["favor",{"2":{"40":1,"52":5}}],["failed",{"2":{"126":1}}],["failing",{"2":{"70":1}}],["failures",{"2":{"35":1,"124":1}}],["fail",{"2":{"35":1,"70":4,"121":1}}],["fails",{"2":{"24":1,"69":1,"120":2,"122":1,"124":1}}],["fashionmnist",{"0":{"240":1},"1":{"241":1,"242":1,"243":1,"244":1,"245":1,"246":1,"247":1},"2":{"242":1,"246":2}}],["fashion",{"2":{"28":1}}],["fastlapackinterface",{"2":{"291":1}}],["fastbroadcast",{"2":{"230":1,"291":1}}],["fastpowerenzymeext",{"2":{"230":1,"291":1}}],["fastpowerreversediffext",{"2":{"230":1,"291":1}}],["fastpowertrackerext",{"2":{"230":1,"291":1}}],["fastpowerforwarddiffext",{"2":{"230":1,"291":1}}],["fastpower",{"2":{"230":4,"291":5}}],["fastclosures",{"2":{"200":1,"263":1}}],["fastest",{"2":{"120":5,"168":1}}],["faster",{"0":{"176":1},"2":{"3":1,"59":2,"60":3,"67":4,"83":1,"120":1,"162":1,"176":2,"177":1}}],["fast",{"2":{"21":1,"37":1,"46":1,"59":4,"61":2,"65":1,"66":1,"67":12,"87":2,"89":5,"114":1,"167":1,"176":2,"178":1,"195":1,"294":3}}],["fancy",{"2":{"97":1,"123":1}}],["fan",{"2":{"15":6,"40":2}}],["fallback",{"2":{"7":1,"11":3,"62":2,"64":2,"66":1,"107":1,"161":1}}],["fall",{"2":{"3":1}}],["falls",{"2":{"2":1,"59":1}}],["false",{"2":{"2":2,"10":1,"24":1,"35":1,
"37":2,"39":2,"40":2,"43":10,"44":3,"49":2,"50":6,"51":1,"52":1,"55":1,"63":1,"66":1,"71":1,"80":1,"87":1,"89":11,"116":1,"124":1,"162":1,"174":1,"180":1,"181":1,"210":1,"230":1,"231":1,"232":3,"236":2,"238":13,"241":1,"242":2,"259":1,"261":2,"280":2,"287":1,"295":1}}],["fitting",{"0":{"268":1},"1":{"269":1,"270":1,"271":1,"272":1,"273":1,"274":1,"275":1}}],["fit",{"2":{"220":1,"254":4,"266":1,"268":1}}],["figure",{"2":{"218":1,"221":1,"255":1,"264":1,"270":1,"274":1,"283":1,"288":1,"293":1,"294":1,"297":2}}],["figured",{"2":{"127":1}}],["fig",{"2":{"218":3,"221":3,"255":6,"264":3,"266":6,"270":3,"274":3,"283":3,"288":3,"293":3,"294":3,"297":6}}],["finite",{"2":{"163":4,"164":2,"167":1}}],["finitediffsparsearraysext",{"2":{"263":1}}],["finitediffstaticarraysext",{"2":{"263":1,"291":2}}],["finitediff",{"2":{"70":1,"162":1,"163":2,"164":2,"263":3,"291":1}}],["finish",{"2":{"125":1}}],["fingerprint",{"2":{"97":1,"200":1,"205":1,"214":1,"254":1,"261":1,"269":1,"280":1,"287":1}}],["finetune",{"2":{"220":1}}],["fine",{"2":{"83":1,"154":1}}],["findmax",{"2":{"266":1}}],["find",{"2":{"56":1,"128":1,"171":1,"197":1,"227":1,"266":1}}],["final",{"2":{"56":2,"81":1,"246":3}}],["finally",{"0":{"214":1},"2":{"28":1,"56":1,"137":1,"153":1,"154":1,"189":1,"197":1,"202":1,"238":1,"274":1,"297":1}}],["fix",{"2":{"121":1,"126":1,"127":2,"163":2,"167":1,"173":2,"227":1}}],["fixing",{"2":{"110":1,"166":1,"186":1}}],["fixedpointnumbers",{"2":{"263":1}}],["fixed=",{"2":{"56":1}}],["fixed",{"2":{"55":1,"56":4,"63":1,"83":1,"89":1,"115":2,"126":4,"191":1}}],["fix1",{"2":{"41":1,"45":1,"162":3,"165":1,"166":2,"233":1,"238":9,"259":1,"285":2,"294":2}}],["fillarrayssparsearraysext",{"2":{"263":1}}],["fillarraysstatisticsext",{"2":{"263":1}}],["fillarrayspdmatsext",{"2":{"230":1,"263":1}}],["fillarrays",{"2":{"230":1,"263":4}}],["fill",{"2":{"50":1,"67":1,"84":1,"189":1,"261":1}}],["filterbank",{"2":{"89":3}}],["filterbanks",{"2":{"89":6}}],["filters=64",{"2":{"261":1}}],["filters",{"2":{"89":2
,"258":24,"261":1}}],["filter",{"2":{"15":1,"80":2,"89":12}}],["filepaths",{"2":{"263":1,"269":1,"291":1}}],["filepathsbasetestext",{"2":{"230":1,"263":1,"269":1,"291":1}}],["filepathsbasemmapext",{"2":{"230":1,"263":1,"291":1}}],["filepathsbase",{"2":{"230":2,"263":3,"269":1,"291":3}}],["fileio",{"2":{"230":1,"241":1,"263":1,"269":1,"291":1}}],["filename",{"2":{"124":1}}],["file",{"2":{"1":1,"124":1,"147":3,"149":3}}],["fields",{"2":{"39":10,"45":2,"49":1,"52":1,"155":1}}],["fieldnames",{"2":{"7":1,"202":1}}],["field",{"2":{"7":3,"8":1,"35":2,"39":1,"51":1,"202":1}}],["first",{"2":{"2":1,"8":1,"23":1,"25":1,"35":1,"37":1,"39":2,"42":3,"45":2,"56":6,"76":1,"79":8,"81":8,"82":2,"83":1,"88":1,"89":2,"96":1,"122":2,"123":2,"127":4,"138":1,"149":1,"153":2,"155":2,"157":1,"158":1,"162":1,"163":1,"171":2,"173":2,"179":1,"191":1,"192":2,"194":1,"197":1,"202":3,"205":1,"212":1,"213":1,"214":1,"232":1,"234":1,"245":1,"250":1,"258":1,"261":1,"263":1,"264":2,"266":3,"274":2,"280":1,"293":1,"294":2,"295":1,"297":1}}],["fontconfig",{"2":{"263":1,"291":1}}],["footnotes",{"0":{"122":1}}],["fold",{"2":{"80":5}}],["foldl",{"2":{"51":3}}],["follow",{"2":{"124":1}}],["follows",{"2":{"3":1,"5":1,"39":1}}],["following",{"2":{"2":1,"5":1,"19":1,"23":1,"52":1,"53":1,"54":2,"66":1,"72":2,"73":2,"74":2,"77":2,"89":2,"124":1,"143":1,"148":1,"151":1,"157":1,"162":1,"167":1,"172":1,"173":1,"177":1,"179":2,"181":1,"202":1,"252":1,"266":1}}],["focuses",{"2":{"50":1}}],["focalloss",{"2":{"50":2}}],["focal",{"2":{"50":4}}],["four",{"2":{"189":1}}],["fourier",{"2":{"89":5}}],["fourth",{"2":{"50":1}}],["found",{"2":{"2":1,"90":1,"228":1}}],["forum",{"2":{"162":1}}],["forget",{"2":{"125":1}}],["fore",{"2":{"76":1}}],["formulas",{"2":{"292":1}}],["format",{"2":{"189":1,"291":1}}],["formats",{"2":{"39":4,"45":1}}],["forms",{"2":{"50":1}}],["form",{"2":{"44":1,"56":1,"87":1,"171":1,"185":1,"191":2}}],["forwarded",{"2":{"69":1}}],["forwarddiffext",{"2":{"291":1}}],["forwarddiffstaticarraysext",{"2":{"263
":1,"291":1}}],["forwarddiff",{"2":{"52":1,"66":1,"70":1,"87":6,"119":1,"162":3,"163":2,"165":4,"166":2,"170":1,"193":2,"194":4,"195":1,"263":2,"291":2}}],["forward",{"2":{"24":1,"43":1,"44":2,"51":1,"56":5,"97":1,"119":2,"127":1,"128":1,"163":1,"165":1,"166":1,"193":1,"194":2,"195":2,"265":1,"266":5,"274":1,"284":3,"285":1}}],["forbidden",{"2":{"15":1}}],["force=true",{"2":{"249":1,"257":1,"276":1,"282":1}}],["forces",{"2":{"93":1}}],["force",{"2":{"2":6,"35":4,"69":1,"148":1}}],["for",{"0":{"36":1,"135":1,"164":1,"177":1,"179":1,"188":1,"257":1,"282":1,"295":1},"1":{"37":1,"180":1,"181":1,"182":1,"183":1,"184":1,"185":1,"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1,"198":1,"258":1,"259":1,"260":1,"261":1,"262":1,"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1},"2":{"0":1,"1":1,"2":3,"3":14,"4":9,"5":3,"6":1,"7":11,"8":7,"11":1,"12":1,"15":11,"18":2,"19":3,"20":1,"22":5,"23":3,"24":8,"25":2,"27":2,"28":8,"29":2,"35":3,"37":5,"39":11,"40":10,"42":15,"43":24,"44":12,"45":6,"46":18,"47":8,"49":8,"50":9,"51":5,"52":15,"53":3,"54":4,"55":2,"56":16,"57":3,"58":1,"59":1,"60":1,"62":5,"63":8,"64":5,"65":15,"66":3,"67":5,"68":2,"69":1,"70":6,"75":3,"76":2,"77":1,"78":9,"79":6,"80":6,"81":5,"82":7,"83":12,"84":11,"85":6,"86":3,"87":7,"88":1,"89":40,"91":1,"92":1,"93":9,"95":1,"96":1,"97":6,"98":1,"99":1,"100":5,"101":1,"104":2,"105":2,"107":2,"108":1,"109":1,"110":1,"111":1,"112":1,"114":1,"116":2,"117":1,"118":2,"120":14,"121":2,"122":6,"123":5,"124":4,"125":1,"126":1,"127":1,"130":1,"132":2,"133":1,"136":1,"137":1,"140":1,"141":2,"142":1,"144":2,"147":1,"148":1,"151":2,"153":6,"154":3,"155":3,"156":1,"157":1,"160":1,"162":6,"164":1,"165":2,"166":1,"167":1,"168":1,"170":2,"171":4,"172":2,"173":3,"175":1,"177":1,"178":3,"180":2,"182":1,"183":1,"184":5,"185":1,"186":2,"187":1,"188":1,"189":3,"190":1,"191":2,"192":2,"193":5,"194":2,"195":2,"197":3,"201":3,"202":4,"203":2,"204":1,"205":5,"208":1,"212":1,"213":2,"214":1,"216":2,"220":3,"228":1,"
231":1,"232":2,"234":1,"235":5,"238":3,"245":1,"246":3,"248":1,"253":2,"254":1,"255":2,"259":1,"260":1,"261":2,"264":5,"265":2,"266":5,"271":1,"274":2,"278":3,"280":2,"285":8,"287":1,"288":2,"293":1}}],["tmp",{"2":{"292":4}}],["tmatch",{"2":{"8":3}}],["tval",{"2":{"280":1}}],["t∈",{"2":{"253":1}}],["ttrain",{"2":{"280":2}}],["ttraining",{"2":{"235":1,"246":2,"287":1}}],["ttest",{"2":{"235":1,"246":2,"280":1}}],["ttime",{"2":{"213":2,"235":3,"246":3}}],["td",{"0":{"135":1}}],["tdchain",{"0":{"133":1},"2":{"132":4,"134":1,"135":1}}],["tl",{"2":{"125":1}}],["tloss",{"2":{"124":1,"205":1}}],["t×hop",{"2":{"89":2}}],["tpu",{"0":{"99":1},"2":{"74":1,"99":1,"119":1,"120":1,"123":3}}],["t=rand",{"2":{"70":1}}],["t=float32",{"2":{"15":8}}],["tsteps",{"2":{"293":5,"294":4,"297":6}}],["tstate",{"2":{"197":6,"235":7,"274":12}}],["tsit5",{"2":{"218":1,"220":1,"221":1,"238":13}}],["tspan=",{"2":{"232":2,"236":1}}],["tspan",{"2":{"218":5,"221":2,"232":5,"236":3,"238":7,"293":4,"294":1,"297":1}}],["tsung",{"2":{"50":2}}],["ts",{"2":{"49":16,"255":6,"264":1,"265":4}}],["tail",{"2":{"265":1}}],["taccuracy",{"2":{"205":1}}],["tall",{"2":{"193":1}}],["tasklocalrng",{"2":{"96":1,"146":1,"188":1,"197":1,"219":1}}],["tasks",{"2":{"50":3}}],["tag",{"2":{"87":2}}],["tab>`",{"2":{"194":1}}],["tab",{"2":{"67":2}}],["tables",{"2":{"263":1}}],["tabletraits",{"2":{"263":1}}],["table",{"2":{"44":1,"189":1}}],["tangent",{"2":{"67":1}}],["tanhshrink",{"2":{"67":7}}],["tanh",{"2":{"25":2,"43":1,"67":18,"77":1,"87":2,"89":2,"96":11,"163":1,"164":1,"165":3,"166":3,"170":3,"220":2,"233":4,"238":36,"251":3,"265":3}}],["targets",{"2":{"50":1,"210":2,"231":2,"277":1,"280":9}}],["target",{"2":{"50":2,"69":10,"212":3,"234":3,"245":3,"252":6,"253":12,"254":10}}],["taking",{"2":{"45":1,"50":1,"51":1,"77":1,"89":1}}],["takeaway",{"2":{"155":1}}],["takes",{"2":{"8":1,"39":4,"50":2,"56":1,"89":1,"97":1,"123":1,"220":1,"266":1,"273":1,"294":2}}],["taken",{"2":{"5":1,"15":1,"40":2,"44":1,"57":1,"116":1,"123":1,"16
6":1}}],["take",{"2":{"1":1,"16":1,"23":1,"40":1,"49":1,"50":2,"57":1,"96":1,"121":1,"131":1,"163":1,"187":1,"251":1,"253":1,"265":1}}],["turingoptimext",{"2":{"263":1}}],["turing",{"2":{"263":9}}],["turns",{"2":{"77":1}}],["tutorials",{"0":{"223":1,"224":1,"225":1,"226":1,"228":1},"1":{"224":1,"225":1,"226":1,"227":1,"228":1},"2":{"56":1,"101":1,"228":1,"248":1}}],["tutorial",{"2":{"55":1,"97":1,"188":3,"193":1,"199":2,"208":2,"216":6,"228":1,"248":1,"263":2,"268":1,"276":1,"282":1,"283":1}}],["tu",{"2":{"18":1}}],["tuple=true",{"2":{"235":1}}],["tuples",{"2":{"7":2,"84":3}}],["tuple",{"2":{"3":2,"7":2,"15":3,"22":4,"23":1,"37":1,"39":7,"40":11,"42":15,"43":22,"44":5,"46":1,"47":3,"49":1,"50":1,"52":1,"65":2,"66":1,"78":3,"79":12,"84":2,"89":5,"92":1,"96":12,"123":1,"132":1,"165":1,"166":1,"171":1,"219":2,"233":2,"238":36,"265":1,"285":2}}],["two",{"2":{"15":3,"39":4,"44":1,"46":1,"47":3,"52":2,"76":1,"81":1,"82":2,"85":1,"89":1,"149":1,"153":1,"154":1,"155":1,"167":1,"187":1,"265":1,"292":4,"294":1}}],["te",{"2":{"213":4,"214":2,"235":2}}],["technology",{"2":{"90":1}}],["tell",{"2":{"89":1}}],["temporal",{"2":{"86":1,"253":1}}],["tends",{"2":{"63":1,"163":1,"164":1}}],["tensordataset",{"2":{"259":4}}],["tensorcore",{"2":{"83":1,"291":1}}],["tensors",{"2":{"47":1,"80":2}}],["tensorflow",{"2":{"43":1}}],["tensor",{"2":{"15":1,"46":1,"62":3,"78":3,"80":2,"147":96,"189":1,"258":1,"292":1}}],["terrible",{"2":{"49":1}}],["terminalloggers",{"2":{"263":1,"291":1}}],["terminate",{"2":{"220":1}}],["terminology",{"2":{"220":1}}],["terms",{"2":{"89":2,"122":1}}],["term",{"2":{"43":1,"163":1,"170":1,"265":2}}],["testext",{"2":{"241":1,"263":1,"291":1}}],["tested",{"2":{"92":1,"93":1,"121":2,"193":1,"227":1}}],["tests",{"2":{"69":2,"70":2,"89":2,"121":1,"141":2,"167":1}}],["testing",{"0":{"69":1},"2":{"68":3,"89":2,"92":1}}],["test",{"0":{"71":1},"2":{"41":3,"46":2,"69":7,"70":10,"71":8,"93":1,"122":1,"123":1,"163":2,"171":1,"210":5,"213":4,"214":20,"231":5,"235":3,"242":6,"246
":13,"263":1,"269":1,"280":10,"291":1,"293":1}}],["testmode`",{"2":{"163":2}}],["testmode",{"2":{"10":1,"41":3,"46":3,"123":3,"160":1,"205":2,"212":1,"213":1,"234":1,"245":1,"254":1,"260":3,"261":2,"274":2,"280":8,"287":1}}],["tiffimages",{"2":{"263":1,"269":1,"291":1}}],["tile",{"2":{"189":2}}],["tiles",{"2":{"189":1}}],["tilings",{"2":{"79":1}}],["tightly",{"2":{"185":1}}],["tier",{"2":{"119":9,"121":3}}],["tied",{"0":{"25":1},"2":{"93":1}}],["title=",{"2":{"288":1}}],["title",{"2":{"90":2,"255":1,"266":1}}],["tips",{"2":{"172":2}}],["tip",{"2":{"8":1,"35":1,"37":1,"43":1,"56":1,"64":1,"69":1,"87":2,"88":1,"136":1,"153":1,"162":2,"228":1,"292":1}}],["timeroutputs",{"2":{"291":1}}],["timewrapper",{"2":{"219":8,"220":1}}],["timelastindex",{"2":{"43":2,"51":1}}],["timestep",{"2":{"293":1}}],["timespace",{"2":{"293":1}}],["times",{"2":{"39":3,"67":2,"68":1,"80":1,"189":1,"192":1,"216":1}}],["time",{"0":{"132":1},"2":{"3":1,"35":1,"43":2,"78":1,"81":1,"86":3,"89":7,"120":2,"123":1,"131":3,"134":5,"213":3,"214":20,"219":1,"235":3,"237":1,"246":2,"255":1,"261":60,"287":4,"293":2,"294":3,"295":1,"297":2}}],["typing",{"2":{"95":1}}],["typical",{"2":{"56":1,"89":1}}],["typically",{"2":{"7":1,"43":1,"89":1,"93":1,"165":1,"178":1,"204":1,"253":1,"294":1}}],["typed",{"2":{"194":1}}],["typejoin",{"2":{"89":5}}],["typeof",{"2":{"55":1,"87":1,"96":8,"123":9,"127":1,"133":2,"153":4,"238":80,"274":4,"294":2}}],["types",{"0":{"7":1,"13":1,"129":1},"1":{"130":1,"131":1,"132":1,"133":1,"134":1,"135":1},"2":{"3":1,"8":2,"22":1,"52":6,"66":2,"67":1,"80":1,"83":1,"87":1,"89":1,"109":2,"112":1,"133":1,"155":1,"156":1,"184":1,"186":2}}],["type",{"0":{"54":1,"173":1,"175":1,"238":1},"2":{"3":13,"4":8,"7":6,"8":5,"13":2,"15":7,"16":1,"28":5,"35":1,"37":1,"39":2,"46":1,"50":1,"52":6,"54":12,"55":3,"56":4,"62":1,"66":1,"76":1,"80":1,"89":14,"123":1,"126":5,"127":13,"130":1,"137":3,"143":1,"144":1,"154":1,"155":3,"156":1,"173":2,"175":2,"184":2,"187":1,"202":2,"205":2,"237":1,"238":1,"242":1,"2
83":1,"284":1,"285":1}}],["tr",{"2":{"167":2,"170":11,"213":4,"235":2}}],["trying",{"2":{"153":1,"220":1}}],["try",{"2":{"120":1,"122":1,"123":1,"126":2,"127":1,"133":1,"155":2,"170":1,"189":1,"197":1,"206":1,"228":1}}],["trelu",{"2":{"67":3}}],["treated",{"2":{"56":1,"61":1,"88":1}}],["treat",{"2":{"56":1,"70":1,"77":2,"89":1,"154":1}}],["triplotbase",{"2":{"291":1}}],["triangularsolve",{"2":{"230":1,"291":1}}],["triangular",{"2":{"89":4}}],["trick",{"2":{"89":1,"258":1}}],["trivial",{"2":{"89":3,"93":1,"151":1,"191":1,"220":1,"238":26}}],["tries",{"2":{"62":1,"89":1}}],["trilinear",{"2":{"47":6,"81":9}}],["trigger",{"2":{"3":4,"4":2,"137":1,"148":1,"149":2}}],["truth",{"2":{"165":1,"197":1}}],["truncation",{"2":{"114":2}}],["truncatedstacktraces",{"2":{"230":1,"291":1}}],["truncated",{"2":{"13":2,"15":2}}],["truly",{"2":{"93":1}}],["true",{"0":{"293":1},"2":{"2":2,"3":2,"10":1,"15":2,"23":1,"24":2,"25":1,"35":2,"37":1,"39":3,"40":2,"41":4,"43":11,"47":1,"49":4,"50":40,"51":3,"52":1,"55":1,"56":4,"63":6,"65":4,"67":2,"70":3,"80":2,"81":12,"83":1,"85":1,"87":1,"89":11,"92":1,"96":8,"116":1,"123":10,"124":1,"126":1,"127":2,"133":2,"143":2,"146":2,"150":1,"163":3,"164":1,"165":1,"166":1,"168":1,"169":1,"170":1,"181":2,"185":1,"197":2,"204":4,"212":1,"218":2,"219":1,"220":2,"234":1,"236":1,"238":58,"245":1,"252":3,"254":2,"263":2,"265":1,"270":1,"274":7,"279":1,"285":2,"286":1,"287":1,"293":1,"294":2}}],["trace",{"0":{"167":1},"1":{"168":1,"169":1,"170":1},"2":{"167":7,"168":3,"169":2,"170":10,"292":4}}],["tracing",{"2":{"52":1,"123":1}}],["tracking",{"0":{"127":1},"2":{"117":1,"128":2}}],["trackerpdmatsext",{"2":{"230":1,"263":1,"291":1}}],["tracker",{"2":{"49":1,"52":1,"62":1,"64":1,"66":1,"70":1,"87":1,"119":1,"133":1,"230":1,"254":12,"263":3,"291":2}}],["tracked",{"2":{"126":1}}],["trackedarray",{"2":{"8":1,"87":1,"133":2}}],["trackedreals",{"2":{"8":1}}],["track",{"2":{"24":1,"39":2,"46":12,"54":1,"126":1,"127":5,"173":1}}],["traditional",{"2":{"44":1,"265":1}}],[
"transcodingstreams",{"2":{"291":1}}],["transducersadaptext",{"2":{"241":1,"263":1}}],["transducersdataframesext",{"2":{"241":1}}],["transducers",{"2":{"230":1,"241":3,"263":2}}],["transducerslazyarraysext",{"2":{"230":2}}],["transb",{"2":{"89":1}}],["transa",{"2":{"89":1}}],["transposed",{"2":{"40":2,"89":6,"147":1}}],["transpose",{"2":{"40":3,"80":2,"83":17,"89":14,"147":4,"189":1}}],["transformer",{"2":{"76":1}}],["transformed",{"2":{"61":1}}],["transformation",{"2":{"46":1}}],["transformations",{"2":{"35":1}}],["transforms",{"2":{"42":3,"285":7,"287":2,"288":2}}],["transform",{"2":{"35":1,"89":5,"259":4}}],["transferred",{"2":{"3":1,"158":1}}],["transferring",{"2":{"0":1,"5":1,"178":1}}],["transfer",{"0":{"2":1,"178":1},"2":{"2":1,"158":1,"178":2,"206":1}}],["trainloader",{"2":{"259":2}}],["trainset",{"2":{"259":2}}],["trainstate",{"0":{"124":1},"2":{"49":14,"96":3,"97":1,"114":1,"123":2,"124":4,"197":1,"205":1,"213":1,"216":1,"235":1,"246":1,"251":1,"254":1,"261":1,"274":5,"280":1,"287":1}}],["trained2",{"2":{"205":2}}],["trained",{"2":{"37":1,"205":2,"206":8,"220":2,"221":2,"254":5,"255":1,"287":1,"288":2,"297":3}}],["train",{"0":{"237":1},"2":{"37":1,"43":24,"49":7,"56":1,"96":8,"97":9,"123":1,"124":10,"197":4,"201":4,"205":13,"208":1,"210":6,"213":14,"214":4,"216":2,"220":3,"231":6,"235":8,"242":13,"246":24,"251":1,"253":1,"254":9,"259":6,"261":66,"274":1,"280":23,"282":1,"287":8}}],["trainmode`",{"2":{"261":1,"280":1}}],["trainmode",{"2":{"10":1}}],["trainable",{"2":{"7":1,"22":1,"40":2,"43":4,"44":3,"46":1,"49":2,"56":9,"158":9,"261":2,"280":2,"287":2}}],["trainingbackendcache",{"2":{"274":1}}],["training=val",{"2":{"41":3}}],["training",{"0":{"49":1,"136":1,"160":1,"199":1,"205":1,"213":1,"214":1,"216":1,"218":1,"220":1,"235":1,"240":1,"246":1,"248":1,"254":1,"261":1,"274":1,"280":1,"287":1,"290":1,"295":1,"296":1},"1":{"137":1,"138":1,"139":1,"140":1,"141":1,"200":1,"201":1,"202":1,"203":1,"204":1,"205":1,"206":1,"207":1,"217":1,"218":1,"219":1,"220":1,"
221":1,"222":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1,"247":1,"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1,"256":1,"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1,"298":1},"2":{"4":3,"10":2,"15":2,"24":1,"27":2,"28":4,"41":6,"46":9,"49":15,"50":2,"51":3,"63":9,"65":8,"87":1,"90":1,"93":2,"96":8,"97":3,"114":1,"115":2,"122":1,"123":1,"124":5,"126":1,"127":3,"136":2,"143":2,"145":1,"146":2,"160":2,"197":5,"199":1,"205":2,"208":1,"213":3,"214":20,"216":1,"235":5,"237":1,"246":2,"251":1,"254":2,"261":3,"273":1,"274":6,"280":3,"287":2,"295":2,"296":1}}],["t",{"2":{"2":1,"3":5,"4":6,"8":1,"15":20,"16":1,"23":1,"25":1,"35":1,"43":1,"45":1,"49":1,"51":1,"53":1,"56":6,"59":2,"62":1,"64":1,"68":1,"70":1,"75":1,"81":14,"82":2,"83":14,"85":6,"87":1,"89":28,"93":1,"96":1,"97":1,"107":2,"110":1,"114":1,"121":2,"122":1,"125":1,"128":1,"130":1,"131":1,"132":4,"134":2,"140":1,"141":1,"153":4,"155":3,"156":1,"158":1,"160":1,"161":1,"162":2,"163":1,"164":1,"165":10,"167":1,"171":1,"189":2,"191":2,"195":1,"197":1,"201":1,"202":4,"203":2,"205":2,"206":1,"210":1,"213":3,"216":1,"218":6,"219":11,"220":8,"221":9,"227":1,"231":1,"232":5,"236":3,"243":1,"244":1,"246":2,"253":6,"254":4,"258":4,"274":1,"277":1,"280":2,"283":24,"284":2,"285":9,"287":1,"292":16,"293":1,"294":1,"295":2}}],["th",{"2":{"89":2}}],["thousands",{"2":{"189":1}}],["though",{"2":{"87":1,"153":1,"177":1,"193":1}}],["those",{"2":{"7":1,"8":1,"47":1,"70":1,"80":1,"89":2,"98":1,"127":1,"141":1,"154":1,"189":1,"232":1,"266":1}}],["thunk",{"2":{"123":1}}],["thumb",{"2":{"47":1}}],["thus",{"2":{"39":1,"47":1,"65":1,"77":1,"78":1,"83":1,"87":2,"89":1}}],["threadingutilities",{"2":{"200":1,"263":1}}],["threads",{"2":{"89":2,"198":3,"207":3,"215":3,"222":3,"239":3,"247":3,"256":3,"262":3,"267":3,"275":3,"281":3,"289":3,"298":3}}],["thread",{"2":{"89":1}}],["threshold",{"2":{"67":1,"89":2}}],["three",{"2":{"45":1,"80":1,"189":1}}],["throughput",{"2":{"261":106,"287":3}}],["through",{"2":{"5":1,"8":1,"39"
:4,"46":4,"86":1,"89":1,"95":1,"101":2,"131":1,"155":1,"158":1,"172":1,"191":1,"202":3,"219":1,"258":1,"266":1}}],["throws",{"2":{"71":1}}],["throw",{"2":{"2":1,"3":1,"24":1,"49":1,"54":1,"107":1,"254":1}}],["thrown",{"2":{"2":2,"22":1,"24":2}}],["than",{"2":{"15":2,"67":1,"77":1,"80":1,"83":1,"89":1,"193":1,"235":1}}],["that",{"2":{"2":3,"3":8,"5":1,"7":4,"8":5,"10":1,"11":1,"15":3,"21":1,"22":1,"24":1,"25":2,"30":2,"37":2,"39":5,"40":2,"41":2,"42":6,"44":1,"45":4,"46":6,"49":1,"50":1,"51":1,"52":3,"55":1,"56":5,"59":2,"62":1,"63":1,"64":2,"66":1,"67":2,"68":1,"76":1,"77":2,"78":1,"79":6,"80":2,"82":1,"83":2,"84":2,"85":1,"87":3,"89":33,"93":2,"96":1,"97":1,"98":1,"102":2,"107":2,"110":2,"122":4,"123":1,"124":2,"125":2,"126":3,"127":1,"128":2,"137":5,"138":1,"140":1,"147":2,"153":3,"154":4,"155":1,"156":2,"157":1,"158":2,"161":1,"162":3,"163":2,"164":3,"165":2,"167":2,"170":3,"171":2,"173":3,"174":1,"179":1,"181":1,"185":1,"189":1,"191":3,"192":1,"194":1,"195":1,"202":3,"206":2,"216":2,"220":4,"227":1,"235":1,"238":1,"250":1,"251":1,"253":1,"258":1,"265":1,"266":3,"271":1,"273":1,"294":3}}],["third",{"2":{"189":2}}],["thirteenth",{"2":{"15":2}}],["think",{"2":{"56":1,"153":1}}],["things",{"2":{"56":1,"80":1,"89":1,"153":1,"155":1,"189":1}}],["thing",{"2":{"45":2,"171":1,"173":1}}],["this",{"2":{"2":3,"3":8,"4":10,"5":1,"7":5,"8":6,"10":1,"11":6,"12":1,"15":5,"18":2,"19":2,"21":2,"23":2,"24":5,"27":2,"28":3,"31":1,"32":1,"35":6,"37":3,"39":3,"40":5,"42":3,"43":6,"44":5,"45":1,"46":5,"47":3,"49":8,"50":4,"51":1,"52":22,"53":2,"54":7,"55":6,"56":11,"59":3,"60":1,"62":3,"64":3,"65":2,"66":4,"67":4,"68":2,"69":1,"70":2,"78":1,"79":4,"82":3,"83":8,"84":1,"85":1,"87":6,"88":1,"89":23,"90":2,"92":1,"93":2,"98":1,"102":2,"107":6,"108":1,"109":1,"110":3,"115":1,"116":1,"117":1,"118":1,"120":5,"122":1,"123":5,"124":3,"125":2,"126":3,"128":2,"133":1,"137":3,"139":2,"141":2,"142":2,"144":1,"147":5,"148":2,"149":2,"151":1,"153":8,"155":6,"156":1,"158":2,"160":3,"162":6,"163":9,"
164":5,"165":1,"166":1,"167":3,"168":1,"170":2,"171":5,"173":7,"174":1,"175":1,"177":1,"178":4,"180":1,"181":2,"182":3,"184":1,"185":1,"188":1,"189":5,"191":4,"193":1,"197":1,"198":1,"199":2,"201":1,"203":2,"207":1,"208":1,"211":1,"214":1,"215":1,"216":3,"220":5,"222":1,"235":1,"236":1,"238":2,"239":1,"243":1,"247":1,"248":2,"254":7,"256":1,"257":1,"261":1,"262":1,"263":1,"265":4,"267":1,"268":1,"271":1,"273":1,"275":1,"276":2,"277":1,"280":103,"281":1,"282":2,"283":1,"287":2,"289":1,"290":1,"292":3,"294":2,"296":1,"298":1}}],["theoretical",{"2":{"292":1}}],["theoretically",{"2":{"193":1}}],["theorem",{"2":{"45":1}}],["theglobal",{"2":{"124":1}}],["thesis",{"2":{"90":1}}],["these",{"2":{"7":5,"8":1,"16":1,"21":1,"26":1,"28":1,"35":3,"37":3,"46":1,"49":1,"56":1,"75":1,"83":2,"89":4,"91":2,"93":1,"96":1,"107":2,"114":3,"121":7,"131":1,"141":1,"153":2,"154":2,"162":1,"166":1,"177":1,"181":1,"184":1,"185":3,"188":1,"191":1,"195":1,"227":3,"228":1,"229":1,"265":1}}],["theta",{"2":{"67":1}}],["theta=1",{"2":{"67":1}}],["they",{"2":{"49":1,"56":2,"89":1,"127":1,"143":1,"147":1,"158":1,"189":1,"193":2}}],["therefore",{"2":{"85":1,"89":1}}],["there",{"2":{"22":1,"37":1,"49":1,"82":2,"87":1,"89":2,"96":1,"109":1,"120":1,"122":1,"127":1,"128":1,"137":1,"162":1,"165":2,"173":1,"177":1,"187":1,"189":1,"214":1}}],["them",{"0":{"172":1},"1":{"173":1,"174":1,"175":1,"176":1,"177":1,"178":1},"2":{"21":1,"23":1,"39":1,"44":1,"47":1,"50":1,"53":4,"56":1,"67":1,"83":1,"87":1,"91":1,"92":1,"96":1,"107":1,"114":1,"121":1,"171":1,"173":1,"184":1,"189":2,"192":1,"195":1,"228":2,"251":1,"265":1,"294":2}}],["their",{"2":{"7":2,"40":1,"101":1,"176":1,"189":1,"191":2}}],["then",{"2":{"1":2,"2":7,"3":1,"8":1,"19":2,"22":3,"24":2,"39":5,"40":4,"41":4,"43":9,"44":7,"45":1,"50":6,"51":1,"52":2,"55":2,"57":1,"63":2,"65":1,"67":2,"69":1,"70":3,"71":1,"76":1,"83":5,"89":5,"90":1,"93":1,"95":1,"97":1,"123":1,"147":1,"149":3,"153":1,"154":3,"167":2,"179":1}}],["the",{"0":{"124":1,"133":1,"134":1,"135":
1,"145":1,"166":2,"168":1,"169":1,"170":1,"188":1,"203":1,"205":1,"206":1,"211":1,"213":1,"214":1,"219":1,"220":1,"221":1,"232":1,"233":1,"237":1,"244":1,"251":1,"252":1,"253":1,"255":1,"261":1,"265":1,"280":1,"283":1,"287":1,"288":1,"293":1,"295":1,"296":1,"297":1},"1":{"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1,"198":1},"2":{"1":6,"2":14,"3":31,"4":14,"5":13,"6":3,"7":19,"8":22,"9":4,"10":7,"11":6,"15":60,"16":25,"18":20,"19":13,"20":1,"21":1,"22":14,"23":22,"24":15,"25":14,"26":1,"28":10,"29":4,"30":11,"31":5,"32":3,"35":12,"37":9,"39":46,"40":38,"41":6,"42":45,"43":69,"44":19,"45":33,"46":58,"47":14,"49":41,"50":55,"51":4,"52":18,"53":7,"54":19,"55":18,"56":50,"57":6,"59":10,"60":2,"61":3,"62":11,"63":16,"64":6,"65":21,"66":5,"67":6,"69":6,"70":9,"71":4,"72":7,"73":2,"74":3,"75":1,"76":19,"77":7,"78":5,"79":41,"80":11,"81":32,"82":6,"83":13,"84":14,"85":11,"86":11,"87":9,"88":4,"89":120,"91":1,"92":5,"93":6,"95":3,"96":6,"97":11,"98":1,"101":4,"102":2,"104":2,"107":2,"108":2,"109":2,"110":4,"111":1,"114":11,"116":9,"117":4,"118":3,"120":12,"121":2,"122":3,"123":25,"124":12,"125":3,"126":6,"127":19,"130":2,"131":2,"132":1,"133":1,"134":3,"135":2,"136":1,"137":22,"138":3,"139":1,"140":6,"143":1,"144":7,"145":3,"147":9,"148":2,"149":5,"151":6,"153":21,"154":10,"155":4,"156":5,"157":4,"158":6,"160":4,"161":1,"162":9,"163":5,"164":16,"165":5,"166":13,"167":11,"168":2,"170":7,"171":10,"172":1,"173":9,"175":1,"176":3,"177":2,"178":8,"179":4,"180":1,"181":1,"182":1,"183":1,"184":10,"185":4,"187":11,"188":2,"189":12,"190":7,"191":2,"192":5,"193":6,"194":1,"195":4,"196":2,"197":6,"199":2,"201":6,"202":24,"203":3,"204":2,"205":8,"206":5,"208":2,"210":2,"211":3,"213":1,"214":5,"216":5,"219":8,"220":11,"227":1,"228":6,"231":2,"232":13,"233":1,"235":3,"236":2,"237":3,"238":5,"243":6,"246":1,"248":2,"250":3,"251":2,"252":4,"253":3,"254":7,"257":1,"258":3,"260":6,"263":5,"264":3,"265":16,"266":22,"270":4,"273":3,"274":3,"280":103,"282":1,"283":3,"28
7":2,"290":1,"292":7,"293":2,"294":10,"295":4,"296":2,"297":2}}],["toolchain",{"2":{"239":1,"247":1}}],["tool",{"2":{"122":1}}],["tools",{"2":{"24":2,"56":1,"125":1,"188":1,"193":1}}],["too",{"2":{"91":1,"193":1,"266":1}}],["top",{"2":{"85":1,"89":4,"123":1,"151":1}}],["topic",{"2":{"43":1}}],["towards",{"2":{"52":6,"81":1,"191":1,"216":1}}],["toarray",{"2":{"37":1}}],["tosimplechainsadaptor",{"2":{"37":4,"93":1,"211":2}}],["together",{"2":{"23":1,"53":1}}],["total",{"2":{"9":1,"10":1,"25":1,"29":2,"39":4,"46":3,"56":4,"96":1,"97":1,"126":1,"127":1,"132":1,"205":16,"211":2,"212":6,"234":6,"245":6,"254":3,"259":2,"260":5,"261":12,"271":1,"280":2,"287":5,"294":1}}],["to",{"0":{"35":1,"37":1,"71":1,"72":1,"101":1,"102":1,"137":1,"147":1,"157":1,"172":1,"290":1},"1":{"103":1,"104":1,"105":1,"106":1,"107":1,"108":1,"109":1,"110":1,"111":1,"112":1,"113":1,"114":1,"115":1,"116":1,"117":1,"158":1,"159":1,"160":1,"161":1,"173":1,"174":1,"175":1,"176":1,"177":1,"178":1,"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1,"298":1},"2":{"1":3,"2":9,"3":10,"4":5,"5":5,"6":2,"7":4,"8":10,"10":2,"11":2,"15":14,"18":8,"19":5,"21":1,"22":7,"23":2,"24":10,"25":5,"28":9,"30":6,"32":1,"35":14,"37":17,"39":18,"40":19,"41":13,"42":9,"43":59,"44":7,"45":9,"46":22,"47":8,"49":10,"50":14,"51":3,"52":14,"53":11,"54":4,"55":4,"56":21,"57":1,"59":7,"60":2,"61":4,"62":5,"63":13,"64":3,"65":16,"66":2,"67":3,"68":2,"69":3,"70":10,"72":2,"73":1,"74":1,"76":7,"77":6,"78":2,"79":13,"80":8,"81":9,"82":1,"83":16,"84":16,"85":7,"86":5,"87":6,"89":71,"90":1,"91":2,"92":1,"93":9,"95":3,"96":2,"97":4,"101":2,"102":5,"104":1,"107":5,"108":1,"109":1,"110":5,"111":2,"114":6,"115":3,"116":6,"117":3,"121":3,"122":1,"123":16,"124":15,"125":4,"126":1,"127":7,"130":2,"131":2,"132":1,"136":1,"137":5,"138":3,"140":4,"142":1,"143":1,"144":1,"145":1,"147":8,"148":4,"149":2,"151":5,"153":7,"154":5,"155":4,"156":2,"157":3,"158":8,"160":1,"161":3,"162":7,"163":10,"164":6,"165":1,"166":4,"167":6,"170":1,"171":9,"173":
7,"174":1,"175":2,"176":1,"178":3,"179":4,"180":2,"181":3,"182":4,"184":4,"185":3,"186":1,"187":3,"188":4,"189":7,"190":1,"191":3,"193":1,"194":1,"195":3,"197":2,"199":2,"201":3,"202":10,"204":1,"206":1,"208":2,"210":1,"211":3,"216":4,"219":6,"220":7,"227":1,"228":6,"229":2,"231":1,"232":6,"233":1,"235":1,"236":1,"238":11,"243":1,"244":1,"248":2,"252":2,"253":2,"258":2,"261":3,"263":1,"264":3,"265":12,"266":5,"273":1,"274":1,"277":1,"280":14,"282":2,"283":1,"292":5,"294":6,"295":2,"296":2}}],["toml",{"2":{"1":1,"68":1}}],["l3",{"2":{"297":2}}],["lb",{"2":{"294":1,"297":1}}],["lbfgsb",{"2":{"263":1,"291":1}}],["lbfgs",{"2":{"216":1,"220":3}}],["lzo",{"2":{"263":1,"291":1}}],["lrucache",{"2":{"263":2}}],["lr",{"2":{"254":2,"280":2,"287":2}}],["ld",{"2":{"207":1,"215":1,"239":1,"247":1,"256":1,"262":1,"275":1,"281":1,"289":1}}],["ll",{"2":{"188":1,"264":1,"265":1,"294":2}}],["llvmextra",{"2":{"269":1,"291":1}}],["llvmopenmp",{"2":{"263":1,"291":1}}],["llvm",{"2":{"97":3,"198":1,"200":3,"205":3,"207":1,"214":3,"215":1,"222":1,"239":2,"241":1,"247":2,"254":3,"256":1,"261":3,"262":1,"267":1,"269":4,"275":1,"280":3,"281":1,"287":3,"289":1,"291":1,"298":1}}],["ln",{"2":{"79":5}}],["l=",{"2":{"50":1}}],["ls",{"2":{"50":4}}],["lstmcell",{"2":{"43":6,"116":1,"117":1,"202":3,"203":1}}],["lstm",{"0":{"199":1},"1":{"200":1,"201":1,"202":1,"203":1,"204":1,"205":1,"206":1,"207":1},"2":{"7":1,"43":1,"202":17,"203":4}}],["lprob",{"2":{"285":4,"286":2}}],["lpnorm",{"2":{"117":1}}],["lpnormpool",{"2":{"78":4}}],["lppool",{"2":{"42":1,"78":1,"117":1}}],["lp",{"2":{"42":4,"78":2,"265":1,"266":1}}],["l2",{"2":{"25":5,"89":1,"154":4,"294":2,"297":2}}],["l2=dense",{"2":{"25":1}}],["l1",{"2":{"25":5,"79":5,"154":4,"294":2,"297":2}}],["l1=dense",{"2":{"25":1}}],["lerc",{"2":{"263":1,"291":1}}],["lecture",{"2":{"229":1}}],["lecun",{"2":{"50":1}}],["leveraging",{"2":{"171":1}}],["levels=10",{"2":{"255":1}}],["level",{"2":{"15":2,"80":1,"89":1,"118":1,"124":1,"171":1,"184":2}}],["leftchildrights
iblingtrees",{"2":{"263":1}}],["left",{"2":{"79":5,"85":1}}],["len",{"2":{"70":2,"76":10,"201":1,"253":11}}],["length=25",{"2":{"266":4}}],["length=bc",{"2":{"253":3}}],["length=grid",{"2":{"253":1}}],["length=datasize",{"2":{"218":1,"293":1}}],["length=50",{"2":{"201":1}}],["length+k",{"2":{"89":1}}],["length",{"2":{"15":7,"16":24,"25":2,"39":2,"40":2,"42":6,"76":2,"78":6,"79":10,"80":1,"84":2,"89":21,"153":8,"154":2,"201":5,"204":1,"205":5,"212":1,"219":2,"234":1,"245":1,"255":8,"259":2,"260":1,"261":2,"265":3,"285":1,"292":2,"297":1}}],["lei",{"2":{"65":1}}],["leibler",{"2":{"50":1}}],["lets",{"2":{"155":1,"213":1,"235":1,"246":1,"263":1}}],["letters",{"2":{"86":1}}],["let",{"2":{"56":1,"97":1,"123":2,"124":1,"125":1,"126":2,"127":7,"138":1,"143":1,"153":1,"154":2,"155":2,"158":2,"162":3,"163":2,"165":1,"166":1,"170":3,"171":1,"173":1,"188":2,"189":1,"192":1,"194":1,"195":1,"196":1,"197":4,"202":1,"204":1,"206":1,"220":2,"235":1,"237":1,"265":1,"270":1,"271":1,"274":1,"283":1,"293":1,"294":1,"295":1,"297":2}}],["lempitsky",{"2":{"46":1,"65":1}}],["less",{"2":{"45":1,"67":3,"91":1,"121":1,"193":1}}],["least",{"2":{"89":3}}],["leaky",{"2":{"67":2}}],["leakyrelu",{"2":{"67":5,"116":1,"258":5}}],["learned",{"2":{"78":1,"93":1}}],["learn",{"2":{"67":1,"189":1}}],["learnable",{"2":{"44":1,"46":4}}],["learning",{"2":{"2":1,"12":1,"15":3,"50":1,"63":1,"65":1,"67":2,"167":1,"173":1,"176":1,"186":1,"187":1,"189":1,"197":1,"216":1,"261":1,"292":1}}],["leads",{"2":{"155":1}}],["leading",{"2":{"47":1}}],["lead",{"2":{"24":1,"35":1,"52":1,"56":2,"59":1,"171":1,"220":1,"261":1,"280":1}}],["leaf",{"2":{"3":1,"8":1,"52":2,"53":4,"96":12,"219":1}}],["leaves",{"2":{"3":1,"52":4,"56":1}}],["l",{"2":{"8":5,"9":2,"10":2,"22":13,"23":5,"35":2,"37":1,"39":2,"41":1,"78":2,"89":2,"132":5,"143":2,"144":2,"153":16,"158":4,"197":1,"202":2,"220":4,"232":4,"263":1,"270":2,"274":2,"285":12,"291":1,"293":2,"295":3}}],["lame",{"2":{"263":1,"291":1}}],["lambda=weight",{"2":{"261":1,"280":1}}],["la
youtpointers",{"2":{"200":1,"263":1,"269":1,"291":1}}],["layerfreezing",{"2":{"145":1}}],["layernorm",{"2":{"46":4,"65":1,"104":1}}],["layer2",{"2":{"39":1}}],["layer1",{"2":{"39":1}}],["layer",{"0":{"11":1,"23":1,"55":1,"56":1,"144":1,"152":1,"153":1,"154":1,"232":1,"233":1,"236":1,"243":1},"1":{"153":1,"154":1},"2":{"7":13,"8":9,"9":4,"10":4,"11":4,"22":7,"23":6,"24":13,"35":10,"37":11,"39":67,"40":5,"41":3,"42":12,"43":2,"44":5,"45":8,"46":24,"47":3,"54":5,"55":7,"56":11,"65":3,"76":1,"87":1,"93":2,"96":47,"114":2,"115":1,"116":1,"123":39,"125":2,"126":30,"127":69,"130":2,"132":3,"134":5,"143":9,"144":9,"145":5,"146":9,"147":18,"153":8,"154":12,"158":5,"162":1,"163":4,"164":1,"165":4,"166":6,"171":2,"173":1,"184":1,"197":1,"202":1,"211":18,"232":2,"238":211,"243":1,"244":1,"265":1,"271":2,"273":4,"274":8,"278":9,"294":12,"296":4}}],["layers=2",{"2":{"278":1}}],["layers",{"0":{"38":1,"40":1,"41":1,"42":1,"43":1,"44":1,"45":1,"46":1,"62":1,"64":1,"97":1,"143":1,"158":1,"161":1},"1":{"39":1,"40":1,"41":1,"42":1,"43":1,"44":1,"45":1,"46":1,"47":1},"2":{"6":1,"7":9,"8":2,"15":8,"23":3,"24":2,"35":2,"39":39,"40":1,"45":12,"54":1,"55":2,"67":1,"77":1,"81":4,"87":2,"92":4,"93":4,"97":1,"105":1,"114":2,"117":1,"126":9,"127":19,"132":4,"143":1,"151":2,"153":1,"154":3,"157":1,"158":3,"162":1,"173":1,"177":1,"183":1,"184":1,"202":1,"203":1,"232":1,"237":1,"238":6,"243":1,"265":2,"278":4,"280":2,"285":4,"287":2}}],["laid",{"2":{"89":1}}],["lazymodules",{"2":{"291":1}}],["lazyartifacts",{"2":{"269":1,"291":1}}],["lazyarraysstaticarraysext",{"2":{"230":1,"291":1}}],["lazyarrays",{"2":{"230":2,"291":2}}],["lazy",{"2":{"83":2,"89":2,"158":1}}],["lattice",{"2":{"89":1}}],["latter",{"2":{"83":1,"176":2}}],["later",{"2":{"264":1}}],["lateral",{"2":{"81":2}}],["latexstrings",{"2":{"263":1,"291":1}}],["latent",{"2":{"258":10,"260":2,"261":4}}],["latentsize",{"2":{"43":3}}],["latest",{"2":{"72":1,"228":1}}],["lack",{"2":{"83":1}}],["language",{"2":{"52":2,"87":1}}],["label=l",{"2":{"21
8":2,"221":4}}],["labels",{"2":{"50":2,"86":1,"201":3,"210":3,"231":3,"242":6}}],["label",{"2":{"50":18,"86":2}}],["larger",{"0":{"227":1},"2":{"93":1,"189":1,"227":1}}],["large",{"2":{"35":1,"67":1,"89":1,"93":3,"97":1,"193":1}}],["last",{"2":{"2":1,"15":1,"43":1,"45":2,"47":2,"55":1,"56":1,"61":1,"83":1,"86":1,"89":2,"132":1,"137":1,"164":1,"232":1,"264":2,"278":3}}],["luxreversediffext",{"2":{"291":2}}],["luxreactantext",{"2":{"200":2,"269":2}}],["luxtrackerext",{"2":{"263":2,"291":2}}],["luxtestutils",{"0":{"68":1},"1":{"69":1,"70":1,"71":1},"2":{"69":3,"70":2,"71":1}}],["luxzygoteext",{"2":{"241":2,"291":2}}],["luxenzymeext",{"2":{"200":2,"269":2,"291":2}}],["luxmlutilsext",{"2":{"200":2,"241":2}}],["luxlinear",{"2":{"158":7}}],["luxlibreversediffext",{"2":{"291":2}}],["luxlibreactantext",{"2":{"200":2,"269":2}}],["luxlibloopvectorizationext",{"2":{"291":2}}],["luxlibsleefpiratesext",{"2":{"291":2}}],["luxlibtrackerext",{"2":{"263":2,"291":2}}],["luxlibcudnnext",{"2":{"241":2}}],["luxlibcudaext",{"2":{"241":2}}],["luxlibenzymeext",{"2":{"200":2,"269":2,"291":2}}],["luxlib",{"0":{"58":1,"103":1},"1":{"59":1,"60":1,"61":1,"62":1,"63":1,"64":1,"65":1,"66":1,"104":1,"105":1},"2":{"57":2,"59":2,"60":1,"61":2,"62":1,"63":2,"64":1,"65":4,"66":1,"69":2,"87":3,"88":1,"104":1,"163":4,"176":8,"179":1,"184":2,"185":2,"200":3,"241":2,"261":2,"263":2,"269":3,"280":2,"291":6,"294":2}}],["luxflux",{"2":{"157":1,"158":2}}],["luxdeviceutils",{"2":{"110":3}}],["luxdl",{"2":{"43":1,"72":1,"98":1}}],["luxops",{"0":{"51":1},"2":{"51":8,"202":1,"203":1}}],["luxcomponentarraysext",{"2":{"241":2,"291":2}}],["luxcorechainrulescoreext",{"2":{"263":1,"269":1,"291":1}}],["luxcoremldatadevicesext",{"2":{"263":1,"269":1,"291":1}}],["luxcoresetfieldext",{"2":{"263":1,"269":1,"291":1}}],["luxcorefunctorsext",{"2":{"263":1,"269":1,"291":1}}],["luxcorereactantext",{"2":{"200":2,"269":2}}],["luxcoreenzymecoreext",{"2":{"200":1,"263":1,"269":1,"291":1}}],["luxcorearrayinterfacereversediffext",{"2"
:{"133":2,"291":1}}],["luxcorearrayinterfacetrackerext",{"2":{"133":2,"263":1,"291":1}}],["luxcore",{"0":{"6":1,"106":1},"1":{"7":1,"8":1,"9":1,"10":1,"11":1,"107":1,"108":1},"2":{"6":2,"7":3,"8":7,"9":2,"10":5,"11":1,"57":2,"107":2,"133":5,"151":3,"153":14,"154":8,"155":1,"179":1,"184":3,"200":2,"205":6,"263":7,"269":7,"280":3,"291":8}}],["luxcudadevice",{"2":{"111":1}}],["luxcuda",{"2":{"3":1,"73":2,"96":1,"100":1,"148":1,"171":1,"186":1,"190":3,"230":3,"241":3}}],["lux",{"0":{"33":1,"35":1,"36":1,"37":1,"72":1,"91":1,"93":1,"102":1,"113":1,"123":1,"125":1,"147":1,"151":1,"157":1,"179":1,"188":1,"216":1},"1":{"34":1,"35":1,"36":1,"37":2,"92":1,"93":1,"103":1,"104":1,"105":1,"106":1,"107":1,"108":1,"109":1,"110":1,"111":1,"112":1,"113":1,"114":2,"115":2,"116":2,"117":2,"124":1,"126":1,"127":1,"128":1,"152":1,"153":1,"154":1,"155":1,"156":1,"158":1,"159":1,"160":1,"161":1,"180":1,"181":1,"182":1,"183":1,"184":1,"185":1,"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1,"198":1,"217":1,"218":1,"219":1,"220":1,"221":1,"222":1},"2":{"4":4,"6":4,"7":3,"8":4,"11":1,"18":2,"19":1,"22":13,"23":4,"24":8,"25":3,"26":1,"27":2,"28":3,"29":2,"30":4,"31":1,"32":1,"35":11,"37":15,"39":11,"40":3,"41":7,"42":9,"43":9,"44":4,"45":12,"46":8,"47":2,"49":12,"50":16,"51":9,"52":14,"53":6,"54":2,"55":5,"56":19,"57":1,"58":1,"68":1,"69":2,"72":5,"73":4,"74":3,"75":1,"90":1,"91":1,"93":9,"95":4,"96":8,"97":5,"98":1,"99":1,"100":1,"101":1,"102":2,"107":3,"110":1,"111":2,"114":6,"115":8,"118":2,"121":1,"122":2,"123":7,"124":3,"125":3,"126":6,"127":4,"128":1,"130":1,"132":3,"133":4,"134":4,"135":1,"136":1,"140":1,"143":6,"144":2,"145":2,"146":3,"147":13,"148":2,"149":1,"151":7,"153":3,"154":2,"155":1,"157":3,"158":13,"160":2,"161":2,"162":4,"163":3,"164":1,"165":1,"166":1,"170":1,"171":6,"173":6,"175":2,"179":3,"184":3,"187":1,"188":3,"190":1,"191":1,"192":2,"193":1,"195":1,"197":2,"199":2,"200":6,"202":6,"203":1,"205":5,"208":1,"209":1,"211":4,"212":1,"213":2,"214":2,"21
6":2,"217":1,"220":1,"227":1,"228":3,"230":1,"232":5,"233":1,"234":1,"236":4,"238":103,"241":4,"243":4,"244":2,"245":1,"246":1,"248":2,"249":1,"250":1,"251":1,"254":2,"257":1,"258":4,"260":3,"261":5,"263":7,"265":7,"269":5,"273":2,"274":10,"276":1,"280":13,"282":2,"285":1,"287":3,"291":8,"294":2}}],["lt",{"2":{"4":2,"5":1,"15":3,"50":1,"56":2,"78":2,"87":1,"89":2,"98":1,"107":2,"147":4,"149":1,"154":2,"162":12,"218":1,"221":1}}],["live",{"2":{"200":1,"269":1}}],["libass",{"2":{"263":1,"291":1}}],["libaom",{"2":{"263":1,"291":1}}],["libwebp",{"2":{"263":1,"291":1}}],["libglvnd",{"2":{"263":1,"291":1}}],["libgcrypt",{"2":{"263":1,"291":1}}],["libgpg",{"2":{"263":1,"291":1}}],["libtiff",{"2":{"263":1,"291":1}}],["libtask",{"2":{"263":1}}],["libsixel",{"2":{"263":1,"291":1}}],["libvorbis",{"2":{"263":1,"291":1}}],["libuuid",{"2":{"263":1,"291":1}}],["libiconv",{"2":{"263":1,"291":1}}],["libffi",{"2":{"263":1,"291":1}}],["libfdk",{"2":{"263":1,"291":1}}],["libpthread",{"2":{"263":1,"291":1}}],["libpng",{"2":{"263":1,"291":1}}],["libxext",{"2":{"263":1,"291":1}}],["libxrender",{"2":{"263":1,"291":1}}],["libx11",{"2":{"263":1,"291":1}}],["libxcb",{"2":{"263":1,"291":1}}],["libxdmcp",{"2":{"263":1,"291":1}}],["libxau",{"2":{"263":1,"291":1}}],["libmount",{"2":{"263":1,"291":1}}],["lib64",{"2":{"207":1,"215":1,"239":1,"247":1,"256":1,"262":1,"275":1,"281":1,"289":1}}],["libllvm",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["libraries",{"2":{"162":1,"187":1,"239":1,"247":1,"263":2,"265":1}}],["library",{"2":{"90":1,"157":1,"191":1,"206":1,"207":1,"215":1,"239":1,"247":1,"256":1,"262":1,"275":1,"281":1,"289":1}}],["lib",{"2":{"126":1,"133":6,"163":4,"205":4,"207":1,"215":1,"238":3,"239":1,"247":1,"256":1,"261":2,"262":1,"275":1,"280":4,"281":1,"289":1}}],["literate",{"2":{"198":2,"207":2,"215":2,"222":2,"239":2,"247":2,"256":2,"262":2,"267":2,"275":2,"281":2,"289":2,"298":2}}],["literature",{"2":{"167":1}}],[
"literally",{"2":{"89":1}}],["little",{"2":{"89":1,"123":1,"127":1}}],["lisht",{"2":{"67":5}}],["lists",{"2":{"25":1,"118":1}}],["listed",{"2":{"21":1,"54":1}}],["list",{"2":{"3":1,"25":5,"28":1,"39":4,"45":1,"69":1,"70":3,"97":1,"102":2,"179":1,"186":1,"189":1,"228":1,"246":4,"260":1,"285":4}}],["limits=extrema",{"2":{"255":1}}],["limiter",{"2":{"238":26}}],["limit",{"2":{"51":1,"59":1,"63":1,"198":1,"207":1,"215":1,"222":1,"239":2,"247":2,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["limitations",{"2":{"35":1,"49":1,"93":1}}],["linux",{"2":{"198":2,"207":2,"215":2,"222":2,"239":2,"247":2,"256":2,"262":2,"267":2,"275":2,"281":2,"289":2,"298":2}}],["linked",{"2":{"228":1}}],["link",{"2":{"188":1}}],["linking",{"2":{"39":1}}],["liner",{"2":{"265":1}}],["linewidth=3",{"2":{"266":3,"270":1,"274":1}}],["linewidth=2",{"2":{"255":1,"293":1,"294":2,"297":3}}],["linewidth=4",{"2":{"218":2,"221":4,"297":1}}],["linesearch=linesearches",{"2":{"296":1}}],["linesearches",{"2":{"230":1,"263":1,"291":3}}],["linestyle=",{"2":{"218":2,"221":2}}],["lines",{"2":{"218":2,"221":4,"270":1,"274":1,"293":1,"294":2,"297":4}}],["line",{"2":{"70":1,"97":1,"200":1,"205":1,"214":1,"254":1,"261":1,"269":1,"280":1,"287":1}}],["lineplot",{"2":{"67":36,"89":4}}],["linear=dense",{"2":{"258":1}}],["linearsolvecudaext",{"2":{"230":2}}],["linearsolvekernelabstractionsext",{"2":{"230":1,"291":1}}],["linearsolveenzymeext",{"2":{"230":1,"291":1}}],["linearsolverecursivearraytoolsext",{"2":{"230":1,"291":1}}],["linearsolve",{"2":{"230":5,"291":4}}],["linearalgebraext",{"2":{"200":1,"263":2,"269":1,"291":2}}],["linearalgebra",{"2":{"89":2,"162":1,"263":1}}],["linearly",{"2":{"67":1}}],["linearities",{"2":{"67":1}}],["linear",{"0":{"44":1,"197":1},"2":{"15":1,"45":1,"47":1,"56":4,"67":13,"81":6,"87":1,"153":12,"154":23,"171":1,"197":1,"258":1}}],["lin",{"2":{"50":2}}],["likely",{"2":{"254":7,"280":102,"287":2}}],["like",{"2":{"15":1,"19":1,"40":2,"43":3,"49":1,"50":3,"52":1,"56":3,"61":1,"62":
2,"64":1,"66":1,"68":1,"77":1,"80":1,"89":3,"93":2,"108":1,"151":1,"153":2,"155":1,"157":1,"158":2,"178":1,"187":1,"189":3,"195":1,"216":1,"232":1,"235":1,"258":1,"264":1,"285":1,"293":1,"294":1}}],["lighter",{"2":{"151":1}}],["light",{"2":{"12":1}}],["lightweight",{"2":{"0":1}}],["lost",{"2":{"188":1}}],["lossfn",{"2":{"197":3,"204":2,"205":2,"212":1,"213":1}}],["lossfunctions",{"2":{"50":1}}],["losses",{"0":{"86":1},"2":{"50":1,"295":2,"297":3}}],["loss",{"0":{"50":1,"163":1,"165":1,"166":1,"204":1,"252":1,"273":1},"1":{"164":1},"2":{"49":5,"50":35,"56":5,"77":1,"86":3,"96":2,"97":14,"123":5,"124":2,"162":1,"163":10,"164":9,"165":4,"166":6,"197":19,"204":4,"205":59,"220":30,"245":1,"246":1,"248":1,"250":1,"252":23,"254":243,"260":9,"261":111,"273":4,"274":4,"279":3,"280":176,"286":2,"287":19,"295":6,"296":1,"297":2}}],["love",{"2":{"121":1}}],["lotka",{"2":{"218":2}}],["lot",{"2":{"93":1,"192":1}}],["lower",{"2":{"235":1}}],["low",{"2":{"89":1,"124":1}}],["lo=1",{"2":{"67":1}}],["logdensityproblemsadtrackerext",{"2":{"263":1}}],["logdensityproblemsaddifferentiationinterfaceext",{"2":{"263":1}}],["logdensityproblemsadforwarddiffext",{"2":{"263":1}}],["logdensityproblemsadadtypesext",{"2":{"263":1}}],["logdensityproblemsad",{"2":{"263":5}}],["logdensityproblems",{"2":{"263":1}}],["logexpfunctionsinversefunctionsext",{"2":{"263":1,"291":1}}],["logexpfunctionschangesofvariablesext",{"2":{"263":1}}],["logexpfunctionschainrulescoreext",{"2":{"263":1,"269":1,"291":1}}],["logexpfunctions",{"2":{"263":4,"269":1,"291":3}}],["loggingextras",{"2":{"241":1,"263":1,"291":1}}],["logging",{"2":{"137":1,"263":1}}],["logaddexp",{"2":{"89":1}}],["logarithm",{"2":{"50":1}}],["logarithmic",{"2":{"50":1}}],["log10",{"2":{"89":2}}],["logs",{"2":{"137":1}}],["logsumexp",{"2":{"87":1}}],["logsoftmax",{"2":{"77":3,"86":1,"87":1}}],["logsigmoid",{"2":{"67":3}}],["logσ²",{"2":{"258":8,"260":4}}],["logσ",{"2":{"67":3}}],["logcosh",{"2":{"67":5}}],["log⁡",{"2":{"50":1}}],["log",{"2":{"50":2,"5
1":2,"67":4,"77":3,"87":1,"89":2,"258":2,"265":1,"266":1,"284":20,"285":8,"286":1}}],["logitcrossentropy",{"2":{"234":1,"235":1}}],["logitbinarycrossentropy",{"2":{"204":1}}],["logitbce",{"2":{"50":5}}],["logit",{"2":{"50":1}}],["logits=val",{"2":{"50":2,"212":1,"234":1,"245":1,"279":1}}],["logits",{"2":{"50":5}}],["longer",{"2":{"114":2,"139":3}}],["long",{"2":{"35":1,"43":1,"155":1}}],["locations",{"2":{"84":1,"85":7}}],["location",{"2":{"24":3,"126":4,"127":10}}],["local",{"2":{"4":1,"29":2,"56":1,"137":1,"138":1,"207":2,"214":1,"215":2,"239":2,"247":2,"256":2,"262":2,"275":2,"280":11,"281":2,"289":2}}],["localpreferences",{"2":{"1":1,"149":2}}],["loosely",{"2":{"188":1}}],["loopedarrayop",{"2":{"66":1}}],["loopvectorization",{"2":{"59":1,"60":2,"64":1,"66":1,"177":1,"185":1,"291":3}}],["loop",{"0":{"185":1,"213":1},"2":{"59":1,"66":1,"89":2,"177":1,"185":1,"274":1}}],["loops",{"2":{"52":1,"66":1}}],["looping",{"2":{"5":1}}],["looks",{"2":{"178":1}}],["lookup",{"2":{"44":1}}],["look",{"2":{"22":1,"47":1,"195":1,"202":1,"229":1}}],["lo",{"2":{"15":2,"67":1}}],["loads",{"2":{"280":11}}],["loadcora",{"2":{"277":2,"280":1}}],["loadmnist",{"2":{"210":2,"213":1,"231":2,"235":1,"259":2,"261":1}}],["loader",{"2":{"124":1,"205":5}}],["loaded",{"2":{"2":2,"3":7,"4":2,"10":1,"11":1,"32":1,"53":1,"64":1,"107":2,"137":1,"149":2,"161":1,"185":1,"190":1}}],["load",{"0":{"283":1},"2":{"32":1,"59":1,"60":1,"64":1,"68":1,"114":1,"124":1,"148":1,"177":1,"206":1,"210":1,"231":1,"242":4,"246":1,"259":1,"283":2,"287":1}}],["loading",{"0":{"178":1,"210":1,"231":1,"242":1,"259":1,"277":1},"2":{"7":1,"35":1,"37":1,"60":1,"64":1,"75":1,"100":1,"178":1,"206":1}}],["=ϕwhere",{"2":{"293":1,"294":1}}],["=χu",{"2":{"293":1,"294":1}}],["=∫θp",{"2":{"266":1}}],["=wx+b",{"2":{"197":1}}],["=x",{"2":{"194":1}}],["=1v∑i=1vvitjvinote",{"2":{"167":1}}],["=1v∑i=1vvitaviwe",{"2":{"167":1}}],["=12xtx",{"2":{"194":1}}],["=12",{"2":{"89":1}}],["=e",{"2":{"167":1}}],["=i",{"2":{"167":1}}],["=α−βcos⁡",{"2":{
"89":1}}],["=∑k=0n−1window",{"2":{"89":1}}],["=vcat",{"2":{"43":1}}],["=val",{"2":{"37":1,"50":2}}],["=true",{"2":{"24":1}}],["=static",{"2":{"24":1}}],["=>",{"2":{"22":2,"23":3,"25":8,"35":2,"37":5,"39":6,"40":2,"43":4,"44":4,"46":12,"56":8,"77":3,"96":6,"97":8,"123":9,"124":3,"126":17,"127":14,"132":4,"147":5,"157":4,"163":2,"164":2,"165":4,"166":4,"170":4,"173":1,"197":1,"202":2,"203":2,"211":15,"233":5,"244":1,"251":4,"258":6,"265":3,"271":4,"274":4,"278":2,"285":3,"294":6}}],["=0",{"2":{"15":2}}],["=nothing",{"2":{"2":1,"43":1,"46":1,"50":2,"283":2}}],["==3",{"2":{"83":2,"89":2,"292":1}}],["==1",{"2":{"83":3}}],["===",{"2":{"25":4,"67":2,"242":2,"260":1,"261":1}}],["==",{"2":{"1":1,"15":4,"19":1,"40":10,"42":15,"44":3,"47":2,"51":2,"55":1,"56":1,"63":1,"76":1,"78":3,"81":5,"83":13,"89":3,"97":2,"124":3,"132":1,"137":1,"144":1,"145":2,"155":2,"156":1,"197":2,"204":1,"210":1,"212":1,"220":1,"231":1,"234":1,"245":1,"246":2,"254":2,"260":2,"261":3,"265":1,"274":2,"279":1,"280":1,"283":1,"285":1,"287":3,"292":5}}],["=",{"2":{"1":2,"2":2,"3":1,"5":8,"15":14,"19":1,"22":2,"23":5,"25":8,"27":3,"35":4,"37":5,"39":40,"40":7,"41":6,"42":9,"43":6,"44":14,"45":25,"46":15,"47":3,"50":59,"52":1,"55":2,"56":46,"61":1,"63":3,"65":6,"67":24,"69":3,"70":2,"73":4,"74":3,"76":2,"77":12,"79":18,"80":30,"81":25,"82":10,"83":4,"84":10,"85":17,"87":12,"88":1,"89":42,"90":13,"96":52,"97":27,"107":2,"123":64,"124":9,"126":25,"127":47,"132":10,"133":3,"134":3,"135":2,"137":7,"138":1,"143":29,"144":1,"146":23,"147":124,"149":3,"150":7,"153":8,"154":9,"155":5,"157":7,"158":12,"163":21,"164":12,"165":20,"166":21,"168":2,"169":2,"170":12,"171":19,"173":5,"178":6,"186":9,"187":6,"188":1,"189":8,"190":1,"191":3,"192":3,"194":3,"195":4,"196":1,"197":17,"198":7,"201":6,"202":6,"203":5,"204":5,"205":19,"207":8,"210":9,"211":21,"212":5,"213":14,"214":2,"215":8,"217":2,"218":12,"219":4,"220":20,"221":6,"222":7,"231":9,"232":6,"233":5,"234":5,"235":10,"236":3,"238":139,"239":8,"242":8,"243":2,"244":3
,"245":5,"246":21,"247":8,"249":2,"252":13,"253":19,"254":16,"255":11,"256":8,"257":2,"258":19,"259":7,"260":23,"261":25,"262":8,"264":21,"265":19,"266":19,"267":7,"270":8,"271":3,"272":1,"273":12,"274":16,"275":8,"276":2,"277":3,"278":6,"279":3,"280":19,"281":8,"282":2,"283":13,"284":9,"285":14,"286":3,"287":11,"288":5,"289":8,"292":55,"293":20,"294":38,"295":4,"296":14,"297":13,"298":7}}],["jpegturbo",{"2":{"263":2,"269":1,"291":2}}],["json3",{"2":{"230":1,"241":1}}],["jointly",{"2":{"202":1}}],["journal",{"2":{"63":1}}],["jvi",{"2":{"167":1}}],["jvp",{"0":{"18":1},"2":{"18":1,"169":6,"170":10,"193":4,"195":4}}],["j∈rd×d",{"2":{"167":1}}],["jnp",{"2":{"147":1}}],["jit",{"2":{"97":1,"147":1,"280":2,"288":1}}],["jimmy",{"2":{"65":1}}],["jerk",{"2":{"89":2}}],["jet`",{"2":{"69":1}}],["jet",{"0":{"69":1},"2":{"69":14}}],["jmlr",{"2":{"67":1}}],["jax",{"0":{"147":1},"2":{"147":29,"188":1}}],["jamie",{"2":{"65":1}}],["jacobian`",{"2":{"163":1}}],["jacobian",{"0":{"163":1,"164":1,"166":1,"168":1,"169":1,"170":1,"195":1,"196":1},"1":{"164":1},"2":{"18":11,"19":6,"162":3,"163":3,"164":9,"166":5,"167":4,"168":1,"169":1,"170":17,"193":6,"195":2,"196":1,"261":1,"280":1}}],["j",{"2":{"15":1,"19":2,"76":2,"81":2,"89":1,"163":2,"164":3,"166":2,"167":1,"170":8,"288":2}}],["just",{"2":{"3":1,"39":1,"41":3,"46":3,"56":1,"67":1,"76":1,"89":3,"96":1,"140":1,"147":1,"149":1,"151":2,"187":1,"189":1,"195":1,"235":1}}],["julia∂t",{"2":{"165":1}}],["julia∂x",{"2":{"163":1,"164":1,"166":1}}],["juliaz",{"2":{"288":1}}],["juliazygote",{"2":{"127":1}}],["juliazeros",{"2":{"189":2}}],["juliazerosc64",{"2":{"16":1}}],["juliazerosc32",{"2":{"16":1}}],["juliazerosc16",{"2":{"16":1}}],["juliazeros64",{"2":{"16":1}}],["juliazeros32",{"2":{"16":1}}],["juliazeros16",{"2":{"16":1}}],["juliaxt",{"2":{"134":1}}],["juliax",{"2":{"123":1,"147":1,"173":2,"189":11,"191":2,"197":1}}],["juliaxlogy",{"2":{"51":1}}],["juliaxlogx",{"2":{"51":1}}],["julia∇filter",{"2":{"89":1}}],["julia∇depthwiseconv",{"2":{"89":
4}}],["julia∇conv",{"2":{"89":4}}],["julia∇grid",{"2":{"85":1}}],["julia∇imrotate",{"2":{"82":1}}],["julia∇upsample",{"2":{"81":4}}],["juliaq",{"2":{"76":1}}],["juliaσ",{"2":{"67":2}}],["juliaelu",{"2":{"67":1}}],["juliaeachslice",{"2":{"51":1}}],["juliaembedding",{"2":{"44":1}}],["juliakldivergenceloss",{"2":{"50":1}}],["juliakaiming",{"2":{"15":2}}],["juliahlo",{"2":{"147":1}}],["juliahamming",{"2":{"89":1}}],["juliahann",{"2":{"89":1}}],["juliahardswish",{"2":{"67":1}}],["juliahardtanh",{"2":{"67":1}}],["juliahardσ",{"2":{"67":2}}],["juliahuberloss",{"2":{"50":1}}],["juliahingeloss",{"2":{"50":1}}],["juliapkg>",{"2":{"179":1}}],["juliaprob",{"2":{"293":1,"294":1,"297":1}}],["juliaprintln",{"2":{"170":1,"194":1}}],["juliapred",{"2":{"123":2}}],["juliapredilate",{"2":{"89":1}}],["juliapredilated",{"2":{"89":1}}],["juliaps",{"2":{"127":1,"137":1,"197":1,"205":1}}],["juliapixel",{"2":{"81":1}}],["juliapixelshuffle",{"2":{"47":1}}],["juliapower",{"2":{"89":1}}],["juliapooldims",{"2":{"78":1}}],["juliapoissonloss",{"2":{"50":1}}],["juliapad",{"2":{"79":6}}],["juliaparallel",{"2":{"39":1}}],["juliaparameterlength",{"2":{"9":1}}],["juliapairwisefusion",{"2":{"39":1}}],["juliaw",{"2":{"189":1,"197":1}}],["juliaweights",{"2":{"186":1,"187":3}}],["juliaweightnorm",{"2":{"46":1}}],["juliawithin",{"2":{"87":1}}],["juliawrappedfunction",{"2":{"45":1}}],["juliann",{"2":{"171":1}}],["juliannlib",{"2":{"84":4,"89":1}}],["julian",{"2":{"97":1,"197":1}}],["julianooplayer",{"2":{"45":1}}],["juliancclbackend",{"2":{"27":1}}],["juliamodel",{"2":{"123":2,"124":1,"126":2,"127":4,"135":1,"147":1,"154":1,"164":1,"170":1,"197":1,"238":2,"271":1}}],["juliamelscale",{"2":{"89":1}}],["juliameanpool",{"2":{"42":1,"78":1}}],["juliamish",{"2":{"67":1}}],["juliamultigate",{"2":{"51":1}}],["juliamsleloss",{"2":{"50":1}}],["juliamseloss",{"2":{"50":1}}],["juliamake",{"2":{"76":1}}],["juliamatch",{"2":{"54":1}}],["juliamaeloss",{"2":{"50":1}}],["juliamaximum",{"2":{"89":1}}],["juliamaxout",{"2":{"45
":1}}],["juliamaxpool",{"2":{"42":1,"78":1}}],["juliampibackend",{"2":{"27":1}}],["juliavjp",{"2":{"196":1,"274":1}}],["juliavariationalhiddendropout",{"2":{"41":1}}],["juliavector",{"2":{"18":1}}],["juliay",{"2":{"39":2}}],["juliabegin",{"2":{"270":1,"274":1,"283":1,"297":1}}],["juliabackend",{"2":{"137":1,"171":2}}],["juliabatchnorm",{"2":{"46":1,"65":1}}],["juliabatched",{"2":{"19":1,"60":1,"83":6,"89":2}}],["juliabf16",{"2":{"53":1}}],["juliabias",{"2":{"61":2,"87":1}}],["juliabinaryfocalloss",{"2":{"50":1}}],["juliabinarycrossentropyloss",{"2":{"50":1}}],["juliabilinear",{"2":{"44":1}}],["juliabidirectionalrnn",{"2":{"43":1}}],["juliabranchlayer",{"2":{"39":1}}],["juliabcast",{"2":{"30":1}}],["julialux",{"2":{"211":1}}],["julialuxops",{"2":{"51":1}}],["julialength",{"2":{"189":1}}],["julialeakyrelu",{"2":{"67":1}}],["julialang",{"2":{"89":1,"126":1,"133":3,"163":2,"198":1,"205":2,"207":1,"215":1,"222":1,"238":3,"239":1,"247":1,"256":1,"261":1,"262":1,"267":1,"275":1,"280":2,"281":1,"289":1,"298":1}}],["julialayernorm",{"2":{"46":1,"65":1}}],["julialayer",{"2":{"23":1}}],["julialpnormpool",{"2":{"78":1}}],["julialppool",{"2":{"42":1}}],["julialisht",{"2":{"67":1}}],["julialstmcell",{"2":{"43":1}}],["julialossfn",{"2":{"197":1}}],["julialoss",{"2":{"123":1,"295":1}}],["julialogaddexp",{"2":{"89":1}}],["julialogsumexp",{"2":{"87":1}}],["julialogsoftmax",{"2":{"77":1}}],["julialogσ",{"2":{"67":2}}],["julialogcosh",{"2":{"67":1}}],["julialocal",{"2":{"29":1}}],["julialoaded",{"2":{"3":1}}],["juliaunfold",{"2":{"80":1}}],["juliaunfreeze",{"2":{"22":2}}],["juliausing",{"2":{"69":1,"73":8,"74":4,"96":1,"97":1,"123":1,"126":1,"127":1,"132":1,"143":1,"146":1,"147":1,"153":1,"155":1,"157":2,"158":2,"162":1,"171":1,"173":1,"174":1,"179":1,"186":2,"188":1,"190":1,"193":1,"197":1,"198":1,"200":1,"207":1,"209":1,"215":1,"217":1,"222":1,"230":1,"239":1,"241":1,"247":1,"249":1,"256":1,"257":1,"262":1,"267":1,"269":1,"275":1,"276":1,"281":1,"282":1,"289":1,"291":1,"298":1}}],["j
uliaupsample",{"2":{"47":1,"81":5}}],["juliaupdate",{"2":{"10":1}}],["juliafig",{"2":{"266":2}}],["juliaf",{"2":{"194":1,"195":1}}],["juliafmap",{"2":{"123":1}}],["juliafunction",{"2":{"123":2,"127":1,"134":1,"153":2,"163":1,"165":1,"166":1,"168":1,"169":1,"170":1,"197":1,"201":1,"202":2,"203":1,"205":1,"210":1,"213":1,"218":1,"220":1,"231":1,"232":2,"233":1,"235":1,"242":1,"243":2,"244":1,"246":1,"252":1,"254":1,"258":2,"260":1,"261":1,"265":1,"270":1,"274":1,"277":1,"278":1,"279":1,"280":1,"283":2,"285":1,"287":1,"292":4,"293":1,"294":1}}],["juliafunctional",{"2":{"3":1}}],["juliafused",{"2":{"62":1,"64":1}}],["juliafast",{"2":{"59":2}}],["juliaf64",{"2":{"53":1}}],["juliaf32",{"2":{"53":1}}],["juliaf16",{"2":{"53":1}}],["juliaforward",{"2":{"274":1}}],["juliafor",{"2":{"192":1}}],["juliafold",{"2":{"80":1}}],["juliafoldl",{"2":{"51":1}}],["juliafocalloss",{"2":{"50":1}}],["juliaflattenlayer",{"2":{"45":1}}],["juliafluxlayer",{"2":{"35":1}}],["juliafromfluxadaptor",{"2":{"35":1}}],["juliafrozenlayer",{"2":{"22":1}}],["juliafreeze",{"2":{"22":2}}],["juliajvp",{"2":{"195":1}}],["juliajet",{"2":{"69":1}}],["juliajacobian",{"2":{"18":1}}],["juliajulia>",{"2":{"5":1,"15":2,"22":1,"23":1,"25":1,"35":1,"37":1,"39":3,"45":4,"46":3,"50":15,"56":6,"67":27,"69":1,"70":1,"72":2,"77":2,"79":5,"80":2,"81":3,"82":1,"83":3,"84":4,"85":1,"87":1,"88":1,"89":5}}],["juliarng",{"2":{"97":1,"126":1,"133":1,"153":1,"158":2,"188":1,"192":1,"270":1}}],["juliarnncell",{"2":{"43":1}}],["juliarrelu",{"2":{"67":1}}],["juliarandom",{"2":{"192":1}}],["juliarandc64",{"2":{"16":1}}],["juliarandc32",{"2":{"16":1}}],["juliarandc16",{"2":{"16":1}}],["juliarand64",{"2":{"16":1}}],["juliarandnc64",{"2":{"16":1}}],["juliarandnc32",{"2":{"16":1}}],["juliarandnc16",{"2":{"16":1}}],["juliarandn64",{"2":{"16":1}}],["juliarandn32",{"2":{"16":1}}],["juliarandn16",{"2":{"16":1}}],["juliarand32",{"2":{"16":1}}],["juliarand16",{"2":{"16":1}}],["juliareverse",{"2":{"89":1}}],["juliareversesequence",{"2":{"45":1}
}],["juliarelu6",{"2":{"67":1}}],["juliarelu",{"2":{"67":1}}],["juliarecursive",{"2":{"52":6}}],["juliarecurrence",{"2":{"43":1}}],["juliareshapelayer",{"2":{"45":1}}],["juliares",{"2":{"39":2}}],["juliareset",{"2":{"3":1}}],["juliarepeatedlayer",{"2":{"39":1}}],["juliareplicate",{"2":{"8":1}}],["juliareduce",{"2":{"30":1}}],["juliareactant",{"2":{"2":1}}],["juliaopen",{"2":{"147":1}}],["juliaopt",{"2":{"137":1,"272":1}}],["juliaonesc64",{"2":{"16":1}}],["juliaonesc32",{"2":{"16":1}}],["juliaonesc16",{"2":{"16":1}}],["juliaones64",{"2":{"16":1}}],["juliaones32",{"2":{"16":1}}],["juliaones16",{"2":{"16":1}}],["juliaorthogonal",{"2":{"15":1}}],["juliaoutputsize",{"2":{"11":1}}],["juliatstate",{"2":{"274":1}}],["juliats",{"2":{"255":1}}],["juliatest",{"2":{"70":1}}],["juliatestmode",{"2":{"10":1}}],["juliatanhshrink",{"2":{"67":1}}],["juliatanh",{"2":{"67":1}}],["juliatosimplechainsadaptor",{"2":{"37":1}}],["juliatotal",{"2":{"29":1}}],["juliatr",{"2":{"170":1,"214":2}}],["juliatry",{"2":{"126":1,"127":1}}],["juliatranspose",{"2":{"89":2}}],["juliatrain",{"2":{"235":4,"237":1}}],["juliatrainstate",{"2":{"49":2}}],["juliatrainmode",{"2":{"10":1}}],["juliatrelu",{"2":{"67":1}}],["juliatruncated",{"2":{"15":1}}],["juliaimport",{"2":{"95":1}}],["juliaim2col",{"2":{"89":2}}],["juliaimrotate",{"2":{"82":1}}],["juliaistft",{"2":{"89":1}}],["juliaistraining",{"2":{"51":1}}],["juliais",{"2":{"89":1}}],["juliaisleaf",{"2":{"3":1}}],["juliainput",{"2":{"171":1}}],["juliainsert",{"2":{"89":1}}],["juliainstancenorm",{"2":{"46":1,"65":1}}],["juliainternal",{"2":{"66":1}}],["juliainitialized",{"2":{"28":1}}],["juliainitialize",{"2":{"28":1}}],["juliainitialstates",{"2":{"10":1}}],["juliainitialparameters",{"2":{"9":1}}],["juliaidentity",{"2":{"15":1}}],["juliadsum",{"2":{"286":1}}],["juliadudt",{"2":{"221":1}}],["juliadataloader",{"2":{"178":2}}],["juliadata",{"2":{"137":1}}],["juliadb",{"2":{"89":1}}],["juliadot",{"2":{"76":2}}],["juliadicecoeffloss",{"2":{"50":1}}],["juliadistribut
edutils",{"2":{"137":1}}],["juliadistributeddatacontainer",{"2":{"32":1}}],["juliadistributedoptimizer",{"2":{"31":1}}],["juliadisplay",{"2":{"8":1}}],["juliadropout",{"2":{"41":1,"63":1,"88":2}}],["juliadepthwiseconvdims",{"2":{"80":1}}],["juliadepthwiseconv",{"2":{"80":1,"89":2}}],["juliadenseconvdims",{"2":{"80":1}}],["juliadense",{"2":{"44":1}}],["juliadebuglayer",{"2":{"24":1}}],["juliadeviceiterator",{"2":{"5":1}}],["juliadefault",{"2":{"3":1}}],["juliacdev",{"2":{"150":1}}],["juliacalc",{"2":{"89":1}}],["juliactc",{"2":{"86":1}}],["juliacelu",{"2":{"67":1}}],["juliacrossentropyloss",{"2":{"50":1}}],["juliacol2im",{"2":{"89":1}}],["juliacompute",{"2":{"49":1}}],["juliaconst",{"2":{"123":1,"204":1,"212":1,"234":1,"245":1,"265":1,"273":1,"294":2,"295":2}}],["juliaconvdims",{"2":{"80":1}}],["juliaconvtranspose",{"2":{"40":1}}],["juliaconv",{"2":{"40":1,"80":1,"89":2}}],["juliacontains",{"2":{"8":1}}],["juliachain",{"2":{"39":1}}],["juliacheck",{"2":{"8":1}}],["juliacpu",{"2":{"2":1}}],["juliaanalytical",{"2":{"253":1}}],["juliaadtype",{"2":{"296":1}}],["juliaadd",{"2":{"89":1}}],["juliaadaptor",{"2":{"211":1}}],["juliaadaptivemeanpool",{"2":{"42":1}}],["juliaadaptivemaxpool",{"2":{"42":1}}],["juliaadaptivelppool",{"2":{"42":1}}],["juliaadapt",{"2":{"35":1,"37":1}}],["juliaalpha",{"2":{"63":1}}],["juliaalphadropout",{"2":{"41":1}}],["juliaallreduce",{"2":{"30":1}}],["juliaapply",{"2":{"8":1,"49":2}}],["juliaabstract",{"2":{"7":3,"284":1}}],["julia>",{"2":{"5":4,"15":3,"23":6,"25":3,"35":6,"37":7,"39":3,"45":8,"50":45,"56":34,"67":32,"69":1,"70":2,"72":2,"77":6,"79":9,"80":9,"81":9,"82":7,"83":9,"84":5,"85":9,"87":6,"88":3,"89":12}}],["juliasafe",{"2":{"89":1}}],["juliastruct",{"2":{"134":1,"154":1,"155":1,"202":1,"219":1,"232":1,"236":1,"251":1}}],["juliastft",{"2":{"89":1}}],["juliastorage",{"2":{"89":2}}],["juliastatefulluxlayer",{"2":{"55":1}}],["juliastatefulrecurrentcell",{"2":{"43":1}}],["juliastatelength",{"2":{"10":1}}],["juliastateless",{"2":{"8":1}}],["j
uliaspectrogram",{"2":{"89":1}}],["juliasparse",{"2":{"15":1}}],["juliaswish",{"2":{"67":1}}],["juliasoftmax",{"2":{"77":1}}],["juliasoftsign",{"2":{"67":1}}],["juliasoftshrink",{"2":{"67":1}}],["juliasoftplus",{"2":{"67":1}}],["juliasquaredhingeloss",{"2":{"50":1}}],["juliasize",{"2":{"189":1}}],["juliasigmoid",{"2":{"67":1}}],["juliasiamesecontrastiveloss",{"2":{"50":1}}],["juliasingle",{"2":{"49":2}}],["juliasimplechainslayer",{"2":{"37":1}}],["juliaselu",{"2":{"67":1}}],["juliaselectdim",{"2":{"45":1}}],["juliasetup",{"2":{"8":1}}],["juliaset",{"2":{"4":2,"57":1}}],["juliascale",{"2":{"44":1}}],["juliaskipconnection",{"2":{"39":1}}],["juliasynchronize",{"2":{"30":1}}],["juliashare",{"2":{"25":1}}],["juliasupported",{"2":{"3":1}}],["juliagdev",{"2":{"171":1}}],["juliaglu",{"2":{"87":1}}],["juliaglobalmeanpool",{"2":{"42":1}}],["juliaglobalmaxpool",{"2":{"42":1}}],["juliagloballppool",{"2":{"42":1}}],["juliaglorot",{"2":{"15":2}}],["juliagather",{"2":{"84":1}}],["juliagemm",{"2":{"89":1}}],["juliagelu",{"2":{"67":1}}],["juliagenericlossfunction",{"2":{"50":1}}],["juliagetproperty",{"2":{"51":1}}],["juliaget",{"2":{"3":2,"28":1}}],["juliagrid",{"2":{"85":1}}],["juliagroupnorm",{"2":{"46":1,"65":1}}],["juliagrucell",{"2":{"43":1}}],["juliagpu",{"2":{"1":1,"2":1}}],["julia",{"0":{"188":1},"1":{"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1,"198":1},"2":{"1":1,"24":1,"35":1,"56":3,"57":1,"62":1,"66":1,"67":1,"69":1,"70":1,"71":1,"72":2,"75":1,"83":1,"89":9,"90":1,"91":1,"93":1,"95":2,"96":3,"97":1,"98":1,"101":1,"114":2,"118":1,"123":1,"124":1,"144":2,"145":2,"147":2,"148":1,"153":1,"162":1,"163":1,"165":1,"166":1,"170":1,"171":2,"172":1,"173":1,"185":2,"186":4,"188":2,"189":7,"192":1,"193":1,"198":9,"206":2,"207":9,"215":9,"222":9,"238":2,"239":12,"247":12,"252":1,"256":9,"259":1,"262":9,"263":1,"264":1,"265":4,"266":2,"267":9,"275":9,"281":9,"289":9,"292":1,"298":9}}],["jllwrappers",{"2":{"291":1}}],["jll",{"2":{"200":1,"230":2,"239":2,"241"
:6,"247":2,"263":67,"269":6,"291":67}}],["jld2",{"2":{"200":1,"206":3,"230":1,"241":1}}],["jl`",{"2":{"64":1,"163":2,"261":1,"280":1}}],["jl",{"0":{"31":1,"32":1,"69":1,"72":1,"103":1,"106":1,"109":1,"110":1,"113":1,"123":1,"138":1,"179":1,"216":1},"1":{"104":1,"105":1,"107":1,"108":1,"111":1,"112":1,"114":1,"115":1,"116":1,"117":1,"124":1,"139":1,"140":1,"180":1,"181":1,"182":1,"183":1,"184":1,"185":1,"217":1,"218":1,"219":1,"220":1,"221":1,"222":1},"2":{"0":1,"2":3,"3":8,"6":5,"7":2,"8":1,"10":2,"11":1,"13":2,"18":1,"19":1,"28":8,"31":1,"32":2,"35":1,"37":4,"43":1,"49":8,"50":2,"51":1,"53":1,"57":1,"58":1,"59":3,"60":1,"64":2,"66":2,"68":1,"69":1,"70":6,"72":4,"77":1,"82":2,"83":1,"87":2,"89":1,"93":2,"95":2,"97":3,"98":1,"99":2,"100":5,"107":3,"110":5,"112":1,"114":1,"119":10,"120":15,"122":5,"123":3,"124":1,"126":2,"133":6,"136":1,"137":1,"138":1,"139":1,"140":1,"148":1,"151":5,"153":1,"163":4,"171":3,"175":2,"176":2,"177":2,"178":3,"179":3,"181":1,"185":2,"186":1,"188":1,"190":1,"191":1,"193":4,"195":1,"197":1,"198":1,"199":2,"205":4,"207":1,"208":4,"211":1,"214":1,"215":1,"216":3,"220":3,"222":1,"228":3,"229":1,"232":1,"238":6,"239":1,"247":1,"248":3,"256":1,"261":2,"262":1,"263":2,"267":1,"272":1,"275":1,"276":1,"280":4,"281":1,"289":1,"290":1,"293":1,"294":1,"298":1}}],["p=res",{"2":{"297":1}}],["p=θ",{"2":{"295":1}}],["p=params",{"2":{"294":1}}],["p=2",{"2":{"42":3}}],["p^",{"2":{"293":1,"294":1}}],["p^2",{"2":{"293":1}}],["pcre2",{"2":{"291":1}}],["pcie",{"2":{"239":1,"247":1}}],["ptxas",{"2":{"280":11}}],["ptr",{"2":{"209":1}}],["ptrarrays",{"2":{"200":1,"263":1}}],["pngfiles",{"2":{"263":1,"291":1}}],["pdes",{"2":{"248":1}}],["pde",{"0":{"248":1},"1":{"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1,"256":1},"2":{"253":8,"254":2,"255":3}}],["pdmats",{"2":{"230":1,"263":1}}],["physics",{"2":{"252":6,"254":58,"294":1}}],["philosophies",{"2":{"158":1}}],["phase",{"2":{"46":4}}],["pythonfrom",{"2":{"147":1}}],["python",{"2":{"147":3}}],["pytorch",{"2"
:{"40":3,"44":1,"78":1,"93":1,"116":3,"117":1,"188":1}}],["pkgversion",{"2":{"263":1,"269":1,"291":1}}],["pkg>",{"2":{"179":2}}],["pkg",{"2":{"72":4,"73":9,"74":2,"95":2,"96":1,"98":2,"179":1,"198":2,"207":2,"215":2,"222":2,"239":2,"247":2,"256":2,"262":2,"267":2,"275":2,"281":2,"289":2,"298":2}}],["pmlr",{"2":{"65":1}}],["p2",{"2":{"50":6}}],["p1",{"2":{"50":4}}],["pixman",{"2":{"263":1,"291":1}}],["pixel",{"2":{"47":2,"81":4,"82":4,"85":3,"89":2}}],["pixelshuffle",{"2":{"47":1}}],["pixels",{"2":{"42":3,"47":2,"85":1}}],["pickle",{"2":{"230":1,"241":1}}],["pipe",{"2":{"219":1}}],["pipeline",{"2":{"135":1}}],["pipelines",{"2":{"49":1}}],["pitfalls",{"0":{"172":1},"1":{"173":1,"174":1,"175":1,"176":1,"177":1,"178":1}}],["pinn",{"0":{"248":1},"1":{"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1,"256":1},"2":{"248":1,"251":4,"254":5,"255":1}}],["pinns",{"2":{"165":1,"248":1}}],["pin",{"2":{"125":2}}],["piecewise",{"2":{"67":2}}],["pi",{"2":{"50":1,"81":1}}],["peel",{"2":{"202":1,"203":1}}],["peel`",{"2":{"202":1}}],["people",{"2":{"153":1}}],["pesky",{"2":{"127":1}}],["pessimistic",{"2":{"89":1}}],["peak",{"2":{"89":1}}],["pended",{"2":{"114":1}}],["penalties",{"2":{"56":1}}],["penultimate",{"2":{"43":2,"61":1,"62":1}}],["perspective",{"2":{"292":1}}],["perceptron",{"2":{"268":1}}],["permutedims",{"2":{"260":1,"283":1}}],["permuteddimsarrays",{"2":{"89":1}}],["permuteddimsarray",{"2":{"83":8,"89":3}}],["permutations",{"2":{"83":1}}],["perhaps",{"2":{"67":1,"89":1}}],["per",{"2":{"46":4,"89":1,"164":1,"265":2}}],["periodic=false",{"2":{"89":2}}],["periodic=true",{"2":{"89":2}}],["periodic",{"2":{"40":1,"89":6}}],["period",{"2":{"21":1}}],["perfect",{"2":{"15":1,"87":1}}],["performs",{"2":{"89":2,"149":1,"220":1}}],["performant",{"2":{"89":2}}],["performance",{"0":{"172":1,"177":1},"1":{"173":1,"174":1,"175":1,"176":1,"177":1,"178":1},"2":{"15":2,"24":1,"49":1,"52":5,"56":3,"59":1,"89":3,"120":4,"121":2,"122":1,"171":1,"172":2,"173":2,"177":1}}],["performing",{"2
":{"42":3}}],["performed",{"2":{"37":2,"40":1,"265":1}}],["perform",{"2":{"8":1,"30":2,"40":3,"49":2,"78":3,"89":2,"158":1,"265":1,"292":1}}],["p",{"2":{"35":1,"41":12,"56":2,"63":7,"78":5,"88":5,"218":2,"220":4,"221":2,"232":6,"236":2,"266":2,"292":3,"293":6,"294":2,"296":1}}],["plugin",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["plus",{"2":{"22":1,"25":1,"39":6,"46":7,"56":3,"96":1,"97":1,"126":2,"127":2,"132":1,"211":2,"271":1,"294":1}}],["platform",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["plan",{"2":{"162":1}}],["places",{"2":{"56":2,"80":1,"89":1}}],["place",{"2":{"52":5,"59":1,"61":2,"81":4,"83":2,"84":1,"88":1,"140":1,"260":1}}],["plotutils",{"2":{"263":1,"291":1}}],["plotting",{"0":{"221":1}}],["plot",{"2":{"89":3,"264":3,"266":9,"274":1,"293":1,"294":1,"297":1}}],["please",{"2":{"4":2,"90":2,"95":1,"97":1,"101":1,"122":1,"128":1,"141":1,"162":1,"202":1,"216":1,"227":1,"228":3,"248":1}}],["push",{"2":{"288":1,"295":1}}],["pullback",{"2":{"127":2}}],["pull",{"2":{"101":1,"107":1}}],["publisher",{"2":{"90":1}}],["public",{"2":{"3":1,"50":1,"87":1,"89":1}}],["purpose",{"2":{"193":1}}],["purposes",{"2":{"54":1,"155":1,"220":1,"231":1,"259":1}}],["pure",{"2":{"8":1,"91":1,"92":1,"93":1,"191":1}}],["pseudorandom",{"2":{"192":1}}],["pseudo",{"2":{"41":3,"188":1}}],["ps",{"2":{"7":2,"8":5,"22":2,"23":18,"25":17,"30":2,"35":4,"37":2,"39":4,"45":9,"49":2,"50":2,"54":1,"55":5,"56":22,"96":3,"97":6,"123":16,"124":6,"126":5,"127":12,"130":3,"132":2,"133":2,"134":6,"135":2,"137":2,"143":9,"144":7,"145":6,"146":2,"147":2,"153":7,"154":7,"155":10,"157":5,"158":7,"162":3,"163":9,"164":9,"165":8,"166":11,"168":2,"169":2,"170":9,"171":7,"173":5,"197":7,"202":4,"204":2,"205":4,"206":3,"212":2,"213":3,"220":4,"221":1,"232":2,"233":5,"234":2,"235":2,"236":3,"238":15,"243":2,"245":2,"246":2,"252":4,"254":3,"258":7,"260":8,"261":5,"265":10
,"266":1,"273":1,"274":1,"279":2,"280":6,"285":2,"286":2,"287":3,"294":2}}],["powerful",{"2":{"189":1}}],["power",{"2":{"89":9,"266":1}}],["potential",{"2":{"80":1}}],["potentially",{"2":{"7":1,"11":1,"37":1}}],["pool",{"2":{"78":3}}],["pooldims",{"2":{"78":1}}],["pooling",{"0":{"42":1,"78":1},"2":{"42":21,"78":3,"89":1,"117":1}}],["polygonops",{"2":{"291":1}}],["polynomial",{"0":{"268":1},"1":{"269":1,"270":1,"271":1,"272":1,"273":1,"274":1,"275":1},"2":{"268":1,"270":1}}],["polynomials",{"2":{"67":1}}],["polyesterweave",{"2":{"200":1,"263":1,"291":1}}],["polyester",{"2":{"66":1,"200":1,"263":1,"269":1,"291":1}}],["polyalgorithm",{"2":{"64":1}}],["poissonrandom",{"2":{"291":1}}],["poisson",{"2":{"50":1}}],["poissonloss",{"2":{"50":2}}],["pointers",{"2":{"89":1}}],["points",{"2":{"85":1,"89":3,"158":1,"197":1,"264":4,"265":1,"266":1,"270":1}}],["point",{"0":{"53":1},"2":{"4":2,"53":4,"59":1,"125":2,"137":1,"142":1,"148":1,"189":1,"293":1,"294":1}}],["populated",{"2":{"202":2}}],["populate",{"2":{"24":1}}],["positivefactorizations",{"2":{"263":1}}],["position=",{"2":{"218":1,"221":1,"294":1,"297":1}}],["position",{"2":{"45":2,"260":1,"292":1}}],["possibly",{"2":{"40":2,"42":3,"120":1}}],["possible",{"2":{"1":1,"2":1,"7":1,"11":1,"28":2,"35":2,"59":3,"61":1,"62":2,"64":1,"83":1,"121":1,"163":1}}],["posterior",{"2":{"266":2}}],["posted",{"2":{"162":1}}],["post",{"2":{"7":1,"37":1}}],["packing",{"2":{"263":1,"291":1}}],["packageextensioncompat",{"2":{"230":1}}],["packages",{"0":{"33":1,"98":1,"217":1},"1":{"34":1,"35":1,"36":1,"37":1},"2":{"3":3,"18":4,"19":2,"37":1,"49":2,"57":1,"96":1,"98":3,"100":1,"107":1,"110":1,"118":2,"121":3,"122":2,"148":1,"172":1,"177":1,"184":1,"185":3,"239":1,"247":1,"263":1}}],["package",{"0":{"200":1,"209":1,"230":1,"241":1,"249":1,"269":1,"291":1},"2":{"0":1,"3":1,"4":3,"12":1,"13":1,"31":1,"68":3,"73":1,"74":1,"87":1,"95":1,"98":1,"110":3,"114":1,"118":1,"119":1,"122":2,"149":2,"182":1,"187":1,"190":1,"228":1,"248":1}}],["pango",{"2":{"2
63":1,"291":1}}],["painful",{"2":{"125":1}}],["pairs",{"2":{"50":1,"238":13}}],["pair",{"2":{"40":2,"43":1,"50":1}}],["pairwisefusion",{"2":{"39":3}}],["past",{"2":{"93":1}}],["passes",{"2":{"39":2,"89":1,"128":1,"191":1}}],["passed",{"2":{"10":1,"25":2,"39":6,"43":5,"45":6,"56":1,"65":1,"69":1,"70":1,"77":1,"80":1,"86":1,"92":1,"116":1,"154":1}}],["passing",{"2":{"8":1,"39":1,"46":1,"56":2,"89":1,"137":2,"186":2}}],["pass",{"2":{"8":1,"18":2,"39":1,"40":1,"44":2,"46":1,"51":2,"53":1,"56":3,"70":1,"71":1,"87":1,"89":4,"93":1,"97":1,"114":1,"127":3,"137":1,"153":1,"186":1,"202":3,"274":2}}],["pal2023efficient",{"2":{"90":1}}],["pal2023lux",{"2":{"90":1}}],["pal",{"2":{"90":2}}],["paying",{"2":{"89":1}}],["pay",{"2":{"87":1}}],["paper",{"2":{"87":1}}],["patience",{"2":{"280":2}}],["patches",{"2":{"89":1}}],["patterns",{"2":{"89":1}}],["pattern",{"2":{"50":1,"123":1,"178":1,"264":1}}],["path",{"2":{"10":1,"39":1,"115":2,"143":2,"198":1,"207":2,"215":2,"222":1,"239":2,"247":2,"256":2,"262":2,"267":1,"275":2,"281":2,"289":2,"292":4,"298":1}}],["page",{"2":{"20":1,"21":1,"22":1,"55":1,"102":1,"123":1,"125":1,"171":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["paddedviews",{"2":{"263":1}}],["padded",{"2":{"89":2}}],["padding=0",{"2":{"78":1}}],["padding",{"0":{"79":1},"2":{"15":1,"40":9,"42":12,"79":10,"85":10,"89":11,"147":2}}],["pad=1",{"2":{"80":1,"258":6}}],["pad=0",{"2":{"40":2,"42":3,"78":3,"80":1}}],["pad",{"2":{"40":6,"42":3,"78":6,"79":73,"80":3,"89":7,"147":2}}],["pads",{"2":{"15":1}}],["parent",{"2":{"89":3,"259":1}}],["parsersext",{"2":{"241":2}}],["parse",{"2":{"79":1,"164":1,"210":1,"231":1,"242":2,"259":1}}],["parallelism",{"2":{"178":1}}],["parallel=true",{"2":{"178":2}}],["parallel",{"0":{"136":1},"1":{"137":1,"138":1,"139":1,"140":1,"141":1},"2":{"23":1,"39":12,"45":1}}],["parametric",{"2":{"67":1}}],["parameterized",{"2":{"266":1}}],["parameterization",{"2":{"90":1,"263":1}}],["parameter",{
"0":{"22":1,"155":1},"2":{"15":3,"22":2,"25":2,"40":4,"45":1,"46":2,"50":1,"56":6,"92":1,"93":4,"96":1,"127":1,"137":1,"145":2,"153":5,"154":2,"155":5,"158":1,"220":2,"235":1,"263":1,"265":2}}],["parameterlength",{"2":{"7":1,"9":1,"153":4,"154":1,"244":1,"261":1,"265":3,"280":1,"287":1}}],["parameters",{"0":{"9":1,"25":1,"142":1,"145":1,"166":1},"1":{"143":1,"144":1,"145":1,"146":1},"2":{"3":1,"7":5,"8":1,"9":2,"15":1,"22":13,"23":7,"24":1,"25":23,"31":1,"35":4,"37":1,"39":24,"40":2,"43":9,"44":4,"45":2,"46":31,"49":10,"50":1,"53":2,"54":4,"55":5,"56":17,"89":6,"91":1,"93":2,"96":5,"97":7,"107":2,"117":1,"124":1,"126":6,"127":7,"132":5,"137":1,"142":2,"145":3,"151":1,"153":5,"154":3,"155":7,"158":6,"162":1,"166":1,"193":1,"197":6,"202":2,"205":2,"206":1,"211":12,"213":2,"220":5,"230":1,"232":1,"235":2,"243":1,"246":4,"254":1,"261":4,"263":1,"265":79,"266":2,"271":3,"273":1,"274":5,"280":7,"287":3,"294":6}}],["params=nothing",{"2":{"292":2}}],["params",{"2":{"22":14,"23":6,"46":3,"49":2,"143":3,"146":1,"284":6,"292":4,"293":3,"294":6,"295":2,"296":1,"297":1}}],["party",{"0":{"228":1}}],["parts",{"2":{"220":1}}],["partial=false",{"2":{"201":2,"210":2,"254":2,"259":1,"283":1}}],["particle",{"2":{"293":2,"294":1}}],["particles",{"2":{"292":1}}],["participate",{"0":{"130":1}}],["particularly",{"2":{"89":1}}],["particular",{"0":{"143":1},"2":{"2":1,"89":2,"125":1,"143":1}}],["partition",{"2":{"32":1}}],["part",{"0":{"145":1,"146":1},"2":{"3":1,"51":1,"87":1,"89":1,"102":1,"163":1,"216":1,"227":1,"232":1}}],["practitioners",{"2":{"220":1}}],["prngs",{"2":{"192":1}}],["prng",{"2":{"188":1,"192":2}}],["prngkey",{"2":{"147":11}}],["prs",{"2":{"162":1}}],["principles",{"0":{"92":1}}],["principle",{"2":{"82":1}}],["printf",{"2":{"97":2,"123":1,"124":1,"197":2,"200":1,"205":2,"209":1,"213":1,"217":1,"220":1,"230":1,"235":1,"241":1,"246":2,"249":1,"254":1,"257":1,"261":3,"269":1,"274":1,"276":1,"280":4,"282":1,"287":2,"291":1,"295":1}}],["printouts",{"2":{"93":1}}],["printout",{"
2":{"56":1}}],["print",{"2":{"54":1,"56":1,"147":1}}],["printing",{"2":{"37":1,"45":1,"93":1,"97":1}}],["println",{"2":{"23":1,"126":2,"127":2,"153":2,"154":3,"155":2,"163":2,"164":2,"165":2,"166":2,"170":6,"191":2,"192":2,"194":2,"195":1,"196":1,"197":3,"198":2,"207":2,"215":2,"222":2,"239":2,"246":1,"247":2,"256":2,"262":2,"267":2,"275":2,"281":2,"289":2,"298":2}}],["prints",{"2":{"4":2,"127":1}}],["printed",{"2":{"2":1,"8":1,"35":1,"56":1}}],["primal",{"2":{"82":1,"85":4}}],["primarily",{"2":{"80":2,"265":1}}],["primitives",{"0":{"30":1,"176":1},"2":{"75":1,"176":1,"265":1}}],["priority",{"2":{"121":2}}],["prior",{"2":{"40":1,"265":1}}],["pr",{"2":{"11":1,"227":1,"228":1}}],["preallocationtoolsreversediffext",{"2":{"230":1,"291":1}}],["preallocationtools",{"2":{"230":2,"291":2}}],["preallocated",{"2":{"89":1}}],["precompilation",{"2":{"200":1,"269":2}}],["precompiling",{"2":{"200":14,"230":10,"241":28,"263":20,"269":21,"291":59}}],["precompiled",{"2":{"200":28,"230":20,"241":56,"263":40,"269":42,"291":118}}],["precompile",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["precisely",{"2":{"87":1}}],["precision",{"0":{"53":1},"2":{"53":1,"62":2,"173":3,"189":1,"254":7,"280":102,"287":2}}],["press",{"2":{"179":1}}],["pressing",{"2":{"95":1}}],["presence",{"2":{"56":1}}],["presented",{"2":{"195":1}}],["presently",{"2":{"120":1,"123":1}}],["present",{"2":{"35":1,"37":1,"40":2,"43":10,"44":2,"51":1,"69":1,"79":1,"114":1,"126":1,"127":1,"149":2}}],["preserving",{"2":{"37":1,"47":1}}],["preservation",{"2":{"35":1}}],["preserved",{"2":{"37":1}}],["preserve",{"2":{"35":7,"37":1}}],["prepate",{"2":{"89":1}}],["prepare",{"2":{"89":2}}],["preprint",{"2":{"46":1,"65":2}}],["preprocessing",{"2":{"8":1}}],["preds",{"2":{"265":2}}],["pred=ŷ",{"2":{"204":1}}],["predilation",{"2":{"89":1}}],["predilate",{"2":{"89":2}}],["predilated",{"2":{"89":2}}],["predilates",{"2":{"89":1}}],["predictive",{"2":{"266":1}}],["predict
ions",{"2":{"265":2,"266":2,"274":1}}],["prediction",{"0":{"266":1},"2":{"50":2,"265":1,"266":1}}],["predictor",{"2":{"255":1}}],["predict",{"2":{"201":1,"266":6}}],["predicted",{"2":{"50":3,"212":2,"234":2,"245":2,"266":2}}],["pred",{"2":{"50":8,"123":4,"204":5,"220":2,"221":3,"255":4,"274":2,"279":5,"295":4}}],["prettytables",{"2":{"263":1}}],["pretty",{"2":{"45":1,"93":1,"97":1,"138":1,"163":1}}],["prevents",{"2":{"89":1,"114":1}}],["prevent",{"2":{"28":1,"49":1,"50":1,"63":2,"114":1,"153":1}}],["previously",{"2":{"7":1,"11":1,"93":1,"107":1,"116":1,"139":1}}],["previous",{"2":{"5":1,"7":1,"84":2,"87":1,"93":1,"164":1}}],["pre",{"2":{"7":1,"11":1,"40":1,"89":2,"95":1,"96":1,"114":1}}],["prefer",{"2":{"87":1,"176":1}}],["preferably",{"2":{"66":1}}],["preferred",{"2":{"5":1,"66":1}}],["preferencetools",{"2":{"179":2}}],["preference",{"2":{"1":2,"2":1,"54":1,"57":2,"162":1,"173":1,"175":1,"179":6,"182":2,"183":1,"184":3,"185":1}}],["preferences",{"0":{"1":1,"179":1},"1":{"180":1,"181":1,"182":1,"183":1,"184":1,"185":1},"2":{"54":1,"57":4,"114":1,"175":2,"179":4,"181":3,"184":2}}],["progressmeter",{"2":{"263":1,"291":1}}],["progresslogging",{"2":{"263":1,"291":1}}],["progress",{"2":{"263":3}}],["programming",{"2":{"188":1}}],["proj",{"2":{"258":4}}],["project",{"2":{"68":1}}],["probabilistic",{"2":{"265":2}}],["probability",{"2":{"41":3,"50":1,"63":3,"77":1,"88":1}}],["prob",{"2":{"218":2,"220":9,"221":2,"232":4,"236":2,"285":3,"286":1,"293":1,"294":1,"295":1,"297":1}}],["problematic",{"2":{"125":1,"126":1}}],["problem",{"0":{"250":1},"2":{"86":1,"126":1,"127":1,"163":1,"166":1,"191":1,"197":2,"220":1,"236":1,"238":1,"250":1,"271":1}}],["problems",{"0":{"126":1},"2":{"86":1,"91":1,"123":1}}],["promotion",{"0":{"173":1},"2":{"89":1,"173":1}}],["promotions",{"2":{"54":1,"173":1}}],["promote",{"2":{"62":1,"89":1,"173":1}}],["produce",{"2":{"51":1,"83":1,"107":1}}],["products",{"2":{"18":2}}],["product",{"0":{"168":1,"169":1,"195":1,"196":1},"2":{"18":8,"76":10,"162":2,"
167":4,"168":1,"169":1,"193":5,"195":2,"196":1,"253":1,"255":1}}],["prod",{"2":{"15":1,"78":2,"80":1,"258":2}}],["propagating",{"2":{"55":1}}],["propagated",{"2":{"39":1}}],["propagate",{"2":{"8":1}}],["proper",{"2":{"135":1,"188":1}}],["properly",{"2":{"24":1}}],["properties",{"2":{"19":1,"39":1}}],["proportion",{"2":{"15":3}}],["proceeding",{"2":{"153":1}}],["proceedings",{"2":{"15":5,"50":2,"65":1}}],["proceeds",{"2":{"43":8}}],["processor",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["processing",{"2":{"43":1,"63":1}}],["processes",{"2":{"31":1,"32":1,"137":3,"178":1}}],["process",{"2":{"4":1,"137":1,"178":1,"210":1,"231":1,"253":1}}],["providing",{"2":{"12":1,"211":1}}],["provides",{"2":{"37":1,"55":1,"70":1,"96":1,"112":1,"151":1,"179":1,"186":1,"195":1,"216":1,"236":1}}],["provide",{"2":{"11":1,"15":1,"35":1,"49":2,"56":1,"89":1,"98":1,"118":1,"124":1,"167":1,"190":1}}],["provided",{"2":{"4":2,"5":1,"21":1,"23":2,"43":3,"51":1,"63":1,"81":1,"83":1,"84":2,"137":1,"151":1,"190":1,"266":1,"273":1}}],["ddot",{"2":{"292":2}}],["ddp",{"2":{"136":1}}],["dt^2",{"2":{"292":1}}],["dt2",{"2":{"292":3}}],["dt",{"2":{"292":21,"293":4,"294":2,"295":2,"297":2}}],["dsum",{"2":{"284":1,"285":1}}],["ds",{"2":{"259":5}}],["dset",{"2":{"242":6}}],["dstsize",{"2":{"84":3}}],["dst",{"2":{"84":23}}],["dnns",{"2":{"125":1}}],["db",{"2":{"89":13}}],["dw",{"2":{"89":7}}],["dgrid",{"2":{"85":1}}],["dy",{"2":{"82":2,"89":8}}],["dynamicpplenzymecoreext",{"2":{"263":2}}],["dynamicpplzygoterulesext",{"2":{"263":1}}],["dynamicpplmcmcchainsext",{"2":{"263":1}}],["dynamicpplchainrulescoreext",{"2":{"263":1}}],["dynamicpplforwarddiffext",{"2":{"263":1}}],["dynamicppl",{"2":{"263":6}}],["dynamic",{"2":{"68":1}}],["dynamics",{"2":{"15":1}}],["d+2",{"2":{"81":1}}],["dx",{"2":{"81":4,"89":7}}],["duration",{"2":{"265":2}}],["during",{"2":{"3":1,"46":4,"84":1,"87":1,"89":2,"200":1,"269":2}}],["dudt",{"2":{"220":2,"221":1,"232":4,"2
36":2}}],["du",{"2":{"218":3}}],["dump",{"2":{"124":4}}],["dummy",{"2":{"96":1}}],["due",{"2":{"80":1,"89":1,"164":1,"171":1,"238":1}}],["dual",{"2":{"66":1,"87":3}}],["dl",{"2":{"51":1,"294":1}}],["dmitry",{"2":{"46":1,"65":1}}],["d2",{"2":{"25":5,"132":1,"133":3,"292":3}}],["d2=dense",{"2":{"25":1,"132":1,"135":1}}],["d3",{"2":{"25":9}}],["d3=chain",{"2":{"25":1}}],["d1×",{"2":{"46":2,"65":2}}],["d1",{"2":{"25":5,"132":1,"133":3}}],["d1=dense",{"2":{"25":1,"132":1,"135":1}}],["driver",{"2":{"239":3,"247":3}}],["dr",{"2":{"125":1}}],["dropdims",{"2":{"286":1}}],["drop",{"2":{"50":1,"162":1}}],["dropout=0",{"2":{"278":1}}],["dropout",{"0":{"41":1,"63":1,"88":1},"2":{"41":17,"63":14,"76":2,"88":10,"93":1,"143":1,"146":1,"156":1,"160":1,"278":5,"280":2}}],["dropped",{"2":{"18":2,"63":2}}],["drawn",{"2":{"15":5,"266":1}}],["d",{"2":{"8":1,"40":4,"42":6,"45":2,"47":4,"56":11,"81":2,"89":1,"121":1,"143":2,"144":5,"145":6,"155":4,"201":4,"235":2,"261":3,"280":1,"285":8,"287":1,"292":2}}],["domain",{"2":{"253":1}}],["domainerror",{"2":{"24":1,"127":2}}],["doubt",{"2":{"153":1}}],["double",{"2":{"79":1}}],["doi",{"2":{"90":2,"292":2}}],["doing",{"2":{"52":4,"83":1,"89":1,"124":1,"243":1,"265":1,"276":1}}],["dot",{"2":{"76":8,"126":1,"133":3,"147":3,"163":2,"205":2,"218":2,"221":2,"238":3,"261":1,"280":13}}],["document",{"2":{"118":1}}],["documented",{"2":{"114":1}}],["documentations",{"2":{"172":1}}],["documentation",{"2":{"7":1,"8":1,"47":1,"57":1,"60":1,"70":1,"98":1,"101":1,"116":2,"153":1,"183":1,"184":1,"214":1,"228":1}}],["doctor",{"0":{"184":1},"2":{"57":4,"175":1,"184":3}}],["docstringextensions",{"2":{"291":1}}],["docstrings",{"2":{"153":1}}],["docstring",{"2":{"89":1,"114":1}}],["docs",{"2":{"56":2,"104":1,"216":1,"263":2}}],["downloading\\u001b",{"2":{"269":1}}],["downright",{"2":{"93":1,"151":1}}],["downsampled",{"2":{"81":4}}],["downstream",{"2":{"81":4,"93":1,"122":1,"228":1}}],["down",{"0":{"127":1},"2":{"50":2,"54":1,"89":1,"126":1,"127":1,"128":1,"154":1,"1
73":1,"189":2}}],["does",{"2":{"45":1,"54":1,"59":1,"65":2,"80":2,"87":1,"88":1,"89":7}}],["doesn",{"2":{"23":1,"25":1,"43":1,"51":1,"56":3,"59":2,"75":1,"87":1,"130":1,"153":2,"155":1,"156":1,"160":1,"163":1,"205":2,"244":1,"277":1,"280":1}}],["do",{"2":{"6":1,"39":1,"45":2,"56":14,"89":1,"95":1,"97":2,"109":1,"123":2,"147":1,"153":2,"154":1,"155":1,"156":1,"160":1,"162":1,"167":2,"171":1,"173":1,"178":3,"189":2,"191":1,"202":1,"203":1,"211":1,"220":2,"232":1,"243":1,"254":7,"255":1,"258":2,"260":1,"265":1,"266":1,"271":1,"278":2,"280":102,"287":2,"292":1,"294":1}}],["done",{"2":{"23":1,"54":1,"59":2,"83":1,"89":3,"96":1,"137":1,"149":1,"164":1,"185":1}}],["don",{"2":{"4":2,"16":1,"56":2,"62":1,"64":1,"68":1,"89":1,"93":1,"96":1,"121":2,"122":1,"125":1,"128":1,"140":1,"141":1,"153":2,"155":1,"158":1,"162":1,"164":1,"189":2,"191":2,"195":1,"201":1,"202":1,"206":1,"210":1,"219":1,"231":1,"243":1}}],["dilate",{"2":{"147":2}}],["dilations",{"2":{"147":2}}],["dilation",{"2":{"40":2,"42":3,"78":1,"80":3,"89":3}}],["dilation=1",{"2":{"40":2,"42":3,"78":1,"80":1}}],["direactly",{"2":{"114":1}}],["direct",{"2":{"89":15,"114":1}}],["directions",{"2":{"78":3}}],["direction",{"2":{"46":2}}],["directly",{"2":{"3":1,"5":1,"6":1,"8":1,"22":1,"27":2,"39":2,"45":1,"47":1,"49":1,"52":2,"63":1,"75":1,"77":1,"81":3,"111":1,"115":1,"137":1,"150":1,"162":1,"186":1,"229":1,"276":1}}],["dinput",{"2":{"85":1}}],["digits=2",{"2":{"246":4}}],["digits",{"2":{"189":2}}],["digit",{"2":{"67":1}}],["div",{"2":{"89":2,"260":1}}],["diverges",{"2":{"50":1}}],["divergence",{"2":{"50":2}}],["divisor",{"2":{"80":4}}],["divisible",{"2":{"40":2}}],["divides",{"2":{"47":1,"81":1}}],["divide",{"2":{"40":2}}],["dice",{"2":{"50":2}}],["dicecoeffloss",{"2":{"50":4}}],["diagonal",{"2":{"44":1,"265":1}}],["diffrules",{"2":{"263":1}}],["diffresults",{"2":{"263":1}}],["diffractor",{"2":{"119":1}}],["diffeqsol",{"2":{"232":3,"233":1,"238":9}}],["diffeqnoiseprocessreversediffext",{"2":{"230":1,"291":1}}],["diffeqno
iseprocess",{"2":{"230":2,"291":2}}],["diffeqcallbacks",{"2":{"230":1,"291":1}}],["diffeqbaseunitfulext",{"2":{"291":2}}],["diffeqbasecudaext",{"2":{"230":2}}],["diffeqbasechainrulescoreext",{"2":{"230":1,"291":2}}],["diffeqbaseenzymeext",{"2":{"230":1,"291":1}}],["diffeqbasereversediffext",{"2":{"230":1,"291":1}}],["diffeqbasesparsearraysext",{"2":{"230":1,"291":2}}],["diffeqbasedistributionsext",{"2":{"230":1,"291":1}}],["diffeqbasetrackerext",{"2":{"230":1,"291":1}}],["diffeqbase",{"2":{"230":8,"291":8}}],["diffeqflux",{"2":{"229":1}}],["differ",{"2":{"193":1}}],["differern",{"2":{"164":1}}],["differences",{"0":{"140":1},"2":{"80":1,"136":1,"158":1,"163":2}}],["difference",{"2":{"41":1,"107":1,"123":3,"154":1,"163":2,"164":2,"167":1,"237":1,"254":70,"280":1026,"287":16,"292":2}}],["differentiate",{"2":{"123":1,"170":1,"191":1}}],["differentiationinterfaceenzymeext",{"2":{"230":1,"291":1}}],["differentiationinterfacezygoteext",{"2":{"230":1,"291":1}}],["differentiationinterfacesparsematrixcoloringsext",{"2":{"263":1,"291":1}}],["differentiationinterfacesparsearraysext",{"2":{"230":1,"263":1}}],["differentiationinterfacestaticarraysext",{"2":{"230":1,"263":1,"291":2}}],["differentiationinterfaceforwarddiffext",{"2":{"230":1,"263":1,"291":2}}],["differentiationinterfacefinitediffext",{"2":{"230":1,"263":1}}],["differentiationinterfacetrackerext",{"2":{"230":1,"263":1,"291":1}}],["differentiationinterfacechainrulescoreext",{"2":{"230":1,"263":1,"291":2}}],["differentiationinterfacereversediffext",{"2":{"230":1,"291":1}}],["differentiationinterface",{"2":{"162":1,"195":2,"230":10,"263":8,"291":8}}],["differentiation",{"0":{"17":1,"118":1,"162":1,"180":1,"193":1},"1":{"18":1,"19":1,"20":1,"119":1,"120":1,"121":1,"122":1,"163":1,"164":1,"165":1,"166":1,"167":1,"168":1,"169":1,"170":1,"194":1,"195":1,"196":1},"2":{"20":1,"87":1,"122":1,"123":1,"155":1,"162":1,"165":1,"171":1,"180":2,"188":1,"195":1}}],["differential",{"2":{"90":1,"295":1}}],["differentiable",{"2":{"67":1
,"87":1}}],["differently",{"2":{"39":1,"85":1,"87":1}}],["different",{"0":{"36":1},"1":{"37":1},"2":{"23":1,"39":1,"50":1,"57":1,"64":1,"66":1,"75":1,"89":1,"116":1,"158":1,"164":1,"183":1,"192":1,"220":1}}],["differs",{"2":{"80":1}}],["diff",{"2":{"163":1,"165":1,"166":1,"193":1}}],["difficulty",{"2":{"15":2}}],["dimwhere",{"2":{"50":1}}],["dim",{"2":{"45":6,"76":4,"87":2,"147":7,"197":7,"258":5,"278":10,"280":2}}],["dimensionmismatch",{"2":{"126":2}}],["dimensionality",{"2":{"50":1,"89":1}}],["dimensional",{"2":{"44":3,"47":2,"65":2,"81":1,"89":4,"147":1,"195":1}}],["dimension",{"0":{"126":1},"2":{"37":1,"40":4,"41":2,"42":6,"43":8,"44":1,"45":9,"46":14,"51":1,"61":2,"62":1,"65":1,"79":11,"81":1,"83":2,"87":1,"89":6,"104":1,"125":1,"132":1,"135":1,"187":1}}],["dimensions",{"2":{"15":11,"19":2,"37":1,"40":3,"42":9,"43":2,"44":10,"45":5,"46":4,"47":8,"65":2,"76":3,"77":1,"78":1,"79":14,"80":2,"81":7,"82":2,"83":1,"89":4,"117":1,"147":2,"197":1,"211":1}}],["dims=tuple",{"2":{"284":1,"285":1}}],["dims=8",{"2":{"261":1}}],["dims=4",{"2":{"260":1}}],["dims=3",{"2":{"201":1,"259":1}}],["dims=ndims",{"2":{"132":1}}],["dims=1",{"2":{"50":1,"65":1,"77":1,"88":5,"253":5}}],["dims=colon",{"2":{"46":1}}],["dims=2",{"2":{"43":1,"76":1,"77":1,"287":1}}],["dims=",{"2":{"41":2}}],["dims=pad",{"2":{"40":1}}],["dims",{"2":{"15":11,"37":2,"40":1,"41":4,"43":31,"44":55,"45":7,"46":3,"50":2,"51":1,"63":2,"65":3,"76":1,"77":11,"79":37,"80":1,"87":3,"88":3,"89":16,"104":1,"147":14,"153":12,"187":4,"202":9,"203":7,"251":11,"254":2,"258":11,"259":1,"260":2,"261":3,"285":18,"286":3,"287":2}}],["disabling",{"0":{"185":1},"2":{"177":1}}],["disabled",{"2":{"8":1,"40":2,"44":3,"174":1}}],["disable",{"2":{"8":1,"24":1,"57":3,"89":1,"114":2,"127":1,"162":1,"174":1,"180":1,"184":1,"185":2}}],["disallow",{"2":{"174":1}}],["disaster",{"2":{"173":1}}],["discrete",{"2":{"232":1,"235":1}}],["discouraged",{"2":{"163":2}}],["discourse",{"2":{"162":1,"163":1,"165":1}}],["discuss",{"2":{"216":1}}],["discus
sed",{"2":{"128":1}}],["discussions",{"2":{"101":1}}],["discussion",{"2":{"43":1,"165":1,"195":1}}],["disruptive",{"2":{"91":1}}],["dissimilar",{"2":{"50":1}}],["dist",{"2":{"285":7,"287":1}}],["distinguish",{"2":{"158":2}}],["distinction",{"2":{"93":1}}],["distinct",{"2":{"43":1}}],["distance",{"2":{"50":1,"89":2}}],["distributionsadtrackerext",{"2":{"263":1}}],["distributionsadforwarddiffext",{"2":{"263":1}}],["distributionsad",{"2":{"263":3}}],["distributionsdensityinterfaceext",{"2":{"263":1}}],["distributionschainrulescoreext",{"2":{"230":1,"263":1,"269":1,"291":1}}],["distributionstestext",{"2":{"230":1,"263":1,"269":1,"291":1}}],["distributions",{"2":{"50":1,"77":1,"230":3,"263":4,"265":2,"269":3,"291":3}}],["distribution",{"2":{"15":8,"16":12,"40":2,"43":6,"44":3,"50":5,"67":1,"258":3,"266":1}}],["distributeddatacontainer",{"2":{"32":1,"137":2,"138":1}}],["distributedoptimizer",{"2":{"31":2,"137":2,"138":1}}],["distributedutils",{"0":{"137":1},"2":{"4":2,"26":1,"27":2,"28":3,"29":2,"30":6,"31":1,"32":1,"136":1,"137":11,"138":5,"139":1,"140":3}}],["distributed",{"0":{"26":1,"136":1},"1":{"27":1,"28":1,"29":1,"30":1,"31":1,"32":1,"137":1,"138":1,"139":1,"140":1,"141":1},"2":{"4":3,"15":2,"27":4,"28":7,"93":1,"137":2,"263":1}}],["disjoint",{"2":{"25":1}}],["displays",{"2":{"264":1,"266":1}}],["displayed",{"2":{"124":1,"149":1}}],["display",{"2":{"8":1,"83":1,"93":1,"154":1,"261":1}}],["dispatch=",{"2":{"243":1}}],["dispatch=nothing",{"2":{"56":1}}],["dispatch₋₋₋",{"2":{"238":3}}],["dispatching",{"0":{"129":1},"1":{"130":1,"131":1,"132":1,"133":1,"134":1,"135":1},"2":{"89":1}}],["dispatchdoctorchainrulescoreext",{"2":{"263":1,"269":1,"291":1}}],["dispatchdoctorenzymecoreext",{"2":{"200":1,"263":1,"291":1}}],["dispatchdoctor",{"2":{"8":1,"57":1,"175":1,"200":1,"263":3,"269":1,"291":3}}],["dispatches",{"2":{"3":1,"8":1,"51":1,"56":1,"66":2,"151":1}}],["dispatch",{"0":{"130":1,"134":1,"184":1},"2":{"3":1,"56":2,"57":4,"68":1,"78":1,"130":2,"134":1,"135":1,"156":1,"
171":1,"175":1,"184":3}}],["date",{"2":{"228":2}}],["datapoints",{"2":{"270":1}}],["dataapi",{"2":{"263":1}}],["dataaugmentation",{"2":{"257":1}}],["datavalueinterfaces",{"2":{"263":1}}],["dataframes",{"2":{"241":1}}],["datadeps",{"2":{"230":1,"241":1}}],["datastructures",{"2":{"263":1,"269":1,"291":1}}],["datasize",{"2":{"218":1,"293":1}}],["datasets",{"0":{"242":1},"2":{"201":1,"242":2,"246":1}}],["dataset",{"0":{"201":1,"270":1,"277":1,"283":1},"2":{"32":1,"178":2,"201":7,"210":5,"220":1,"231":5,"242":2,"259":5,"264":2,"270":2,"283":2}}],["datatypes",{"2":{"89":4}}],["datatype",{"2":{"8":1}}],["dataloaders",{"2":{"201":3,"205":2,"246":3}}],["dataloader",{"0":{"219":1},"2":{"5":8,"112":1,"124":4,"178":5,"201":5,"210":3,"212":2,"213":6,"219":5,"220":4,"230":1,"231":3,"234":2,"235":5,"242":2,"245":2,"246":9,"254":6,"259":1,"261":5,"283":3,"287":3}}],["data",{"0":{"2":1,"136":1,"178":1,"218":1,"253":1,"264":1},"1":{"137":1,"138":1,"139":1,"140":1,"141":1},"2":{"0":1,"2":1,"3":3,"5":1,"32":2,"40":5,"42":9,"43":1,"45":1,"46":6,"47":2,"49":6,"50":1,"56":10,"80":2,"81":1,"84":4,"89":9,"93":1,"97":14,"112":1,"123":1,"124":2,"137":3,"153":1,"171":1,"173":1,"178":5,"197":1,"201":8,"210":6,"218":3,"219":3,"220":3,"221":2,"231":6,"245":2,"246":16,"252":5,"253":10,"254":63,"264":6,"265":1,"266":7,"268":1,"270":4,"273":1,"274":5,"277":7,"283":3,"293":3,"294":2,"295":1,"297":2}}],["danger",{"2":{"2":1,"4":2,"19":1,"28":1}}],["de",{"2":{"163":2}}],["demonstration",{"2":{"220":1,"231":1,"248":1,"259":1}}],["demonstrative",{"2":{"155":1}}],["demonstrate",{"2":{"158":1,"163":1,"197":1,"208":1}}],["demonstrates",{"2":{"80":1,"282":1}}],["derive",{"2":{"89":1}}],["derivatives",{"2":{"87":1,"89":1,"163":1,"164":1,"195":1,"250":1,"294":1}}],["derivative",{"2":{"87":3,"89":1,"165":1}}],["deadlocks",{"2":{"141":1}}],["deal",{"2":{"89":3,"165":1,"232":1}}],["deactivate",{"2":{"43":3}}],["deg2rad",{"2":{"82":4}}],["decay",{"2":{"261":1,"280":3}}],["decay=1e",{"2":{"261":1}}],["dec",{"2":{"2
58":4}}],["decode",{"2":{"258":2,"260":4,"261":3}}],["decoder=st",{"2":{"258":2}}],["decoder",{"2":{"258":14}}],["decouples",{"2":{"46":1}}],["decision",{"2":{"179":1}}],["decides",{"2":{"160":1}}],["decibel",{"2":{"89":1}}],["decimal",{"2":{"67":1}}],["decrease",{"2":{"59":1}}],["declared",{"2":{"56":1}}],["debuggable",{"2":{"89":2}}],["debugging",{"0":{"24":1,"125":1},"1":{"126":1,"127":1,"128":1},"2":{"24":3,"54":1,"89":2,"124":1,"125":2}}],["debuglayer",{"2":{"24":3,"126":4,"127":4}}],["debug",{"2":{"24":5,"83":2,"115":1,"125":4,"126":6,"127":10,"173":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["density",{"0":{"282":1},"1":{"283":1,"284":1,"285":1,"286":1,"287":1,"288":1,"289":1},"2":{"265":1}}],["densityinterface",{"2":{"263":1}}],["dense=dense",{"2":{"278":1}}],["denselayerparameters",{"2":{"155":3}}],["denseconvdims",{"2":{"80":2}}],["dense",{"2":{"22":2,"23":18,"25":4,"35":2,"37":3,"39":12,"44":2,"45":1,"46":12,"50":2,"56":10,"64":2,"77":3,"80":1,"87":1,"89":1,"96":17,"97":6,"105":1,"116":1,"123":18,"124":3,"126":17,"127":14,"132":6,"133":2,"135":2,"143":9,"144":3,"146":3,"147":10,"153":1,"155":2,"157":5,"163":2,"164":2,"165":4,"166":4,"170":4,"171":2,"173":1,"176":1,"184":1,"197":2,"202":2,"203":1,"211":9,"220":3,"233":5,"238":45,"244":4,"251":4,"265":4,"271":4,"274":8,"277":1,"278":1,"285":3,"294":6}}],["denom",{"2":{"293":3,"294":3}}],["denominator",{"2":{"46":4,"65":4}}],["denotes",{"2":{"3":4}}],["delaunaytriangulation",{"2":{"263":1,"291":1}}],["delimitedfiles",{"2":{"263":1}}],["delta=0",{"2":{"50":1}}],["delta",{"2":{"50":2}}],["delving",{"2":{"15":2}}],["deleted",{"2":{"1":1}}],["descent",{"2":{"197":2}}],["describe",{"2":{"171":1}}],["describes",{"2":{"37":1,"125":1,"293":1,"294":1}}],["described",{"2":{"15":3,"39":2}}],["desperate",{"2":{"89":1}}],["despite",{"2":{"49":1}}],["destination",{"2":{"84":6}}],["destructure",{"2":{"35":3,"93":2,"161":1}}],["desirable",{"2":{"185":1,"265":
1}}],["desired",{"2":{"1":1,"40":1,"117":1}}],["design",{"0":{"92":1},"2":{"5":1,"91":1,"158":1}}],["det",{"2":{"284":11,"285":4}}],["detour",{"2":{"193":1}}],["detection",{"2":{"50":2}}],["detected",{"2":{"24":1,"127":2,"254":1}}],["deterministic",{"2":{"15":1,"93":1}}],["determined",{"2":{"80":1,"96":1}}],["determines",{"2":{"37":1}}],["determine",{"2":{"3":1,"52":1,"63":2,"65":2,"140":1,"265":1}}],["details",{"0":{"159":1},"1":{"160":1},"2":{"24":1,"25":1,"35":1,"37":2,"46":1,"47":1,"55":1,"56":2,"57":1,"60":1,"63":2,"65":4,"70":1,"76":1,"78":3,"84":1,"86":1,"97":1,"104":1,"107":1,"109":1,"114":1,"116":2,"153":1,"180":1,"183":1,"184":1,"190":1}}],["detailed",{"2":{"7":1,"22":1,"126":1}}],["depots",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["depot",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["depwarn",{"2":{"114":1}}],["depwthwise",{"2":{"89":1}}],["deprecated",{"2":{"52":10,"104":1}}],["deprecation",{"2":{"21":1,"52":5,"110":2,"163":2}}],["deprecations",{"2":{"2":1,"114":1}}],["depth",{"2":{"81":1}}],["depthwiseconvdims",{"2":{"80":1}}],["depthwiseconv",{"2":{"80":1,"89":2}}],["depthwise",{"2":{"40":2,"80":2,"89":8}}],["depths",{"2":{"23":1}}],["dependent",{"0":{"132":1},"2":{"131":1}}],["dependencies",{"0":{"177":1},"2":{"68":1,"107":1,"185":1,"200":5,"230":3,"241":6,"263":5,"265":1,"269":5,"291":16}}],["dependency",{"2":{"6":1,"12":1,"68":1,"151":1,"164":1,"200":10,"230":7,"241":22,"263":15,"269":18,"291":43}}],["depend",{"2":{"6":1,"65":1,"89":1,"151":1,"153":1}}],["depending",{"2":{"6":1,"8":1,"15":1,"85":1}}],["defiing",{"0":{"294":1}}],["definition",{"0":{"250":1,"258":1,"278":1,"285":1}}],["definitions",{"2":{"154":1}}],["definitely",{"2":{"170":1}}],["defining",{"0":{"97":1,"204":1},"2":{"0":1,"3":2,"11":1,"56":1,"97":1,"123":1,"130":1,"151":1,"153":1,"154":1,"155":1,"202":1,"203":2,"211":1,"243":1}}],["define",{"0":
{"211":1,"213":1,"219":1,"232":1,"234":1,"245":1,"251":1,"252":1,"283":1,"292":1},"2":{"7":1,"66":1,"84":1,"89":1,"97":1,"127":2,"130":2,"131":1,"134":1,"147":2,"151":2,"153":3,"154":1,"155":2,"158":2,"171":1,"197":1,"202":3,"203":1,"204":1,"219":1,"220":1,"232":1,"251":1,"252":1,"258":2,"265":3,"283":1,"292":2,"294":2,"295":2}}],["definesingletons",{"2":{"263":1}}],["defines",{"2":{"6":1,"7":2,"52":1,"292":1,"293":1}}],["defined",{"2":{"3":1,"8":1,"11":1,"82":2,"87":1,"89":1,"133":1,"147":1}}],["defer",{"2":{"195":1}}],["def",{"2":{"147":1}}],["defaulting",{"2":{"89":1}}],["defaults",{"0":{"116":1},"2":{"43":3,"49":2,"55":1,"79":9,"116":4,"294":1}}],["default",{"2":{"2":1,"3":2,"8":1,"13":6,"15":8,"16":24,"23":1,"35":1,"37":3,"43":1,"45":4,"46":1,"53":1,"54":1,"56":1,"65":10,"74":4,"76":4,"77":1,"78":3,"80":3,"82":1,"85":2,"88":1,"89":6,"93":1,"96":1,"97":1,"104":1,"107":2,"114":1,"116":2,"123":5,"124":1,"146":1,"147":2,"148":1,"153":1,"155":5,"157":2,"158":2,"160":2,"173":1,"174":1,"181":1,"184":1,"185":1,"186":4,"187":1,"188":1,"189":1,"191":1,"197":1,"198":1,"202":1,"205":1,"207":1,"209":1,"213":1,"215":1,"219":1,"220":1,"222":1,"233":1,"238":1,"239":1,"243":1,"247":1,"254":1,"256":1,"258":2,"262":1,"264":1,"267":1,"275":1,"280":1,"281":1,"283":1,"287":1,"288":1,"289":1,"294":1,"298":1}}],["dev=gpu",{"2":{"233":1}}],["dev=cpu",{"2":{"213":1}}],["developer",{"2":{"123":1}}],["developed",{"2":{"93":1,"228":1}}],["deviate",{"2":{"294":1}}],["deviation",{"2":{"15":3,"46":1}}],["devicememory",{"2":{"5":3,"171":2,"186":2,"238":18}}],["deviceiterator",{"2":{"5":3,"112":2,"124":2,"219":1}}],["device=missing",{"2":{"2":1}}],["device",{"0":{"178":1},"2":{"2":27,"3":32,"4":11,"5":3,"28":1,"56":1,"66":1,"73":5,"74":4,"96":2,"97":1,"110":1,"111":3,"115":2,"123":6,"124":5,"140":4,"147":1,"148":1,"149":7,"150":2,"158":3,"171":1,"178":5,"205":3,"213":1,"214":1,"217":2,"219":1,"233":1,"235":2,"238":1,"239":1,"246":1,"247":1,"249":2,"257":2,"260":3,"273":2,"276":2,"282":2}}],["de
vices",{"2":{"0":1,"28":4,"158":1}}],["dev",{"2":{"2":1,"3":1,"4":2,"5":4,"66":2,"73":4,"74":3,"96":5,"97":5,"147":3,"205":5,"213":4,"233":2,"235":3,"246":4}}],["deepcopy",{"2":{"49":1}}],["deep",{"2":{"2":1,"12":1,"15":6,"65":1,"67":3,"78":1,"90":1,"93":1,"173":1,"176":1,"186":1,"187":1}}],["aac",{"2":{"263":1,"291":1}}],["a100",{"2":{"239":1,"247":1}}],["a∈rd×d",{"2":{"167":1}}],["a=layer",{"2":{"158":1}}],["a=0",{"2":{"67":1}}],["a×b×x",{"2":{"158":1}}],["aware",{"0":{"181":1},"2":{"140":2,"141":1,"181":4}}],["ah",{"2":{"127":1}}],["ahead",{"2":{"97":1}}],["ahmadi",{"2":{"50":1}}],["ahmad",{"2":{"50":1}}],["aesthetic",{"2":{"89":1}}],["ax",{"2":{"218":4,"221":6,"255":4,"264":3,"270":4,"274":5,"283":2,"288":2,"293":4,"294":6,"297":11}}],["axisalgorithms",{"2":{"263":1}}],["axisarrays",{"2":{"263":1,"291":1}}],["axislegend",{"2":{"218":1,"221":1,"270":1,"274":1,"293":1,"294":1,"297":1}}],["axis",{"2":{"89":1,"218":1,"221":1,"238":44,"255":1,"264":1,"266":1,"270":1,"274":1,"283":1,"288":1,"293":1,"294":1,"297":2}}],["axes",{"2":{"89":2,"243":3}}],["ai",{"2":{"195":1}}],["aid",{"2":{"89":2}}],["aims",{"2":{"15":1}}],["author",{"2":{"90":2}}],["automa",{"2":{"263":1,"291":1}}],["automatically",{"2":{"5":1,"7":2,"8":1,"43":2,"63":3,"65":2,"74":1,"140":1,"148":1,"160":1,"161":1,"162":1,"177":1,"201":1,"202":2,"203":1,"210":1,"231":1,"265":1}}],["automatic",{"0":{"17":1,"118":1,"149":1,"162":1,"180":1,"183":1,"193":1},"1":{"18":1,"19":1,"20":1,"119":1,"120":1,"121":1,"122":1,"163":1,"164":1,"165":1,"166":1,"167":1,"168":1,"169":1,"170":1,"194":1,"195":1,"196":1},"2":{"2":1,"3":1,"8":1,"20":1,"123":1,"148":1,"149":2,"150":1,"155":1,"162":2,"171":1,"180":4,"182":1,"188":1,"195":1}}],["autotuner",{"2":{"254":7,"280":102,"287":2}}],["autotuning",{"2":{"254":7,"280":102,"287":2}}],["autotracker",{"2":{"49":1,"70":1}}],["autojacvec=reversediffvjp",{"2":{"235":1}}],["autojacvec=zygotevjp",{"2":{"233":1,"235":1}}],["auto",{"2":{"160":1,"173":1,"198":1,"207":1,"215":1,"222":1,"23
9":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["autofinitediff",{"2":{"70":1}}],["autoforwarddiff",{"2":{"18":1,"19":1,"70":1,"164":1,"169":1,"195":2}}],["autoencoder",{"2":{"257":1}}],["autoencoders",{"2":{"67":1}}],["autoenzyme",{"2":{"49":1,"70":1,"97":1,"124":2,"205":1,"213":1,"254":1,"261":1,"274":2,"280":1,"287":1}}],["autodiff",{"2":{"63":2,"65":2,"87":1,"91":1,"155":1,"162":1,"163":2,"193":2,"261":1,"280":1}}],["autoreactant",{"2":{"49":1}}],["autoreversediff",{"2":{"49":2,"70":1}}],["autozygote",{"2":{"18":1,"19":1,"49":1,"70":1,"96":2,"168":1,"196":1,"197":1,"205":1,"213":1,"220":1,"235":1,"246":1,"296":1}}],["autoselection",{"2":{"2":1}}],["audio",{"2":{"89":2}}],["aorb",{"2":{"83":1}}],["aka",{"2":{"62":1,"64":1}}],["affected",{"2":{"167":1}}],["affects",{"2":{"40":1}}],["affinebijector",{"2":{"284":5,"285":2}}],["affine",{"2":{"46":1}}],["affine=false",{"2":{"46":5,"116":1}}],["affine=true",{"2":{"39":2,"46":20,"116":1,"126":1,"127":2}}],["afterwards",{"2":{"46":1}}],["after",{"2":{"24":1,"39":2,"41":1,"42":3,"46":4,"56":1,"63":2,"76":1,"86":1,"93":1,"97":1,"125":1,"137":1,"140":1,"178":1,"197":14,"202":1}}],["average",{"2":{"266":3}}],["averages",{"2":{"30":2,"31":1}}],["avik",{"2":{"90":2}}],["avoids",{"2":{"236":1}}],["avoiding",{"2":{"164":1}}],["avoid",{"2":{"43":1,"53":4,"63":1,"89":1,"93":1,"140":1,"236":1,"238":1,"265":2,"277":1}}],["avg",{"2":{"30":2}}],["available",{"2":{"26":1,"32":1,"59":2,"62":1,"64":1,"89":1,"95":1,"98":1,"118":1,"120":1,"123":1,"148":1,"149":1,"150":1,"161":1,"177":1,"239":1,"247":1}}],["ab7d",{"2":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["abyss",{"2":{"188":1}}],["ability",{"2":{"151":1}}],["abitrary",{"2":{"79":1}}],["able",{"2":{"79":1,"89":1,"107":1,"199":1}}],["above",{"2":{"39":2,"51":1,"87":1,"89":2,"95":1,"97":1,"166":1,"187":1,"190":1,"200":1,"266":1,"269":1}}],["about",{"2":{"24":1,"54":1,"67":1,"80":1,"89":1,"93":1,"15
3":1,"189":1,"192":1,"202":1,"253":1,"266":1}}],["abstol",{"2":{"238":13}}],["abstol=1",{"2":{"233":1}}],["abstractbijector",{"2":{"284":3}}],["abstractbatchedmatrix",{"2":{"83":2,"89":2}}],["abstractppl",{"2":{"263":1}}],["abstractmcmc",{"2":{"263":1}}],["abstractmatrix",{"2":{"45":1,"64":2,"153":1,"154":1,"232":1}}],["abstracttrees",{"2":{"263":1}}],["abstracttimeseriesdatabatchordering=batchlastindex",{"2":{"43":2}}],["abstractfftstestext",{"2":{"263":1,"269":1,"291":1}}],["abstractfftschainrulescoreext",{"2":{"263":1,"269":1,"291":1}}],["abstractffts",{"2":{"263":3,"269":2,"291":2}}],["abstractfloat",{"2":{"87":1,"283":2}}],["abstraction",{"2":{"110":1}}],["abstractions",{"2":{"66":1}}],["abstractexplicitcontainerlayer",{"2":{"107":2,"108":1}}],["abstractexplicitlayer",{"2":{"107":1}}],["abstractvector",{"2":{"45":1,"62":1,"64":1,"265":1,"292":2}}],["abstractrange",{"2":{"259":1}}],["abstractrule",{"2":{"49":1}}],["abstractrecurrentcell",{"2":{"43":2}}],["abstractrng=utils",{"2":{"15":8,"16":24}}],["abstractrng",{"2":{"7":2,"8":2,"9":1,"10":1,"15":3,"63":4,"153":2,"158":2,"243":1,"270":1,"283":1,"285":2}}],["abstractadtype",{"2":{"18":2,"19":1,"49":1}}],["abstractarrays",{"2":{"70":1}}],["abstractarray",{"2":{"15":16,"16":60,"19":1,"41":3,"44":6,"45":6,"46":1,"51":1,"52":1,"59":2,"62":2,"65":1,"66":1,"81":10,"82":2,"83":2,"85":5,"89":6,"134":1,"158":2,"162":3,"166":1,"202":1,"203":1,"221":1,"252":3,"260":1,"284":9,"285":2}}],["abstractluxdistributedbacked",{"2":{"31":1}}],["abstractluxdistributedbackend",{"2":{"28":3,"29":2,"30":7,"32":1}}],["abstractluxwrapperlayer",{"2":{"7":4,"23":1,"107":1,"108":2,"132":1,"154":1,"232":1,"236":1}}],["abstractluxcontainerlayer",{"2":{"7":5,"107":1,"154":3,"202":2,"251":1,"258":1,"285":1}}],["abstractluxlayers",{"2":{"39":1}}],["abstractluxlayer",{"2":{"7":5,"8":2,"22":4,"23":3,"24":1,"37":3,"39":4,"46":1,"49":1,"55":1,"107":1,"130":2,"133":3,"134":2,"153":2,"158":1,"232":3,"236":2,"243":2,"258":2}}],["abstract",{"0":{"7":1},"
2":{"3":2,"6":1,"7":3,"44":4,"220":1}}],["abstractdevice",{"2":{"2":1,"3":9,"4":4,"5":1}}],["abstractgpudevice",{"2":{"1":1,"150":1}}],["absolute",{"2":{"50":1,"173":1}}],["absolutely",{"2":{"49":1}}],["abs2",{"2":{"50":1,"56":3,"70":1,"163":2,"164":2,"165":2,"166":1,"252":3,"265":1,"285":1}}],["abs",{"2":{"15":1,"45":1,"81":1,"255":2,"292":1}}],["amazing",{"2":{"228":1}}],["am",{"2":{"165":1,"294":1}}],["amin",{"2":{"89":2}}],["amount",{"2":{"89":1}}],["ambiguous",{"2":{"52":1,"107":1}}],["amplitude",{"2":{"89":1}}],["amp",{"0":{"18":1,"147":1,"172":1,"188":1,"283":1},"1":{"173":1,"174":1,"175":1,"176":1,"177":1,"178":1,"189":1,"190":1,"191":1,"192":1,"193":1,"194":1,"195":1,"196":1,"197":1,"198":1},"2":{"18":2,"19":1,"40":4,"43":1,"44":1,"45":1,"46":1,"49":1,"81":1,"87":2,"91":2}}],["amd",{"2":{"3":1,"120":1,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["amdgpudevice",{"2":{"4":2,"140":1,"150":2,"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["amdgpu",{"2":{"2":2,"3":1,"13":2,"28":1,"73":2,"96":1,"100":2,"121":1,"141":1,"148":2,"198":2,"207":2,"215":2,"222":2,"239":2,"247":2,"256":2,"262":2,"267":2,"275":2,"281":2,"289":2,"298":2}}],["academic",{"2":{"90":1}}],["acts",{"2":{"265":1}}],["action",{"2":{"57":1}}],["activity",{"2":{"70":2}}],["activates",{"2":{"184":1}}],["activate",{"2":{"70":1}}],["activated",{"2":{"43":4}}],["activations",{"2":{"59":1,"62":1,"67":1,"86":1,"265":1}}],["activation=gelu",{"2":{"285":2}}],["activation=tanh",{"2":{"43":1}}],["activation=identity",{"2":{"40":2,"44":4,"46":4}}],["activation",{"0":{"59":1,"61":1,"67":1},"2":{"40":6,"43":3,"44":10,"46":12,"56":1,"59":7,"61":12,"62":4,"64":3,"65":5,"67":19,"77":2,"86":2,"87":2,"114":3,"116":3,"176":7,"184":1,"265":1,"285":2,"294":3}}],["actively",{"2":{"105":1,"123":1}}],["active",{"2":{"35":2}}],["act",{"2":{"56":8,"59":2,"65":2,"87":4,"89":2,"97":5,"251":4}}],["act=relu
",{"2":{"56":2,"97":1}}],["actually",{"2":{"107":1,"151":1}}],["actual",{"2":{"24":1,"194":2,"274":1}}],["acc",{"2":{"205":3,"213":8,"214":4,"235":4,"246":13,"280":64}}],["accumulated",{"2":{"89":1,"220":1}}],["accumulate",{"2":{"89":1}}],["accumulates",{"2":{"46":2,"80":1,"84":1}}],["accumulation",{"2":{"89":2}}],["accurate",{"2":{"67":3}}],["accuracy",{"0":{"204":1},"2":{"59":1,"204":2,"205":1,"212":2,"213":4,"214":40,"234":2,"235":94,"237":18,"245":2,"246":212,"279":2,"280":3}}],["accompanied",{"2":{"89":1}}],["accounting",{"2":{"80":1}}],["account",{"2":{"56":1}}],["accordingly",{"2":{"46":2,"65":2}}],["according",{"2":{"15":1,"84":5,"85":1}}],["accidental",{"2":{"54":1}}],["acceptance",{"2":{"265":1}}],["acceptable",{"2":{"83":1}}],["accepted",{"2":{"83":2}}],["accept",{"2":{"77":1,"155":1,"265":1}}],["accepts",{"2":{"47":1,"89":1}}],["accelerate",{"2":{"185":1}}],["accelerators",{"2":{"123":1}}],["accelerator",{"2":{"75":1}}],["accelerating",{"2":{"65":1}}],["access",{"2":{"39":2,"56":1,"73":1,"74":1,"89":1,"153":1,"154":1,"155":1}}],["accessors",{"2":{"25":1,"230":1,"241":1,"263":7,"291":7}}],["accessing",{"2":{"23":1,"35":1,"37":1,"75":1}}],["achieved",{"2":{"40":1,"164":1}}],["achieve",{"2":{"15":1,"43":1,"80":1}}],["across",{"2":{"0":1,"30":2,"31":1,"32":1,"65":2,"79":3,"92":2,"137":3,"189":2,"266":1}}],["adj",{"2":{"278":5,"279":2,"280":10}}],["adjacency",{"2":{"277":1}}],["adjust",{"2":{"254":1}}],["adjoint",{"2":{"67":1,"80":2,"82":2,"83":11,"89":6,"220":2,"277":1}}],["adtype=autotracker",{"2":{"265":1}}],["adtype",{"2":{"70":1,"296":1}}],["adtypesconstructionbaseext",{"2":{"263":1}}],["adtypeschainrulescoreext",{"2":{"263":1,"269":1,"291":1}}],["adtypesenzymecoreext",{"2":{"200":1,"263":1}}],["adtypes",{"2":{"49":1,"162":1,"200":2,"263":4,"269":2,"274":1,"291":1}}],["advances",{"2":{"63":1}}],["advancedvi",{"2":{"263":2}}],["advancedmhmcmcchainsext",{"2":{"263":1}}],["advancedmhforwarddiffext",{"2":{"263":1}}],["advancedmhstructarraysext",{"2":{"263":1
}}],["advancedmh",{"2":{"263":4}}],["advancedpslibtaskext",{"2":{"263":1}}],["advancedps",{"2":{"263":2}}],["advancedhmcmcmcchainsext",{"2":{"263":1}}],["advancedhmc",{"2":{"263":2}}],["advanced",{"0":{"226":1},"2":{"7":2,"123":1}}],["adamw",{"2":{"261":1,"280":1}}],["adam",{"2":{"56":1,"96":15,"97":1,"124":1,"205":1,"213":1,"220":4,"235":1,"246":1,"254":1,"272":3,"274":2,"280":1,"287":1}}],["adapted",{"2":{"290":2}}],["adaptext",{"2":{"200":1,"263":1}}],["adaptstaticarraysext",{"2":{"263":1,"291":1}}],["adapting",{"2":{"67":1}}],["adaptive=false",{"2":{"293":1,"294":1,"295":1,"297":1}}],["adaptivemeanpool",{"2":{"42":1}}],["adaptivemaxpool",{"2":{"42":1}}],["adaptive",{"2":{"42":3}}],["adaptivelppool",{"2":{"42":1,"117":1}}],["adaptor",{"2":{"37":4,"111":1,"211":1}}],["adapt",{"2":{"3":4,"35":6,"37":6,"263":2,"291":1}}],["ads",{"2":{"52":1}}],["adding",{"2":{"141":1}}],["addition",{"2":{"52":1,"61":1}}],["additional",{"0":{"98":1},"2":{"51":1,"69":1,"70":1,"77":1,"98":1,"100":1,"157":1,"190":1,"195":1,"197":1}}],["additionally",{"2":{"7":1,"49":1,"50":1,"51":2,"56":1,"86":1,"118":1,"147":1,"148":1,"151":1,"158":1,"167":1,"171":1,"178":1,"206":1,"219":1,"252":1}}],["address",{"2":{"91":1}}],["add",{"2":{"52":3,"68":1,"72":2,"73":5,"74":1,"89":1,"95":2,"96":1,"98":1,"117":1,"121":1,"147":5,"163":1,"179":3,"189":1,"190":1,"228":1}}],["added",{"2":{"40":2,"42":3,"45":1,"46":4,"50":2,"61":1,"65":4,"76":1,"89":1,"102":1,"117":1}}],["adds",{"2":{"24":1,"60":1,"89":2,"189":1}}],["ad",{"0":{"19":1,"20":1},"2":{"18":6,"19":1,"20":1,"24":2,"49":3,"50":1,"52":2,"55":2,"62":1,"64":1,"92":1,"93":3,"96":2,"118":4,"119":2,"122":4,"147":1,"162":4,"165":2,"172":1,"180":1,"188":1,"191":1,"193":2,"194":5,"195":2,"205":2,"235":1,"248":1,"250":2,"252":1,"255":1,"274":1}}],["atomsbase",{"2":{"230":1,"241":1}}],["atomixcudaext",{"2":{"241":1}}],["atomix",{"2":{"200":1,"241":1,"263":1}}],["at=train",{"2":{"210":1,"231":1}}],["at=0",{"2":{"201":1}}],["attempt",{"2":{"89":2}}],["attempts",{"
2":{"35":1,"60":1,"64":1}}],["attention",{"0":{"76":1},"2":{"76":18,"89":1}}],["atleast",{"2":{"65":1}}],["at",{"2":{"4":2,"22":1,"30":2,"40":2,"42":3,"43":1,"47":1,"50":1,"56":1,"63":1,"67":1,"78":1,"84":1,"85":1,"87":1,"89":7,"96":1,"123":1,"126":4,"127":10,"141":1,"142":1,"148":1,"155":1,"184":2,"195":1,"202":1,"228":2,"254":70,"266":2,"280":1028,"287":16,"292":2,"293":1}}],["april",{"2":{"90":1}}],["append",{"2":{"264":2}}],["appendix",{"0":{"198":1,"207":1,"215":1,"222":1,"239":1,"247":1,"256":1,"262":1,"267":1,"275":1,"281":1,"289":1,"298":1}}],["appears",{"2":{"80":1}}],["appreciate",{"2":{"56":1}}],["approach",{"0":{"149":1},"2":{"75":1,"123":1,"220":1}}],["approx",{"2":{"70":1}}],["approximation",{"2":{"45":1,"67":3}}],["approximately",{"2":{"15":1}}],["appropriate",{"2":{"8":1,"15":1,"89":1}}],["applications",{"2":{"93":1,"191":1}}],["applicable",{"2":{"2":1,"4":1}}],["applies",{"2":{"39":1,"46":6,"47":2,"61":1,"65":1,"77":1,"79":1,"89":1,"137":1}}],["applied",{"2":{"8":1,"15":3,"39":1,"41":4,"46":4,"50":2,"52":1,"76":2,"79":5,"84":2,"86":1,"89":6,"158":2}}],["applychain",{"2":{"238":6}}],["applying",{"2":{"15":1,"39":1,"63":2,"76":1,"83":2,"89":2}}],["apply",{"0":{"59":1},"2":{"8":7,"39":1,"40":2,"41":2,"42":3,"49":5,"52":1,"67":1,"80":1,"83":1,"89":2,"96":3,"130":1,"132":1,"133":4,"134":3,"153":1,"154":1,"171":1,"184":1,"259":1,"265":1,"274":1,"284":3}}],["appleaccelerate",{"2":{"64":1}}],["apple",{"2":{"3":1,"100":1}}],["api",{"0":{"14":1,"49":1,"124":1,"203":1},"1":{"15":1,"16":1},"2":{"3":1,"30":3,"35":1,"39":9,"45":2,"49":2,"50":2,"59":2,"60":1,"61":2,"62":1,"63":2,"64":1,"65":4,"87":3,"88":1,"89":1,"96":3,"102":1,"104":2,"109":1,"123":2,"124":2,"142":1,"157":1,"197":1,"199":1,"203":1,"208":1,"216":1,"232":1,"236":1,"238":1,"239":1,"247":1,"273":1,"294":2}}],["astroinformatics",{"2":{"290":1}}],["ask",{"2":{"189":3}}],["ascii",{"2":{"67":2}}],["asymmetric",{"2":{"40":2,"42":3}}],["assuming",{"2":{"260":1}}],["assume",{"2":{"123":1,"147":1,"292":1}}],
["assumed",{"2":{"70":1,"86":1,"89":2}}],["assumes",{"2":{"11":1,"39":1,"85":1,"89":1,"116":1}}],["assert",{"2":{"192":1,"260":1,"265":1,"285":1,"292":5}}],["associated",{"2":{"86":1,"91":1}}],["assymetric",{"2":{"79":1}}],["assigned",{"2":{"56":1}}],["assign",{"2":{"28":1,"84":2}}],["as",{"2":{"3":2,"4":2,"5":3,"7":4,"15":5,"18":2,"24":1,"25":1,"35":1,"37":1,"39":10,"40":1,"42":3,"43":3,"45":3,"46":1,"47":1,"49":2,"50":8,"51":2,"52":1,"54":2,"56":11,"59":2,"61":2,"63":2,"65":3,"67":2,"69":2,"70":2,"71":4,"72":1,"76":1,"77":1,"78":5,"79":2,"80":3,"81":4,"83":5,"85":5,"86":1,"89":11,"90":1,"93":2,"97":2,"102":1,"114":2,"115":2,"120":3,"123":6,"127":1,"131":1,"137":1,"138":1,"140":2,"147":3,"154":3,"155":2,"163":1,"164":1,"165":2,"166":1,"170":1,"171":2,"173":2,"183":1,"187":2,"188":1,"189":1,"191":1,"192":2,"194":1,"203":1,"208":2,"214":1,"216":1,"219":1,"220":1,"232":1,"238":1,"243":1,"263":1,"265":6,"266":1,"294":2}}],["agent",{"2":{"126":1,"133":3,"163":2,"205":2,"238":3,"261":1,"280":2}}],["aggressive",{"2":{"110":1}}],["aggregating",{"2":{"84":1}}],["aggregation",{"2":{"50":1,"84":1}}],["aggregate",{"2":{"84":1}}],["aggregated",{"2":{"50":1}}],["agg=sum",{"2":{"50":2,"260":1}}],["agg=mean",{"2":{"50":1,"279":1}}],["agg",{"2":{"50":26}}],["agnostic",{"2":{"3":3,"30":3,"178":1}}],["against",{"2":{"121":1,"167":1}}],["again",{"2":{"3":1,"89":1,"137":1,"164":1}}],["algebras",{"2":{"195":1}}],["algebraic",{"2":{"164":1}}],["algorithm",{"2":{"2":1,"167":1,"263":1}}],["alpha=0",{"2":{"270":1,"274":2,"293":2,"294":4,"297":7}}],["alpha=1",{"2":{"89":12}}],["alpha",{"2":{"63":4,"89":8,"265":2}}],["alphadropout",{"2":{"41":4}}],["already",{"2":{"57":1,"91":1,"96":1,"192":1,"200":14,"230":10,"241":28,"263":20,"269":21,"273":1,"291":59}}],["aliastables",{"2":{"200":1,"263":1}}],["aliases",{"2":{"61":1}}],["aliased",{"2":{"49":3}}],["alias",{"2":{"51":1}}],["alignment",{"2":{"86":1}}],["aligned",{"2":{"47":1}}],["align",{"2":{"47":2,"81":9,"85":1,"116":2,"117":1}}],["alternat
e",{"0":{"236":1},"2":{"123":1}}],["alternatives",{"2":{"176":1}}],["alternatively",{"2":{"43":2,"47":1,"70":1,"95":1,"124":1,"137":1,"173":1}}],["alternative",{"2":{"3":1,"81":3}}],["alter",{"2":{"89":1}}],["altered",{"2":{"81":1}}],["alts",{"2":{"45":2}}],["almost",{"2":{"35":1,"89":1}}],["always",{"2":{"18":2,"21":1,"22":1,"35":1,"50":1,"52":1,"77":1,"78":3,"79":1,"83":1,"89":3,"116":1,"156":1}}],["al",{"2":{"15":2,"50":2,"63":2,"81":1,"86":1,"290":1}}],["along",{"2":{"15":1,"39":1,"40":1,"41":2,"43":2,"46":1,"50":1,"51":1,"61":1,"65":1,"77":1,"81":1,"87":1,"89":1,"93":1,"132":1}}],["also",{"2":{"3":1,"7":1,"15":2,"22":1,"28":2,"39":2,"41":5,"45":1,"46":7,"47":2,"51":1,"52":2,"54":1,"56":1,"61":2,"67":10,"76":1,"77":2,"78":1,"79":5,"80":2,"81":1,"82":2,"83":5,"84":1,"87":3,"89":5,"93":2,"95":1,"102":1,"110":1,"153":2,"186":1,"189":1,"203":1,"205":1,"232":1,"235":2,"265":1,"266":1}}],["allocations",{"2":{"89":1}}],["allocating",{"2":{"84":2}}],["allocated",{"2":{"89":1}}],["allocate",{"2":{"89":2,"171":1}}],["allowed",{"2":{"81":2}}],["allow",{"2":{"25":1,"52":1,"70":1,"110":1,"114":1,"173":1}}],["allowscalar",{"2":{"174":1,"230":1,"241":1}}],["allows",{"2":{"6":1,"7":1,"8":1,"18":2,"30":2,"39":1,"43":1,"45":1,"52":1,"56":1,"101":1,"117":1,"154":1,"155":1,"235":1}}],["allreduce",{"2":{"30":3,"31":1,"139":1}}],["all",{"2":{"3":1,"6":1,"7":1,"8":1,"10":3,"15":1,"21":1,"22":3,"23":2,"30":5,"37":2,"39":3,"40":2,"42":6,"44":1,"45":2,"47":2,"49":2,"52":1,"53":1,"55":1,"56":1,"69":2,"70":1,"75":1,"78":4,"79":1,"82":1,"83":3,"87":1,"89":7,"92":3,"93":3,"96":1,"98":2,"110":1,"112":1,"123":1,"137":3,"140":1,"145":2,"147":1,"153":1,"178":1,"184":1,"185":1,"187":1,"189":3,"190":1,"191":1,"219":1,"251":1,"260":1,"264":1,"265":2}}],["artifact",{"2":{"239":1,"247":1,"269":1}}],["artificially",{"2":{"127":1}}],["artificial",{"2":{"15":2,"264":2}}],["architecture",{"2":{"153":2,"158":1}}],["architectures",{"2":{"76":1,"93":1}}],["architectural",{"2":{"153":1}}],["arranged",{"2":{"
189":1,"264":1}}],["arraylayoutssparsearraysext",{"2":{"230":1,"291":1}}],["arraylayouts",{"2":{"230":2,"291":2}}],["arrayinterfacereversediffext",{"2":{"291":1}}],["arrayinterfacetrackerext",{"2":{"263":1,"291":1}}],["arrayinterfacechainrulesext",{"2":{"241":2,"263":1,"291":1}}],["arrayinterfacechainrulescoreext",{"2":{"200":1,"263":1,"269":1,"291":1}}],["arrayinterfacecudaext",{"2":{"241":2}}],["arrayinterfacesparsearraysext",{"2":{"200":2,"263":1}}],["arrayinterfacestaticarrayscoreext",{"2":{"200":1,"263":1}}],["arrayinterfacegpuarrayscoreext",{"2":{"200":1,"263":1}}],["arrayinterface",{"2":{"200":5,"241":2,"263":7,"269":1,"291":4}}],["arrayandtime",{"2":{"134":8,"135":1}}],["array",{"2":{"8":1,"13":3,"15":9,"19":2,"37":7,"40":5,"42":6,"43":2,"44":4,"45":6,"46":16,"50":1,"52":2,"56":1,"59":5,"63":4,"65":5,"66":1,"67":1,"76":6,"77":2,"79":7,"80":7,"81":17,"82":10,"83":14,"84":5,"85":5,"86":1,"87":1,"88":1,"89":20,"123":1,"134":3,"147":6,"164":1,"165":1,"187":1,"189":8,"190":1,"191":6,"212":1,"218":1,"220":1,"232":3,"233":1,"238":9,"264":4,"265":1,"280":6,"288":1,"293":1,"294":1,"295":1,"297":1}}],["arrays",{"0":{"174":1,"189":1,"190":1},"1":{"190":1},"2":{"3":1,"11":1,"39":1,"44":1,"45":1,"47":5,"52":1,"53":1,"62":2,"64":3,"66":8,"76":3,"81":1,"82":2,"83":2,"87":1,"123":1,"147":1,"153":1,"158":2,"171":2,"189":4,"191":1}}],["arr",{"2":{"82":18,"89":2}}],["argcheck",{"2":{"263":1}}],["arg9",{"2":{"147":2}}],["arg8",{"2":{"147":2}}],["arg7",{"2":{"147":2}}],["arg6",{"2":{"147":2}}],["arg5",{"2":{"147":2}}],["arg4",{"2":{"147":2}}],["arg3",{"2":{"147":2}}],["arg2",{"2":{"147":2}}],["arg12",{"2":{"147":4}}],["arg11",{"2":{"147":4}}],["arg10",{"2":{"147":2}}],["arg1",{"2":{"147":2}}],["arg0",{"2":{"147":2}}],["args",{"2":{"52":2,"54":6,"69":2,"70":4,"80":1,"115":1,"278":2,"283":2}}],["argumenterror",{"2":{"79":1,"254":1}}],["argument",{"2":{"8":2,"16":1,"39":4,"43":1,"45":1,"56":1,"81":3,"83":1,"84":1,"88":1,"89":1,"107":1,"116":1,"117":1,"124":1,"133":1,"144":1}}],["ar
guments",{"2":{"2":2,"4":2,"8":1,"15":3,"18":2,"19":1,"22":1,"24":3,"25":1,"31":1,"35":2,"37":2,"39":11,"40":4,"41":5,"42":9,"43":7,"44":8,"45":7,"46":9,"47":2,"49":6,"55":1,"56":2,"59":2,"61":1,"62":1,"63":2,"64":1,"65":4,"69":3,"70":5,"76":1,"78":3,"80":1,"81":4,"82":1,"84":1,"85":2,"89":16,"104":1,"153":1,"187":1,"238":3}}],["arbitrary",{"2":{"47":1,"76":1,"78":1,"89":1,"235":1}}],["around",{"2":{"18":2,"35":1,"40":2,"42":3,"60":1,"79":1,"80":1,"82":5,"89":1,"93":1,"154":1,"166":1}}],["arxiv",{"2":{"15":1,"45":1,"46":2,"65":4,"81":1}}],["aren",{"2":{"53":1,"227":1}}],["are",{"2":{"2":3,"3":2,"6":1,"7":2,"8":1,"11":1,"15":4,"19":2,"21":2,"22":6,"24":4,"25":1,"26":1,"28":2,"37":2,"44":3,"45":2,"46":5,"47":2,"49":10,"50":6,"51":2,"52":6,"54":2,"56":2,"57":1,"59":1,"64":1,"65":3,"66":1,"70":2,"76":1,"79":1,"80":2,"81":3,"83":3,"84":1,"85":3,"87":1,"89":13,"92":2,"93":2,"95":1,"97":1,"98":1,"107":1,"115":1,"117":1,"120":4,"121":5,"122":2,"123":2,"125":1,"127":1,"133":1,"137":1,"140":1,"141":1,"143":1,"147":4,"148":2,"153":3,"154":2,"155":2,"156":1,"162":2,"163":2,"164":1,"167":1,"170":3,"171":1,"173":2,"176":1,"177":2,"178":1,"181":2,"182":1,"184":1,"185":4,"189":1,"191":1,"193":1,"202":2,"220":1,"227":2,"228":2,"232":1,"243":1,"260":1,"261":1,"265":1,"266":1,"274":1,"276":1,"280":12,"293":1,"294":1}}],["animation",{"2":{"266":1}}],["animations",{"2":{"263":1}}],["analytical",{"2":{"253":4,"255":1}}],["analogous",{"2":{"83":2,"89":2}}],["anticlockwise",{"2":{"199":1,"201":4}}],["anonymous",{"2":{"114":1}}],["another",{"2":{"44":1,"154":1}}],["angle",{"2":{"82":2}}],["answers",{"2":{"87":1,"101":1}}],["ans",{"2":{"67":6,"81":3,"89":1}}],["anything",{"2":{"153":1}}],["anywhere",{"2":{"107":1}}],["any",{"2":{"8":2,"11":1,"21":1,"24":1,"37":1,"39":1,"43":3,"45":1,"49":2,"50":3,"52":2,"56":2,"62":1,"64":1,"67":1,"70":1,"79":1,"83":1,"87":3,"89":2,"92":1,"110":2,"122":1,"123":3,"124":1,"126":1,"128":1,"133":7,"140":1,"151":2,"153":2,"155":1,"158":1,"162":1,"171":1,"185":1,"
190":1,"191":2,"192":1,"193":1,"206":1,"228":1,"235":1,"238":15,"244":1,"273":1}}],["an",{"2":{"2":5,"3":2,"4":1,"5":1,"8":3,"15":19,"16":36,"18":2,"24":1,"27":2,"30":1,"31":1,"39":10,"40":3,"41":3,"42":3,"43":2,"44":8,"45":2,"46":7,"49":2,"50":2,"52":1,"54":1,"56":1,"59":1,"63":4,"65":3,"67":2,"71":3,"76":2,"77":2,"78":2,"79":4,"80":2,"81":4,"82":4,"83":2,"84":2,"87":2,"88":1,"89":15,"93":1,"96":1,"107":1,"114":1,"116":2,"117":1,"118":1,"120":1,"122":2,"123":1,"124":3,"125":1,"127":1,"128":2,"131":1,"141":1,"147":3,"149":1,"162":1,"163":2,"164":1,"166":1,"173":1,"177":1,"179":2,"189":3,"192":1,"208":1,"220":1,"227":1,"228":3,"260":1,"261":1,"264":1,"265":1,"266":1,"280":1}}],["andrea",{"2":{"46":1,"65":1}}],["and",{"0":{"33":1,"84":1,"178":1,"204":1,"233":1,"240":1,"244":1},"1":{"34":1,"35":1,"36":1,"37":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1,"247":1},"2":{"1":1,"2":13,"3":3,"4":7,"5":4,"7":7,"8":7,"15":13,"19":1,"21":2,"22":4,"23":7,"24":5,"25":3,"28":4,"30":2,"32":2,"35":5,"37":2,"39":10,"40":9,"42":3,"43":31,"44":8,"45":3,"46":22,"47":8,"49":8,"50":19,"51":2,"52":11,"53":1,"54":4,"55":1,"56":5,"57":1,"59":1,"60":1,"61":1,"62":4,"63":2,"64":3,"65":20,"67":5,"68":2,"69":2,"71":2,"76":3,"77":1,"78":10,"79":15,"80":8,"81":3,"82":4,"83":13,"84":14,"85":4,"86":4,"87":5,"89":56,"91":3,"92":2,"93":6,"95":1,"96":4,"97":7,"99":1,"101":2,"107":5,"112":2,"114":5,"115":3,"116":3,"117":4,"120":7,"121":2,"122":2,"123":7,"124":2,"126":3,"127":6,"128":3,"135":1,"137":6,"138":2,"140":1,"141":1,"143":1,"147":5,"149":3,"150":1,"151":2,"153":9,"154":8,"155":3,"156":1,"158":10,"160":1,"162":2,"163":5,"165":1,"166":1,"167":2,"168":2,"170":1,"171":5,"173":3,"176":2,"177":1,"178":5,"182":2,"184":2,"185":4,"187":2,"189":7,"190":2,"191":1,"192":3,"193":3,"194":1,"196":1,"197":3,"199":2,"201":3,"202":10,"203":1,"206":3,"210":1,"211":1,"214":1,"219":1,"220":3,"227":1,"228":3,"231":1,"232":2,"235":2,"238":1,"248":1,"251":1,"253":2,"258":1,"264":1,"265":12,"266":3,"270":1,"273":2,"
274":1,"284":11,"285":2,"292":3,"293":2,"294":6}}],["a",{"0":{"36":1,"143":1,"146":1,"199":1,"202":1,"240":1,"243":1,"248":1,"268":1,"290":1,"294":1},"1":{"37":1,"200":1,"201":1,"202":1,"203":1,"204":1,"205":1,"206":1,"207":1,"241":1,"242":1,"243":1,"244":1,"245":1,"246":1,"247":1,"249":1,"250":1,"251":1,"252":1,"253":1,"254":1,"255":1,"256":1,"269":1,"270":1,"271":1,"272":1,"273":1,"274":1,"275":1,"291":1,"292":1,"293":1,"294":1,"295":1,"296":1,"297":1,"298":1},"2":{"0":1,"1":2,"2":5,"3":5,"4":5,"5":3,"6":1,"7":10,"8":12,"10":1,"11":4,"12":1,"15":20,"16":13,"18":2,"19":4,"21":1,"22":4,"23":1,"24":3,"25":4,"28":2,"30":3,"31":1,"35":9,"37":9,"39":27,"40":21,"42":18,"43":43,"44":8,"45":10,"46":20,"47":6,"49":5,"50":4,"51":5,"52":5,"54":3,"55":4,"56":21,"60":1,"61":3,"62":2,"63":4,"64":2,"65":2,"67":11,"68":2,"69":1,"70":4,"71":5,"73":1,"74":1,"76":3,"77":4,"78":5,"79":6,"80":6,"81":4,"82":3,"83":55,"84":9,"86":5,"87":9,"88":6,"89":87,"91":2,"92":1,"93":4,"95":1,"96":2,"97":4,"101":1,"102":2,"107":9,"108":2,"109":2,"112":3,"122":1,"123":4,"124":4,"125":1,"126":3,"127":5,"130":2,"131":1,"134":3,"135":1,"136":2,"137":1,"140":1,"141":1,"142":1,"143":1,"145":2,"147":2,"149":7,"151":3,"153":8,"154":8,"155":2,"158":17,"160":1,"161":1,"162":6,"163":7,"164":2,"165":3,"166":1,"167":10,"170":1,"171":4,"173":5,"178":1,"181":1,"182":1,"186":2,"187":2,"188":3,"189":12,"190":1,"192":3,"193":12,"194":1,"195":2,"197":4,"199":1,"201":2,"202":3,"203":1,"208":2,"214":1,"216":3,"219":2,"220":6,"227":1,"232":3,"235":2,"236":1,"237":1,"244":1,"248":2,"251":3,"252":1,"253":1,"254":7,"258":4,"260":3,"261":1,"263":1,"264":3,"265":7,"266":5,"268":2,"271":1,"274":2,"277":1,"280":103,"282":1,"283":1,"287":2,"292":9,"294":3,"295":1,"296":1}}]],"serializationVersion":2}';export{e as default}; diff --git a/dev/assets/chunks/VPLocalSearchBox.DSrXxC9l.js b/dev/assets/chunks/VPLocalSearchBox.DM1AnBlx.js similarity index 99% rename from dev/assets/chunks/VPLocalSearchBox.DSrXxC9l.js rename to 
dev/assets/chunks/VPLocalSearchBox.DM1AnBlx.js index 1827dcdd5d..a25c895b9f 100644 --- a/dev/assets/chunks/VPLocalSearchBox.DSrXxC9l.js +++ b/dev/assets/chunks/VPLocalSearchBox.DM1AnBlx.js @@ -1,4 +1,4 @@ -var Ot=Object.defineProperty;var Ft=(a,e,t)=>e in a?Ot(a,e,{enumerable:!0,configurable:!0,writable:!0,value:t}):a[e]=t;var Ae=(a,e,t)=>Ft(a,typeof e!="symbol"?e+"":e,t);import{V as Ct,p as ie,h as ge,aj as tt,ak as Rt,al as At,am as Mt,q as $e,an as Lt,d as Dt,D as xe,ao as nt,ap as zt,aq as Pt,s as jt,ar as Vt,v as Me,P as fe,O as _e,as as $t,at as Bt,W as Wt,R as Kt,$ as Jt,o as H,b as qt,j as _,a0 as Ut,k as L,au as Gt,av as Ht,aw as Qt,c as Z,n as st,e as Se,C as it,F as rt,a as he,t as pe,ax as Yt,ay as at,az as Zt,a9 as Xt,af as en,aA as tn,_ as nn}from"./framework.I-x9Gl6h.js";import{u as sn,c as rn}from"./theme.Dw8Jqbck.js";const an={root:()=>Ct(()=>import("./@localSearchIndexroot.CoigOnOV.js"),[])};/*! +var Ot=Object.defineProperty;var Ft=(a,e,t)=>e in a?Ot(a,e,{enumerable:!0,configurable:!0,writable:!0,value:t}):a[e]=t;var Ae=(a,e,t)=>Ft(a,typeof e!="symbol"?e+"":e,t);import{V as Ct,p as ie,h as ge,aj as tt,ak as Rt,al as At,am as Mt,q as $e,an as Lt,d as Dt,D as xe,ao as nt,ap as zt,aq as Pt,s as jt,ar as Vt,v as Me,P as fe,O as _e,as as $t,at as Bt,W as Wt,R as Kt,$ as Jt,o as H,b as qt,j as _,a0 as Ut,k as L,au as Gt,av as Ht,aw as Qt,c as Z,n as st,e as Se,C as it,F as rt,a as he,t as pe,ax as Yt,ay as at,az as Zt,a9 as Xt,af as en,aA as tn,_ as nn}from"./framework.BetCMmtc.js";import{u as sn,c as rn}from"./theme.Cah_qJpF.js";const an={root:()=>Ct(()=>import("./@localSearchIndexroot.Cbmt1wEA.js"),[])};/*! 
* tabbable 6.2.0 * @license MIT, https://github.com/focus-trap/tabbable/blob/master/LICENSE */var gt=["input:not([inert])","select:not([inert])","textarea:not([inert])","a[href]:not([inert])","button:not([inert])","[tabindex]:not(slot):not([inert])","audio[controls]:not([inert])","video[controls]:not([inert])",'[contenteditable]:not([contenteditable="false"]):not([inert])',"details>summary:first-of-type:not([inert])","details:not([inert])"],Ne=gt.join(","),vt=typeof Element>"u",ae=vt?function(){}:Element.prototype.matches||Element.prototype.msMatchesSelector||Element.prototype.webkitMatchesSelector,Oe=!vt&&Element.prototype.getRootNode?function(a){var e;return a==null||(e=a.getRootNode)===null||e===void 0?void 0:e.call(a)}:function(a){return a==null?void 0:a.ownerDocument},Fe=function a(e,t){var n;t===void 0&&(t=!0);var s=e==null||(n=e.getAttribute)===null||n===void 0?void 0:n.call(e,"inert"),r=s===""||s==="true",i=r||t&&e&&a(e.parentNode);return i},on=function(e){var t,n=e==null||(t=e.getAttribute)===null||t===void 0?void 0:t.call(e,"contenteditable");return n===""||n==="true"},bt=function(e,t,n){if(Fe(e))return[];var s=Array.prototype.slice.apply(e.querySelectorAll(Ne));return t&&ae.call(e,Ne)&&s.unshift(e),s=s.filter(n),s},yt=function a(e,t,n){for(var s=[],r=Array.from(e);r.length;){var i=r.shift();if(!Fe(i,!1))if(i.tagName==="SLOT"){var o=i.assignedElements(),l=o.length?o:i.children,c=a(l,!0,n);n.flatten?s.push.apply(s,c):s.push({scopeParent:i,candidates:c})}else{var f=ae.call(i,Ne);f&&n.filter(i)&&(t||!e.includes(i))&&s.push(i);var g=i.shadowRoot||typeof n.getShadowRoot=="function"&&n.getShadowRoot(i),h=!Fe(g,!1)&&(!n.shadowRootFilter||n.shadowRootFilter(i));if(g&&h){var b=a(g===!0?i.children:g.children,!0,n);n.flatten?s.push.apply(s,b):s.push({scopeParent:i,candidates:b})}else r.unshift.apply(r,i.children)}}return s},wt=function(e){return!isNaN(parseInt(e.getAttribute("tabindex"),10))},re=function(e){if(!e)throw new Error("No node provided");return 
e.tabIndex<0&&(/^(AUDIO|VIDEO|DETAILS)$/.test(e.tagName)||on(e))&&!wt(e)?0:e.tabIndex},ln=function(e,t){var n=re(e);return n<0&&t&&!wt(e)?0:n},cn=function(e,t){return e.tabIndex===t.tabIndex?e.documentOrder-t.documentOrder:e.tabIndex-t.tabIndex},xt=function(e){return e.tagName==="INPUT"},un=function(e){return xt(e)&&e.type==="hidden"},dn=function(e){var t=e.tagName==="DETAILS"&&Array.prototype.slice.apply(e.children).some(function(n){return n.tagName==="SUMMARY"});return t},fn=function(e,t){for(var n=0;nsummary:first-of-type"),i=r?e.parentElement:e;if(ae.call(i,"details:not([open]) *"))return!0;if(!n||n==="full"||n==="legacy-full"){if(typeof s=="function"){for(var o=e;e;){var l=e.parentElement,c=Oe(e);if(l&&!l.shadowRoot&&s(l)===!0)return ot(e);e.assignedSlot?e=e.assignedSlot:!l&&c!==e.ownerDocument?e=c.host:e=l}e=o}if(gn(e))return!e.getClientRects().length;if(n!=="legacy-full")return!0}else if(n==="non-zero-area")return ot(e);return!1},bn=function(e){if(/^(INPUT|BUTTON|SELECT|TEXTAREA)$/.test(e.tagName))for(var t=e.parentElement;t;){if(t.tagName==="FIELDSET"&&t.disabled){for(var n=0;n=0)},wn=function a(e){var t=[],n=[];return e.forEach(function(s,r){var i=!!s.scopeParent,o=i?s.scopeParent:s,l=ln(o,i),c=i?a(s.candidates):o;l===0?i?t.push.apply(t,c):t.push(o):n.push({documentOrder:r,tabIndex:l,item:s,isScope:i,content:c})}),n.sort(cn).reduce(function(s,r){return r.isScope?s.push.apply(s,r.content):s.push(r.content),s},[]).concat(t)},xn=function(e,t){t=t||{};var n;return t.getShadowRoot?n=yt([e],t.includeContainer,{filter:Be.bind(null,t),flatten:!1,getShadowRoot:t.getShadowRoot,shadowRootFilter:yn}):n=bt(e,t.includeContainer,Be.bind(null,t)),wn(n)},_n=function(e,t){t=t||{};var n;return t.getShadowRoot?n=yt([e],t.includeContainer,{filter:Ce.bind(null,t),flatten:!0,getShadowRoot:t.getShadowRoot}):n=bt(e,t.includeContainer,Ce.bind(null,t)),n},oe=function(e,t){if(t=t||{},!e)throw new Error("No node provided");return 
ae.call(e,Ne)===!1?!1:Be(t,e)},Sn=gt.concat("iframe").join(","),Le=function(e,t){if(t=t||{},!e)throw new Error("No node provided");return ae.call(e,Sn)===!1?!1:Ce(t,e)};/*! diff --git a/dev/assets/chunks/framework.BetCMmtc.js b/dev/assets/chunks/framework.BetCMmtc.js new file mode 100644 index 0000000000..705110f757 --- /dev/null +++ b/dev/assets/chunks/framework.BetCMmtc.js @@ -0,0 +1,18 @@ +/** +* @vue/shared v3.5.13 +* (c) 2018-present Yuxi (Evan) You and Vue contributors +* @license MIT +**//*! #__NO_SIDE_EFFECTS__ */function $s(e){const t=Object.create(null);for(const n of e.split(","))t[n]=1;return n=>n in t}const ee={},Rt=[],We=()=>{},zo=()=>!1,tn=e=>e.charCodeAt(0)===111&&e.charCodeAt(1)===110&&(e.charCodeAt(2)>122||e.charCodeAt(2)<97),js=e=>e.startsWith("onUpdate:"),pe=Object.assign,Vs=(e,t)=>{const n=e.indexOf(t);n>-1&&e.splice(n,1)},Qo=Object.prototype.hasOwnProperty,Q=(e,t)=>Qo.call(e,t),K=Array.isArray,Mt=e=>Fn(e)==="[object Map]",ci=e=>Fn(e)==="[object Set]",q=e=>typeof e=="function",oe=e=>typeof e=="string",Je=e=>typeof e=="symbol",se=e=>e!==null&&typeof e=="object",ai=e=>(se(e)||q(e))&&q(e.then)&&q(e.catch),fi=Object.prototype.toString,Fn=e=>fi.call(e),Zo=e=>Fn(e).slice(8,-1),ui=e=>Fn(e)==="[object Object]",ks=e=>oe(e)&&e!=="NaN"&&e[0]!=="-"&&""+parseInt(e,10)===e,Ot=$s(",key,ref,ref_for,ref_key,onVnodeBeforeMount,onVnodeMounted,onVnodeBeforeUpdate,onVnodeUpdated,onVnodeBeforeUnmount,onVnodeUnmounted"),Hn=e=>{const t=Object.create(null);return n=>t[n]||(t[n]=e(n))},el=/-(\w)/g,Ne=Hn(e=>e.replace(el,(t,n)=>n?n.toUpperCase():"")),tl=/\B([A-Z])/g,it=Hn(e=>e.replace(tl,"-$1").toLowerCase()),Dn=Hn(e=>e.charAt(0).toUpperCase()+e.slice(1)),wn=Hn(e=>e?`on${Dn(e)}`:""),nt=(e,t)=>!Object.is(e,t),Sn=(e,...t)=>{for(let n=0;n{Object.defineProperty(e,t,{configurable:!0,enumerable:!1,writable:s,value:n})},Ss=e=>{const t=parseFloat(e);return isNaN(t)?e:t},nl=e=>{const t=oe(e)?Number(e):NaN;return isNaN(t)?e:t};let ur;const $n=()=>ur||(ur=typeof 
globalThis<"u"?globalThis:typeof self<"u"?self:typeof window<"u"?window:typeof global<"u"?global:{});function Us(e){if(K(e)){const t={};for(let n=0;n{if(n){const s=n.split(rl);s.length>1&&(t[s[0].trim()]=s[1].trim())}}),t}function Ws(e){let t="";if(oe(e))t=e;else if(K(e))for(let n=0;n!!(e&&e.__v_isRef===!0),al=e=>oe(e)?e:e==null?"":K(e)||se(e)&&(e.toString===fi||!q(e.toString))?pi(e)?al(e.value):JSON.stringify(e,gi,2):String(e),gi=(e,t)=>pi(t)?gi(e,t.value):Mt(t)?{[`Map(${t.size})`]:[...t.entries()].reduce((n,[s,r],i)=>(n[Zn(s,i)+" =>"]=r,n),{})}:ci(t)?{[`Set(${t.size})`]:[...t.values()].map(n=>Zn(n))}:Je(t)?Zn(t):se(t)&&!K(t)&&!ui(t)?String(t):t,Zn=(e,t="")=>{var n;return Je(e)?`Symbol(${(n=e.description)!=null?n:t})`:e};/** +* @vue/reactivity v3.5.13 +* (c) 2018-present Yuxi (Evan) You and Vue contributors +* @license MIT +**/let Se;class fl{constructor(t=!1){this.detached=t,this._active=!0,this.effects=[],this.cleanups=[],this._isPaused=!1,this.parent=Se,!t&&Se&&(this.index=(Se.scopes||(Se.scopes=[])).push(this)-1)}get active(){return this._active}pause(){if(this._active){this._isPaused=!0;let t,n;if(this.scopes)for(t=0,n=this.scopes.length;t0)return;if(kt){let t=kt;for(kt=void 0;t;){const n=t.next;t.next=void 0,t.flags&=-9,t=n}}let e;for(;Vt;){let t=Vt;for(Vt=void 0;t;){const n=t.next;if(t.next=void 0,t.flags&=-9,t.flags&1)try{t.trigger()}catch(s){e||(e=s)}t=n}}if(e)throw e}function vi(e){for(let t=e.deps;t;t=t.nextDep)t.version=-1,t.prevActiveLink=t.dep.activeLink,t.dep.activeLink=t}function wi(e){let t,n=e.depsTail,s=n;for(;s;){const r=s.prevDep;s.version===-1?(s===n&&(n=r),qs(s),dl(s)):t=s,s.dep.activeLink=s.prevActiveLink,s.prevActiveLink=void 0,s=r}e.deps=t,e.depsTail=n}function Ts(e){for(let t=e.deps;t;t=t.nextDep)if(t.dep.version!==t.version||t.dep.computed&&(Si(t.dep.computed)||t.dep.version!==t.version))return!0;return!!e._dirty}function Si(e){if(e.flags&4&&!(e.flags&16)||(e.flags&=-17,e.globalVersion===Gt))return;e.globalVersion=Gt;const 
t=e.dep;if(e.flags|=2,t.version>0&&!e.isSSR&&e.deps&&!Ts(e)){e.flags&=-3;return}const n=ne,s=He;ne=e,He=!0;try{vi(e);const r=e.fn(e._value);(t.version===0||nt(r,e._value))&&(e._value=r,t.version++)}catch(r){throw t.version++,r}finally{ne=n,He=s,wi(e),e.flags&=-3}}function qs(e,t=!1){const{dep:n,prevSub:s,nextSub:r}=e;if(s&&(s.nextSub=r,e.prevSub=void 0),r&&(r.prevSub=s,e.nextSub=void 0),n.subs===e&&(n.subs=s,!s&&n.computed)){n.computed.flags&=-5;for(let i=n.computed.deps;i;i=i.nextDep)qs(i,!0)}!t&&!--n.sc&&n.map&&n.map.delete(n.key)}function dl(e){const{prevDep:t,nextDep:n}=e;t&&(t.nextDep=n,e.prevDep=void 0),n&&(n.prevDep=t,e.nextDep=void 0)}let He=!0;const Ti=[];function ot(){Ti.push(He),He=!1}function lt(){const e=Ti.pop();He=e===void 0?!0:e}function dr(e){const{cleanup:t}=e;if(e.cleanup=void 0,t){const n=ne;ne=void 0;try{t()}finally{ne=n}}}let Gt=0;class hl{constructor(t,n){this.sub=t,this.dep=n,this.version=n.version,this.nextDep=this.prevDep=this.nextSub=this.prevSub=this.prevActiveLink=void 0}}class jn{constructor(t){this.computed=t,this.version=0,this.activeLink=void 0,this.subs=void 0,this.map=void 0,this.key=void 0,this.sc=0}track(t){if(!ne||!He||ne===this.computed)return;let n=this.activeLink;if(n===void 0||n.sub!==ne)n=this.activeLink=new hl(ne,this),ne.deps?(n.prevDep=ne.depsTail,ne.depsTail.nextDep=n,ne.depsTail=n):ne.deps=ne.depsTail=n,Ei(n);else if(n.version===-1&&(n.version=this.version,n.nextDep)){const s=n.nextDep;s.prevDep=n.prevDep,n.prevDep&&(n.prevDep.nextDep=s),n.prevDep=ne.depsTail,n.nextDep=void 0,ne.depsTail.nextDep=n,ne.depsTail=n,ne.deps===n&&(ne.deps=s)}return n}trigger(t){this.version++,Gt++,this.notify(t)}notify(t){Bs();try{for(let n=this.subs;n;n=n.prevSub)n.sub.notify()&&n.sub.dep.notify()}finally{Ks()}}}function Ei(e){if(e.dep.sc++,e.sub.flags&4){const t=e.dep.computed;if(t&&!e.dep.subs){t.flags|=20;for(let s=t.deps;s;s=s.nextDep)Ei(s)}const n=e.dep.subs;n!==e&&(e.prevSub=n,n&&(n.nextSub=e)),e.dep.subs=e}}const Rn=new 
WeakMap,pt=Symbol(""),Es=Symbol(""),Xt=Symbol("");function ye(e,t,n){if(He&&ne){let s=Rn.get(e);s||Rn.set(e,s=new Map);let r=s.get(n);r||(s.set(n,r=new jn),r.map=s,r.key=n),r.track()}}function Ge(e,t,n,s,r,i){const o=Rn.get(e);if(!o){Gt++;return}const l=c=>{c&&c.trigger()};if(Bs(),t==="clear")o.forEach(l);else{const c=K(e),f=c&&ks(n);if(c&&n==="length"){const a=Number(s);o.forEach((d,m)=>{(m==="length"||m===Xt||!Je(m)&&m>=a)&&l(d)})}else switch((n!==void 0||o.has(void 0))&&l(o.get(n)),f&&l(o.get(Xt)),t){case"add":c?f&&l(o.get("length")):(l(o.get(pt)),Mt(e)&&l(o.get(Es)));break;case"delete":c||(l(o.get(pt)),Mt(e)&&l(o.get(Es)));break;case"set":Mt(e)&&l(o.get(pt));break}}Ks()}function pl(e,t){const n=Rn.get(e);return n&&n.get(t)}function Tt(e){const t=z(e);return t===e?t:(ye(t,"iterate",Xt),Le(e)?t:t.map(be))}function Vn(e){return ye(e=z(e),"iterate",Xt),e}const gl={__proto__:null,[Symbol.iterator](){return ts(this,Symbol.iterator,be)},concat(...e){return Tt(this).concat(...e.map(t=>K(t)?Tt(t):t))},entries(){return ts(this,"entries",e=>(e[1]=be(e[1]),e))},every(e,t){return Be(this,"every",e,t,void 0,arguments)},filter(e,t){return Be(this,"filter",e,t,n=>n.map(be),arguments)},find(e,t){return Be(this,"find",e,t,be,arguments)},findIndex(e,t){return Be(this,"findIndex",e,t,void 0,arguments)},findLast(e,t){return Be(this,"findLast",e,t,be,arguments)},findLastIndex(e,t){return Be(this,"findLastIndex",e,t,void 0,arguments)},forEach(e,t){return Be(this,"forEach",e,t,void 0,arguments)},includes(...e){return ns(this,"includes",e)},indexOf(...e){return ns(this,"indexOf",e)},join(e){return Tt(this).join(e)},lastIndexOf(...e){return ns(this,"lastIndexOf",e)},map(e,t){return Be(this,"map",e,t,void 0,arguments)},pop(){return Dt(this,"pop")},push(...e){return Dt(this,"push",e)},reduce(e,...t){return hr(this,"reduce",e,t)},reduceRight(e,...t){return hr(this,"reduceRight",e,t)},shift(){return Dt(this,"shift")},some(e,t){return Be(this,"some",e,t,void 0,arguments)},splice(...e){return 
Dt(this,"splice",e)},toReversed(){return Tt(this).toReversed()},toSorted(e){return Tt(this).toSorted(e)},toSpliced(...e){return Tt(this).toSpliced(...e)},unshift(...e){return Dt(this,"unshift",e)},values(){return ts(this,"values",be)}};function ts(e,t,n){const s=Vn(e),r=s[t]();return s!==e&&!Le(e)&&(r._next=r.next,r.next=()=>{const i=r._next();return i.value&&(i.value=n(i.value)),i}),r}const ml=Array.prototype;function Be(e,t,n,s,r,i){const o=Vn(e),l=o!==e&&!Le(e),c=o[t];if(c!==ml[t]){const d=c.apply(e,i);return l?be(d):d}let f=n;o!==e&&(l?f=function(d,m){return n.call(this,be(d),m,e)}:n.length>2&&(f=function(d,m){return n.call(this,d,m,e)}));const a=c.call(o,f,s);return l&&r?r(a):a}function hr(e,t,n,s){const r=Vn(e);let i=n;return r!==e&&(Le(e)?n.length>3&&(i=function(o,l,c){return n.call(this,o,l,c,e)}):i=function(o,l,c){return n.call(this,o,be(l),c,e)}),r[t](i,...s)}function ns(e,t,n){const s=z(e);ye(s,"iterate",Xt);const r=s[t](...n);return(r===-1||r===!1)&&Ys(n[0])?(n[0]=z(n[0]),s[t](...n)):r}function Dt(e,t,n=[]){ot(),Bs();const s=z(e)[t].apply(e,n);return Ks(),lt(),s}const yl=$s("__proto__,__v_isRef,__isVue"),xi=new Set(Object.getOwnPropertyNames(Symbol).filter(e=>e!=="arguments"&&e!=="caller").map(e=>Symbol[e]).filter(Je));function bl(e){Je(e)||(e=String(e));const t=z(this);return ye(t,"has",e),t.hasOwnProperty(e)}class Ci{constructor(t=!1,n=!1){this._isReadonly=t,this._isShallow=n}get(t,n,s){if(n==="__v_skip")return t.__v_skip;const r=this._isReadonly,i=this._isShallow;if(n==="__v_isReactive")return!r;if(n==="__v_isReadonly")return r;if(n==="__v_isShallow")return i;if(n==="__v_raw")return s===(r?i?Rl:Oi:i?Mi:Ri).get(t)||Object.getPrototypeOf(t)===Object.getPrototypeOf(s)?t:void 0;const o=K(t);if(!r){let c;if(o&&(c=gl[n]))return c;if(n==="hasOwnProperty")return bl}const l=Reflect.get(t,n,ue(t)?t:s);return(Je(n)?xi.has(n):yl(n))||(r||ye(t,"get",n),i)?l:ue(l)?o&&ks(n)?l:l.value:se(l)?r?kn(l):Lt(l):l}}class Ai extends 
Ci{constructor(t=!1){super(!1,t)}set(t,n,s,r){let i=t[n];if(!this._isShallow){const c=wt(i);if(!Le(s)&&!wt(s)&&(i=z(i),s=z(s)),!K(t)&&ue(i)&&!ue(s))return c?!1:(i.value=s,!0)}const o=K(t)&&ks(n)?Number(n)e,fn=e=>Reflect.getPrototypeOf(e);function Tl(e,t,n){return function(...s){const r=this.__v_raw,i=z(r),o=Mt(i),l=e==="entries"||e===Symbol.iterator&&o,c=e==="keys"&&o,f=r[e](...s),a=n?xs:t?Cs:be;return!t&&ye(i,"iterate",c?Es:pt),{next(){const{value:d,done:m}=f.next();return m?{value:d,done:m}:{value:l?[a(d[0]),a(d[1])]:a(d),done:m}},[Symbol.iterator](){return this}}}}function un(e){return function(...t){return e==="delete"?!1:e==="clear"?void 0:this}}function El(e,t){const n={get(r){const i=this.__v_raw,o=z(i),l=z(r);e||(nt(r,l)&&ye(o,"get",r),ye(o,"get",l));const{has:c}=fn(o),f=t?xs:e?Cs:be;if(c.call(o,r))return f(i.get(r));if(c.call(o,l))return f(i.get(l));i!==o&&i.get(r)},get size(){const r=this.__v_raw;return!e&&ye(z(r),"iterate",pt),Reflect.get(r,"size",r)},has(r){const i=this.__v_raw,o=z(i),l=z(r);return e||(nt(r,l)&&ye(o,"has",r),ye(o,"has",l)),r===l?i.has(r):i.has(r)||i.has(l)},forEach(r,i){const o=this,l=o.__v_raw,c=z(l),f=t?xs:e?Cs:be;return!e&&ye(c,"iterate",pt),l.forEach((a,d)=>r.call(i,f(a),f(d),o))}};return pe(n,e?{add:un("add"),set:un("set"),delete:un("delete"),clear:un("clear")}:{add(r){!t&&!Le(r)&&!wt(r)&&(r=z(r));const i=z(this);return fn(i).has.call(i,r)||(i.add(r),Ge(i,"add",r,r)),this},set(r,i){!t&&!Le(i)&&!wt(i)&&(i=z(i));const o=z(this),{has:l,get:c}=fn(o);let f=l.call(o,r);f||(r=z(r),f=l.call(o,r));const a=c.call(o,r);return o.set(r,i),f?nt(i,a)&&Ge(o,"set",r,i):Ge(o,"add",r,i),this},delete(r){const i=z(this),{has:o,get:l}=fn(i);let c=o.call(i,r);c||(r=z(r),c=o.call(i,r)),l&&l.call(i,r);const f=i.delete(r);return c&&Ge(i,"delete",r,void 0),f},clear(){const r=z(this),i=r.size!==0,o=r.clear();return i&&Ge(r,"clear",void 0,void 0),o}}),["keys","values","entries",Symbol.iterator].forEach(r=>{n[r]=Tl(r,e,t)}),n}function Gs(e,t){const 
n=El(e,t);return(s,r,i)=>r==="__v_isReactive"?!e:r==="__v_isReadonly"?e:r==="__v_raw"?s:Reflect.get(Q(n,r)&&r in s?n:s,r,i)}const xl={get:Gs(!1,!1)},Cl={get:Gs(!1,!0)},Al={get:Gs(!0,!1)};const Ri=new WeakMap,Mi=new WeakMap,Oi=new WeakMap,Rl=new WeakMap;function Ml(e){switch(e){case"Object":case"Array":return 1;case"Map":case"Set":case"WeakMap":case"WeakSet":return 2;default:return 0}}function Ol(e){return e.__v_skip||!Object.isExtensible(e)?0:Ml(Zo(e))}function Lt(e){return wt(e)?e:Xs(e,!1,vl,xl,Ri)}function Pl(e){return Xs(e,!1,Sl,Cl,Mi)}function kn(e){return Xs(e,!0,wl,Al,Oi)}function Xs(e,t,n,s,r){if(!se(e)||e.__v_raw&&!(t&&e.__v_isReactive))return e;const i=r.get(e);if(i)return i;const o=Ol(e);if(o===0)return e;const l=new Proxy(e,o===2?s:n);return r.set(e,l),l}function gt(e){return wt(e)?gt(e.__v_raw):!!(e&&e.__v_isReactive)}function wt(e){return!!(e&&e.__v_isReadonly)}function Le(e){return!!(e&&e.__v_isShallow)}function Ys(e){return e?!!e.__v_raw:!1}function z(e){const t=e&&e.__v_raw;return t?z(t):e}function Tn(e){return!Q(e,"__v_skip")&&Object.isExtensible(e)&&di(e,"__v_skip",!0),e}const be=e=>se(e)?Lt(e):e,Cs=e=>se(e)?kn(e):e;function ue(e){return e?e.__v_isRef===!0:!1}function le(e){return Pi(e,!1)}function Un(e){return Pi(e,!0)}function Pi(e,t){return ue(e)?e:new Ll(e,t)}class Ll{constructor(t,n){this.dep=new jn,this.__v_isRef=!0,this.__v_isShallow=!1,this._rawValue=n?t:z(t),this._value=n?t:be(t),this.__v_isShallow=n}get value(){return this.dep.track(),this._value}set value(t){const n=this._rawValue,s=this.__v_isShallow||Le(t)||wt(t);t=s?t:z(t),nt(t,n)&&(this._rawValue=t,this._value=s?t:be(t),this.dep.trigger())}}function Js(e){return ue(e)?e.value:e}function ce(e){return q(e)?e():Js(e)}const Il={get:(e,t,n)=>t==="__v_raw"?e:Js(Reflect.get(e,t,n)),set:(e,t,n,s)=>{const r=e[t];return ue(r)&&!ue(n)?(r.value=n,!0):Reflect.set(e,t,n,s)}};function Li(e){return gt(e)?e:new Proxy(e,Il)}class Nl{constructor(t){this.__v_isRef=!0,this._value=void 0;const 
n=this.dep=new jn,{get:s,set:r}=t(n.track.bind(n),n.trigger.bind(n));this._get=s,this._set=r}get value(){return this._value=this._get()}set value(t){this._set(t)}}function Fl(e){return new Nl(e)}class Hl{constructor(t,n,s){this._object=t,this._key=n,this._defaultValue=s,this.__v_isRef=!0,this._value=void 0}get value(){const t=this._object[this._key];return this._value=t===void 0?this._defaultValue:t}set value(t){this._object[this._key]=t}get dep(){return pl(z(this._object),this._key)}}class Dl{constructor(t){this._getter=t,this.__v_isRef=!0,this.__v_isReadonly=!0,this._value=void 0}get value(){return this._value=this._getter()}}function $l(e,t,n){return ue(e)?e:q(e)?new Dl(e):se(e)&&arguments.length>1?jl(e,t,n):le(e)}function jl(e,t,n){const s=e[t];return ue(s)?s:new Hl(e,t,n)}class Vl{constructor(t,n,s){this.fn=t,this.setter=n,this._value=void 0,this.dep=new jn(this),this.__v_isRef=!0,this.deps=void 0,this.depsTail=void 0,this.flags=16,this.globalVersion=Gt-1,this.next=void 0,this.effect=this,this.__v_isReadonly=!n,this.isSSR=s}notify(){if(this.flags|=16,!(this.flags&8)&&ne!==this)return _i(this,!0),!0}get value(){const t=this.dep.track();return Si(this),t&&(t.version=this.dep.version),this._value}set value(t){this.setter&&this.setter(t)}}function kl(e,t,n=!1){let s,r;return q(e)?s=e:(s=e.get,r=e.set),new Vl(s,r,n)}const dn={},Mn=new WeakMap;let dt;function Ul(e,t=!1,n=dt){if(n){let s=Mn.get(n);s||Mn.set(n,s=[]),s.push(e)}}function Wl(e,t,n=ee){const{immediate:s,deep:r,once:i,scheduler:o,augmentJob:l,call:c}=n,f=g=>r?g:Le(g)||r===!1||r===0?Xe(g,1):Xe(g);let a,d,m,y,v=!1,_=!1;if(ue(e)?(d=()=>e.value,v=Le(e)):gt(e)?(d=()=>f(e),v=!0):K(e)?(_=!0,v=e.some(g=>gt(g)||Le(g)),d=()=>e.map(g=>{if(ue(g))return g.value;if(gt(g))return f(g);if(q(g))return c?c(g,2):g()})):q(e)?t?d=c?()=>c(e,2):e:d=()=>{if(m){ot();try{m()}finally{lt()}}const g=dt;dt=a;try{return c?c(e,3,[y]):e(y)}finally{dt=g}}:d=We,t&&r){const g=d,O=r===!0?1/0:r;d=()=>Xe(g(),O)}const 
k=mi(),P=()=>{a.stop(),k&&k.active&&Vs(k.effects,a)};if(i&&t){const g=t;t=(...O)=>{g(...O),P()}}let D=_?new Array(e.length).fill(dn):dn;const p=g=>{if(!(!(a.flags&1)||!a.dirty&&!g))if(t){const O=a.run();if(r||v||(_?O.some(($,R)=>nt($,D[R])):nt(O,D))){m&&m();const $=dt;dt=a;try{const R=[O,D===dn?void 0:_&&D[0]===dn?[]:D,y];c?c(t,3,R):t(...R),D=O}finally{dt=$}}}else a.run()};return l&&l(p),a=new yi(d),a.scheduler=o?()=>o(p,!1):p,y=g=>Ul(g,!1,a),m=a.onStop=()=>{const g=Mn.get(a);if(g){if(c)c(g,4);else for(const O of g)O();Mn.delete(a)}},t?s?p(!0):D=a.run():o?o(p.bind(null,!0),!0):a.run(),P.pause=a.pause.bind(a),P.resume=a.resume.bind(a),P.stop=P,P}function Xe(e,t=1/0,n){if(t<=0||!se(e)||e.__v_skip||(n=n||new Set,n.has(e)))return e;if(n.add(e),t--,ue(e))Xe(e.value,t,n);else if(K(e))for(let s=0;s{Xe(s,t,n)});else if(ui(e)){for(const s in e)Xe(e[s],t,n);for(const s of Object.getOwnPropertySymbols(e))Object.prototype.propertyIsEnumerable.call(e,s)&&Xe(e[s],t,n)}return e}/** +* @vue/runtime-core v3.5.13 +* (c) 2018-present Yuxi (Evan) You and Vue contributors +* @license MIT +**/function nn(e,t,n,s){try{return s?e(...s):e()}catch(r){sn(r,t,n)}}function De(e,t,n,s){if(q(e)){const r=nn(e,t,n,s);return r&&ai(r)&&r.catch(i=>{sn(i,t,n)}),r}if(K(e)){const r=[];for(let i=0;i>>1,r=Te[s],i=Yt(r);i=Yt(n)?Te.push(e):Te.splice(Kl(t),0,e),e.flags|=1,Ni()}}function Ni(){On||(On=Ii.then(Fi))}function ql(e){K(e)?Pt.push(...e):Ze&&e.id===-1?Ze.splice(xt+1,0,e):e.flags&1||(Pt.push(e),e.flags|=1),Ni()}function pr(e,t,n=ke+1){for(;nYt(n)-Yt(s));if(Pt.length=0,Ze){Ze.push(...t);return}for(Ze=t,xt=0;xte.id==null?e.flags&2?-1:1/0:e.id;function Fi(e){try{for(ke=0;ke{s._d&&Mr(-1);const i=Ln(t);let o;try{o=e(...r)}finally{Ln(i),s._d&&Mr(1)}return o};return s._n=!0,s._c=!0,s._d=!0,s}function Pf(e,t){if(he===null)return e;const n=Xn(he),s=e.dirs||(e.dirs=[]);for(let r=0;re.__isTeleport,Ut=e=>e&&(e.disabled||e.disabled===""),gr=e=>e&&(e.defer||e.defer===""),mr=e=>typeof SVGElement<"u"&&e instanceof 
SVGElement,yr=e=>typeof MathMLElement=="function"&&e instanceof MathMLElement,As=(e,t)=>{const n=e&&e.to;return oe(n)?t?t(n):null:n},ji={name:"Teleport",__isTeleport:!0,process(e,t,n,s,r,i,o,l,c,f){const{mc:a,pc:d,pbc:m,o:{insert:y,querySelector:v,createText:_,createComment:k}}=f,P=Ut(t.props);let{shapeFlag:D,children:p,dynamicChildren:g}=t;if(e==null){const O=t.el=_(""),$=t.anchor=_("");y(O,n,s),y($,n,s);const R=(T,M)=>{D&16&&(r&&r.isCE&&(r.ce._teleportTarget=T),a(p,T,M,r,i,o,l,c))},j=()=>{const T=t.target=As(t.props,v),M=Vi(T,t,_,y);T&&(o!=="svg"&&mr(T)?o="svg":o!=="mathml"&&yr(T)&&(o="mathml"),P||(R(T,M),En(t,!1)))};P&&(R(n,$),En(t,!0)),gr(t.props)?we(()=>{j(),t.el.__isMounted=!0},i):j()}else{if(gr(t.props)&&!e.el.__isMounted){we(()=>{ji.process(e,t,n,s,r,i,o,l,c,f),delete e.el.__isMounted},i);return}t.el=e.el,t.targetStart=e.targetStart;const O=t.anchor=e.anchor,$=t.target=e.target,R=t.targetAnchor=e.targetAnchor,j=Ut(e.props),T=j?n:$,M=j?O:R;if(o==="svg"||mr($)?o="svg":(o==="mathml"||yr($))&&(o="mathml"),g?(m(e.dynamicChildren,g,T,r,i,o,l),tr(e,t,!0)):c||d(e,t,T,M,r,i,o,l,!1),P)j?t.props&&e.props&&t.props.to!==e.props.to&&(t.props.to=e.props.to):hn(t,n,O,f,1);else if((t.props&&t.props.to)!==(e.props&&e.props.to)){const A=t.target=As(t.props,v);A&&hn(t,A,null,f,0)}else j&&hn(t,$,R,f,1);En(t,P)}},remove(e,t,n,{um:s,o:{remove:r}},i){const{shapeFlag:o,children:l,anchor:c,targetStart:f,targetAnchor:a,target:d,props:m}=e;if(d&&(r(f),r(a)),i&&r(c),o&16){const y=i||!Ut(m);for(let v=0;v{e.isMounted=!0}),Gi(()=>{e.isUnmounting=!0}),e}const Me=[Function,Array],ki={mode:String,appear:Boolean,persisted:Boolean,onBeforeEnter:Me,onEnter:Me,onAfterEnter:Me,onEnterCancelled:Me,onBeforeLeave:Me,onLeave:Me,onAfterLeave:Me,onLeaveCancelled:Me,onBeforeAppear:Me,onAppear:Me,onAfterAppear:Me,onAppearCancelled:Me},Ui=e=>{const t=e.subTree;return t.component?Ui(t.component):t},Jl={name:"BaseTransition",props:ki,setup(e,{slots:t}){const n=on(),s=Yl();return()=>{const 
r=t.default&&Ki(t.default(),!0);if(!r||!r.length)return;const i=Wi(r),o=z(e),{mode:l}=o;if(s.isLeaving)return ss(i);const c=br(i);if(!c)return ss(i);let f=Rs(c,o,s,n,d=>f=d);c.type!==_e&&Jt(c,f);let a=n.subTree&&br(n.subTree);if(a&&a.type!==_e&&!ht(c,a)&&Ui(n).type!==_e){let d=Rs(a,o,s,n);if(Jt(a,d),l==="out-in"&&c.type!==_e)return s.isLeaving=!0,d.afterLeave=()=>{s.isLeaving=!1,n.job.flags&8||n.update(),delete d.afterLeave,a=void 0},ss(i);l==="in-out"&&c.type!==_e?d.delayLeave=(m,y,v)=>{const _=Bi(s,a);_[String(a.key)]=a,m[et]=()=>{y(),m[et]=void 0,delete f.delayedLeave,a=void 0},f.delayedLeave=()=>{v(),delete f.delayedLeave,a=void 0}}:a=void 0}else a&&(a=void 0);return i}}};function Wi(e){let t=e[0];if(e.length>1){for(const n of e)if(n.type!==_e){t=n;break}}return t}const zl=Jl;function Bi(e,t){const{leavingVNodes:n}=e;let s=n.get(t.type);return s||(s=Object.create(null),n.set(t.type,s)),s}function Rs(e,t,n,s,r){const{appear:i,mode:o,persisted:l=!1,onBeforeEnter:c,onEnter:f,onAfterEnter:a,onEnterCancelled:d,onBeforeLeave:m,onLeave:y,onAfterLeave:v,onLeaveCancelled:_,onBeforeAppear:k,onAppear:P,onAfterAppear:D,onAppearCancelled:p}=t,g=String(e.key),O=Bi(n,e),$=(T,M)=>{T&&De(T,s,9,M)},R=(T,M)=>{const A=M[1];$(T,M),K(T)?T.every(w=>w.length<=1)&&A():T.length<=1&&A()},j={mode:o,persisted:l,beforeEnter(T){let M=c;if(!n.isMounted)if(i)M=k||c;else return;T[et]&&T[et](!0);const A=O[g];A&&ht(e,A)&&A.el[et]&&A.el[et](),$(M,[T])},enter(T){let M=f,A=a,w=d;if(!n.isMounted)if(i)M=P||f,A=D||a,w=p||d;else return;let F=!1;const Y=T[pn]=ie=>{F||(F=!0,ie?$(w,[T]):$(A,[T]),j.delayedLeave&&j.delayedLeave(),T[pn]=void 0)};M?R(M,[T,Y]):Y()},leave(T,M){const A=String(e.key);if(T[pn]&&T[pn](!0),n.isUnmounting)return M();$(m,[T]);let w=!1;const F=T[et]=Y=>{w||(w=!0,M(),Y?$(_,[T]):$(v,[T]),T[et]=void 0,O[A]===e&&delete O[A])};O[A]=e,y?R(y,[T,F]):F()},clone(T){const M=Rs(T,t,n,s,r);return r&&r(M),M}};return j}function ss(e){if(rn(e))return e=st(e),e.children=null,e}function 
br(e){if(!rn(e))return $i(e.type)&&e.children?Wi(e.children):e;const{shapeFlag:t,children:n}=e;if(n){if(t&16)return n[0];if(t&32&&q(n.default))return n.default()}}function Jt(e,t){e.shapeFlag&6&&e.component?(e.transition=t,Jt(e.component.subTree,t)):e.shapeFlag&128?(e.ssContent.transition=t.clone(e.ssContent),e.ssFallback.transition=t.clone(e.ssFallback)):e.transition=t}function Ki(e,t=!1,n){let s=[],r=0;for(let i=0;i1)for(let i=0;izt(v,t&&(K(t)?t[_]:t),n,s,r));return}if(mt(s)&&!r){s.shapeFlag&512&&s.type.__asyncResolved&&s.component.subTree.component&&zt(e,t,n,s.component.subTree);return}const i=s.shapeFlag&4?Xn(s.component):s.el,o=r?null:i,{i:l,r:c}=e,f=t&&t.r,a=l.refs===ee?l.refs={}:l.refs,d=l.setupState,m=z(d),y=d===ee?()=>!1:v=>Q(m,v);if(f!=null&&f!==c&&(oe(f)?(a[f]=null,y(f)&&(d[f]=null)):ue(f)&&(f.value=null)),q(c))nn(c,l,12,[o,a]);else{const v=oe(c),_=ue(c);if(v||_){const k=()=>{if(e.f){const P=v?y(c)?d[c]:a[c]:c.value;r?K(P)&&Vs(P,i):K(P)?P.includes(i)||P.push(i):v?(a[c]=[i],y(c)&&(d[c]=a[c])):(c.value=[i],e.k&&(a[e.k]=c.value))}else v?(a[c]=o,y(c)&&(d[c]=o)):_&&(c.value=o,e.k&&(a[e.k]=o))};o?(k.id=-1,we(k,n)):k()}}}let _r=!1;const Et=()=>{_r||(console.error("Hydration completed but contains mismatches."),_r=!0)},Ql=e=>e.namespaceURI.includes("svg")&&e.tagName!=="foreignObject",Zl=e=>e.namespaceURI.includes("MathML"),gn=e=>{if(e.nodeType===1){if(Ql(e))return"svg";if(Zl(e))return"mathml"}},At=e=>e.nodeType===8;function ec(e){const{mt:t,p:n,o:{patchProp:s,createText:r,nextSibling:i,parentNode:o,remove:l,insert:c,createComment:f}}=e,a=(p,g)=>{if(!g.hasChildNodes()){n(null,p,g),Pn(),g._vnode=p;return}d(g.firstChild,p,null,null,null),Pn(),g._vnode=p},d=(p,g,O,$,R,j=!1)=>{j=j||!!g.dynamicChildren;const T=At(p)&&p.data==="[",M=()=>_(p,g,O,$,R,T),{type:A,ref:w,shapeFlag:F,patchFlag:Y}=g;let ie=p.nodeType;g.el=p,Y===-2&&(j=!1,g.dynamicChildren=null);let U=null;switch(A){case 
_t:ie!==3?g.children===""?(c(g.el=r(""),o(p),p),U=p):U=M():(p.data!==g.children&&(Et(),p.data=g.children),U=i(p));break;case _e:D(p)?(U=i(p),P(g.el=p.content.firstChild,p,O)):ie!==8||T?U=M():U=i(p);break;case Bt:if(T&&(p=i(p),ie=p.nodeType),ie===1||ie===3){U=p;const X=!g.children.length;for(let V=0;V{j=j||!!g.dynamicChildren;const{type:T,props:M,patchFlag:A,shapeFlag:w,dirs:F,transition:Y}=g,ie=T==="input"||T==="option";if(ie||A!==-1){F&&Ue(g,null,O,"created");let U=!1;if(D(p)){U=ho(null,Y)&&O&&O.vnode.props&&O.vnode.props.appear;const V=p.content.firstChild;U&&Y.beforeEnter(V),P(V,p,O),g.el=p=V}if(w&16&&!(M&&(M.innerHTML||M.textContent))){let V=y(p.firstChild,g,p,O,$,R,j);for(;V;){mn(p,1)||Et();const fe=V;V=V.nextSibling,l(fe)}}else if(w&8){let V=g.children;V[0]===` +`&&(p.tagName==="PRE"||p.tagName==="TEXTAREA")&&(V=V.slice(1)),p.textContent!==V&&(mn(p,0)||Et(),p.textContent=g.children)}if(M){if(ie||!j||A&48){const V=p.tagName.includes("-");for(const fe in M)(ie&&(fe.endsWith("value")||fe==="indeterminate")||tn(fe)&&!Ot(fe)||fe[0]==="."||V)&&s(p,fe,null,M[fe],void 0,O)}else if(M.onClick)s(p,"onClick",null,M.onClick,void 0,O);else if(A&4&>(M.style))for(const V in M.style)M.style[V]}let X;(X=M&&M.onVnodeBeforeMount)&&Oe(X,O,g),F&&Ue(g,null,O,"beforeMount"),((X=M&&M.onVnodeMounted)||F||U)&&_o(()=>{X&&Oe(X,O,g),U&&Y.enter(p),F&&Ue(g,null,O,"mounted")},$)}return p.nextSibling},y=(p,g,O,$,R,j,T)=>{T=T||!!g.dynamicChildren;const M=g.children,A=M.length;for(let w=0;w{const{slotScopeIds:T}=g;T&&(R=R?R.concat(T):T);const M=o(p),A=y(i(p),g,M,O,$,R,j);return A&&At(A)&&A.data==="]"?i(g.anchor=A):(Et(),c(g.anchor=f("]"),M,A),A)},_=(p,g,O,$,R,j)=>{if(mn(p.parentElement,1)||Et(),g.el=null,j){const A=k(p);for(;;){const w=i(p);if(w&&w!==A)l(w);else break}}const T=i(p),M=o(p);return l(p),n(null,g,M,T,O,$,gn(M),R),O&&(O.vnode.el=g.el,yo(O,g.el)),T},k=(p,g="[",O="]")=>{let $=0;for(;p;)if(p=i(p),p&&At(p)&&(p.data===g&&$++,p.data===O)){if($===0)return i(p);$--}return 
p},P=(p,g,O)=>{const $=g.parentNode;$&&$.replaceChild(p,g);let R=O;for(;R;)R.vnode.el===g&&(R.vnode.el=R.subTree.el=p),R=R.parent},D=p=>p.nodeType===1&&p.tagName==="TEMPLATE";return[a,d]}const vr="data-allow-mismatch",tc={0:"text",1:"children",2:"class",3:"style",4:"attribute"};function mn(e,t){if(t===0||t===1)for(;e&&!e.hasAttribute(vr);)e=e.parentElement;const n=e&&e.getAttribute(vr);if(n==null)return!1;if(n==="")return!0;{const s=n.split(",");return t===0&&s.includes("children")?!0:n.split(",").includes(tc[t])}}$n().requestIdleCallback;$n().cancelIdleCallback;function nc(e,t){if(At(e)&&e.data==="["){let n=1,s=e.nextSibling;for(;s;){if(s.nodeType===1){if(t(s)===!1)break}else if(At(s))if(s.data==="]"){if(--n===0)break}else s.data==="["&&n++;s=s.nextSibling}}else t(e)}const mt=e=>!!e.type.__asyncLoader;/*! #__NO_SIDE_EFFECTS__ */function If(e){q(e)&&(e={loader:e});const{loader:t,loadingComponent:n,errorComponent:s,delay:r=200,hydrate:i,timeout:o,suspensible:l=!0,onError:c}=e;let f=null,a,d=0;const m=()=>(d++,f=null,y()),y=()=>{let v;return f||(v=f=t().catch(_=>{if(_=_ instanceof Error?_:new Error(String(_)),c)return new Promise((k,P)=>{c(_,()=>k(m()),()=>P(_),d+1)});throw _}).then(_=>v!==f&&f?f:(_&&(_.__esModule||_[Symbol.toStringTag]==="Module")&&(_=_.default),a=_,_)))};return Qs({name:"AsyncComponentWrapper",__asyncLoader:y,__asyncHydrate(v,_,k){const P=i?()=>{const D=i(k,p=>nc(v,p));D&&(_.bum||(_.bum=[])).push(D)}:k;a?P():y().then(()=>!_.isUnmounted&&P())},get __asyncResolved(){return a},setup(){const v=de;if(Zs(v),a)return()=>rs(a,v);const _=p=>{f=null,sn(p,v,13,!s)};if(l&&v.suspense||It)return y().then(p=>()=>rs(p,v)).catch(p=>(_(p),()=>s?ae(s,{error:p}):null));const k=le(!1),P=le(),D=le(!!r);return r&&setTimeout(()=>{D.value=!1},r),o!=null&&setTimeout(()=>{if(!k.value&&!P.value){const p=new Error(`Async component timed out after 
${o}ms.`);_(p),P.value=p}},o),y().then(()=>{k.value=!0,v.parent&&rn(v.parent.vnode)&&v.parent.update()}).catch(p=>{_(p),P.value=p}),()=>{if(k.value&&a)return rs(a,v);if(P.value&&s)return ae(s,{error:P.value});if(n&&!D.value)return ae(n)}}})}function rs(e,t){const{ref:n,props:s,children:r,ce:i}=t.vnode,o=ae(e,s,r);return o.ref=n,o.ce=i,delete t.vnode.ce,o}const rn=e=>e.type.__isKeepAlive;function sc(e,t){qi(e,"a",t)}function rc(e,t){qi(e,"da",t)}function qi(e,t,n=de){const s=e.__wdc||(e.__wdc=()=>{let r=n;for(;r;){if(r.isDeactivated)return;r=r.parent}return e()});if(Bn(t,s,n),n){let r=n.parent;for(;r&&r.parent;)rn(r.parent.vnode)&&ic(s,t,n,r),r=r.parent}}function ic(e,t,n,s){const r=Bn(t,e,s,!0);Kn(()=>{Vs(s[t],r)},n)}function Bn(e,t,n=de,s=!1){if(n){const r=n[e]||(n[e]=[]),i=t.__weh||(t.__weh=(...o)=>{ot();const l=ln(n),c=De(t,n,e,o);return l(),lt(),c});return s?r.unshift(i):r.push(i),i}}const ze=e=>(t,n=de)=>{(!It||e==="sp")&&Bn(e,(...s)=>t(...s),n)},oc=ze("bm"),Nt=ze("m"),lc=ze("bu"),cc=ze("u"),Gi=ze("bum"),Kn=ze("um"),ac=ze("sp"),fc=ze("rtg"),uc=ze("rtc");function dc(e,t=de){Bn("ec",e,t)}const Xi="components";function Nf(e,t){return Ji(Xi,e,!0,t)||e}const Yi=Symbol.for("v-ndc");function Ff(e){return oe(e)?Ji(Xi,e,!1)||e:e||Yi}function Ji(e,t,n=!0,s=!1){const r=he||de;if(r){const i=r.type;{const l=Jc(i,!1);if(l&&(l===t||l===Ne(t)||l===Dn(Ne(t))))return i}const o=wr(r[e]||i[e],t)||wr(r.appContext[e],t);return!o&&s?i:o}}function wr(e,t){return e&&(e[t]||e[Ne(t)]||e[Dn(Ne(t))])}function Hf(e,t,n,s){let r;const i=n,o=K(e);if(o||oe(e)){const l=o&>(e);let c=!1;l&&(c=!Le(e),e=Vn(e)),r=new Array(e.length);for(let f=0,a=e.length;ft(l,c,void 0,i));else{const l=Object.keys(e);r=new Array(l.length);for(let c=0,f=l.length;cZt(t)?!(t.type===_e||t.type===Ee&&!zi(t.children)):!0)?e:null}function $f(e,t){const n={};for(const s in e)n[/[A-Z]/.test(s)?`on:${s}`:wn(s)]=e[s];return n}const 
Ms=e=>e?Eo(e)?Xn(e):Ms(e.parent):null,Wt=pe(Object.create(null),{$:e=>e,$el:e=>e.vnode.el,$data:e=>e.data,$props:e=>e.props,$attrs:e=>e.attrs,$slots:e=>e.slots,$refs:e=>e.refs,$parent:e=>Ms(e.parent),$root:e=>Ms(e.root),$host:e=>e.ce,$emit:e=>e.emit,$options:e=>Zi(e),$forceUpdate:e=>e.f||(e.f=()=>{zs(e.update)}),$nextTick:e=>e.n||(e.n=Wn.bind(e.proxy)),$watch:e=>Nc.bind(e)}),is=(e,t)=>e!==ee&&!e.__isScriptSetup&&Q(e,t),hc={get({_:e},t){if(t==="__v_skip")return!0;const{ctx:n,setupState:s,data:r,props:i,accessCache:o,type:l,appContext:c}=e;let f;if(t[0]!=="$"){const y=o[t];if(y!==void 0)switch(y){case 1:return s[t];case 2:return r[t];case 4:return n[t];case 3:return i[t]}else{if(is(s,t))return o[t]=1,s[t];if(r!==ee&&Q(r,t))return o[t]=2,r[t];if((f=e.propsOptions[0])&&Q(f,t))return o[t]=3,i[t];if(n!==ee&&Q(n,t))return o[t]=4,n[t];Os&&(o[t]=0)}}const a=Wt[t];let d,m;if(a)return t==="$attrs"&&ye(e.attrs,"get",""),a(e);if((d=l.__cssModules)&&(d=d[t]))return d;if(n!==ee&&Q(n,t))return o[t]=4,n[t];if(m=c.config.globalProperties,Q(m,t))return m[t]},set({_:e},t,n){const{data:s,setupState:r,ctx:i}=e;return is(r,t)?(r[t]=n,!0):s!==ee&&Q(s,t)?(s[t]=n,!0):Q(e.props,t)||t[0]==="$"&&t.slice(1)in e?!1:(i[t]=n,!0)},has({_:{data:e,setupState:t,accessCache:n,ctx:s,appContext:r,propsOptions:i}},o){let l;return!!n[o]||e!==ee&&Q(e,o)||is(t,o)||(l=i[0])&&Q(l,o)||Q(s,o)||Q(Wt,o)||Q(r.config.globalProperties,o)},defineProperty(e,t,n){return n.get!=null?e._.accessCache[t]=0:Q(n,"value")&&this.set(e,t,n.value,null),Reflect.defineProperty(e,t,n)}};function jf(){return pc().slots}function pc(){const e=on();return e.setupContext||(e.setupContext=Co(e))}function Sr(e){return K(e)?e.reduce((t,n)=>(t[n]=null,t),{}):e}let Os=!0;function gc(e){const 
t=Zi(e),n=e.proxy,s=e.ctx;Os=!1,t.beforeCreate&&Tr(t.beforeCreate,e,"bc");const{data:r,computed:i,methods:o,watch:l,provide:c,inject:f,created:a,beforeMount:d,mounted:m,beforeUpdate:y,updated:v,activated:_,deactivated:k,beforeDestroy:P,beforeUnmount:D,destroyed:p,unmounted:g,render:O,renderTracked:$,renderTriggered:R,errorCaptured:j,serverPrefetch:T,expose:M,inheritAttrs:A,components:w,directives:F,filters:Y}=t;if(f&&mc(f,s,null),o)for(const X in o){const V=o[X];q(V)&&(s[X]=V.bind(n))}if(r){const X=r.call(n,n);se(X)&&(e.data=Lt(X))}if(Os=!0,i)for(const X in i){const V=i[X],fe=q(V)?V.bind(n,n):q(V.get)?V.get.bind(n,n):We,cn=!q(V)&&q(V.set)?V.set.bind(n):We,ct=re({get:fe,set:cn});Object.defineProperty(s,X,{enumerable:!0,configurable:!0,get:()=>ct.value,set:je=>ct.value=je})}if(l)for(const X in l)Qi(l[X],s,n,X);if(c){const X=q(c)?c.call(n):c;Reflect.ownKeys(X).forEach(V=>{Sc(V,X[V])})}a&&Tr(a,e,"c");function U(X,V){K(V)?V.forEach(fe=>X(fe.bind(n))):V&&X(V.bind(n))}if(U(oc,d),U(Nt,m),U(lc,y),U(cc,v),U(sc,_),U(rc,k),U(dc,j),U(uc,$),U(fc,R),U(Gi,D),U(Kn,g),U(ac,T),K(M))if(M.length){const X=e.exposed||(e.exposed={});M.forEach(V=>{Object.defineProperty(X,V,{get:()=>n[V],set:fe=>n[V]=fe})})}else e.exposed||(e.exposed={});O&&e.render===We&&(e.render=O),A!=null&&(e.inheritAttrs=A),w&&(e.components=w),F&&(e.directives=F),T&&Zs(e)}function mc(e,t,n=We){K(e)&&(e=Ps(e));for(const s in e){const r=e[s];let i;se(r)?"default"in r?i=bt(r.from||s,r.default,!0):i=bt(r.from||s):i=bt(r),ue(i)?Object.defineProperty(t,s,{enumerable:!0,configurable:!0,get:()=>i.value,set:o=>i.value=o}):t[s]=i}}function Tr(e,t,n){De(K(e)?e.map(s=>s.bind(t.proxy)):e.bind(t.proxy),t,n)}function Qi(e,t,n,s){let r=s.includes(".")?go(n,s):()=>n[s];if(oe(e)){const i=t[e];q(i)&&Ie(r,i)}else if(q(e))Ie(r,e.bind(n));else if(se(e))if(K(e))e.forEach(i=>Qi(i,t,n,s));else{const i=q(e.handler)?e.handler.bind(n):t[e.handler];q(i)&&Ie(r,i,e)}}function Zi(e){const 
t=e.type,{mixins:n,extends:s}=t,{mixins:r,optionsCache:i,config:{optionMergeStrategies:o}}=e.appContext,l=i.get(t);let c;return l?c=l:!r.length&&!n&&!s?c=t:(c={},r.length&&r.forEach(f=>In(c,f,o,!0)),In(c,t,o)),se(t)&&i.set(t,c),c}function In(e,t,n,s=!1){const{mixins:r,extends:i}=t;i&&In(e,i,n,!0),r&&r.forEach(o=>In(e,o,n,!0));for(const o in t)if(!(s&&o==="expose")){const l=yc[o]||n&&n[o];e[o]=l?l(e[o],t[o]):t[o]}return e}const yc={data:Er,props:xr,emits:xr,methods:jt,computed:jt,beforeCreate:ve,created:ve,beforeMount:ve,mounted:ve,beforeUpdate:ve,updated:ve,beforeDestroy:ve,beforeUnmount:ve,destroyed:ve,unmounted:ve,activated:ve,deactivated:ve,errorCaptured:ve,serverPrefetch:ve,components:jt,directives:jt,watch:_c,provide:Er,inject:bc};function Er(e,t){return t?e?function(){return pe(q(e)?e.call(this,this):e,q(t)?t.call(this,this):t)}:t:e}function bc(e,t){return jt(Ps(e),Ps(t))}function Ps(e){if(K(e)){const t={};for(let n=0;n1)return n&&q(t)?t.call(s&&s.proxy):t}}function to(){return!!(de||he||yt)}const no={},so=()=>Object.create(no),ro=e=>Object.getPrototypeOf(e)===no;function Tc(e,t,n,s=!1){const r={},i=so();e.propsDefaults=Object.create(null),io(e,t,r,i);for(const o in e.propsOptions[0])o in r||(r[o]=void 0);n?e.props=s?r:Pl(r):e.type.props?e.props=r:e.props=i,e.attrs=i}function Ec(e,t,n,s){const{props:r,attrs:i,vnode:{patchFlag:o}}=e,l=z(r),[c]=e.propsOptions;let f=!1;if((s||o>0)&&!(o&16)){if(o&8){const a=e.vnode.dynamicProps;for(let d=0;d{c=!0;const[m,y]=oo(d,t,!0);pe(o,m),y&&l.push(...y)};!n&&t.mixins.length&&t.mixins.forEach(a),e.extends&&a(e.extends),e.mixins&&e.mixins.forEach(a)}if(!i&&!c)return se(e)&&s.set(e,Rt),Rt;if(K(i))for(let a=0;ae[0]==="_"||e==="$stable",er=e=>K(e)?e.map(Pe):[Pe(e)],Cc=(e,t,n)=>{if(t._n)return t;const s=Gl((...r)=>er(t(...r)),n);return s._c=!1,s},co=(e,t,n)=>{const s=e._ctx;for(const r in e){if(lo(r))continue;const i=e[r];if(q(i))t[r]=Cc(r,i,s);else if(i!=null){const o=er(i);t[r]=()=>o}}},ao=(e,t)=>{const 
n=er(t);e.slots.default=()=>n},fo=(e,t,n)=>{for(const s in t)(n||s!=="_")&&(e[s]=t[s])},Ac=(e,t,n)=>{const s=e.slots=so();if(e.vnode.shapeFlag&32){const r=t._;r?(fo(s,t,n),n&&di(s,"_",r,!0)):co(t,s)}else t&&ao(e,t)},Rc=(e,t,n)=>{const{vnode:s,slots:r}=e;let i=!0,o=ee;if(s.shapeFlag&32){const l=t._;l?n&&l===1?i=!1:fo(r,t,n):(i=!t.$stable,co(t,r)),o=t}else t&&(ao(e,t),o={default:1});if(i)for(const l in r)!lo(l)&&o[l]==null&&delete r[l]},we=_o;function Mc(e){return uo(e)}function Oc(e){return uo(e,ec)}function uo(e,t){const n=$n();n.__VUE__=!0;const{insert:s,remove:r,patchProp:i,createElement:o,createText:l,createComment:c,setText:f,setElementText:a,parentNode:d,nextSibling:m,setScopeId:y=We,insertStaticContent:v}=e,_=(u,h,b,x=null,S=null,E=null,N=void 0,I=null,L=!!h.dynamicChildren)=>{if(u===h)return;u&&!ht(u,h)&&(x=an(u),je(u,S,E,!0),u=null),h.patchFlag===-2&&(L=!1,h.dynamicChildren=null);const{type:C,ref:B,shapeFlag:H}=h;switch(C){case _t:k(u,h,b,x);break;case _e:P(u,h,b,x);break;case Bt:u==null&&D(h,b,x,N);break;case Ee:w(u,h,b,x,S,E,N,I,L);break;default:H&1?O(u,h,b,x,S,E,N,I,L):H&6?F(u,h,b,x,S,E,N,I,L):(H&64||H&128)&&C.process(u,h,b,x,S,E,N,I,L,St)}B!=null&&S&&zt(B,u&&u.ref,E,h||u,!h)},k=(u,h,b,x)=>{if(u==null)s(h.el=l(h.children),b,x);else{const S=h.el=u.el;h.children!==u.children&&f(S,h.children)}},P=(u,h,b,x)=>{u==null?s(h.el=c(h.children||""),b,x):h.el=u.el},D=(u,h,b,x)=>{[u.el,u.anchor]=v(u.children,h,b,x,u.el,u.anchor)},p=({el:u,anchor:h},b,x)=>{let S;for(;u&&u!==h;)S=m(u),s(u,b,x),u=S;s(h,b,x)},g=({el:u,anchor:h})=>{let b;for(;u&&u!==h;)b=m(u),r(u),u=b;r(h)},O=(u,h,b,x,S,E,N,I,L)=>{h.type==="svg"?N="svg":h.type==="math"&&(N="mathml"),u==null?$(h,b,x,S,E,N,I,L):T(u,h,S,E,N,I,L)},$=(u,h,b,x,S,E,N,I)=>{let L,C;const{props:B,shapeFlag:H,transition:W,dirs:G}=u;if(L=u.el=o(u.type,E,B&&B.is,B),H&8?a(L,u.children):H&16&&j(u.children,L,null,x,S,os(u,E),N,I),G&&Ue(u,null,x,"created"),R(L,u,u.scopeId,N,x),B){for(const te in 
B)te!=="value"&&!Ot(te)&&i(L,te,null,B[te],E,x);"value"in B&&i(L,"value",null,B.value,E),(C=B.onVnodeBeforeMount)&&Oe(C,x,u)}G&&Ue(u,null,x,"beforeMount");const J=ho(S,W);J&&W.beforeEnter(L),s(L,h,b),((C=B&&B.onVnodeMounted)||J||G)&&we(()=>{C&&Oe(C,x,u),J&&W.enter(L),G&&Ue(u,null,x,"mounted")},S)},R=(u,h,b,x,S)=>{if(b&&y(u,b),x)for(let E=0;E{for(let C=L;C{const I=h.el=u.el;let{patchFlag:L,dynamicChildren:C,dirs:B}=h;L|=u.patchFlag&16;const H=u.props||ee,W=h.props||ee;let G;if(b&&at(b,!1),(G=W.onVnodeBeforeUpdate)&&Oe(G,b,h,u),B&&Ue(h,u,b,"beforeUpdate"),b&&at(b,!0),(H.innerHTML&&W.innerHTML==null||H.textContent&&W.textContent==null)&&a(I,""),C?M(u.dynamicChildren,C,I,b,x,os(h,S),E):N||V(u,h,I,null,b,x,os(h,S),E,!1),L>0){if(L&16)A(I,H,W,b,S);else if(L&2&&H.class!==W.class&&i(I,"class",null,W.class,S),L&4&&i(I,"style",H.style,W.style,S),L&8){const J=h.dynamicProps;for(let te=0;te{G&&Oe(G,b,h,u),B&&Ue(h,u,b,"updated")},x)},M=(u,h,b,x,S,E,N)=>{for(let I=0;I{if(h!==b){if(h!==ee)for(const E in h)!Ot(E)&&!(E in b)&&i(u,E,h[E],null,S,x);for(const E in b){if(Ot(E))continue;const N=b[E],I=h[E];N!==I&&E!=="value"&&i(u,E,I,N,S,x)}"value"in b&&i(u,"value",h.value,b.value,S)}},w=(u,h,b,x,S,E,N,I,L)=>{const C=h.el=u?u.el:l(""),B=h.anchor=u?u.anchor:l("");let{patchFlag:H,dynamicChildren:W,slotScopeIds:G}=h;G&&(I=I?I.concat(G):G),u==null?(s(C,b,x),s(B,b,x),j(h.children||[],b,B,S,E,N,I,L)):H>0&&H&64&&W&&u.dynamicChildren?(M(u.dynamicChildren,W,b,S,E,N,I),(h.key!=null||S&&h===S.subTree)&&tr(u,h,!0)):V(u,h,b,B,S,E,N,I,L)},F=(u,h,b,x,S,E,N,I,L)=>{h.slotScopeIds=I,u==null?h.shapeFlag&512?S.ctx.activate(h,b,x,N,L):Y(h,b,x,S,E,N,L):ie(u,h,L)},Y=(u,h,b,x,S,E,N)=>{const I=u.component=qc(u,x,S);if(rn(u)&&(I.ctx.renderer=St),Gc(I,!1,N),I.asyncDep){if(S&&S.registerDep(I,U,N),!u.el){const L=I.subTree=ae(_e);P(null,L,h,b)}}else U(I,u,h,b,S,E,N)},ie=(u,h,b)=>{const x=h.component=u.component;if(jc(u,h,b))if(x.asyncDep&&!x.asyncResolved){X(x,h,b);return}else x.next=h,x.update();else 
h.el=u.el,x.vnode=h},U=(u,h,b,x,S,E,N)=>{const I=()=>{if(u.isMounted){let{next:H,bu:W,u:G,parent:J,vnode:te}=u;{const Ce=po(u);if(Ce){H&&(H.el=te.el,X(u,H,N)),Ce.asyncDep.then(()=>{u.isUnmounted||I()});return}}let Z=H,xe;at(u,!1),H?(H.el=te.el,X(u,H,N)):H=te,W&&Sn(W),(xe=H.props&&H.props.onVnodeBeforeUpdate)&&Oe(xe,J,H,te),at(u,!0);const ge=ls(u),Fe=u.subTree;u.subTree=ge,_(Fe,ge,d(Fe.el),an(Fe),u,S,E),H.el=ge.el,Z===null&&yo(u,ge.el),G&&we(G,S),(xe=H.props&&H.props.onVnodeUpdated)&&we(()=>Oe(xe,J,H,te),S)}else{let H;const{el:W,props:G}=h,{bm:J,m:te,parent:Z,root:xe,type:ge}=u,Fe=mt(h);if(at(u,!1),J&&Sn(J),!Fe&&(H=G&&G.onVnodeBeforeMount)&&Oe(H,Z,h),at(u,!0),W&&Qn){const Ce=()=>{u.subTree=ls(u),Qn(W,u.subTree,u,S,null)};Fe&&ge.__asyncHydrate?ge.__asyncHydrate(W,u,Ce):Ce()}else{xe.ce&&xe.ce._injectChildStyle(ge);const Ce=u.subTree=ls(u);_(null,Ce,b,x,u,S,E),h.el=Ce.el}if(te&&we(te,S),!Fe&&(H=G&&G.onVnodeMounted)){const Ce=h;we(()=>Oe(H,Z,Ce),S)}(h.shapeFlag&256||Z&&mt(Z.vnode)&&Z.vnode.shapeFlag&256)&&u.a&&we(u.a,S),u.isMounted=!0,h=b=x=null}};u.scope.on();const L=u.effect=new yi(I);u.scope.off();const C=u.update=L.run.bind(L),B=u.job=L.runIfDirty.bind(L);B.i=u,B.id=u.uid,L.scheduler=()=>zs(B),at(u,!0),C()},X=(u,h,b)=>{h.component=u;const x=u.vnode.props;u.vnode=h,u.next=null,Ec(u,h.props,x,b),Rc(u,h.children,b),ot(),pr(u),lt()},V=(u,h,b,x,S,E,N,I,L=!1)=>{const C=u&&u.children,B=u?u.shapeFlag:0,H=h.children,{patchFlag:W,shapeFlag:G}=h;if(W>0){if(W&128){cn(C,H,b,x,S,E,N,I,L);return}else if(W&256){fe(C,H,b,x,S,E,N,I,L);return}}G&8?(B&16&&Ft(C,S,E),H!==C&&a(b,H)):B&16?G&16?cn(C,H,b,x,S,E,N,I,L):Ft(C,S,E,!0):(B&8&&a(b,""),G&16&&j(H,b,x,S,E,N,I,L))},fe=(u,h,b,x,S,E,N,I,L)=>{u=u||Rt,h=h||Rt;const C=u.length,B=h.length,H=Math.min(C,B);let W;for(W=0;WB?Ft(u,S,E,!0,!1,H):j(h,b,x,S,E,N,I,L,H)},cn=(u,h,b,x,S,E,N,I,L)=>{let C=0;const B=h.length;let H=u.length-1,W=B-1;for(;C<=H&&C<=W;){const G=u[C],J=h[C]=L?tt(h[C]):Pe(h[C]);if(ht(G,J))_(G,J,b,null,S,E,N,I,L);else 
break;C++}for(;C<=H&&C<=W;){const G=u[H],J=h[W]=L?tt(h[W]):Pe(h[W]);if(ht(G,J))_(G,J,b,null,S,E,N,I,L);else break;H--,W--}if(C>H){if(C<=W){const G=W+1,J=GW)for(;C<=H;)je(u[C],S,E,!0),C++;else{const G=C,J=C,te=new Map;for(C=J;C<=W;C++){const Ae=h[C]=L?tt(h[C]):Pe(h[C]);Ae.key!=null&&te.set(Ae.key,C)}let Z,xe=0;const ge=W-J+1;let Fe=!1,Ce=0;const Ht=new Array(ge);for(C=0;C=ge){je(Ae,S,E,!0);continue}let Ve;if(Ae.key!=null)Ve=te.get(Ae.key);else for(Z=J;Z<=W;Z++)if(Ht[Z-J]===0&&ht(Ae,h[Z])){Ve=Z;break}Ve===void 0?je(Ae,S,E,!0):(Ht[Ve-J]=C+1,Ve>=Ce?Ce=Ve:Fe=!0,_(Ae,h[Ve],b,null,S,E,N,I,L),xe++)}const ar=Fe?Pc(Ht):Rt;for(Z=ar.length-1,C=ge-1;C>=0;C--){const Ae=J+C,Ve=h[Ae],fr=Ae+1{const{el:E,type:N,transition:I,children:L,shapeFlag:C}=u;if(C&6){ct(u.component.subTree,h,b,x);return}if(C&128){u.suspense.move(h,b,x);return}if(C&64){N.move(u,h,b,St);return}if(N===Ee){s(E,h,b);for(let H=0;HI.enter(E),S);else{const{leave:H,delayLeave:W,afterLeave:G}=I,J=()=>s(E,h,b),te=()=>{H(E,()=>{J(),G&&G()})};W?W(E,J,te):te()}else s(E,h,b)},je=(u,h,b,x=!1,S=!1)=>{const{type:E,props:N,ref:I,children:L,dynamicChildren:C,shapeFlag:B,patchFlag:H,dirs:W,cacheIndex:G}=u;if(H===-2&&(S=!1),I!=null&&zt(I,null,b,u,!0),G!=null&&(h.renderCache[G]=void 0),B&256){h.ctx.deactivate(u);return}const J=B&1&&W,te=!mt(u);let Z;if(te&&(Z=N&&N.onVnodeBeforeUnmount)&&Oe(Z,h,u),B&6)Jo(u.component,b,x);else{if(B&128){u.suspense.unmount(b,x);return}J&&Ue(u,null,h,"beforeUnmount"),B&64?u.type.remove(u,h,b,St,x):C&&!C.hasOnce&&(E!==Ee||H>0&&H&64)?Ft(C,h,b,!1,!0):(E===Ee&&H&384||!S&&B&16)&&Ft(L,h,b),x&&lr(u)}(te&&(Z=N&&N.onVnodeUnmounted)||J)&&we(()=>{Z&&Oe(Z,h,u),J&&Ue(u,null,h,"unmounted")},b)},lr=u=>{const{type:h,el:b,anchor:x,transition:S}=u;if(h===Ee){Yo(b,x);return}if(h===Bt){g(u);return}const E=()=>{r(b),S&&!S.persisted&&S.afterLeave&&S.afterLeave()};if(u.shapeFlag&1&&S&&!S.persisted){const{leave:N,delayLeave:I}=S,L=()=>N(b,E);I?I(u.el,E,L):L()}else E()},Yo=(u,h)=>{let 
b;for(;u!==h;)b=m(u),r(u),u=b;r(h)},Jo=(u,h,b)=>{const{bum:x,scope:S,job:E,subTree:N,um:I,m:L,a:C}=u;Ar(L),Ar(C),x&&Sn(x),S.stop(),E&&(E.flags|=8,je(N,u,h,b)),I&&we(I,h),we(()=>{u.isUnmounted=!0},h),h&&h.pendingBranch&&!h.isUnmounted&&u.asyncDep&&!u.asyncResolved&&u.suspenseId===h.pendingId&&(h.deps--,h.deps===0&&h.resolve())},Ft=(u,h,b,x=!1,S=!1,E=0)=>{for(let N=E;N{if(u.shapeFlag&6)return an(u.component.subTree);if(u.shapeFlag&128)return u.suspense.next();const h=m(u.anchor||u.el),b=h&&h[Di];return b?m(b):h};let Jn=!1;const cr=(u,h,b)=>{u==null?h._vnode&&je(h._vnode,null,null,!0):_(h._vnode||null,u,h,null,null,null,b),h._vnode=u,Jn||(Jn=!0,pr(),Pn(),Jn=!1)},St={p:_,um:je,m:ct,r:lr,mt:Y,mc:j,pc:V,pbc:M,n:an,o:e};let zn,Qn;return t&&([zn,Qn]=t(St)),{render:cr,hydrate:zn,createApp:wc(cr,zn)}}function os({type:e,props:t},n){return n==="svg"&&e==="foreignObject"||n==="mathml"&&e==="annotation-xml"&&t&&t.encoding&&t.encoding.includes("html")?void 0:n}function at({effect:e,job:t},n){n?(e.flags|=32,t.flags|=4):(e.flags&=-33,t.flags&=-5)}function ho(e,t){return(!e||e&&!e.pendingBranch)&&t&&!t.persisted}function tr(e,t,n=!1){const s=e.children,r=t.children;if(K(s)&&K(r))for(let i=0;i>1,e[n[l]]0&&(t[s]=n[i-1]),n[i]=s)}}for(i=n.length,o=n[i-1];i-- >0;)n[i]=o,o=t[o];return n}function po(e){const t=e.subTree.component;if(t)return t.asyncDep&&!t.asyncResolved?t:po(t)}function Ar(e){if(e)for(let t=0;tbt(Lc);function nr(e,t){return qn(e,null,t)}function Vf(e,t){return qn(e,null,{flush:"post"})}function Ie(e,t,n){return qn(e,t,n)}function qn(e,t,n=ee){const{immediate:s,deep:r,flush:i,once:o}=n,l=pe({},n),c=t&&s||!t&&i!=="post";let f;if(It){if(i==="sync"){const y=Ic();f=y.__watcherHandles||(y.__watcherHandles=[])}else if(!c){const y=()=>{};return y.stop=We,y.resume=We,y.pause=We,y}}const a=de;l.call=(y,v,_)=>De(y,a,v,_);let 
d=!1;i==="post"?l.scheduler=y=>{we(y,a&&a.suspense)}:i!=="sync"&&(d=!0,l.scheduler=(y,v)=>{v?y():zs(y)}),l.augmentJob=y=>{t&&(y.flags|=4),d&&(y.flags|=2,a&&(y.id=a.uid,y.i=a))};const m=Wl(e,t,l);return It&&(f?f.push(m):c&&m()),m}function Nc(e,t,n){const s=this.proxy,r=oe(e)?e.includes(".")?go(s,e):()=>s[e]:e.bind(s,s);let i;q(t)?i=t:(i=t.handler,n=t);const o=ln(this),l=qn(r,i.bind(s),n);return o(),l}function go(e,t){const n=t.split(".");return()=>{let s=e;for(let r=0;rt==="modelValue"||t==="model-value"?e.modelModifiers:e[`${t}Modifiers`]||e[`${Ne(t)}Modifiers`]||e[`${it(t)}Modifiers`];function Hc(e,t,...n){if(e.isUnmounted)return;const s=e.vnode.props||ee;let r=n;const i=t.startsWith("update:"),o=i&&Fc(s,t.slice(7));o&&(o.trim&&(r=n.map(a=>oe(a)?a.trim():a)),o.number&&(r=n.map(Ss)));let l,c=s[l=wn(t)]||s[l=wn(Ne(t))];!c&&i&&(c=s[l=wn(it(t))]),c&&De(c,e,6,r);const f=s[l+"Once"];if(f){if(!e.emitted)e.emitted={};else if(e.emitted[l])return;e.emitted[l]=!0,De(f,e,6,r)}}function mo(e,t,n=!1){const s=t.emitsCache,r=s.get(e);if(r!==void 0)return r;const i=e.emits;let o={},l=!1;if(!q(e)){const c=f=>{const a=mo(f,t,!0);a&&(l=!0,pe(o,a))};!n&&t.mixins.length&&t.mixins.forEach(c),e.extends&&c(e.extends),e.mixins&&e.mixins.forEach(c)}return!i&&!l?(se(e)&&s.set(e,null),null):(K(i)?i.forEach(c=>o[c]=null):pe(o,i),se(e)&&s.set(e,o),o)}function Gn(e,t){return!e||!tn(t)?!1:(t=t.slice(2).replace(/Once$/,""),Q(e,t[0].toLowerCase()+t.slice(1))||Q(e,it(t))||Q(e,t))}function ls(e){const{type:t,vnode:n,proxy:s,withProxy:r,propsOptions:[i],slots:o,attrs:l,emit:c,render:f,renderCache:a,props:d,data:m,setupState:y,ctx:v,inheritAttrs:_}=e,k=Ln(e);let P,D;try{if(n.shapeFlag&4){const g=r||s,O=g;P=Pe(f.call(O,g,a,d,y,m,v)),D=l}else{const g=t;P=Pe(g.length>1?g(d,{attrs:l,slots:o,emit:c}):g(d,null)),D=t.props?l:Dc(l)}}catch(g){Kt.length=0,sn(g,e,1),P=ae(_e)}let p=P;if(D&&_!==!1){const g=Object.keys(D),{shapeFlag:O}=p;g.length&&O&7&&(i&&g.some(js)&&(D=$c(D,i)),p=st(p,D,!1,!0))}return 
n.dirs&&(p=st(p,null,!1,!0),p.dirs=p.dirs?p.dirs.concat(n.dirs):n.dirs),n.transition&&Jt(p,n.transition),P=p,Ln(k),P}const Dc=e=>{let t;for(const n in e)(n==="class"||n==="style"||tn(n))&&((t||(t={}))[n]=e[n]);return t},$c=(e,t)=>{const n={};for(const s in e)(!js(s)||!(s.slice(9)in t))&&(n[s]=e[s]);return n};function jc(e,t,n){const{props:s,children:r,component:i}=e,{props:o,children:l,patchFlag:c}=t,f=i.emitsOptions;if(t.dirs||t.transition)return!0;if(n&&c>=0){if(c&1024)return!0;if(c&16)return s?Rr(s,o,f):!!o;if(c&8){const a=t.dynamicProps;for(let d=0;de.__isSuspense;function _o(e,t){t&&t.pendingBranch?K(e)?t.effects.push(...e):t.effects.push(e):ql(e)}const Ee=Symbol.for("v-fgt"),_t=Symbol.for("v-txt"),_e=Symbol.for("v-cmt"),Bt=Symbol.for("v-stc"),Kt=[];let Re=null;function Is(e=!1){Kt.push(Re=e?null:[])}function Vc(){Kt.pop(),Re=Kt[Kt.length-1]||null}let Qt=1;function Mr(e,t=!1){Qt+=e,e<0&&Re&&t&&(Re.hasOnce=!0)}function vo(e){return e.dynamicChildren=Qt>0?Re||Rt:null,Vc(),Qt>0&&Re&&Re.push(e),e}function kf(e,t,n,s,r,i){return vo(So(e,t,n,s,r,i,!0))}function Ns(e,t,n,s,r){return vo(ae(e,t,n,s,r,!0))}function Zt(e){return e?e.__v_isVNode===!0:!1}function ht(e,t){return e.type===t.type&&e.key===t.key}const wo=({key:e})=>e??null,xn=({ref:e,ref_key:t,ref_for:n})=>(typeof e=="number"&&(e=""+e),e!=null?oe(e)||ue(e)||q(e)?{i:he,r:e,k:t,f:!!n}:e:null);function So(e,t=null,n=null,s=0,r=null,i=e===Ee?0:1,o=!1,l=!1){const c={__v_isVNode:!0,__v_skip:!0,type:e,props:t,key:t&&wo(t),ref:t&&xn(t),scopeId:Hi,slotScopeIds:null,children:n,component:null,suspense:null,ssContent:null,ssFallback:null,dirs:null,transition:null,el:null,anchor:null,target:null,targetStart:null,targetAnchor:null,staticCount:0,shapeFlag:i,patchFlag:s,dynamicProps:r,dynamicChildren:null,appContext:null,ctx:he};return l?(sr(c,n),i&128&&e.normalize(c)):n&&(c.shapeFlag|=oe(n)?8:16),Qt>0&&!o&&Re&&(c.patchFlag>0||i&6)&&c.patchFlag!==32&&Re.push(c),c}const ae=kc;function 
kc(e,t=null,n=null,s=0,r=null,i=!1){if((!e||e===Yi)&&(e=_e),Zt(e)){const l=st(e,t,!0);return n&&sr(l,n),Qt>0&&!i&&Re&&(l.shapeFlag&6?Re[Re.indexOf(e)]=l:Re.push(l)),l.patchFlag=-2,l}if(zc(e)&&(e=e.__vccOpts),t){t=Uc(t);let{class:l,style:c}=t;l&&!oe(l)&&(t.class=Ws(l)),se(c)&&(Ys(c)&&!K(c)&&(c=pe({},c)),t.style=Us(c))}const o=oe(e)?1:bo(e)?128:$i(e)?64:se(e)?4:q(e)?2:0;return So(e,t,n,s,r,o,i,!0)}function Uc(e){return e?Ys(e)||ro(e)?pe({},e):e:null}function st(e,t,n=!1,s=!1){const{props:r,ref:i,patchFlag:o,children:l,transition:c}=e,f=t?Wc(r||{},t):r,a={__v_isVNode:!0,__v_skip:!0,type:e.type,props:f,key:f&&wo(f),ref:t&&t.ref?n&&i?K(i)?i.concat(xn(t)):[i,xn(t)]:xn(t):i,scopeId:e.scopeId,slotScopeIds:e.slotScopeIds,children:l,target:e.target,targetStart:e.targetStart,targetAnchor:e.targetAnchor,staticCount:e.staticCount,shapeFlag:e.shapeFlag,patchFlag:t&&e.type!==Ee?o===-1?16:o|16:o,dynamicProps:e.dynamicProps,dynamicChildren:e.dynamicChildren,appContext:e.appContext,dirs:e.dirs,transition:c,component:e.component,suspense:e.suspense,ssContent:e.ssContent&&st(e.ssContent),ssFallback:e.ssFallback&&st(e.ssFallback),el:e.el,anchor:e.anchor,ctx:e.ctx,ce:e.ce};return c&&s&&Jt(a,c.clone(a)),a}function To(e=" ",t=0){return ae(_t,null,e,t)}function Uf(e,t){const n=ae(Bt,null,e);return n.staticCount=t,n}function Wf(e="",t=!1){return t?(Is(),Ns(_e,null,e)):ae(_e,null,e)}function Pe(e){return e==null||typeof e=="boolean"?ae(_e):K(e)?ae(Ee,null,e.slice()):Zt(e)?tt(e):ae(_t,null,String(e))}function tt(e){return e.el===null&&e.patchFlag!==-1||e.memo?e:st(e)}function sr(e,t){let n=0;const{shapeFlag:s}=e;if(t==null)t=null;else if(K(t))n=16;else if(typeof t=="object")if(s&65){const r=t.default;r&&(r._c&&(r._d=!1),sr(e,r()),r._c&&(r._d=!0));return}else{n=32;const r=t._;!r&&!ro(t)?t._ctx=he:r===3&&he&&(he.slots._===1?t._=1:(t._=2,e.patchFlag|=1024))}else q(t)?(t={default:t,_ctx:he},n=32):(t=String(t),s&64?(n=16,t=[To(t)]):n=8);e.children=t,e.shapeFlag|=n}function Wc(...e){const 
t={};for(let n=0;nde||he;let Nn,Fs;{const e=$n(),t=(n,s)=>{let r;return(r=e[n])||(r=e[n]=[]),r.push(s),i=>{r.length>1?r.forEach(o=>o(i)):r[0](i)}};Nn=t("__VUE_INSTANCE_SETTERS__",n=>de=n),Fs=t("__VUE_SSR_SETTERS__",n=>It=n)}const ln=e=>{const t=de;return Nn(e),e.scope.on(),()=>{e.scope.off(),Nn(t)}},Or=()=>{de&&de.scope.off(),Nn(null)};function Eo(e){return e.vnode.shapeFlag&4}let It=!1;function Gc(e,t=!1,n=!1){t&&Fs(t);const{props:s,children:r}=e.vnode,i=Eo(e);Tc(e,s,i,t),Ac(e,r,n);const o=i?Xc(e,t):void 0;return t&&Fs(!1),o}function Xc(e,t){const n=e.type;e.accessCache=Object.create(null),e.proxy=new Proxy(e.ctx,hc);const{setup:s}=n;if(s){ot();const r=e.setupContext=s.length>1?Co(e):null,i=ln(e),o=nn(s,e,0,[e.props,r]),l=ai(o);if(lt(),i(),(l||e.sp)&&!mt(e)&&Zs(e),l){if(o.then(Or,Or),t)return o.then(c=>{Pr(e,c)}).catch(c=>{sn(c,e,0)});e.asyncDep=o}else Pr(e,o)}else xo(e)}function Pr(e,t,n){q(t)?e.type.__ssrInlineRender?e.ssrRender=t:e.render=t:se(t)&&(e.setupState=Li(t)),xo(e)}function xo(e,t,n){const s=e.type;e.render||(e.render=s.render||We);{const r=ln(e);ot();try{gc(e)}finally{lt(),r()}}}const Yc={get(e,t){return ye(e,"get",""),e[t]}};function Co(e){const t=n=>{e.exposed=n||{}};return{attrs:new Proxy(e.attrs,Yc),slots:e.slots,emit:e.emit,expose:t}}function Xn(e){return e.exposed?e.exposeProxy||(e.exposeProxy=new Proxy(Li(Tn(e.exposed)),{get(t,n){if(n in t)return t[n];if(n in Wt)return Wt[n](e)},has(t,n){return n in t||n in Wt}})):e.proxy}function Jc(e,t=!0){return q(e)?e.displayName||e.name:e.name||t&&e.__name}function zc(e){return q(e)&&"__vccOpts"in e}const re=(e,t)=>kl(e,t,It);function Hs(e,t,n){const s=arguments.length;return s===2?se(t)&&!K(t)?Zt(t)?ae(e,null,[t]):ae(e,t):ae(e,null,t):(s>3?n=Array.prototype.slice.call(arguments,2):s===3&&Zt(n)&&(n=[n]),ae(e,t,n))}const Qc="3.5.13";/** +* @vue/runtime-dom v3.5.13 +* (c) 2018-present Yuxi (Evan) You and Vue contributors +* @license MIT +**/let Ds;const Lr=typeof 
window<"u"&&window.trustedTypes;if(Lr)try{Ds=Lr.createPolicy("vue",{createHTML:e=>e})}catch{}const Ao=Ds?e=>Ds.createHTML(e):e=>e,Zc="http://www.w3.org/2000/svg",ea="http://www.w3.org/1998/Math/MathML",qe=typeof document<"u"?document:null,Ir=qe&&qe.createElement("template"),ta={insert:(e,t,n)=>{t.insertBefore(e,n||null)},remove:e=>{const t=e.parentNode;t&&t.removeChild(e)},createElement:(e,t,n,s)=>{const r=t==="svg"?qe.createElementNS(Zc,e):t==="mathml"?qe.createElementNS(ea,e):n?qe.createElement(e,{is:n}):qe.createElement(e);return e==="select"&&s&&s.multiple!=null&&r.setAttribute("multiple",s.multiple),r},createText:e=>qe.createTextNode(e),createComment:e=>qe.createComment(e),setText:(e,t)=>{e.nodeValue=t},setElementText:(e,t)=>{e.textContent=t},parentNode:e=>e.parentNode,nextSibling:e=>e.nextSibling,querySelector:e=>qe.querySelector(e),setScopeId(e,t){e.setAttribute(t,"")},insertStaticContent(e,t,n,s,r,i){const o=n?n.previousSibling:t.lastChild;if(r&&(r===i||r.nextSibling))for(;t.insertBefore(r.cloneNode(!0),n),!(r===i||!(r=r.nextSibling)););else{Ir.innerHTML=Ao(s==="svg"?`${e}`:s==="mathml"?`${e}`:e);const l=Ir.content;if(s==="svg"||s==="mathml"){const c=l.firstChild;for(;c.firstChild;)l.appendChild(c.firstChild);l.removeChild(c)}t.insertBefore(l,n)}return[o?o.nextSibling:t.firstChild,n?n.previousSibling:t.lastChild]}},Qe="transition",$t="animation",en=Symbol("_vtc"),Ro={name:String,type:String,css:{type:Boolean,default:!0},duration:[String,Number,Object],enterFromClass:String,enterActiveClass:String,enterToClass:String,appearFromClass:String,appearActiveClass:String,appearToClass:String,leaveFromClass:String,leaveActiveClass:String,leaveToClass:String},na=pe({},ki,Ro),sa=e=>(e.displayName="Transition",e.props=na,e),Bf=sa((e,{slots:t})=>Hs(zl,ra(e),t)),ft=(e,t=[])=>{K(e)?e.forEach(n=>n(...t)):e&&e(...t)},Nr=e=>e?K(e)?e.some(t=>t.length>1):e.length>1:!1;function ra(e){const t={};for(const w in e)w in Ro||(t[w]=e[w]);if(e.css===!1)return 
t;const{name:n="v",type:s,duration:r,enterFromClass:i=`${n}-enter-from`,enterActiveClass:o=`${n}-enter-active`,enterToClass:l=`${n}-enter-to`,appearFromClass:c=i,appearActiveClass:f=o,appearToClass:a=l,leaveFromClass:d=`${n}-leave-from`,leaveActiveClass:m=`${n}-leave-active`,leaveToClass:y=`${n}-leave-to`}=e,v=ia(r),_=v&&v[0],k=v&&v[1],{onBeforeEnter:P,onEnter:D,onEnterCancelled:p,onLeave:g,onLeaveCancelled:O,onBeforeAppear:$=P,onAppear:R=D,onAppearCancelled:j=p}=t,T=(w,F,Y,ie)=>{w._enterCancelled=ie,ut(w,F?a:l),ut(w,F?f:o),Y&&Y()},M=(w,F)=>{w._isLeaving=!1,ut(w,d),ut(w,y),ut(w,m),F&&F()},A=w=>(F,Y)=>{const ie=w?R:D,U=()=>T(F,w,Y);ft(ie,[F,U]),Fr(()=>{ut(F,w?c:i),Ke(F,w?a:l),Nr(ie)||Hr(F,s,_,U)})};return pe(t,{onBeforeEnter(w){ft(P,[w]),Ke(w,i),Ke(w,o)},onBeforeAppear(w){ft($,[w]),Ke(w,c),Ke(w,f)},onEnter:A(!1),onAppear:A(!0),onLeave(w,F){w._isLeaving=!0;const Y=()=>M(w,F);Ke(w,d),w._enterCancelled?(Ke(w,m),jr()):(jr(),Ke(w,m)),Fr(()=>{w._isLeaving&&(ut(w,d),Ke(w,y),Nr(g)||Hr(w,s,k,Y))}),ft(g,[w,Y])},onEnterCancelled(w){T(w,!1,void 0,!0),ft(p,[w])},onAppearCancelled(w){T(w,!0,void 0,!0),ft(j,[w])},onLeaveCancelled(w){M(w),ft(O,[w])}})}function ia(e){if(e==null)return null;if(se(e))return[cs(e.enter),cs(e.leave)];{const t=cs(e);return[t,t]}}function cs(e){return nl(e)}function Ke(e,t){t.split(/\s+/).forEach(n=>n&&e.classList.add(n)),(e[en]||(e[en]=new Set)).add(t)}function ut(e,t){t.split(/\s+/).forEach(s=>s&&e.classList.remove(s));const n=e[en];n&&(n.delete(t),n.size||(e[en]=void 0))}function Fr(e){requestAnimationFrame(()=>{requestAnimationFrame(e)})}let oa=0;function Hr(e,t,n,s){const r=e._endId=++oa,i=()=>{r===e._endId&&s()};if(n!=null)return setTimeout(i,n);const{type:o,timeout:l,propCount:c}=la(e,t);if(!o)return s();const f=o+"end";let a=0;const d=()=>{e.removeEventListener(f,m),i()},m=y=>{y.target===e&&++a>=c&&d()};setTimeout(()=>{a(n[v]||"").split(", "),r=s(`${Qe}Delay`),i=s(`${Qe}Duration`),o=Dr(r,i),l=s(`${$t}Delay`),c=s(`${$t}Duration`),f=Dr(l,c);let 
a=null,d=0,m=0;t===Qe?o>0&&(a=Qe,d=o,m=i.length):t===$t?f>0&&(a=$t,d=f,m=c.length):(d=Math.max(o,f),a=d>0?o>f?Qe:$t:null,m=a?a===Qe?i.length:c.length:0);const y=a===Qe&&/\b(transform|all)(,|$)/.test(s(`${Qe}Property`).toString());return{type:a,timeout:d,propCount:m,hasTransform:y}}function Dr(e,t){for(;e.length$r(n)+$r(e[s])))}function $r(e){return e==="auto"?0:Number(e.slice(0,-1).replace(",","."))*1e3}function jr(){return document.body.offsetHeight}function ca(e,t,n){const s=e[en];s&&(t=(t?[t,...s]:[...s]).join(" ")),t==null?e.removeAttribute("class"):n?e.setAttribute("class",t):e.className=t}const Vr=Symbol("_vod"),aa=Symbol("_vsh"),fa=Symbol(""),ua=/(^|;)\s*display\s*:/;function da(e,t,n){const s=e.style,r=oe(n);let i=!1;if(n&&!r){if(t)if(oe(t))for(const o of t.split(";")){const l=o.slice(0,o.indexOf(":")).trim();n[l]==null&&Cn(s,l,"")}else for(const o in t)n[o]==null&&Cn(s,o,"");for(const o in n)o==="display"&&(i=!0),Cn(s,o,n[o])}else if(r){if(t!==n){const o=s[fa];o&&(n+=";"+o),s.cssText=n,i=ua.test(n)}}else t&&e.removeAttribute("style");Vr in e&&(e[Vr]=i?s.display:"",e[aa]&&(s.display="none"))}const kr=/\s*!important$/;function Cn(e,t,n){if(K(n))n.forEach(s=>Cn(e,t,s));else if(n==null&&(n=""),t.startsWith("--"))e.setProperty(t,n);else{const s=ha(e,t);kr.test(n)?e.setProperty(it(s),n.replace(kr,""),"important"):e[s]=n}}const Ur=["Webkit","Moz","ms"],as={};function ha(e,t){const n=as[t];if(n)return n;let s=Ne(t);if(s!=="filter"&&s in e)return as[t]=s;s=Dn(s);for(let r=0;rfs||(ya.then(()=>fs=0),fs=Date.now());function _a(e,t){const n=s=>{if(!s._vts)s._vts=Date.now();else if(s._vts<=n.attached)return;De(va(s,n.value),t,5,[s])};return n.value=e,n.attached=ba(),n}function va(e,t){if(K(t)){const n=e.stopImmediatePropagation;return e.stopImmediatePropagation=()=>{n.call(e),e._stopped=!0},t.map(s=>r=>!r._stopped&&s&&s(r))}else return t}const Xr=e=>e.charCodeAt(0)===111&&e.charCodeAt(1)===110&&e.charCodeAt(2)>96&&e.charCodeAt(2)<123,wa=(e,t,n,s,r,i)=>{const 
o=r==="svg";t==="class"?ca(e,s,o):t==="style"?da(e,n,s):tn(t)?js(t)||ga(e,t,n,s,i):(t[0]==="."?(t=t.slice(1),!0):t[0]==="^"?(t=t.slice(1),!1):Sa(e,t,s,o))?(Kr(e,t,s),!e.tagName.includes("-")&&(t==="value"||t==="checked"||t==="selected")&&Br(e,t,s,o,i,t!=="value")):e._isVueCE&&(/[A-Z]/.test(t)||!oe(s))?Kr(e,Ne(t),s,i,t):(t==="true-value"?e._trueValue=s:t==="false-value"&&(e._falseValue=s),Br(e,t,s,o))};function Sa(e,t,n,s){if(s)return!!(t==="innerHTML"||t==="textContent"||t in e&&Xr(t)&&q(n));if(t==="spellcheck"||t==="draggable"||t==="translate"||t==="form"||t==="list"&&e.tagName==="INPUT"||t==="type"&&e.tagName==="TEXTAREA")return!1;if(t==="width"||t==="height"){const r=e.tagName;if(r==="IMG"||r==="VIDEO"||r==="CANVAS"||r==="SOURCE")return!1}return Xr(t)&&oe(n)?!1:t in e}const Yr=e=>{const t=e.props["onUpdate:modelValue"]||!1;return K(t)?n=>Sn(t,n):t};function Ta(e){e.target.composing=!0}function Jr(e){const t=e.target;t.composing&&(t.composing=!1,t.dispatchEvent(new Event("input")))}const us=Symbol("_assign"),Kf={created(e,{modifiers:{lazy:t,trim:n,number:s}},r){e[us]=Yr(r);const i=s||r.props&&r.props.type==="number";Ct(e,t?"change":"input",o=>{if(o.target.composing)return;let l=e.value;n&&(l=l.trim()),i&&(l=Ss(l)),e[us](l)}),n&&Ct(e,"change",()=>{e.value=e.value.trim()}),t||(Ct(e,"compositionstart",Ta),Ct(e,"compositionend",Jr),Ct(e,"change",Jr))},mounted(e,{value:t}){e.value=t??""},beforeUpdate(e,{value:t,oldValue:n,modifiers:{lazy:s,trim:r,number:i}},o){if(e[us]=Yr(o),e.composing)return;const l=(i||e.type==="number")&&!/^0\d/.test(e.value)?Ss(e.value):e.value,c=t??"";l!==c&&(document.activeElement===e&&e.type!=="range"&&(s&&t===n||r&&e.value.trim()===c)||(e.value=c))}},Ea=["ctrl","shift","alt","meta"],xa={stop:e=>e.stopPropagation(),prevent:e=>e.preventDefault(),self:e=>e.target!==e.currentTarget,ctrl:e=>!e.ctrlKey,shift:e=>!e.shiftKey,alt:e=>!e.altKey,meta:e=>!e.metaKey,left:e=>"button"in e&&e.button!==0,middle:e=>"button"in e&&e.button!==1,right:e=>"button"in 
e&&e.button!==2,exact:(e,t)=>Ea.some(n=>e[`${n}Key`]&&!t.includes(n))},qf=(e,t)=>{const n=e._withMods||(e._withMods={}),s=t.join(".");return n[s]||(n[s]=(r,...i)=>{for(let o=0;o{const n=e._withKeys||(e._withKeys={}),s=t.join(".");return n[s]||(n[s]=r=>{if(!("key"in r))return;const i=it(r.key);if(t.some(o=>o===i||Ca[o]===i))return e(r)})},Mo=pe({patchProp:wa},ta);let qt,zr=!1;function Aa(){return qt||(qt=Mc(Mo))}function Ra(){return qt=zr?qt:Oc(Mo),zr=!0,qt}const Xf=(...e)=>{const t=Aa().createApp(...e),{mount:n}=t;return t.mount=s=>{const r=Po(s);if(!r)return;const i=t._component;!q(i)&&!i.render&&!i.template&&(i.template=r.innerHTML),r.nodeType===1&&(r.textContent="");const o=n(r,!1,Oo(r));return r instanceof Element&&(r.removeAttribute("v-cloak"),r.setAttribute("data-v-app","")),o},t},Yf=(...e)=>{const t=Ra().createApp(...e),{mount:n}=t;return t.mount=s=>{const r=Po(s);if(r)return n(r,!0,Oo(r))},t};function Oo(e){if(e instanceof SVGElement)return"svg";if(typeof MathMLElement=="function"&&e instanceof MathMLElement)return"mathml"}function Po(e){return oe(e)?document.querySelector(e):e}const Ma=window.__VP_SITE_DATA__;function Lo(e){return mi()?(ul(e),!0):!1}const ds=new WeakMap,Oa=(...e)=>{var t;const n=e[0],s=(t=on())==null?void 0:t.proxy;if(s==null&&!to())throw new Error("injectLocal must be called in setup");return s&&ds.has(s)&&n in ds.get(s)?ds.get(s)[n]:bt(...e)},Io=typeof window<"u"&&typeof document<"u";typeof WorkerGlobalScope<"u"&&globalThis instanceof WorkerGlobalScope;const Jf=e=>e!=null,Pa=Object.prototype.toString,La=e=>Pa.call(e)==="[object Object]",rt=()=>{},Qr=Ia();function Ia(){var e,t;return Io&&((e=window==null?void 0:window.navigator)==null?void 0:e.userAgent)&&(/iP(?:ad|hone|od)/.test(window.navigator.userAgent)||((t=window==null?void 0:window.navigator)==null?void 0:t.maxTouchPoints)>2&&/iPad|Macintosh/.test(window==null?void 0:window.navigator.userAgent))}function rr(e,t){function n(...s){return new 
Promise((r,i)=>{Promise.resolve(e(()=>t.apply(this,s),{fn:t,thisArg:this,args:s})).then(r).catch(i)})}return n}const No=e=>e();function Fo(e,t={}){let n,s,r=rt;const i=c=>{clearTimeout(c),r(),r=rt};let o;return c=>{const f=ce(e),a=ce(t.maxWait);return n&&i(n),f<=0||a!==void 0&&a<=0?(s&&(i(s),s=null),Promise.resolve(c())):new Promise((d,m)=>{r=t.rejectOnCancel?m:d,o=c,a&&!s&&(s=setTimeout(()=>{n&&i(n),s=null,d(o())},a)),n=setTimeout(()=>{s&&i(s),s=null,d(c())},f)})}}function Na(...e){let t=0,n,s=!0,r=rt,i,o,l,c,f;!ue(e[0])&&typeof e[0]=="object"?{delay:o,trailing:l=!0,leading:c=!0,rejectOnCancel:f=!1}=e[0]:[o,l=!0,c=!0,f=!1]=e;const a=()=>{n&&(clearTimeout(n),n=void 0,r(),r=rt)};return m=>{const y=ce(o),v=Date.now()-t,_=()=>i=m();return a(),y<=0?(t=Date.now(),_()):(v>y&&(c||!s)?(t=Date.now(),_()):l&&(i=new Promise((k,P)=>{r=f?P:k,n=setTimeout(()=>{t=Date.now(),s=!0,k(_()),a()},Math.max(0,y-v))})),!c&&!n&&(n=setTimeout(()=>s=!0,y)),s=!1,i)}}function Fa(e=No){const t=le(!0);function n(){t.value=!1}function s(){t.value=!0}const r=(...i)=>{t.value&&e(...i)};return{isActive:kn(t),pause:n,resume:s,eventFilter:r}}function Zr(e){return e.endsWith("rem")?Number.parseFloat(e)*16:Number.parseFloat(e)}function Ha(e){return on()}function hs(e){return Array.isArray(e)?e:[e]}function Ho(...e){if(e.length!==1)return $l(...e);const t=e[0];return typeof t=="function"?kn(Fl(()=>({get:t,set:rt}))):le(t)}function Da(e,t=200,n={}){return rr(Fo(t,n),e)}function $a(e,t=200,n=!1,s=!0,r=!1){return rr(Na(t,n,s,r),e)}function Do(e,t,n={}){const{eventFilter:s=No,...r}=n;return Ie(e,rr(s,t),r)}function ja(e,t,n={}){const{eventFilter:s,...r}=n,{eventFilter:i,pause:o,resume:l,isActive:c}=Fa(s);return{stop:Do(e,t,{...r,eventFilter:i}),pause:o,resume:l,isActive:c}}function Yn(e,t=!0,n){Ha()?Nt(e,n):t?e():Wn(e)}function zf(e,t,n={}){const{debounce:s=0,maxWait:r=void 0,...i}=n;return Do(e,t,{...i,eventFilter:Fo(s,{maxWait:r})})}function Va(e,t,n){return Ie(e,t,{...n,immediate:!0})}function 
Qf(e,t,n){let s;ue(n)?s={evaluating:n}:s={};const{lazy:r=!1,evaluating:i=void 0,shallow:o=!0,onError:l=rt}=s,c=le(!r),f=o?Un(t):le(t);let a=0;return nr(async d=>{if(!c.value)return;a++;const m=a;let y=!1;i&&Promise.resolve().then(()=>{i.value=!0});try{const v=await e(_=>{d(()=>{i&&(i.value=!1),y||_()})});m===a&&(f.value=v)}catch(v){l(v)}finally{i&&m===a&&(i.value=!1),y=!0}}),r?re(()=>(c.value=!0,f.value)):f}const $e=Io?window:void 0;function ir(e){var t;const n=ce(e);return(t=n==null?void 0:n.$el)!=null?t:n}function Ye(...e){const t=[],n=()=>{t.forEach(l=>l()),t.length=0},s=(l,c,f,a)=>(l.addEventListener(c,f,a),()=>l.removeEventListener(c,f,a)),r=re(()=>{const l=hs(ce(e[0])).filter(c=>c!=null);return l.every(c=>typeof c!="string")?l:void 0}),i=Va(()=>{var l,c;return[(c=(l=r.value)==null?void 0:l.map(f=>ir(f)))!=null?c:[$e].filter(f=>f!=null),hs(ce(r.value?e[1]:e[0])),hs(Js(r.value?e[2]:e[1])),ce(r.value?e[3]:e[2])]},([l,c,f,a])=>{if(n(),!(l!=null&&l.length)||!(c!=null&&c.length)||!(f!=null&&f.length))return;const d=La(a)?{...a}:a;t.push(...l.flatMap(m=>c.flatMap(y=>f.map(v=>s(m,y,v,d)))))},{flush:"post"}),o=()=>{i(),n()};return Lo(n),o}function ka(){const e=le(!1),t=on();return t&&Nt(()=>{e.value=!0},t),e}function Ua(e){const t=ka();return re(()=>(t.value,!!e()))}function Wa(e){return typeof e=="function"?e:typeof e=="string"?t=>t.key===e:Array.isArray(e)?t=>e.includes(t.key):()=>!0}function Zf(...e){let t,n,s={};e.length===3?(t=e[0],n=e[1],s=e[2]):e.length===2?typeof e[1]=="object"?(t=!0,n=e[0],s=e[1]):(t=e[0],n=e[1]):(t=!0,n=e[0]);const{target:r=$e,eventName:i="keydown",passive:o=!1,dedupe:l=!1}=s,c=Wa(t);return Ye(r,i,a=>{a.repeat&&ce(l)||c(a)&&n(a)},o)}const Ba=Symbol("vueuse-ssr-width");function Ka(){const e=to()?Oa(Ba,null):null;return typeof e=="number"?e:void 0}function $o(e,t={}){const{window:n=$e,ssrWidth:s=Ka()}=t,r=Ua(()=>n&&"matchMedia"in n&&typeof n.matchMedia=="function"),i=le(typeof s=="number"),o=Un(),l=le(!1),c=f=>{l.value=f.matches};return 
nr(()=>{if(i.value){i.value=!r.value;const f=ce(e).split(",");l.value=f.some(a=>{const d=a.includes("not all"),m=a.match(/\(\s*min-width:\s*(-?\d+(?:\.\d*)?[a-z]+\s*)\)/),y=a.match(/\(\s*max-width:\s*(-?\d+(?:\.\d*)?[a-z]+\s*)\)/);let v=!!(m||y);return m&&v&&(v=s>=Zr(m[1])),y&&v&&(v=s<=Zr(y[1])),d?!v:v});return}r.value&&(o.value=n.matchMedia(ce(e)),l.value=o.value.matches)}),Ye(o,"change",c,{passive:!0}),re(()=>l.value)}const yn=typeof globalThis<"u"?globalThis:typeof window<"u"?window:typeof global<"u"?global:typeof self<"u"?self:{},bn="__vueuse_ssr_handlers__",qa=Ga();function Ga(){return bn in yn||(yn[bn]=yn[bn]||{}),yn[bn]}function jo(e,t){return qa[e]||t}function Vo(e){return $o("(prefers-color-scheme: dark)",e)}function Xa(e){return e==null?"any":e instanceof Set?"set":e instanceof Map?"map":e instanceof Date?"date":typeof e=="boolean"?"boolean":typeof e=="string"?"string":typeof e=="object"?"object":Number.isNaN(e)?"any":"number"}const Ya={boolean:{read:e=>e==="true",write:e=>String(e)},object:{read:e=>JSON.parse(e),write:e=>JSON.stringify(e)},number:{read:e=>Number.parseFloat(e),write:e=>String(e)},any:{read:e=>e,write:e=>String(e)},string:{read:e=>e,write:e=>String(e)},map:{read:e=>new Map(JSON.parse(e)),write:e=>JSON.stringify(Array.from(e.entries()))},set:{read:e=>new Set(JSON.parse(e)),write:e=>JSON.stringify(Array.from(e))},date:{read:e=>new Date(e),write:e=>e.toISOString()}},ei="vueuse-storage";function or(e,t,n,s={}){var r;const{flush:i="pre",deep:o=!0,listenToStorageChanges:l=!0,writeDefaults:c=!0,mergeDefaults:f=!1,shallow:a,window:d=$e,eventFilter:m,onError:y=A=>{console.error(A)},initOnMounted:v}=s,_=(a?Un:le)(typeof t=="function"?t():t),k=re(()=>ce(e));if(!n)try{n=jo("getDefaultStorage",()=>{var A;return(A=$e)==null?void 0:A.localStorage})()}catch(A){y(A)}if(!n)return _;const P=ce(t),D=Xa(P),p=(r=s.serializer)!=null?r:Ya[D],{pause:g,resume:O}=ja(_,()=>R(_.value),{flush:i,deep:o,eventFilter:m});Ie(k,()=>T(),{flush:i}),d&&l&&Yn(()=>{n instanceof 
Storage?Ye(d,"storage",T,{passive:!0}):Ye(d,ei,M),v&&T()}),v||T();function $(A,w){if(d){const F={key:k.value,oldValue:A,newValue:w,storageArea:n};d.dispatchEvent(n instanceof Storage?new StorageEvent("storage",F):new CustomEvent(ei,{detail:F}))}}function R(A){try{const w=n.getItem(k.value);if(A==null)$(w,null),n.removeItem(k.value);else{const F=p.write(A);w!==F&&(n.setItem(k.value,F),$(w,F))}}catch(w){y(w)}}function j(A){const w=A?A.newValue:n.getItem(k.value);if(w==null)return c&&P!=null&&n.setItem(k.value,p.write(P)),P;if(!A&&f){const F=p.read(w);return typeof f=="function"?f(F,P):D==="object"&&!Array.isArray(F)?{...P,...F}:F}else return typeof w!="string"?w:p.read(w)}function T(A){if(!(A&&A.storageArea!==n)){if(A&&A.key==null){_.value=P;return}if(!(A&&A.key!==k.value)){g();try{(A==null?void 0:A.newValue)!==p.write(_.value)&&(_.value=j(A))}catch(w){y(w)}finally{A?Wn(O):O()}}}}function M(A){T(A.detail)}return _}const Ja="*,*::before,*::after{-webkit-transition:none!important;-moz-transition:none!important;-o-transition:none!important;-ms-transition:none!important;transition:none!important}";function za(e={}){const{selector:t="html",attribute:n="class",initialValue:s="auto",window:r=$e,storage:i,storageKey:o="vueuse-color-scheme",listenToStorageChanges:l=!0,storageRef:c,emitAuto:f,disableTransition:a=!0}=e,d={auto:"",light:"light",dark:"dark",...e.modes||{}},m=Vo({window:r}),y=re(()=>m.value?"dark":"light"),v=c||(o==null?Ho(s):or(o,s,i,{window:r,listenToStorageChanges:l})),_=re(()=>v.value==="auto"?y.value:v.value),k=jo("updateHTMLAttrs",(g,O,$)=>{const R=typeof g=="string"?r==null?void 0:r.document.querySelector(g):ir(g);if(!R)return;const j=new Set,T=new Set;let M=null;if(O==="class"){const w=$.split(/\s/g);Object.values(d).flatMap(F=>(F||"").split(/\s/g)).filter(Boolean).forEach(F=>{w.includes(F)?j.add(F):T.add(F)})}else M={key:O,value:$};if(j.size===0&&T.size===0&&M===null)return;let 
A;a&&(A=r.document.createElement("style"),A.appendChild(document.createTextNode(Ja)),r.document.head.appendChild(A));for(const w of j)R.classList.add(w);for(const w of T)R.classList.remove(w);M&&R.setAttribute(M.key,M.value),a&&(r.getComputedStyle(A).opacity,document.head.removeChild(A))});function P(g){var O;k(t,n,(O=d[g])!=null?O:g)}function D(g){e.onChanged?e.onChanged(g,P):P(g)}Ie(_,D,{flush:"post",immediate:!0}),Yn(()=>D(_.value));const p=re({get(){return f?v.value:_.value},set(g){v.value=g}});return Object.assign(p,{store:v,system:y,state:_})}function Qa(e={}){const{valueDark:t="dark",valueLight:n=""}=e,s=za({...e,onChanged:(o,l)=>{var c;e.onChanged?(c=e.onChanged)==null||c.call(e,o==="dark",l,o):l(o)},modes:{dark:t,light:n}}),r=re(()=>s.system.value);return re({get(){return s.value==="dark"},set(o){const l=o?"dark":"light";r.value===l?s.value="auto":s.value=l}})}function ps(e){return typeof Window<"u"&&e instanceof Window?e.document.documentElement:typeof Document<"u"&&e instanceof Document?e.documentElement:e}const ti=1;function Za(e,t={}){const{throttle:n=0,idle:s=200,onStop:r=rt,onScroll:i=rt,offset:o={left:0,right:0,top:0,bottom:0},eventListenerOptions:l={capture:!1,passive:!0},behavior:c="auto",window:f=$e,onError:a=R=>{console.error(R)}}=t,d=le(0),m=le(0),y=re({get(){return d.value},set(R){_(R,void 0)}}),v=re({get(){return m.value},set(R){_(void 0,R)}});function _(R,j){var T,M,A,w;if(!f)return;const F=ce(e);if(!F)return;(A=F instanceof Document?f.document.body:F)==null||A.scrollTo({top:(T=ce(j))!=null?T:v.value,left:(M=ce(R))!=null?M:y.value,behavior:ce(c)});const Y=((w=F==null?void 0:F.document)==null?void 0:w.documentElement)||(F==null?void 0:F.documentElement)||F;y!=null&&(d.value=Y.scrollLeft),v!=null&&(m.value=Y.scrollTop)}const k=le(!1),P=Lt({left:!0,right:!1,top:!0,bottom:!1}),D=Lt({left:!1,right:!1,top:!1,bottom:!1}),p=R=>{k.value&&(k.value=!1,D.left=!1,D.right=!1,D.top=!1,D.bottom=!1,r(R))},g=Da(p,n+s),O=R=>{var j;if(!f)return;const 
T=((j=R==null?void 0:R.document)==null?void 0:j.documentElement)||(R==null?void 0:R.documentElement)||ir(R),{display:M,flexDirection:A,direction:w}=getComputedStyle(T),F=w==="rtl"?-1:1,Y=T.scrollLeft;D.left=Yd.value;const ie=Y*F<=(o.left||0),U=Y*F+T.clientWidth>=T.scrollWidth-(o.right||0)-ti;M==="flex"&&A==="row-reverse"?(P.left=U,P.right=ie):(P.left=ie,P.right=U),d.value=Y;let X=T.scrollTop;R===f.document&&!X&&(X=f.document.body.scrollTop),D.top=Xm.value;const V=X<=(o.top||0),fe=X+T.clientHeight>=T.scrollHeight-(o.bottom||0)-ti;M==="flex"&&A==="column-reverse"?(P.top=fe,P.bottom=V):(P.top=V,P.bottom=fe),m.value=X},$=R=>{var j;if(!f)return;const T=(j=R.target.documentElement)!=null?j:R.target;O(T),k.value=!0,g(R),i(R)};return Ye(e,"scroll",n?$a($,n,!0,!1):$,l),Yn(()=>{try{const R=ce(e);if(!R)return;O(R)}catch(R){a(R)}}),Ye(e,"scrollend",p,l),{x:y,y:v,isScrolling:k,arrivedState:P,directions:D,measure(){const R=ce(e);f&&R&&O(R)}}}function eu(e,t,n={}){const{window:s=$e}=n;return or(e,t,s==null?void 0:s.localStorage,n)}function ko(e){const t=window.getComputedStyle(e);if(t.overflowX==="scroll"||t.overflowY==="scroll"||t.overflowX==="auto"&&e.clientWidth1?!0:(t.preventDefault&&t.preventDefault(),!1)}const gs=new WeakMap;function tu(e,t=!1){const n=le(t);let s=null,r="";Ie(Ho(e),l=>{const c=ps(ce(l));if(c){const f=c;if(gs.get(f)||gs.set(f,f.style.overflow),f.style.overflow!=="hidden"&&(r=f.style.overflow),f.style.overflow==="hidden")return n.value=!0;if(n.value)return f.style.overflow="hidden"}},{immediate:!0});const i=()=>{const l=ps(ce(e));!l||n.value||(Qr&&(s=Ye(l,"touchmove",c=>{ef(c)},{passive:!1})),l.style.overflow="hidden",n.value=!0)},o=()=>{const l=ps(ce(e));!l||!n.value||(Qr&&(s==null||s()),l.style.overflow=r,gs.delete(l),n.value=!1)};return Lo(o),re({get(){return n.value},set(l){l?i():o()}})}function nu(e,t,n={}){const{window:s=$e}=n;return or(e,t,s==null?void 0:s.sessionStorage,n)}function su(e={}){const{window:t=$e,...n}=e;return Za(t,n)}function 
ru(e={}){const{window:t=$e,initialWidth:n=Number.POSITIVE_INFINITY,initialHeight:s=Number.POSITIVE_INFINITY,listenOrientation:r=!0,includeScrollbar:i=!0,type:o="inner"}=e,l=le(n),c=le(s),f=()=>{if(t)if(o==="outer")l.value=t.outerWidth,c.value=t.outerHeight;else if(o==="visual"&&t.visualViewport){const{width:d,height:m,scale:y}=t.visualViewport;l.value=Math.round(d*y),c.value=Math.round(m*y)}else i?(l.value=t.innerWidth,c.value=t.innerHeight):(l.value=t.document.documentElement.clientWidth,c.value=t.document.documentElement.clientHeight)};f(),Yn(f);const a={passive:!0};if(Ye("resize",f,a),t&&o==="visual"&&t.visualViewport&&Ye(t.visualViewport,"resize",f,a),r){const d=$o("(orientation: portrait)");Ie(d,()=>f())}return{width:l,height:c}}const ms={BASE_URL:"/dev/",DEV:!1,MODE:"production",PROD:!0,SSR:!1};var ys={};const Uo=/^(?:[a-z]+:|\/\/)/i,tf="vitepress-theme-appearance",nf=/#.*$/,sf=/[?#].*$/,rf=/(?:(^|\/)index)?\.(?:md|html)$/,me=typeof document<"u",Wo={relativePath:"404.md",filePath:"",title:"404",description:"Not Found",headers:[],frontmatter:{sidebar:!1,layout:"page"},lastUpdated:0,isNotFound:!0};function of(e,t,n=!1){if(t===void 0)return!1;if(e=ni(`/${e}`),n)return new RegExp(t).test(e);if(ni(t)!==e)return!1;const s=t.match(nf);return s?(me?location.hash:"")===s[0]:!0}function ni(e){return decodeURI(e).replace(sf,"").replace(rf,"$1")}function lf(e){return Uo.test(e)}function cf(e,t){return Object.keys((e==null?void 0:e.locales)||{}).find(n=>n!=="root"&&!lf(n)&&of(t,`/${n}/`,!0))||"root"}function af(e,t){var s,r,i,o,l,c,f;const n=cf(e,t);return Object.assign({},e,{localeIndex:n,lang:((s=e.locales[n])==null?void 0:s.lang)??e.lang,dir:((r=e.locales[n])==null?void 0:r.dir)??e.dir,title:((i=e.locales[n])==null?void 0:i.title)??e.title,titleTemplate:((o=e.locales[n])==null?void 0:o.titleTemplate)??e.titleTemplate,description:((l=e.locales[n])==null?void 0:l.description)??e.description,head:Ko(e.head,((c=e.locales[n])==null?void 
0:c.head)??[]),themeConfig:{...e.themeConfig,...(f=e.locales[n])==null?void 0:f.themeConfig}})}function Bo(e,t){const n=t.title||e.title,s=t.titleTemplate??e.titleTemplate;if(typeof s=="string"&&s.includes(":title"))return s.replace(/:title/g,n);const r=ff(e.title,s);return n===r.slice(3)?n:`${n}${r}`}function ff(e,t){return t===!1?"":t===!0||t===void 0?` | ${e}`:e===t?"":` | ${t}`}function uf(e,t){const[n,s]=t;if(n!=="meta")return!1;const r=Object.entries(s)[0];return r==null?!1:e.some(([i,o])=>i===n&&o[r[0]]===r[1])}function Ko(e,t){return[...e.filter(n=>!uf(t,n)),...t]}const df=/[\u0000-\u001F"#$&*+,:;<=>?[\]^`{|}\u007F]/g,hf=/^[a-z]:/i;function si(e){const t=hf.exec(e),n=t?t[0]:"";return n+e.slice(n.length).replace(df,"_").replace(/(^|\/)_+(?=[^/]*$)/,"$1")}const bs=new Set;function pf(e){if(bs.size===0){const n=typeof process=="object"&&(ys==null?void 0:ys.VITE_EXTRA_EXTENSIONS)||(ms==null?void 0:ms.VITE_EXTRA_EXTENSIONS)||"";("3g2,3gp,aac,ai,apng,au,avif,bin,bmp,cer,class,conf,crl,css,csv,dll,doc,eps,epub,exe,gif,gz,ics,ief,jar,jpe,jpeg,jpg,js,json,jsonld,m4a,man,mid,midi,mjs,mov,mp2,mp3,mp4,mpe,mpeg,mpg,mpp,oga,ogg,ogv,ogx,opus,otf,p10,p7c,p7m,p7s,pdf,png,ps,qt,roff,rtf,rtx,ser,svg,t,tif,tiff,tr,ts,tsv,ttf,txt,vtt,wav,weba,webm,webp,woff,woff2,xhtml,xml,yaml,yml,zip"+(n&&typeof n=="string"?","+n:"")).split(",").forEach(s=>bs.add(s))}const t=e.split(".").pop();return t==null||!bs.has(t.toLowerCase())}function iu(e){return e.replace(/[|\\{}()[\]^$+*?.]/g,"\\$&").replace(/-/g,"\\x2d")}const gf=Symbol(),vt=Un(Ma);function ou(e){const t=re(()=>af(vt.value,e.data.relativePath)),n=t.value.appearance,s=n==="force-dark"?le(!0):n==="force-auto"?Vo():n?Qa({storageKey:tf,initialValue:()=>n==="dark"?"dark":"auto",...typeof n=="object"?n:{}}):le(!1),r=le(me?location.hash:"");return 
me&&window.addEventListener("hashchange",()=>{r.value=location.hash}),Ie(()=>e.data,()=>{r.value=me?location.hash:""}),{site:t,theme:re(()=>t.value.themeConfig),page:re(()=>e.data),frontmatter:re(()=>e.data.frontmatter),params:re(()=>e.data.params),lang:re(()=>t.value.lang),dir:re(()=>e.data.frontmatter.dir||t.value.dir),localeIndex:re(()=>t.value.localeIndex||"root"),title:re(()=>Bo(t.value,e.data)),description:re(()=>e.data.description||t.value.description),isDark:s,hash:re(()=>r.value)}}function mf(){const e=bt(gf);if(!e)throw new Error("vitepress data not properly injected in app");return e}function yf(e,t){return`${e}${t}`.replace(/\/+/g,"/")}function ri(e){return Uo.test(e)||!e.startsWith("/")?e:yf(vt.value.base,e)}function bf(e){let t=e.replace(/\.html$/,"");if(t=decodeURIComponent(t),t=t.replace(/\/$/,"/index"),me){const n="/dev/";t=si(t.slice(n.length).replace(/\//g,"_")||"index")+".md";let s=__VP_HASH_MAP__[t.toLowerCase()];if(s||(t=t.endsWith("_index.md")?t.slice(0,-9)+".md":t.slice(0,-3)+"_index.md",s=__VP_HASH_MAP__[t.toLowerCase()]),!s)return null;t=`${n}assets/${t}.${s}.js`}else t=`./${si(t.slice(1).replace(/\//g,"_"))}.md.js`;return t}let An=[];function lu(e){An.push(e),Kn(()=>{An=An.filter(t=>t!==e)})}function _f(){let e=vt.value.scrollOffset,t=0,n=24;if(typeof e=="object"&&"padding"in e&&(n=e.padding,e=e.selector),typeof e=="number")t=e;else if(typeof e=="string")t=ii(e,n);else if(Array.isArray(e))for(const s of e){const r=ii(s,n);if(r){t=r;break}}return t}function ii(e,t){const n=document.querySelector(e);if(!n)return 0;const s=n.getBoundingClientRect().bottom;return s<0?0:s+t}const vf=Symbol(),qo="http://a.com",wf=()=>({path:"/",component:null,data:Wo});function cu(e,t){const n=Lt(wf()),s={route:n,go:r};async function r(l=me?location.href:"/"){var c,f;l=_s(l),await((c=s.onBeforeRouteChange)==null?void 0:c.call(s,l))!==!1&&(me&&l!==_s(location.href)&&(history.replaceState({scrollPosition:window.scrollY},""),history.pushState({},"",l)),await 
o(l),await((f=s.onAfterRouteChange??s.onAfterRouteChanged)==null?void 0:f(l)))}let i=null;async function o(l,c=0,f=!1){var m,y;if(await((m=s.onBeforePageLoad)==null?void 0:m.call(s,l))===!1)return;const a=new URL(l,qo),d=i=a.pathname;try{let v=await e(d);if(!v)throw new Error(`Page not found: ${d}`);if(i===d){i=null;const{default:_,__pageData:k}=v;if(!_)throw new Error(`Invalid route component: ${_}`);await((y=s.onAfterPageLoad)==null?void 0:y.call(s,l)),n.path=me?d:ri(d),n.component=Tn(_),n.data=Tn(k),me&&Wn(()=>{let P=vt.value.base+k.relativePath.replace(/(?:(^|\/)index)?\.md$/,"$1");if(!vt.value.cleanUrls&&!P.endsWith("/")&&(P+=".html"),P!==a.pathname&&(a.pathname=P,l=P+a.search+a.hash,history.replaceState({},"",l)),a.hash&&!c){let D=null;try{D=document.getElementById(decodeURIComponent(a.hash).slice(1))}catch(p){console.warn(p)}if(D){oi(D,a.hash);return}}window.scrollTo(0,c)})}}catch(v){if(!/fetch|Page not found/.test(v.message)&&!/^\/404(\.html|\/)?$/.test(l)&&console.error(v),!f)try{const _=await fetch(vt.value.base+"hashmap.json");window.__VP_HASH_MAP__=await _.json(),await o(l,c,!0);return}catch{}if(i===d){i=null,n.path=me?d:ri(d),n.component=t?Tn(t):null;const _=me?d.replace(/(^|\/)$/,"$1index").replace(/(\.html)?$/,".md").replace(/^\//,""):"404.md";n.data={...Wo,relativePath:_}}}}return me&&(history.state===null&&history.replaceState({},""),window.addEventListener("click",l=>{if(l.defaultPrevented||!(l.target instanceof Element)||l.target.closest("button")||l.button!==0||l.ctrlKey||l.shiftKey||l.altKey||l.metaKey)return;const c=l.target.closest("a");if(!c||c.closest(".vp-raw")||c.hasAttribute("download")||c.hasAttribute("target"))return;const f=c.getAttribute("href")??(c instanceof SVGAElement?c.getAttribute("xlink:href"):null);if(f==null)return;const{href:a,origin:d,pathname:m,hash:y,search:v}=new URL(f,c.baseURI),_=new 
URL(location.href);d===_.origin&&pf(m)&&(l.preventDefault(),m===_.pathname&&v===_.search?(y!==_.hash&&(history.pushState({},"",a),window.dispatchEvent(new HashChangeEvent("hashchange",{oldURL:_.href,newURL:a}))),y?oi(c,y,c.classList.contains("header-anchor")):window.scrollTo(0,0)):r(a))},{capture:!0}),window.addEventListener("popstate",async l=>{var f;if(l.state===null)return;const c=_s(location.href);await o(c,l.state&&l.state.scrollPosition||0),await((f=s.onAfterRouteChange??s.onAfterRouteChanged)==null?void 0:f(c))}),window.addEventListener("hashchange",l=>{l.preventDefault()})),s}function Sf(){const e=bt(vf);if(!e)throw new Error("useRouter() is called without provider.");return e}function Go(){return Sf().route}function oi(e,t,n=!1){let s=null;try{s=e.classList.contains("header-anchor")?e:document.getElementById(decodeURIComponent(t).slice(1))}catch(r){console.warn(r)}if(s){let r=function(){!n||Math.abs(o-window.scrollY)>window.innerHeight?window.scrollTo(0,o):window.scrollTo({left:0,top:o,behavior:"smooth"})};const i=parseInt(window.getComputedStyle(s).paddingTop,10),o=window.scrollY+s.getBoundingClientRect().top-_f()+i;requestAnimationFrame(r)}}function _s(e){const t=new URL(e,qo);return t.pathname=t.pathname.replace(/(^|\/)index(\.html)?$/,"$1"),vt.value.cleanUrls?t.pathname=t.pathname.replace(/\.html$/,""):!t.pathname.endsWith("/")&&!t.pathname.endsWith(".html")&&(t.pathname+=".html"),t.pathname+t.search+t.hash}const _n=()=>An.forEach(e=>e()),au=Qs({name:"VitePressContent",props:{as:{type:[Object,String],default:"div"}},setup(e){const t=Go(),{frontmatter:n,site:s}=mf();return Ie(n,_n,{deep:!0,flush:"post"}),()=>Hs(e.as,s.value.contentProps??{style:{position:"relative"}},[t.component?Hs(t.component,{onVnodeMounted:_n,onVnodeUpdated:_n,onVnodeUnmounted:_n}):"404 Page Not Found"])}}),fu=(e,t)=>{const n=e.__vccOpts||e;for(const[s,r]of t)n[s]=r;return n},Tf="modulepreload",Ef=function(e){return"/dev/"+e},li={},uu=function(t,n,s){let 
r=Promise.resolve();if(n&&n.length>0){document.getElementsByTagName("link");const o=document.querySelector("meta[property=csp-nonce]"),l=(o==null?void 0:o.nonce)||(o==null?void 0:o.getAttribute("nonce"));r=Promise.allSettled(n.map(c=>{if(c=Ef(c),c in li)return;li[c]=!0;const f=c.endsWith(".css"),a=f?'[rel="stylesheet"]':"";if(document.querySelector(`link[href="${c}"]${a}`))return;const d=document.createElement("link");if(d.rel=f?"stylesheet":Tf,f||(d.as="script"),d.crossOrigin="",d.href=c,l&&d.setAttribute("nonce",l),document.head.appendChild(d),f)return new Promise((m,y)=>{d.addEventListener("load",m),d.addEventListener("error",()=>y(new Error(`Unable to preload CSS for ${c}`)))})}))}function i(o){const l=new Event("vite:preloadError",{cancelable:!0});if(l.payload=o,window.dispatchEvent(l),!l.defaultPrevented)throw o}return r.then(o=>{for(const l of o||[])l.status==="rejected"&&i(l.reason);return t().catch(i)})},du=Qs({setup(e,{slots:t}){const n=le(!1);return Nt(()=>{n.value=!0}),()=>n.value&&t.default?t.default():null}});function hu(){me&&window.addEventListener("click",e=>{var n;const t=e.target;if(t.matches(".vp-code-group input")){const s=(n=t.parentElement)==null?void 0:n.parentElement;if(!s)return;const r=Array.from(s.querySelectorAll("input")).indexOf(t);if(r<0)return;const i=s.querySelector(".blocks");if(!i)return;const o=Array.from(i.children).find(f=>f.classList.contains("active"));if(!o)return;const l=i.children[r];if(!l||o===l)return;o.classList.remove("active"),l.classList.add("active");const c=s==null?void 0:s.querySelector(`label[for="${t.id}"]`);c==null||c.scrollIntoView({block:"nearest"})}})}function pu(){if(me){const e=new WeakMap;window.addEventListener("click",t=>{var s;const n=t.target;if(n.matches('div[class*="language-"] > button.copy')){const r=n.parentElement,i=(s=n.nextElementSibling)==null?void 0:s.nextElementSibling;if(!r||!i)return;const 
o=/language-(shellscript|shell|bash|sh|zsh)/.test(r.className),l=[".vp-copy-ignore",".diff.remove"],c=i.cloneNode(!0);c.querySelectorAll(l.join(",")).forEach(a=>a.remove());let f=c.textContent||"";o&&(f=f.replace(/^ *(\$|>) /gm,"").trim()),xf(f).then(()=>{n.classList.add("copied"),clearTimeout(e.get(n));const a=setTimeout(()=>{n.classList.remove("copied"),n.blur(),e.delete(n)},2e3);e.set(n,a)})}})}}async function xf(e){try{return navigator.clipboard.writeText(e)}catch{const t=document.createElement("textarea"),n=document.activeElement;t.value=e,t.setAttribute("readonly",""),t.style.contain="strict",t.style.position="absolute",t.style.left="-9999px",t.style.fontSize="12pt";const s=document.getSelection(),r=s?s.rangeCount>0&&s.getRangeAt(0):null;document.body.appendChild(t),t.select(),t.selectionStart=0,t.selectionEnd=e.length,document.execCommand("copy"),document.body.removeChild(t),r&&(s.removeAllRanges(),s.addRange(r)),n&&n.focus()}}function gu(e,t){let n=!0,s=[];const r=i=>{if(n){n=!1,i.forEach(l=>{const c=vs(l);for(const f of document.head.children)if(f.isEqualNode(c)){s.push(f);return}});return}const o=i.map(vs);s.forEach((l,c)=>{const f=o.findIndex(a=>a==null?void 0:a.isEqualNode(l??null));f!==-1?delete o[f]:(l==null||l.remove(),delete s[c])}),o.forEach(l=>l&&document.head.appendChild(l)),s=[...s,...o].filter(Boolean)};nr(()=>{const i=e.data,o=t.value,l=i&&i.description,c=i&&i.frontmatter.head||[],f=Bo(o,i);f!==document.title&&(document.title=f);const a=l||o.description;let d=document.querySelector("meta[name=description]");d?d.getAttribute("content")!==a&&d.setAttribute("content",a):vs(["meta",{name:"description",content:a}]),r(Ko(o.head,Af(c)))})}function vs([e,t,n]){const s=document.createElement(e);for(const r in t)s.setAttribute(r,t[r]);return n&&(s.innerHTML=n),e==="script"&&t.async==null&&(s.async=!1),s}function Cf(e){return e[0]==="meta"&&e[1]&&e[1].name==="description"}function Af(e){return e.filter(t=>!Cf(t))}const ws=new 
Set,Xo=()=>document.createElement("link"),Rf=e=>{const t=Xo();t.rel="prefetch",t.href=e,document.head.appendChild(t)},Mf=e=>{const t=new XMLHttpRequest;t.open("GET",e,t.withCredentials=!0),t.send()};let vn;const Of=me&&(vn=Xo())&&vn.relList&&vn.relList.supports&&vn.relList.supports("prefetch")?Rf:Mf;function mu(){if(!me||!window.IntersectionObserver)return;let e;if((e=navigator.connection)&&(e.saveData||/2g/.test(e.effectiveType)))return;const t=window.requestIdleCallback||setTimeout;let n=null;const s=()=>{n&&n.disconnect(),n=new IntersectionObserver(i=>{i.forEach(o=>{if(o.isIntersecting){const l=o.target;n.unobserve(l);const{pathname:c}=l;if(!ws.has(c)){ws.add(c);const f=bf(c);f&&Of(f)}}})}),t(()=>{document.querySelectorAll("#app a").forEach(i=>{const{hostname:o,pathname:l}=new URL(i.href instanceof SVGAnimatedString?i.href.animVal:i.href,i.baseURI),c=l.match(/\.\w+$/);c&&c[0]!==".html"||i.target!=="_blank"&&o===location.hostname&&(l!==location.pathname?n.observe(i):ws.add(l))})})};Nt(s);const r=Go();Ie(()=>r.path,s),Kn(()=>{n&&n.disconnect()})}export{Gi as $,_f as A,Nf as B,Hf as C,Un as D,lu as E,Ee as F,ae as G,Ff as H,Uo as I,Go as J,Wc as K,bt as L,ru as M,Us as N,Zf as O,Wn as P,su as Q,me as R,kn as S,Bf as T,If as U,uu as V,tu as W,Sc as X,Gf as Y,$f as Z,fu as _,To as a,qf as a0,jf as a1,Uf as a2,Lt as a3,$l as a4,Hs as a5,gu as a6,vf as a7,ou as a8,gf as a9,iu as aA,au as aa,du as ab,vt as ac,Yf as ad,cu as ae,bf as af,mu as ag,pu as ah,hu as ai,ce as aj,hs as ak,ir as al,Jf as am,Lo as an,Qf as ao,nu as ap,eu as aq,zf as ar,Sf as as,Ye as at,Pf as au,Kf as av,ue as aw,Lf as ax,Tn as ay,Xf as az,Ns as b,kf as c,Qs as d,Wf as e,pf as f,ri as g,re as h,lf as i,So as j,Js as k,of as l,$o as m,Ws as n,Is as o,le as p,Ie as q,Df as r,nr as s,al as t,mf as u,Nt as v,Gl as w,Kn as x,Vf as y,cc as z}; diff --git a/dev/assets/chunks/framework.I-x9Gl6h.js b/dev/assets/chunks/framework.I-x9Gl6h.js deleted file mode 100644 index 7640a16829..0000000000 --- 
a/dev/assets/chunks/framework.I-x9Gl6h.js +++ /dev/null @@ -1,18 +0,0 @@ -/** -* @vue/shared v3.5.13 -* (c) 2018-present Yuxi (Evan) You and Vue contributors -* @license MIT -**//*! #__NO_SIDE_EFFECTS__ */function Hs(e){const t=Object.create(null);for(const n of e.split(","))t[n]=1;return n=>n in t}const ee={},Rt=[],We=()=>{},zo=()=>!1,tn=e=>e.charCodeAt(0)===111&&e.charCodeAt(1)===110&&(e.charCodeAt(2)>122||e.charCodeAt(2)<97),Ds=e=>e.startsWith("onUpdate:"),pe=Object.assign,$s=(e,t)=>{const n=e.indexOf(t);n>-1&&e.splice(n,1)},Qo=Object.prototype.hasOwnProperty,Q=(e,t)=>Qo.call(e,t),K=Array.isArray,Mt=e=>Fn(e)==="[object Map]",ci=e=>Fn(e)==="[object Set]",q=e=>typeof e=="function",oe=e=>typeof e=="string",Je=e=>typeof e=="symbol",se=e=>e!==null&&typeof e=="object",ai=e=>(se(e)||q(e))&&q(e.then)&&q(e.catch),fi=Object.prototype.toString,Fn=e=>fi.call(e),Zo=e=>Fn(e).slice(8,-1),ui=e=>Fn(e)==="[object Object]",js=e=>oe(e)&&e!=="NaN"&&e[0]!=="-"&&""+parseInt(e,10)===e,Ot=Hs(",key,ref,ref_for,ref_key,onVnodeBeforeMount,onVnodeMounted,onVnodeBeforeUpdate,onVnodeUpdated,onVnodeBeforeUnmount,onVnodeUnmounted"),Hn=e=>{const t=Object.create(null);return n=>t[n]||(t[n]=e(n))},el=/-(\w)/g,Ne=Hn(e=>e.replace(el,(t,n)=>n?n.toUpperCase():"")),tl=/\B([A-Z])/g,it=Hn(e=>e.replace(tl,"-$1").toLowerCase()),Dn=Hn(e=>e.charAt(0).toUpperCase()+e.slice(1)),wn=Hn(e=>e?`on${Dn(e)}`:""),nt=(e,t)=>!Object.is(e,t),Sn=(e,...t)=>{for(let n=0;n{Object.defineProperty(e,t,{configurable:!0,enumerable:!1,writable:s,value:n})},vs=e=>{const t=parseFloat(e);return isNaN(t)?e:t},nl=e=>{const t=oe(e)?Number(e):NaN;return isNaN(t)?e:t};let fr;const $n=()=>fr||(fr=typeof globalThis<"u"?globalThis:typeof self<"u"?self:typeof window<"u"?window:typeof global<"u"?global:{});function Vs(e){if(K(e)){const t={};for(let n=0;n{if(n){const s=n.split(rl);s.length>1&&(t[s[0].trim()]=s[1].trim())}}),t}function ks(e){let t="";if(oe(e))t=e;else if(K(e))for(let 
n=0;n!!(e&&e.__v_isRef===!0),al=e=>oe(e)?e:e==null?"":K(e)||se(e)&&(e.toString===fi||!q(e.toString))?pi(e)?al(e.value):JSON.stringify(e,gi,2):String(e),gi=(e,t)=>pi(t)?gi(e,t.value):Mt(t)?{[`Map(${t.size})`]:[...t.entries()].reduce((n,[s,r],i)=>(n[Qn(s,i)+" =>"]=r,n),{})}:ci(t)?{[`Set(${t.size})`]:[...t.values()].map(n=>Qn(n))}:Je(t)?Qn(t):se(t)&&!K(t)&&!ui(t)?String(t):t,Qn=(e,t="")=>{var n;return Je(e)?`Symbol(${(n=e.description)!=null?n:t})`:e};/** -* @vue/reactivity v3.5.13 -* (c) 2018-present Yuxi (Evan) You and Vue contributors -* @license MIT -**/let Se;class fl{constructor(t=!1){this.detached=t,this._active=!0,this.effects=[],this.cleanups=[],this._isPaused=!1,this.parent=Se,!t&&Se&&(this.index=(Se.scopes||(Se.scopes=[])).push(this)-1)}get active(){return this._active}pause(){if(this._active){this._isPaused=!0;let t,n;if(this.scopes)for(t=0,n=this.scopes.length;t0)return;if(kt){let t=kt;for(kt=void 0;t;){const n=t.next;t.next=void 0,t.flags&=-9,t=n}}let e;for(;Vt;){let t=Vt;for(Vt=void 0;t;){const n=t.next;if(t.next=void 0,t.flags&=-9,t.flags&1)try{t.trigger()}catch(s){e||(e=s)}t=n}}if(e)throw e}function vi(e){for(let t=e.deps;t;t=t.nextDep)t.version=-1,t.prevActiveLink=t.dep.activeLink,t.dep.activeLink=t}function wi(e){let t,n=e.depsTail,s=n;for(;s;){const r=s.prevDep;s.version===-1?(s===n&&(n=r),Bs(s),dl(s)):t=s,s.dep.activeLink=s.prevActiveLink,s.prevActiveLink=void 0,s=r}e.deps=t,e.depsTail=n}function ws(e){for(let t=e.deps;t;t=t.nextDep)if(t.dep.version!==t.version||t.dep.computed&&(Si(t.dep.computed)||t.dep.version!==t.version))return!0;return!!e._dirty}function Si(e){if(e.flags&4&&!(e.flags&16)||(e.flags&=-17,e.globalVersion===Gt))return;e.globalVersion=Gt;const t=e.dep;if(e.flags|=2,t.version>0&&!e.isSSR&&e.deps&&!ws(e)){e.flags&=-3;return}const n=ne,s=He;ne=e,He=!0;try{vi(e);const r=e.fn(e._value);(t.version===0||nt(r,e._value))&&(e._value=r,t.version++)}catch(r){throw t.version++,r}finally{ne=n,He=s,wi(e),e.flags&=-3}}function 
Bs(e,t=!1){const{dep:n,prevSub:s,nextSub:r}=e;if(s&&(s.nextSub=r,e.prevSub=void 0),r&&(r.prevSub=s,e.nextSub=void 0),n.subs===e&&(n.subs=s,!s&&n.computed)){n.computed.flags&=-5;for(let i=n.computed.deps;i;i=i.nextDep)Bs(i,!0)}!t&&!--n.sc&&n.map&&n.map.delete(n.key)}function dl(e){const{prevDep:t,nextDep:n}=e;t&&(t.nextDep=n,e.prevDep=void 0),n&&(n.prevDep=t,e.nextDep=void 0)}let He=!0;const Ei=[];function ot(){Ei.push(He),He=!1}function lt(){const e=Ei.pop();He=e===void 0?!0:e}function ur(e){const{cleanup:t}=e;if(e.cleanup=void 0,t){const n=ne;ne=void 0;try{t()}finally{ne=n}}}let Gt=0;class hl{constructor(t,n){this.sub=t,this.dep=n,this.version=n.version,this.nextDep=this.prevDep=this.nextSub=this.prevSub=this.prevActiveLink=void 0}}class jn{constructor(t){this.computed=t,this.version=0,this.activeLink=void 0,this.subs=void 0,this.map=void 0,this.key=void 0,this.sc=0}track(t){if(!ne||!He||ne===this.computed)return;let n=this.activeLink;if(n===void 0||n.sub!==ne)n=this.activeLink=new hl(ne,this),ne.deps?(n.prevDep=ne.depsTail,ne.depsTail.nextDep=n,ne.depsTail=n):ne.deps=ne.depsTail=n,Ti(n);else if(n.version===-1&&(n.version=this.version,n.nextDep)){const s=n.nextDep;s.prevDep=n.prevDep,n.prevDep&&(n.prevDep.nextDep=s),n.prevDep=ne.depsTail,n.nextDep=void 0,ne.depsTail.nextDep=n,ne.depsTail=n,ne.deps===n&&(ne.deps=s)}return n}trigger(t){this.version++,Gt++,this.notify(t)}notify(t){Us();try{for(let n=this.subs;n;n=n.prevSub)n.sub.notify()&&n.sub.dep.notify()}finally{Ws()}}}function Ti(e){if(e.dep.sc++,e.sub.flags&4){const t=e.dep.computed;if(t&&!e.dep.subs){t.flags|=20;for(let s=t.deps;s;s=s.nextDep)Ti(s)}const n=e.dep.subs;n!==e&&(e.prevSub=n,n&&(n.nextSub=e)),e.dep.subs=e}}const Rn=new WeakMap,pt=Symbol(""),Ss=Symbol(""),Xt=Symbol("");function ye(e,t,n){if(He&&ne){let s=Rn.get(e);s||Rn.set(e,s=new Map);let r=s.get(n);r||(s.set(n,r=new jn),r.map=s,r.key=n),r.track()}}function Ge(e,t,n,s,r,i){const o=Rn.get(e);if(!o){Gt++;return}const 
l=c=>{c&&c.trigger()};if(Us(),t==="clear")o.forEach(l);else{const c=K(e),f=c&&js(n);if(c&&n==="length"){const a=Number(s);o.forEach((d,m)=>{(m==="length"||m===Xt||!Je(m)&&m>=a)&&l(d)})}else switch((n!==void 0||o.has(void 0))&&l(o.get(n)),f&&l(o.get(Xt)),t){case"add":c?f&&l(o.get("length")):(l(o.get(pt)),Mt(e)&&l(o.get(Ss)));break;case"delete":c||(l(o.get(pt)),Mt(e)&&l(o.get(Ss)));break;case"set":Mt(e)&&l(o.get(pt));break}}Ws()}function pl(e,t){const n=Rn.get(e);return n&&n.get(t)}function Et(e){const t=z(e);return t===e?t:(ye(t,"iterate",Xt),Pe(e)?t:t.map(_e))}function Vn(e){return ye(e=z(e),"iterate",Xt),e}const gl={__proto__:null,[Symbol.iterator](){return es(this,Symbol.iterator,_e)},concat(...e){return Et(this).concat(...e.map(t=>K(t)?Et(t):t))},entries(){return es(this,"entries",e=>(e[1]=_e(e[1]),e))},every(e,t){return Be(this,"every",e,t,void 0,arguments)},filter(e,t){return Be(this,"filter",e,t,n=>n.map(_e),arguments)},find(e,t){return Be(this,"find",e,t,_e,arguments)},findIndex(e,t){return Be(this,"findIndex",e,t,void 0,arguments)},findLast(e,t){return Be(this,"findLast",e,t,_e,arguments)},findLastIndex(e,t){return Be(this,"findLastIndex",e,t,void 0,arguments)},forEach(e,t){return Be(this,"forEach",e,t,void 0,arguments)},includes(...e){return ts(this,"includes",e)},indexOf(...e){return ts(this,"indexOf",e)},join(e){return Et(this).join(e)},lastIndexOf(...e){return ts(this,"lastIndexOf",e)},map(e,t){return Be(this,"map",e,t,void 0,arguments)},pop(){return Dt(this,"pop")},push(...e){return Dt(this,"push",e)},reduce(e,...t){return dr(this,"reduce",e,t)},reduceRight(e,...t){return dr(this,"reduceRight",e,t)},shift(){return Dt(this,"shift")},some(e,t){return Be(this,"some",e,t,void 0,arguments)},splice(...e){return Dt(this,"splice",e)},toReversed(){return Et(this).toReversed()},toSorted(e){return Et(this).toSorted(e)},toSpliced(...e){return Et(this).toSpliced(...e)},unshift(...e){return Dt(this,"unshift",e)},values(){return es(this,"values",_e)}};function 
es(e,t,n){const s=Vn(e),r=s[t]();return s!==e&&!Pe(e)&&(r._next=r.next,r.next=()=>{const i=r._next();return i.value&&(i.value=n(i.value)),i}),r}const ml=Array.prototype;function Be(e,t,n,s,r,i){const o=Vn(e),l=o!==e&&!Pe(e),c=o[t];if(c!==ml[t]){const d=c.apply(e,i);return l?_e(d):d}let f=n;o!==e&&(l?f=function(d,m){return n.call(this,_e(d),m,e)}:n.length>2&&(f=function(d,m){return n.call(this,d,m,e)}));const a=c.call(o,f,s);return l&&r?r(a):a}function dr(e,t,n,s){const r=Vn(e);let i=n;return r!==e&&(Pe(e)?n.length>3&&(i=function(o,l,c){return n.call(this,o,l,c,e)}):i=function(o,l,c){return n.call(this,o,_e(l),c,e)}),r[t](i,...s)}function ts(e,t,n){const s=z(e);ye(s,"iterate",Xt);const r=s[t](...n);return(r===-1||r===!1)&&Gs(n[0])?(n[0]=z(n[0]),s[t](...n)):r}function Dt(e,t,n=[]){ot(),Us();const s=z(e)[t].apply(e,n);return Ws(),lt(),s}const yl=Hs("__proto__,__v_isRef,__isVue"),xi=new Set(Object.getOwnPropertyNames(Symbol).filter(e=>e!=="arguments"&&e!=="caller").map(e=>Symbol[e]).filter(Je));function _l(e){Je(e)||(e=String(e));const t=z(this);return ye(t,"has",e),t.hasOwnProperty(e)}class Ci{constructor(t=!1,n=!1){this._isReadonly=t,this._isShallow=n}get(t,n,s){if(n==="__v_skip")return t.__v_skip;const r=this._isReadonly,i=this._isShallow;if(n==="__v_isReactive")return!r;if(n==="__v_isReadonly")return r;if(n==="__v_isShallow")return i;if(n==="__v_raw")return s===(r?i?Rl:Oi:i?Mi:Ri).get(t)||Object.getPrototypeOf(t)===Object.getPrototypeOf(s)?t:void 0;const o=K(t);if(!r){let c;if(o&&(c=gl[n]))return c;if(n==="hasOwnProperty")return _l}const l=Reflect.get(t,n,fe(t)?t:s);return(Je(n)?xi.has(n):yl(n))||(r||ye(t,"get",n),i)?l:fe(l)?o&&js(n)?l:l.value:se(l)?r?kn(l):Pt(l):l}}class Ai extends Ci{constructor(t=!1){super(!1,t)}set(t,n,s,r){let i=t[n];if(!this._isShallow){const c=wt(i);if(!Pe(s)&&!wt(s)&&(i=z(i),s=z(s)),!K(t)&&fe(i)&&!fe(s))return c?!1:(i.value=s,!0)}const o=K(t)&&js(n)?Number(n)e,fn=e=>Reflect.getPrototypeOf(e);function El(e,t,n){return function(...s){const 
r=this.__v_raw,i=z(r),o=Mt(i),l=e==="entries"||e===Symbol.iterator&&o,c=e==="keys"&&o,f=r[e](...s),a=n?Es:t?Ts:_e;return!t&&ye(i,"iterate",c?Ss:pt),{next(){const{value:d,done:m}=f.next();return m?{value:d,done:m}:{value:l?[a(d[0]),a(d[1])]:a(d),done:m}},[Symbol.iterator](){return this}}}}function un(e){return function(...t){return e==="delete"?!1:e==="clear"?void 0:this}}function Tl(e,t){const n={get(r){const i=this.__v_raw,o=z(i),l=z(r);e||(nt(r,l)&&ye(o,"get",r),ye(o,"get",l));const{has:c}=fn(o),f=t?Es:e?Ts:_e;if(c.call(o,r))return f(i.get(r));if(c.call(o,l))return f(i.get(l));i!==o&&i.get(r)},get size(){const r=this.__v_raw;return!e&&ye(z(r),"iterate",pt),Reflect.get(r,"size",r)},has(r){const i=this.__v_raw,o=z(i),l=z(r);return e||(nt(r,l)&&ye(o,"has",r),ye(o,"has",l)),r===l?i.has(r):i.has(r)||i.has(l)},forEach(r,i){const o=this,l=o.__v_raw,c=z(l),f=t?Es:e?Ts:_e;return!e&&ye(c,"iterate",pt),l.forEach((a,d)=>r.call(i,f(a),f(d),o))}};return pe(n,e?{add:un("add"),set:un("set"),delete:un("delete"),clear:un("clear")}:{add(r){!t&&!Pe(r)&&!wt(r)&&(r=z(r));const i=z(this);return fn(i).has.call(i,r)||(i.add(r),Ge(i,"add",r,r)),this},set(r,i){!t&&!Pe(i)&&!wt(i)&&(i=z(i));const o=z(this),{has:l,get:c}=fn(o);let f=l.call(o,r);f||(r=z(r),f=l.call(o,r));const a=c.call(o,r);return o.set(r,i),f?nt(i,a)&&Ge(o,"set",r,i):Ge(o,"add",r,i),this},delete(r){const i=z(this),{has:o,get:l}=fn(i);let c=o.call(i,r);c||(r=z(r),c=o.call(i,r)),l&&l.call(i,r);const f=i.delete(r);return c&&Ge(i,"delete",r,void 0),f},clear(){const r=z(this),i=r.size!==0,o=r.clear();return i&&Ge(r,"clear",void 0,void 0),o}}),["keys","values","entries",Symbol.iterator].forEach(r=>{n[r]=El(r,e,t)}),n}function Ks(e,t){const n=Tl(e,t);return(s,r,i)=>r==="__v_isReactive"?!e:r==="__v_isReadonly"?e:r==="__v_raw"?s:Reflect.get(Q(n,r)&&r in s?n:s,r,i)}const xl={get:Ks(!1,!1)},Cl={get:Ks(!1,!0)},Al={get:Ks(!0,!1)};const Ri=new WeakMap,Mi=new WeakMap,Oi=new WeakMap,Rl=new WeakMap;function 
Ml(e){switch(e){case"Object":case"Array":return 1;case"Map":case"Set":case"WeakMap":case"WeakSet":return 2;default:return 0}}function Ol(e){return e.__v_skip||!Object.isExtensible(e)?0:Ml(Zo(e))}function Pt(e){return wt(e)?e:qs(e,!1,vl,xl,Ri)}function Ll(e){return qs(e,!1,Sl,Cl,Mi)}function kn(e){return qs(e,!0,wl,Al,Oi)}function qs(e,t,n,s,r){if(!se(e)||e.__v_raw&&!(t&&e.__v_isReactive))return e;const i=r.get(e);if(i)return i;const o=Ol(e);if(o===0)return e;const l=new Proxy(e,o===2?s:n);return r.set(e,l),l}function gt(e){return wt(e)?gt(e.__v_raw):!!(e&&e.__v_isReactive)}function wt(e){return!!(e&&e.__v_isReadonly)}function Pe(e){return!!(e&&e.__v_isShallow)}function Gs(e){return e?!!e.__v_raw:!1}function z(e){const t=e&&e.__v_raw;return t?z(t):e}function En(e){return!Q(e,"__v_skip")&&Object.isExtensible(e)&&di(e,"__v_skip",!0),e}const _e=e=>se(e)?Pt(e):e,Ts=e=>se(e)?kn(e):e;function fe(e){return e?e.__v_isRef===!0:!1}function le(e){return Li(e,!1)}function Xs(e){return Li(e,!0)}function Li(e,t){return fe(e)?e:new Pl(e,t)}class Pl{constructor(t,n){this.dep=new jn,this.__v_isRef=!0,this.__v_isShallow=!1,this._rawValue=n?t:z(t),this._value=n?t:_e(t),this.__v_isShallow=n}get value(){return this.dep.track(),this._value}set value(t){const n=this._rawValue,s=this.__v_isShallow||Pe(t)||wt(t);t=s?t:z(t),nt(t,n)&&(this._rawValue=t,this._value=s?t:_e(t),this.dep.trigger())}}function Pi(e){return fe(e)?e.value:e}function ue(e){return q(e)?e():Pi(e)}const Il={get:(e,t,n)=>t==="__v_raw"?e:Pi(Reflect.get(e,t,n)),set:(e,t,n,s)=>{const r=e[t];return fe(r)&&!fe(n)?(r.value=n,!0):Reflect.set(e,t,n,s)}};function Ii(e){return gt(e)?e:new Proxy(e,Il)}class Nl{constructor(t){this.__v_isRef=!0,this._value=void 0;const n=this.dep=new jn,{get:s,set:r}=t(n.track.bind(n),n.trigger.bind(n));this._get=s,this._set=r}get value(){return this._value=this._get()}set value(t){this._set(t)}}function Fl(e){return new Nl(e)}class 
Hl{constructor(t,n,s){this._object=t,this._key=n,this._defaultValue=s,this.__v_isRef=!0,this._value=void 0}get value(){const t=this._object[this._key];return this._value=t===void 0?this._defaultValue:t}set value(t){this._object[this._key]=t}get dep(){return pl(z(this._object),this._key)}}class Dl{constructor(t){this._getter=t,this.__v_isRef=!0,this.__v_isReadonly=!0,this._value=void 0}get value(){return this._value=this._getter()}}function $l(e,t,n){return fe(e)?e:q(e)?new Dl(e):se(e)&&arguments.length>1?jl(e,t,n):le(e)}function jl(e,t,n){const s=e[t];return fe(s)?s:new Hl(e,t,n)}class Vl{constructor(t,n,s){this.fn=t,this.setter=n,this._value=void 0,this.dep=new jn(this),this.__v_isRef=!0,this.deps=void 0,this.depsTail=void 0,this.flags=16,this.globalVersion=Gt-1,this.next=void 0,this.effect=this,this.__v_isReadonly=!n,this.isSSR=s}notify(){if(this.flags|=16,!(this.flags&8)&&ne!==this)return bi(this,!0),!0}get value(){const t=this.dep.track();return Si(this),t&&(t.version=this.dep.version),this._value}set value(t){this.setter&&this.setter(t)}}function kl(e,t,n=!1){let s,r;return q(e)?s=e:(s=e.get,r=e.set),new Vl(s,r,n)}const dn={},Mn=new WeakMap;let dt;function Ul(e,t=!1,n=dt){if(n){let s=Mn.get(n);s||Mn.set(n,s=[]),s.push(e)}}function Wl(e,t,n=ee){const{immediate:s,deep:r,once:i,scheduler:o,augmentJob:l,call:c}=n,f=g=>r?g:Pe(g)||r===!1||r===0?Xe(g,1):Xe(g);let a,d,m,y,v=!1,b=!1;if(fe(e)?(d=()=>e.value,v=Pe(e)):gt(e)?(d=()=>f(e),v=!0):K(e)?(b=!0,v=e.some(g=>gt(g)||Pe(g)),d=()=>e.map(g=>{if(fe(g))return g.value;if(gt(g))return f(g);if(q(g))return c?c(g,2):g()})):q(e)?t?d=c?()=>c(e,2):e:d=()=>{if(m){ot();try{m()}finally{lt()}}const g=dt;dt=a;try{return c?c(e,3,[y]):e(y)}finally{dt=g}}:d=We,t&&r){const g=d,O=r===!0?1/0:r;d=()=>Xe(g(),O)}const D=mi(),L=()=>{a.stop(),D&&D.active&&$s(D.effects,a)};if(i&&t){const g=t;t=(...O)=>{g(...O),L()}}let $=b?new Array(e.length).fill(dn):dn;const p=g=>{if(!(!(a.flags&1)||!a.dirty&&!g))if(t){const 
O=a.run();if(r||v||(b?O.some((j,R)=>nt(j,$[R])):nt(O,$))){m&&m();const j=dt;dt=a;try{const R=[O,$===dn?void 0:b&&$[0]===dn?[]:$,y];c?c(t,3,R):t(...R),$=O}finally{dt=j}}}else a.run()};return l&&l(p),a=new yi(d),a.scheduler=o?()=>o(p,!1):p,y=g=>Ul(g,!1,a),m=a.onStop=()=>{const g=Mn.get(a);if(g){if(c)c(g,4);else for(const O of g)O();Mn.delete(a)}},t?s?p(!0):$=a.run():o?o(p.bind(null,!0),!0):a.run(),L.pause=a.pause.bind(a),L.resume=a.resume.bind(a),L.stop=L,L}function Xe(e,t=1/0,n){if(t<=0||!se(e)||e.__v_skip||(n=n||new Set,n.has(e)))return e;if(n.add(e),t--,fe(e))Xe(e.value,t,n);else if(K(e))for(let s=0;s{Xe(s,t,n)});else if(ui(e)){for(const s in e)Xe(e[s],t,n);for(const s of Object.getOwnPropertySymbols(e))Object.prototype.propertyIsEnumerable.call(e,s)&&Xe(e[s],t,n)}return e}/** -* @vue/runtime-core v3.5.13 -* (c) 2018-present Yuxi (Evan) You and Vue contributors -* @license MIT -**/function nn(e,t,n,s){try{return s?e(...s):e()}catch(r){sn(r,t,n)}}function De(e,t,n,s){if(q(e)){const r=nn(e,t,n,s);return r&&ai(r)&&r.catch(i=>{sn(i,t,n)}),r}if(K(e)){const r=[];for(let i=0;i>>1,r=Ee[s],i=Yt(r);i=Yt(n)?Ee.push(e):Ee.splice(Kl(t),0,e),e.flags|=1,Fi()}}function Fi(){On||(On=Ni.then(Hi))}function ql(e){K(e)?Lt.push(...e):Ze&&e.id===-1?Ze.splice(xt+1,0,e):e.flags&1||(Lt.push(e),e.flags|=1),Fi()}function hr(e,t,n=ke+1){for(;nYt(n)-Yt(s));if(Lt.length=0,Ze){Ze.push(...t);return}for(Ze=t,xt=0;xte.id==null?e.flags&2?-1:1/0:e.id;function Hi(e){try{for(ke=0;ke{s._d&&Rr(-1);const i=Pn(t);let o;try{o=e(...r)}finally{Pn(i),s._d&&Rr(1)}return o};return s._n=!0,s._c=!0,s._d=!0,s}function Of(e,t){if(he===null)return e;const n=Gn(he),s=e.dirs||(e.dirs=[]);for(let r=0;re.__isTeleport,Ut=e=>e&&(e.disabled||e.disabled===""),pr=e=>e&&(e.defer||e.defer===""),gr=e=>typeof SVGElement<"u"&&e instanceof SVGElement,mr=e=>typeof MathMLElement=="function"&&e instanceof MathMLElement,xs=(e,t)=>{const n=e&&e.to;return 
oe(n)?t?t(n):null:n},Vi={name:"Teleport",__isTeleport:!0,process(e,t,n,s,r,i,o,l,c,f){const{mc:a,pc:d,pbc:m,o:{insert:y,querySelector:v,createText:b,createComment:D}}=f,L=Ut(t.props);let{shapeFlag:$,children:p,dynamicChildren:g}=t;if(e==null){const O=t.el=b(""),j=t.anchor=b("");y(O,n,s),y(j,n,s);const R=(E,M)=>{$&16&&(r&&r.isCE&&(r.ce._teleportTarget=E),a(p,E,M,r,i,o,l,c))},V=()=>{const E=t.target=xs(t.props,v),M=ki(E,t,b,y);E&&(o!=="svg"&&gr(E)?o="svg":o!=="mathml"&&mr(E)&&(o="mathml"),L||(R(E,M),Tn(t,!1)))};L&&(R(n,j),Tn(t,!0)),pr(t.props)?we(()=>{V(),t.el.__isMounted=!0},i):V()}else{if(pr(t.props)&&!e.el.__isMounted){we(()=>{Vi.process(e,t,n,s,r,i,o,l,c,f),delete e.el.__isMounted},i);return}t.el=e.el,t.targetStart=e.targetStart;const O=t.anchor=e.anchor,j=t.target=e.target,R=t.targetAnchor=e.targetAnchor,V=Ut(e.props),E=V?n:j,M=V?O:R;if(o==="svg"||gr(j)?o="svg":(o==="mathml"||mr(j))&&(o="mathml"),g?(m(e.dynamicChildren,g,E,r,i,o,l),Zs(e,t,!0)):c||d(e,t,E,M,r,i,o,l,!1),L)V?t.props&&e.props&&t.props.to!==e.props.to&&(t.props.to=e.props.to):hn(t,n,O,f,1);else if((t.props&&t.props.to)!==(e.props&&e.props.to)){const A=t.target=xs(t.props,v);A&&hn(t,A,null,f,0)}else V&&hn(t,j,R,f,1);Tn(t,L)}},remove(e,t,n,{um:s,o:{remove:r}},i){const{shapeFlag:o,children:l,anchor:c,targetStart:f,targetAnchor:a,target:d,props:m}=e;if(d&&(r(f),r(a)),i&&r(c),o&16){const y=i||!Ut(m);for(let v=0;v{e.isMounted=!0}),Xi(()=>{e.isUnmounting=!0}),e}const Me=[Function,Array],Ui={mode:String,appear:Boolean,persisted:Boolean,onBeforeEnter:Me,onEnter:Me,onAfterEnter:Me,onEnterCancelled:Me,onBeforeLeave:Me,onLeave:Me,onAfterLeave:Me,onLeaveCancelled:Me,onBeforeAppear:Me,onAppear:Me,onAfterAppear:Me,onAppearCancelled:Me},Wi=e=>{const t=e.subTree;return t.component?Wi(t.component):t},Jl={name:"BaseTransition",props:Ui,setup(e,{slots:t}){const n=on(),s=Yl();return()=>{const r=t.default&&qi(t.default(),!0);if(!r||!r.length)return;const i=Bi(r),o=z(e),{mode:l}=o;if(s.isLeaving)return ns(i);const 
c=yr(i);if(!c)return ns(i);let f=Cs(c,o,s,n,d=>f=d);c.type!==be&&Jt(c,f);let a=n.subTree&&yr(n.subTree);if(a&&a.type!==be&&!ht(c,a)&&Wi(n).type!==be){let d=Cs(a,o,s,n);if(Jt(a,d),l==="out-in"&&c.type!==be)return s.isLeaving=!0,d.afterLeave=()=>{s.isLeaving=!1,n.job.flags&8||n.update(),delete d.afterLeave,a=void 0},ns(i);l==="in-out"&&c.type!==be?d.delayLeave=(m,y,v)=>{const b=Ki(s,a);b[String(a.key)]=a,m[et]=()=>{y(),m[et]=void 0,delete f.delayedLeave,a=void 0},f.delayedLeave=()=>{v(),delete f.delayedLeave,a=void 0}}:a=void 0}else a&&(a=void 0);return i}}};function Bi(e){let t=e[0];if(e.length>1){for(const n of e)if(n.type!==be){t=n;break}}return t}const zl=Jl;function Ki(e,t){const{leavingVNodes:n}=e;let s=n.get(t.type);return s||(s=Object.create(null),n.set(t.type,s)),s}function Cs(e,t,n,s,r){const{appear:i,mode:o,persisted:l=!1,onBeforeEnter:c,onEnter:f,onAfterEnter:a,onEnterCancelled:d,onBeforeLeave:m,onLeave:y,onAfterLeave:v,onLeaveCancelled:b,onBeforeAppear:D,onAppear:L,onAfterAppear:$,onAppearCancelled:p}=t,g=String(e.key),O=Ki(n,e),j=(E,M)=>{E&&De(E,s,9,M)},R=(E,M)=>{const A=M[1];j(E,M),K(E)?E.every(w=>w.length<=1)&&A():E.length<=1&&A()},V={mode:o,persisted:l,beforeEnter(E){let M=c;if(!n.isMounted)if(i)M=D||c;else return;E[et]&&E[et](!0);const A=O[g];A&&ht(e,A)&&A.el[et]&&A.el[et](),j(M,[E])},enter(E){let M=f,A=a,w=d;if(!n.isMounted)if(i)M=L||f,A=$||a,w=p||d;else return;let F=!1;const Y=E[pn]=re=>{F||(F=!0,re?j(w,[E]):j(A,[E]),V.delayedLeave&&V.delayedLeave(),E[pn]=void 0)};M?R(M,[E,Y]):Y()},leave(E,M){const A=String(e.key);if(E[pn]&&E[pn](!0),n.isUnmounting)return M();j(m,[E]);let w=!1;const F=E[et]=Y=>{w||(w=!0,M(),Y?j(b,[E]):j(v,[E]),E[et]=void 0,O[A]===e&&delete O[A])};O[A]=e,y?R(y,[E,F]):F()},clone(E){const M=Cs(E,t,n,s,r);return r&&r(M),M}};return V}function ns(e){if(rn(e))return e=st(e),e.children=null,e}function yr(e){if(!rn(e))return ji(e.type)&&e.children?Bi(e.children):e;const{shapeFlag:t,children:n}=e;if(n){if(t&16)return 
n[0];if(t&32&&q(n.default))return n.default()}}function Jt(e,t){e.shapeFlag&6&&e.component?(e.transition=t,Jt(e.component.subTree,t)):e.shapeFlag&128?(e.ssContent.transition=t.clone(e.ssContent),e.ssFallback.transition=t.clone(e.ssFallback)):e.transition=t}function qi(e,t=!1,n){let s=[],r=0;for(let i=0;i1)for(let i=0;izt(v,t&&(K(t)?t[b]:t),n,s,r));return}if(mt(s)&&!r){s.shapeFlag&512&&s.type.__asyncResolved&&s.component.subTree.component&&zt(e,t,n,s.component.subTree);return}const i=s.shapeFlag&4?Gn(s.component):s.el,o=r?null:i,{i:l,r:c}=e,f=t&&t.r,a=l.refs===ee?l.refs={}:l.refs,d=l.setupState,m=z(d),y=d===ee?()=>!1:v=>Q(m,v);if(f!=null&&f!==c&&(oe(f)?(a[f]=null,y(f)&&(d[f]=null)):fe(f)&&(f.value=null)),q(c))nn(c,l,12,[o,a]);else{const v=oe(c),b=fe(c);if(v||b){const D=()=>{if(e.f){const L=v?y(c)?d[c]:a[c]:c.value;r?K(L)&&$s(L,i):K(L)?L.includes(i)||L.push(i):v?(a[c]=[i],y(c)&&(d[c]=a[c])):(c.value=[i],e.k&&(a[e.k]=c.value))}else v?(a[c]=o,y(c)&&(d[c]=o)):b&&(c.value=o,e.k&&(a[e.k]=o))};o?(D.id=-1,we(D,n)):D()}}}let _r=!1;const Tt=()=>{_r||(console.error("Hydration completed but contains mismatches."),_r=!0)},Ql=e=>e.namespaceURI.includes("svg")&&e.tagName!=="foreignObject",Zl=e=>e.namespaceURI.includes("MathML"),gn=e=>{if(e.nodeType===1){if(Ql(e))return"svg";if(Zl(e))return"mathml"}},At=e=>e.nodeType===8;function ec(e){const{mt:t,p:n,o:{patchProp:s,createText:r,nextSibling:i,parentNode:o,remove:l,insert:c,createComment:f}}=e,a=(p,g)=>{if(!g.hasChildNodes()){n(null,p,g),Ln(),g._vnode=p;return}d(g.firstChild,p,null,null,null),Ln(),g._vnode=p},d=(p,g,O,j,R,V=!1)=>{V=V||!!g.dynamicChildren;const E=At(p)&&p.data==="[",M=()=>b(p,g,O,j,R,E),{type:A,ref:w,shapeFlag:F,patchFlag:Y}=g;let re=p.nodeType;g.el=p,Y===-2&&(V=!1,g.dynamicChildren=null);let U=null;switch(A){case bt:re!==3?g.children===""?(c(g.el=r(""),o(p),p),U=p):U=M():(p.data!==g.children&&(Tt(),p.data=g.children),U=i(p));break;case 
be:$(p)?(U=i(p),L(g.el=p.content.firstChild,p,O)):re!==8||E?U=M():U=i(p);break;case Bt:if(E&&(p=i(p),re=p.nodeType),re===1||re===3){U=p;const X=!g.children.length;for(let k=0;k{V=V||!!g.dynamicChildren;const{type:E,props:M,patchFlag:A,shapeFlag:w,dirs:F,transition:Y}=g,re=E==="input"||E==="option";if(re||A!==-1){F&&Ue(g,null,O,"created");let U=!1;if($(p)){U=po(null,Y)&&O&&O.vnode.props&&O.vnode.props.appear;const k=p.content.firstChild;U&&Y.beforeEnter(k),L(k,p,O),g.el=p=k}if(w&16&&!(M&&(M.innerHTML||M.textContent))){let k=y(p.firstChild,g,p,O,j,R,V);for(;k;){mn(p,1)||Tt();const ae=k;k=k.nextSibling,l(ae)}}else if(w&8){let k=g.children;k[0]===` -`&&(p.tagName==="PRE"||p.tagName==="TEXTAREA")&&(k=k.slice(1)),p.textContent!==k&&(mn(p,0)||Tt(),p.textContent=g.children)}if(M){if(re||!V||A&48){const k=p.tagName.includes("-");for(const ae in M)(re&&(ae.endsWith("value")||ae==="indeterminate")||tn(ae)&&!Ot(ae)||ae[0]==="."||k)&&s(p,ae,null,M[ae],void 0,O)}else if(M.onClick)s(p,"onClick",null,M.onClick,void 0,O);else if(A&4&>(M.style))for(const k in M.style)M.style[k]}let X;(X=M&&M.onVnodeBeforeMount)&&Oe(X,O,g),F&&Ue(g,null,O,"beforeMount"),((X=M&&M.onVnodeMounted)||F||U)&&vo(()=>{X&&Oe(X,O,g),U&&Y.enter(p),F&&Ue(g,null,O,"mounted")},j)}return p.nextSibling},y=(p,g,O,j,R,V,E)=>{E=E||!!g.dynamicChildren;const M=g.children,A=M.length;for(let w=0;w{const{slotScopeIds:E}=g;E&&(R=R?R.concat(E):E);const M=o(p),A=y(i(p),g,M,O,j,R,V);return A&&At(A)&&A.data==="]"?i(g.anchor=A):(Tt(),c(g.anchor=f("]"),M,A),A)},b=(p,g,O,j,R,V)=>{if(mn(p.parentElement,1)||Tt(),g.el=null,V){const A=D(p);for(;;){const w=i(p);if(w&&w!==A)l(w);else break}}const E=i(p),M=o(p);return l(p),n(null,g,M,E,O,j,gn(M),R),O&&(O.vnode.el=g.el,_o(O,g.el)),E},D=(p,g="[",O="]")=>{let j=0;for(;p;)if(p=i(p),p&&At(p)&&(p.data===g&&j++,p.data===O)){if(j===0)return i(p);j--}return p},L=(p,g,O)=>{const j=g.parentNode;j&&j.replaceChild(p,g);let 
R=O;for(;R;)R.vnode.el===g&&(R.vnode.el=R.subTree.el=p),R=R.parent},$=p=>p.nodeType===1&&p.tagName==="TEMPLATE";return[a,d]}const br="data-allow-mismatch",tc={0:"text",1:"children",2:"class",3:"style",4:"attribute"};function mn(e,t){if(t===0||t===1)for(;e&&!e.hasAttribute(br);)e=e.parentElement;const n=e&&e.getAttribute(br);if(n==null)return!1;if(n==="")return!0;{const s=n.split(",");return t===0&&s.includes("children")?!0:n.split(",").includes(tc[t])}}$n().requestIdleCallback;$n().cancelIdleCallback;function nc(e,t){if(At(e)&&e.data==="["){let n=1,s=e.nextSibling;for(;s;){if(s.nodeType===1){if(t(s)===!1)break}else if(At(s))if(s.data==="]"){if(--n===0)break}else s.data==="["&&n++;s=s.nextSibling}}else t(e)}const mt=e=>!!e.type.__asyncLoader;/*! #__NO_SIDE_EFFECTS__ */function Pf(e){q(e)&&(e={loader:e});const{loader:t,loadingComponent:n,errorComponent:s,delay:r=200,hydrate:i,timeout:o,suspensible:l=!0,onError:c}=e;let f=null,a,d=0;const m=()=>(d++,f=null,y()),y=()=>{let v;return f||(v=f=t().catch(b=>{if(b=b instanceof Error?b:new Error(String(b)),c)return new Promise((D,L)=>{c(b,()=>D(m()),()=>L(b),d+1)});throw b}).then(b=>v!==f&&f?f:(b&&(b.__esModule||b[Symbol.toStringTag]==="Module")&&(b=b.default),a=b,b)))};return Js({name:"AsyncComponentWrapper",__asyncLoader:y,__asyncHydrate(v,b,D){const L=i?()=>{const $=i(D,p=>nc(v,p));$&&(b.bum||(b.bum=[])).push($)}:D;a?L():y().then(()=>!b.isUnmounted&&L())},get __asyncResolved(){return a},setup(){const v=de;if(zs(v),a)return()=>ss(a,v);const b=p=>{f=null,sn(p,v,13,!s)};if(l&&v.suspense||It)return y().then(p=>()=>ss(p,v)).catch(p=>(b(p),()=>s?ce(s,{error:p}):null));const D=le(!1),L=le(),$=le(!!r);return r&&setTimeout(()=>{$.value=!1},r),o!=null&&setTimeout(()=>{if(!D.value&&!L.value){const p=new Error(`Async component timed out after ${o}ms.`);b(p),L.value=p}},o),y().then(()=>{D.value=!0,v.parent&&rn(v.parent.vnode)&&v.parent.update()}).catch(p=>{b(p),L.value=p}),()=>{if(D.value&&a)return ss(a,v);if(L.value&&s)return 
ce(s,{error:L.value});if(n&&!$.value)return ce(n)}}})}function ss(e,t){const{ref:n,props:s,children:r,ce:i}=t.vnode,o=ce(e,s,r);return o.ref=n,o.ce=i,delete t.vnode.ce,o}const rn=e=>e.type.__isKeepAlive;function sc(e,t){Gi(e,"a",t)}function rc(e,t){Gi(e,"da",t)}function Gi(e,t,n=de){const s=e.__wdc||(e.__wdc=()=>{let r=n;for(;r;){if(r.isDeactivated)return;r=r.parent}return e()});if(Wn(t,s,n),n){let r=n.parent;for(;r&&r.parent;)rn(r.parent.vnode)&&ic(s,t,n,r),r=r.parent}}function ic(e,t,n,s){const r=Wn(t,e,s,!0);Bn(()=>{$s(s[t],r)},n)}function Wn(e,t,n=de,s=!1){if(n){const r=n[e]||(n[e]=[]),i=t.__weh||(t.__weh=(...o)=>{ot();const l=ln(n),c=De(t,n,e,o);return l(),lt(),c});return s?r.unshift(i):r.push(i),i}}const ze=e=>(t,n=de)=>{(!It||e==="sp")&&Wn(e,(...s)=>t(...s),n)},oc=ze("bm"),Nt=ze("m"),lc=ze("bu"),cc=ze("u"),Xi=ze("bum"),Bn=ze("um"),ac=ze("sp"),fc=ze("rtg"),uc=ze("rtc");function dc(e,t=de){Wn("ec",e,t)}const Yi="components";function If(e,t){return zi(Yi,e,!0,t)||e}const Ji=Symbol.for("v-ndc");function Nf(e){return oe(e)?zi(Yi,e,!1)||e:e||Ji}function zi(e,t,n=!0,s=!1){const r=he||de;if(r){const i=r.type;{const l=Jc(i,!1);if(l&&(l===t||l===Ne(t)||l===Dn(Ne(t))))return i}const o=vr(r[e]||i[e],t)||vr(r.appContext[e],t);return!o&&s?i:o}}function vr(e,t){return e&&(e[t]||e[Ne(t)]||e[Dn(Ne(t))])}function Ff(e,t,n,s){let r;const i=n,o=K(e);if(o||oe(e)){const l=o&>(e);let c=!1;l&&(c=!Pe(e),e=Vn(e)),r=new Array(e.length);for(let f=0,a=e.length;ft(l,c,void 0,i));else{const l=Object.keys(e);r=new Array(l.length);for(let c=0,f=l.length;cZt(t)?!(t.type===be||t.type===Te&&!Qi(t.children)):!0)?e:null}function Df(e,t){const n={};for(const s in e)n[/[A-Z]/.test(s)?`on:${s}`:wn(s)]=e[s];return n}const 
As=e=>e?xo(e)?Gn(e):As(e.parent):null,Wt=pe(Object.create(null),{$:e=>e,$el:e=>e.vnode.el,$data:e=>e.data,$props:e=>e.props,$attrs:e=>e.attrs,$slots:e=>e.slots,$refs:e=>e.refs,$parent:e=>As(e.parent),$root:e=>As(e.root),$host:e=>e.ce,$emit:e=>e.emit,$options:e=>eo(e),$forceUpdate:e=>e.f||(e.f=()=>{Ys(e.update)}),$nextTick:e=>e.n||(e.n=Un.bind(e.proxy)),$watch:e=>Nc.bind(e)}),rs=(e,t)=>e!==ee&&!e.__isScriptSetup&&Q(e,t),hc={get({_:e},t){if(t==="__v_skip")return!0;const{ctx:n,setupState:s,data:r,props:i,accessCache:o,type:l,appContext:c}=e;let f;if(t[0]!=="$"){const y=o[t];if(y!==void 0)switch(y){case 1:return s[t];case 2:return r[t];case 4:return n[t];case 3:return i[t]}else{if(rs(s,t))return o[t]=1,s[t];if(r!==ee&&Q(r,t))return o[t]=2,r[t];if((f=e.propsOptions[0])&&Q(f,t))return o[t]=3,i[t];if(n!==ee&&Q(n,t))return o[t]=4,n[t];Rs&&(o[t]=0)}}const a=Wt[t];let d,m;if(a)return t==="$attrs"&&ye(e.attrs,"get",""),a(e);if((d=l.__cssModules)&&(d=d[t]))return d;if(n!==ee&&Q(n,t))return o[t]=4,n[t];if(m=c.config.globalProperties,Q(m,t))return m[t]},set({_:e},t,n){const{data:s,setupState:r,ctx:i}=e;return rs(r,t)?(r[t]=n,!0):s!==ee&&Q(s,t)?(s[t]=n,!0):Q(e.props,t)||t[0]==="$"&&t.slice(1)in e?!1:(i[t]=n,!0)},has({_:{data:e,setupState:t,accessCache:n,ctx:s,appContext:r,propsOptions:i}},o){let l;return!!n[o]||e!==ee&&Q(e,o)||rs(t,o)||(l=i[0])&&Q(l,o)||Q(s,o)||Q(Wt,o)||Q(r.config.globalProperties,o)},defineProperty(e,t,n){return n.get!=null?e._.accessCache[t]=0:Q(n,"value")&&this.set(e,t,n.value,null),Reflect.defineProperty(e,t,n)}};function $f(){return pc().slots}function pc(){const e=on();return e.setupContext||(e.setupContext=Ao(e))}function wr(e){return K(e)?e.reduce((t,n)=>(t[n]=null,t),{}):e}let Rs=!0;function gc(e){const 
t=eo(e),n=e.proxy,s=e.ctx;Rs=!1,t.beforeCreate&&Sr(t.beforeCreate,e,"bc");const{data:r,computed:i,methods:o,watch:l,provide:c,inject:f,created:a,beforeMount:d,mounted:m,beforeUpdate:y,updated:v,activated:b,deactivated:D,beforeDestroy:L,beforeUnmount:$,destroyed:p,unmounted:g,render:O,renderTracked:j,renderTriggered:R,errorCaptured:V,serverPrefetch:E,expose:M,inheritAttrs:A,components:w,directives:F,filters:Y}=t;if(f&&mc(f,s,null),o)for(const X in o){const k=o[X];q(k)&&(s[X]=k.bind(n))}if(r){const X=r.call(n,n);se(X)&&(e.data=Pt(X))}if(Rs=!0,i)for(const X in i){const k=i[X],ae=q(k)?k.bind(n,n):q(k.get)?k.get.bind(n,n):We,cn=!q(k)&&q(k.set)?k.set.bind(n):We,ct=ie({get:ae,set:cn});Object.defineProperty(s,X,{enumerable:!0,configurable:!0,get:()=>ct.value,set:je=>ct.value=je})}if(l)for(const X in l)Zi(l[X],s,n,X);if(c){const X=q(c)?c.call(n):c;Reflect.ownKeys(X).forEach(k=>{Sc(k,X[k])})}a&&Sr(a,e,"c");function U(X,k){K(k)?k.forEach(ae=>X(ae.bind(n))):k&&X(k.bind(n))}if(U(oc,d),U(Nt,m),U(lc,y),U(cc,v),U(sc,b),U(rc,D),U(dc,V),U(uc,j),U(fc,R),U(Xi,$),U(Bn,g),U(ac,E),K(M))if(M.length){const X=e.exposed||(e.exposed={});M.forEach(k=>{Object.defineProperty(X,k,{get:()=>n[k],set:ae=>n[k]=ae})})}else e.exposed||(e.exposed={});O&&e.render===We&&(e.render=O),A!=null&&(e.inheritAttrs=A),w&&(e.components=w),F&&(e.directives=F),E&&zs(e)}function mc(e,t,n=We){K(e)&&(e=Ms(e));for(const s in e){const r=e[s];let i;se(r)?"default"in r?i=_t(r.from||s,r.default,!0):i=_t(r.from||s):i=_t(r),fe(i)?Object.defineProperty(t,s,{enumerable:!0,configurable:!0,get:()=>i.value,set:o=>i.value=o}):t[s]=i}}function Sr(e,t,n){De(K(e)?e.map(s=>s.bind(t.proxy)):e.bind(t.proxy),t,n)}function Zi(e,t,n,s){let r=s.includes(".")?mo(n,s):()=>n[s];if(oe(e)){const i=t[e];q(i)&&Ie(r,i)}else if(q(e))Ie(r,e.bind(n));else if(se(e))if(K(e))e.forEach(i=>Zi(i,t,n,s));else{const i=q(e.handler)?e.handler.bind(n):t[e.handler];q(i)&&Ie(r,i,e)}}function eo(e){const 
t=e.type,{mixins:n,extends:s}=t,{mixins:r,optionsCache:i,config:{optionMergeStrategies:o}}=e.appContext,l=i.get(t);let c;return l?c=l:!r.length&&!n&&!s?c=t:(c={},r.length&&r.forEach(f=>In(c,f,o,!0)),In(c,t,o)),se(t)&&i.set(t,c),c}function In(e,t,n,s=!1){const{mixins:r,extends:i}=t;i&&In(e,i,n,!0),r&&r.forEach(o=>In(e,o,n,!0));for(const o in t)if(!(s&&o==="expose")){const l=yc[o]||n&&n[o];e[o]=l?l(e[o],t[o]):t[o]}return e}const yc={data:Er,props:Tr,emits:Tr,methods:jt,computed:jt,beforeCreate:ve,created:ve,beforeMount:ve,mounted:ve,beforeUpdate:ve,updated:ve,beforeDestroy:ve,beforeUnmount:ve,destroyed:ve,unmounted:ve,activated:ve,deactivated:ve,errorCaptured:ve,serverPrefetch:ve,components:jt,directives:jt,watch:bc,provide:Er,inject:_c};function Er(e,t){return t?e?function(){return pe(q(e)?e.call(this,this):e,q(t)?t.call(this,this):t)}:t:e}function _c(e,t){return jt(Ms(e),Ms(t))}function Ms(e){if(K(e)){const t={};for(let n=0;n1)return n&&q(t)?t.call(s&&s.proxy):t}}function no(){return!!(de||he||yt)}const so={},ro=()=>Object.create(so),io=e=>Object.getPrototypeOf(e)===so;function Ec(e,t,n,s=!1){const r={},i=ro();e.propsDefaults=Object.create(null),oo(e,t,r,i);for(const o in e.propsOptions[0])o in r||(r[o]=void 0);n?e.props=s?r:Ll(r):e.type.props?e.props=r:e.props=i,e.attrs=i}function Tc(e,t,n,s){const{props:r,attrs:i,vnode:{patchFlag:o}}=e,l=z(r),[c]=e.propsOptions;let f=!1;if((s||o>0)&&!(o&16)){if(o&8){const a=e.vnode.dynamicProps;for(let d=0;d{c=!0;const[m,y]=lo(d,t,!0);pe(o,m),y&&l.push(...y)};!n&&t.mixins.length&&t.mixins.forEach(a),e.extends&&a(e.extends),e.mixins&&e.mixins.forEach(a)}if(!i&&!c)return se(e)&&s.set(e,Rt),Rt;if(K(i))for(let a=0;ae[0]==="_"||e==="$stable",Qs=e=>K(e)?e.map(Le):[Le(e)],Cc=(e,t,n)=>{if(t._n)return t;const s=Gl((...r)=>Qs(t(...r)),n);return s._c=!1,s},ao=(e,t,n)=>{const s=e._ctx;for(const r in e){if(co(r))continue;const i=e[r];if(q(i))t[r]=Cc(r,i,s);else if(i!=null){const o=Qs(i);t[r]=()=>o}}},fo=(e,t)=>{const 
n=Qs(t);e.slots.default=()=>n},uo=(e,t,n)=>{for(const s in t)(n||s!=="_")&&(e[s]=t[s])},Ac=(e,t,n)=>{const s=e.slots=ro();if(e.vnode.shapeFlag&32){const r=t._;r?(uo(s,t,n),n&&di(s,"_",r,!0)):ao(t,s)}else t&&fo(e,t)},Rc=(e,t,n)=>{const{vnode:s,slots:r}=e;let i=!0,o=ee;if(s.shapeFlag&32){const l=t._;l?n&&l===1?i=!1:uo(r,t,n):(i=!t.$stable,ao(t,r)),o=t}else t&&(fo(e,t),o={default:1});if(i)for(const l in r)!co(l)&&o[l]==null&&delete r[l]},we=vo;function Mc(e){return ho(e)}function Oc(e){return ho(e,ec)}function ho(e,t){const n=$n();n.__VUE__=!0;const{insert:s,remove:r,patchProp:i,createElement:o,createText:l,createComment:c,setText:f,setElementText:a,parentNode:d,nextSibling:m,setScopeId:y=We,insertStaticContent:v}=e,b=(u,h,_,x=null,S=null,T=null,N=void 0,I=null,P=!!h.dynamicChildren)=>{if(u===h)return;u&&!ht(u,h)&&(x=an(u),je(u,S,T,!0),u=null),h.patchFlag===-2&&(P=!1,h.dynamicChildren=null);const{type:C,ref:B,shapeFlag:H}=h;switch(C){case bt:D(u,h,_,x);break;case be:L(u,h,_,x);break;case Bt:u==null&&$(h,_,x,N);break;case Te:w(u,h,_,x,S,T,N,I,P);break;default:H&1?O(u,h,_,x,S,T,N,I,P):H&6?F(u,h,_,x,S,T,N,I,P):(H&64||H&128)&&C.process(u,h,_,x,S,T,N,I,P,St)}B!=null&&S&&zt(B,u&&u.ref,T,h||u,!h)},D=(u,h,_,x)=>{if(u==null)s(h.el=l(h.children),_,x);else{const S=h.el=u.el;h.children!==u.children&&f(S,h.children)}},L=(u,h,_,x)=>{u==null?s(h.el=c(h.children||""),_,x):h.el=u.el},$=(u,h,_,x)=>{[u.el,u.anchor]=v(u.children,h,_,x,u.el,u.anchor)},p=({el:u,anchor:h},_,x)=>{let S;for(;u&&u!==h;)S=m(u),s(u,_,x),u=S;s(h,_,x)},g=({el:u,anchor:h})=>{let _;for(;u&&u!==h;)_=m(u),r(u),u=_;r(h)},O=(u,h,_,x,S,T,N,I,P)=>{h.type==="svg"?N="svg":h.type==="math"&&(N="mathml"),u==null?j(h,_,x,S,T,N,I,P):E(u,h,S,T,N,I,P)},j=(u,h,_,x,S,T,N,I)=>{let P,C;const{props:B,shapeFlag:H,transition:W,dirs:G}=u;if(P=u.el=o(u.type,T,B&&B.is,B),H&8?a(P,u.children):H&16&&V(u.children,P,null,x,S,is(u,T),N,I),G&&Ue(u,null,x,"created"),R(P,u,u.scopeId,N,x),B){for(const te in 
B)te!=="value"&&!Ot(te)&&i(P,te,null,B[te],T,x);"value"in B&&i(P,"value",null,B.value,T),(C=B.onVnodeBeforeMount)&&Oe(C,x,u)}G&&Ue(u,null,x,"beforeMount");const J=po(S,W);J&&W.beforeEnter(P),s(P,h,_),((C=B&&B.onVnodeMounted)||J||G)&&we(()=>{C&&Oe(C,x,u),J&&W.enter(P),G&&Ue(u,null,x,"mounted")},S)},R=(u,h,_,x,S)=>{if(_&&y(u,_),x)for(let T=0;T{for(let C=P;C{const I=h.el=u.el;let{patchFlag:P,dynamicChildren:C,dirs:B}=h;P|=u.patchFlag&16;const H=u.props||ee,W=h.props||ee;let G;if(_&&at(_,!1),(G=W.onVnodeBeforeUpdate)&&Oe(G,_,h,u),B&&Ue(h,u,_,"beforeUpdate"),_&&at(_,!0),(H.innerHTML&&W.innerHTML==null||H.textContent&&W.textContent==null)&&a(I,""),C?M(u.dynamicChildren,C,I,_,x,is(h,S),T):N||k(u,h,I,null,_,x,is(h,S),T,!1),P>0){if(P&16)A(I,H,W,_,S);else if(P&2&&H.class!==W.class&&i(I,"class",null,W.class,S),P&4&&i(I,"style",H.style,W.style,S),P&8){const J=h.dynamicProps;for(let te=0;te{G&&Oe(G,_,h,u),B&&Ue(h,u,_,"updated")},x)},M=(u,h,_,x,S,T,N)=>{for(let I=0;I{if(h!==_){if(h!==ee)for(const T in h)!Ot(T)&&!(T in _)&&i(u,T,h[T],null,S,x);for(const T in _){if(Ot(T))continue;const N=_[T],I=h[T];N!==I&&T!=="value"&&i(u,T,I,N,S,x)}"value"in _&&i(u,"value",h.value,_.value,S)}},w=(u,h,_,x,S,T,N,I,P)=>{const C=h.el=u?u.el:l(""),B=h.anchor=u?u.anchor:l("");let{patchFlag:H,dynamicChildren:W,slotScopeIds:G}=h;G&&(I=I?I.concat(G):G),u==null?(s(C,_,x),s(B,_,x),V(h.children||[],_,B,S,T,N,I,P)):H>0&&H&64&&W&&u.dynamicChildren?(M(u.dynamicChildren,W,_,S,T,N,I),(h.key!=null||S&&h===S.subTree)&&Zs(u,h,!0)):k(u,h,_,B,S,T,N,I,P)},F=(u,h,_,x,S,T,N,I,P)=>{h.slotScopeIds=I,u==null?h.shapeFlag&512?S.ctx.activate(h,_,x,N,P):Y(h,_,x,S,T,N,P):re(u,h,P)},Y=(u,h,_,x,S,T,N)=>{const I=u.component=qc(u,x,S);if(rn(u)&&(I.ctx.renderer=St),Gc(I,!1,N),I.asyncDep){if(S&&S.registerDep(I,U,N),!u.el){const P=I.subTree=ce(be);L(null,P,h,_)}}else U(I,u,h,_,S,T,N)},re=(u,h,_)=>{const x=h.component=u.component;if(jc(u,h,_))if(x.asyncDep&&!x.asyncResolved){X(x,h,_);return}else x.next=h,x.update();else 
h.el=u.el,x.vnode=h},U=(u,h,_,x,S,T,N)=>{const I=()=>{if(u.isMounted){let{next:H,bu:W,u:G,parent:J,vnode:te}=u;{const Ce=go(u);if(Ce){H&&(H.el=te.el,X(u,H,N)),Ce.asyncDep.then(()=>{u.isUnmounted||I()});return}}let Z=H,xe;at(u,!1),H?(H.el=te.el,X(u,H,N)):H=te,W&&Sn(W),(xe=H.props&&H.props.onVnodeBeforeUpdate)&&Oe(xe,J,H,te),at(u,!0);const ge=os(u),Fe=u.subTree;u.subTree=ge,b(Fe,ge,d(Fe.el),an(Fe),u,S,T),H.el=ge.el,Z===null&&_o(u,ge.el),G&&we(G,S),(xe=H.props&&H.props.onVnodeUpdated)&&we(()=>Oe(xe,J,H,te),S)}else{let H;const{el:W,props:G}=h,{bm:J,m:te,parent:Z,root:xe,type:ge}=u,Fe=mt(h);if(at(u,!1),J&&Sn(J),!Fe&&(H=G&&G.onVnodeBeforeMount)&&Oe(H,Z,h),at(u,!0),W&&zn){const Ce=()=>{u.subTree=os(u),zn(W,u.subTree,u,S,null)};Fe&&ge.__asyncHydrate?ge.__asyncHydrate(W,u,Ce):Ce()}else{xe.ce&&xe.ce._injectChildStyle(ge);const Ce=u.subTree=os(u);b(null,Ce,_,x,u,S,T),h.el=Ce.el}if(te&&we(te,S),!Fe&&(H=G&&G.onVnodeMounted)){const Ce=h;we(()=>Oe(H,Z,Ce),S)}(h.shapeFlag&256||Z&&mt(Z.vnode)&&Z.vnode.shapeFlag&256)&&u.a&&we(u.a,S),u.isMounted=!0,h=_=x=null}};u.scope.on();const P=u.effect=new yi(I);u.scope.off();const C=u.update=P.run.bind(P),B=u.job=P.runIfDirty.bind(P);B.i=u,B.id=u.uid,P.scheduler=()=>Ys(B),at(u,!0),C()},X=(u,h,_)=>{h.component=u;const x=u.vnode.props;u.vnode=h,u.next=null,Tc(u,h.props,x,_),Rc(u,h.children,_),ot(),hr(u),lt()},k=(u,h,_,x,S,T,N,I,P=!1)=>{const C=u&&u.children,B=u?u.shapeFlag:0,H=h.children,{patchFlag:W,shapeFlag:G}=h;if(W>0){if(W&128){cn(C,H,_,x,S,T,N,I,P);return}else if(W&256){ae(C,H,_,x,S,T,N,I,P);return}}G&8?(B&16&&Ft(C,S,T),H!==C&&a(_,H)):B&16?G&16?cn(C,H,_,x,S,T,N,I,P):Ft(C,S,T,!0):(B&8&&a(_,""),G&16&&V(H,_,x,S,T,N,I,P))},ae=(u,h,_,x,S,T,N,I,P)=>{u=u||Rt,h=h||Rt;const C=u.length,B=h.length,H=Math.min(C,B);let W;for(W=0;WB?Ft(u,S,T,!0,!1,H):V(h,_,x,S,T,N,I,P,H)},cn=(u,h,_,x,S,T,N,I,P)=>{let C=0;const B=h.length;let H=u.length-1,W=B-1;for(;C<=H&&C<=W;){const G=u[C],J=h[C]=P?tt(h[C]):Le(h[C]);if(ht(G,J))b(G,J,_,null,S,T,N,I,P);else 
break;C++}for(;C<=H&&C<=W;){const G=u[H],J=h[W]=P?tt(h[W]):Le(h[W]);if(ht(G,J))b(G,J,_,null,S,T,N,I,P);else break;H--,W--}if(C>H){if(C<=W){const G=W+1,J=GW)for(;C<=H;)je(u[C],S,T,!0),C++;else{const G=C,J=C,te=new Map;for(C=J;C<=W;C++){const Ae=h[C]=P?tt(h[C]):Le(h[C]);Ae.key!=null&&te.set(Ae.key,C)}let Z,xe=0;const ge=W-J+1;let Fe=!1,Ce=0;const Ht=new Array(ge);for(C=0;C=ge){je(Ae,S,T,!0);continue}let Ve;if(Ae.key!=null)Ve=te.get(Ae.key);else for(Z=J;Z<=W;Z++)if(Ht[Z-J]===0&&ht(Ae,h[Z])){Ve=Z;break}Ve===void 0?je(Ae,S,T,!0):(Ht[Ve-J]=C+1,Ve>=Ce?Ce=Ve:Fe=!0,b(Ae,h[Ve],_,null,S,T,N,I,P),xe++)}const cr=Fe?Lc(Ht):Rt;for(Z=cr.length-1,C=ge-1;C>=0;C--){const Ae=J+C,Ve=h[Ae],ar=Ae+1{const{el:T,type:N,transition:I,children:P,shapeFlag:C}=u;if(C&6){ct(u.component.subTree,h,_,x);return}if(C&128){u.suspense.move(h,_,x);return}if(C&64){N.move(u,h,_,St);return}if(N===Te){s(T,h,_);for(let H=0;HI.enter(T),S);else{const{leave:H,delayLeave:W,afterLeave:G}=I,J=()=>s(T,h,_),te=()=>{H(T,()=>{J(),G&&G()})};W?W(T,J,te):te()}else s(T,h,_)},je=(u,h,_,x=!1,S=!1)=>{const{type:T,props:N,ref:I,children:P,dynamicChildren:C,shapeFlag:B,patchFlag:H,dirs:W,cacheIndex:G}=u;if(H===-2&&(S=!1),I!=null&&zt(I,null,_,u,!0),G!=null&&(h.renderCache[G]=void 0),B&256){h.ctx.deactivate(u);return}const J=B&1&&W,te=!mt(u);let Z;if(te&&(Z=N&&N.onVnodeBeforeUnmount)&&Oe(Z,h,u),B&6)Jo(u.component,_,x);else{if(B&128){u.suspense.unmount(_,x);return}J&&Ue(u,null,h,"beforeUnmount"),B&64?u.type.remove(u,h,_,St,x):C&&!C.hasOnce&&(T!==Te||H>0&&H&64)?Ft(C,h,_,!1,!0):(T===Te&&H&384||!S&&B&16)&&Ft(P,h,_),x&&or(u)}(te&&(Z=N&&N.onVnodeUnmounted)||J)&&we(()=>{Z&&Oe(Z,h,u),J&&Ue(u,null,h,"unmounted")},_)},or=u=>{const{type:h,el:_,anchor:x,transition:S}=u;if(h===Te){Yo(_,x);return}if(h===Bt){g(u);return}const T=()=>{r(_),S&&!S.persisted&&S.afterLeave&&S.afterLeave()};if(u.shapeFlag&1&&S&&!S.persisted){const{leave:N,delayLeave:I}=S,P=()=>N(_,T);I?I(u.el,T,P):P()}else T()},Yo=(u,h)=>{let 
_;for(;u!==h;)_=m(u),r(u),u=_;r(h)},Jo=(u,h,_)=>{const{bum:x,scope:S,job:T,subTree:N,um:I,m:P,a:C}=u;Cr(P),Cr(C),x&&Sn(x),S.stop(),T&&(T.flags|=8,je(N,u,h,_)),I&&we(I,h),we(()=>{u.isUnmounted=!0},h),h&&h.pendingBranch&&!h.isUnmounted&&u.asyncDep&&!u.asyncResolved&&u.suspenseId===h.pendingId&&(h.deps--,h.deps===0&&h.resolve())},Ft=(u,h,_,x=!1,S=!1,T=0)=>{for(let N=T;N{if(u.shapeFlag&6)return an(u.component.subTree);if(u.shapeFlag&128)return u.suspense.next();const h=m(u.anchor||u.el),_=h&&h[$i];return _?m(_):h};let Yn=!1;const lr=(u,h,_)=>{u==null?h._vnode&&je(h._vnode,null,null,!0):b(h._vnode||null,u,h,null,null,null,_),h._vnode=u,Yn||(Yn=!0,hr(),Ln(),Yn=!1)},St={p:b,um:je,m:ct,r:or,mt:Y,mc:V,pc:k,pbc:M,n:an,o:e};let Jn,zn;return t&&([Jn,zn]=t(St)),{render:lr,hydrate:Jn,createApp:wc(lr,Jn)}}function is({type:e,props:t},n){return n==="svg"&&e==="foreignObject"||n==="mathml"&&e==="annotation-xml"&&t&&t.encoding&&t.encoding.includes("html")?void 0:n}function at({effect:e,job:t},n){n?(e.flags|=32,t.flags|=4):(e.flags&=-33,t.flags&=-5)}function po(e,t){return(!e||e&&!e.pendingBranch)&&t&&!t.persisted}function Zs(e,t,n=!1){const s=e.children,r=t.children;if(K(s)&&K(r))for(let i=0;i>1,e[n[l]]0&&(t[s]=n[i-1]),n[i]=s)}}for(i=n.length,o=n[i-1];i-- >0;)n[i]=o,o=t[o];return n}function go(e){const t=e.subTree.component;if(t)return t.asyncDep&&!t.asyncResolved?t:go(t)}function Cr(e){if(e)for(let t=0;t_t(Pc);function er(e,t){return Kn(e,null,t)}function jf(e,t){return Kn(e,null,{flush:"post"})}function Ie(e,t,n){return Kn(e,t,n)}function Kn(e,t,n=ee){const{immediate:s,deep:r,flush:i,once:o}=n,l=pe({},n),c=t&&s||!t&&i!=="post";let f;if(It){if(i==="sync"){const y=Ic();f=y.__watcherHandles||(y.__watcherHandles=[])}else if(!c){const y=()=>{};return y.stop=We,y.resume=We,y.pause=We,y}}const a=de;l.call=(y,v,b)=>De(y,a,v,b);let 
d=!1;i==="post"?l.scheduler=y=>{we(y,a&&a.suspense)}:i!=="sync"&&(d=!0,l.scheduler=(y,v)=>{v?y():Ys(y)}),l.augmentJob=y=>{t&&(y.flags|=4),d&&(y.flags|=2,a&&(y.id=a.uid,y.i=a))};const m=Wl(e,t,l);return It&&(f?f.push(m):c&&m()),m}function Nc(e,t,n){const s=this.proxy,r=oe(e)?e.includes(".")?mo(s,e):()=>s[e]:e.bind(s,s);let i;q(t)?i=t:(i=t.handler,n=t);const o=ln(this),l=Kn(r,i.bind(s),n);return o(),l}function mo(e,t){const n=t.split(".");return()=>{let s=e;for(let r=0;rt==="modelValue"||t==="model-value"?e.modelModifiers:e[`${t}Modifiers`]||e[`${Ne(t)}Modifiers`]||e[`${it(t)}Modifiers`];function Hc(e,t,...n){if(e.isUnmounted)return;const s=e.vnode.props||ee;let r=n;const i=t.startsWith("update:"),o=i&&Fc(s,t.slice(7));o&&(o.trim&&(r=n.map(a=>oe(a)?a.trim():a)),o.number&&(r=n.map(vs)));let l,c=s[l=wn(t)]||s[l=wn(Ne(t))];!c&&i&&(c=s[l=wn(it(t))]),c&&De(c,e,6,r);const f=s[l+"Once"];if(f){if(!e.emitted)e.emitted={};else if(e.emitted[l])return;e.emitted[l]=!0,De(f,e,6,r)}}function yo(e,t,n=!1){const s=t.emitsCache,r=s.get(e);if(r!==void 0)return r;const i=e.emits;let o={},l=!1;if(!q(e)){const c=f=>{const a=yo(f,t,!0);a&&(l=!0,pe(o,a))};!n&&t.mixins.length&&t.mixins.forEach(c),e.extends&&c(e.extends),e.mixins&&e.mixins.forEach(c)}return!i&&!l?(se(e)&&s.set(e,null),null):(K(i)?i.forEach(c=>o[c]=null):pe(o,i),se(e)&&s.set(e,o),o)}function qn(e,t){return!e||!tn(t)?!1:(t=t.slice(2).replace(/Once$/,""),Q(e,t[0].toLowerCase()+t.slice(1))||Q(e,it(t))||Q(e,t))}function os(e){const{type:t,vnode:n,proxy:s,withProxy:r,propsOptions:[i],slots:o,attrs:l,emit:c,render:f,renderCache:a,props:d,data:m,setupState:y,ctx:v,inheritAttrs:b}=e,D=Pn(e);let L,$;try{if(n.shapeFlag&4){const g=r||s,O=g;L=Le(f.call(O,g,a,d,y,m,v)),$=l}else{const g=t;L=Le(g.length>1?g(d,{attrs:l,slots:o,emit:c}):g(d,null)),$=t.props?l:Dc(l)}}catch(g){Kt.length=0,sn(g,e,1),L=ce(be)}let p=L;if($&&b!==!1){const g=Object.keys($),{shapeFlag:O}=p;g.length&&O&7&&(i&&g.some(Ds)&&($=$c($,i)),p=st(p,$,!1,!0))}return 
n.dirs&&(p=st(p,null,!1,!0),p.dirs=p.dirs?p.dirs.concat(n.dirs):n.dirs),n.transition&&Jt(p,n.transition),L=p,Pn(D),L}const Dc=e=>{let t;for(const n in e)(n==="class"||n==="style"||tn(n))&&((t||(t={}))[n]=e[n]);return t},$c=(e,t)=>{const n={};for(const s in e)(!Ds(s)||!(s.slice(9)in t))&&(n[s]=e[s]);return n};function jc(e,t,n){const{props:s,children:r,component:i}=e,{props:o,children:l,patchFlag:c}=t,f=i.emitsOptions;if(t.dirs||t.transition)return!0;if(n&&c>=0){if(c&1024)return!0;if(c&16)return s?Ar(s,o,f):!!o;if(c&8){const a=t.dynamicProps;for(let d=0;de.__isSuspense;function vo(e,t){t&&t.pendingBranch?K(e)?t.effects.push(...e):t.effects.push(e):ql(e)}const Te=Symbol.for("v-fgt"),bt=Symbol.for("v-txt"),be=Symbol.for("v-cmt"),Bt=Symbol.for("v-stc"),Kt=[];let Re=null;function Ls(e=!1){Kt.push(Re=e?null:[])}function Vc(){Kt.pop(),Re=Kt[Kt.length-1]||null}let Qt=1;function Rr(e,t=!1){Qt+=e,e<0&&Re&&t&&(Re.hasOnce=!0)}function wo(e){return e.dynamicChildren=Qt>0?Re||Rt:null,Vc(),Qt>0&&Re&&Re.push(e),e}function Vf(e,t,n,s,r,i){return wo(Eo(e,t,n,s,r,i,!0))}function Ps(e,t,n,s,r){return wo(ce(e,t,n,s,r,!0))}function Zt(e){return e?e.__v_isVNode===!0:!1}function ht(e,t){return e.type===t.type&&e.key===t.key}const So=({key:e})=>e??null,xn=({ref:e,ref_key:t,ref_for:n})=>(typeof e=="number"&&(e=""+e),e!=null?oe(e)||fe(e)||q(e)?{i:he,r:e,k:t,f:!!n}:e:null);function Eo(e,t=null,n=null,s=0,r=null,i=e===Te?0:1,o=!1,l=!1){const c={__v_isVNode:!0,__v_skip:!0,type:e,props:t,key:t&&So(t),ref:t&&xn(t),scopeId:Di,slotScopeIds:null,children:n,component:null,suspense:null,ssContent:null,ssFallback:null,dirs:null,transition:null,el:null,anchor:null,target:null,targetStart:null,targetAnchor:null,staticCount:0,shapeFlag:i,patchFlag:s,dynamicProps:r,dynamicChildren:null,appContext:null,ctx:he};return l?(tr(c,n),i&128&&e.normalize(c)):n&&(c.shapeFlag|=oe(n)?8:16),Qt>0&&!o&&Re&&(c.patchFlag>0||i&6)&&c.patchFlag!==32&&Re.push(c),c}const ce=kc;function 
kc(e,t=null,n=null,s=0,r=null,i=!1){if((!e||e===Ji)&&(e=be),Zt(e)){const l=st(e,t,!0);return n&&tr(l,n),Qt>0&&!i&&Re&&(l.shapeFlag&6?Re[Re.indexOf(e)]=l:Re.push(l)),l.patchFlag=-2,l}if(zc(e)&&(e=e.__vccOpts),t){t=Uc(t);let{class:l,style:c}=t;l&&!oe(l)&&(t.class=ks(l)),se(c)&&(Gs(c)&&!K(c)&&(c=pe({},c)),t.style=Vs(c))}const o=oe(e)?1:bo(e)?128:ji(e)?64:se(e)?4:q(e)?2:0;return Eo(e,t,n,s,r,o,i,!0)}function Uc(e){return e?Gs(e)||io(e)?pe({},e):e:null}function st(e,t,n=!1,s=!1){const{props:r,ref:i,patchFlag:o,children:l,transition:c}=e,f=t?Wc(r||{},t):r,a={__v_isVNode:!0,__v_skip:!0,type:e.type,props:f,key:f&&So(f),ref:t&&t.ref?n&&i?K(i)?i.concat(xn(t)):[i,xn(t)]:xn(t):i,scopeId:e.scopeId,slotScopeIds:e.slotScopeIds,children:l,target:e.target,targetStart:e.targetStart,targetAnchor:e.targetAnchor,staticCount:e.staticCount,shapeFlag:e.shapeFlag,patchFlag:t&&e.type!==Te?o===-1?16:o|16:o,dynamicProps:e.dynamicProps,dynamicChildren:e.dynamicChildren,appContext:e.appContext,dirs:e.dirs,transition:c,component:e.component,suspense:e.suspense,ssContent:e.ssContent&&st(e.ssContent),ssFallback:e.ssFallback&&st(e.ssFallback),el:e.el,anchor:e.anchor,ctx:e.ctx,ce:e.ce};return c&&s&&Jt(a,c.clone(a)),a}function To(e=" ",t=0){return ce(bt,null,e,t)}function kf(e,t){const n=ce(Bt,null,e);return n.staticCount=t,n}function Uf(e="",t=!1){return t?(Ls(),Ps(be,null,e)):ce(be,null,e)}function Le(e){return e==null||typeof e=="boolean"?ce(be):K(e)?ce(Te,null,e.slice()):Zt(e)?tt(e):ce(bt,null,String(e))}function tt(e){return e.el===null&&e.patchFlag!==-1||e.memo?e:st(e)}function tr(e,t){let n=0;const{shapeFlag:s}=e;if(t==null)t=null;else if(K(t))n=16;else if(typeof t=="object")if(s&65){const r=t.default;r&&(r._c&&(r._d=!1),tr(e,r()),r._c&&(r._d=!0));return}else{n=32;const r=t._;!r&&!io(t)?t._ctx=he:r===3&&he&&(he.slots._===1?t._=1:(t._=2,e.patchFlag|=1024))}else q(t)?(t={default:t,_ctx:he},n=32):(t=String(t),s&64?(n=16,t=[To(t)]):n=8);e.children=t,e.shapeFlag|=n}function Wc(...e){const 
t={};for(let n=0;nde||he;let Nn,Is;{const e=$n(),t=(n,s)=>{let r;return(r=e[n])||(r=e[n]=[]),r.push(s),i=>{r.length>1?r.forEach(o=>o(i)):r[0](i)}};Nn=t("__VUE_INSTANCE_SETTERS__",n=>de=n),Is=t("__VUE_SSR_SETTERS__",n=>It=n)}const ln=e=>{const t=de;return Nn(e),e.scope.on(),()=>{e.scope.off(),Nn(t)}},Mr=()=>{de&&de.scope.off(),Nn(null)};function xo(e){return e.vnode.shapeFlag&4}let It=!1;function Gc(e,t=!1,n=!1){t&&Is(t);const{props:s,children:r}=e.vnode,i=xo(e);Ec(e,s,i,t),Ac(e,r,n);const o=i?Xc(e,t):void 0;return t&&Is(!1),o}function Xc(e,t){const n=e.type;e.accessCache=Object.create(null),e.proxy=new Proxy(e.ctx,hc);const{setup:s}=n;if(s){ot();const r=e.setupContext=s.length>1?Ao(e):null,i=ln(e),o=nn(s,e,0,[e.props,r]),l=ai(o);if(lt(),i(),(l||e.sp)&&!mt(e)&&zs(e),l){if(o.then(Mr,Mr),t)return o.then(c=>{Or(e,c)}).catch(c=>{sn(c,e,0)});e.asyncDep=o}else Or(e,o)}else Co(e)}function Or(e,t,n){q(t)?e.type.__ssrInlineRender?e.ssrRender=t:e.render=t:se(t)&&(e.setupState=Ii(t)),Co(e)}function Co(e,t,n){const s=e.type;e.render||(e.render=s.render||We);{const r=ln(e);ot();try{gc(e)}finally{lt(),r()}}}const Yc={get(e,t){return ye(e,"get",""),e[t]}};function Ao(e){const t=n=>{e.exposed=n||{}};return{attrs:new Proxy(e.attrs,Yc),slots:e.slots,emit:e.emit,expose:t}}function Gn(e){return e.exposed?e.exposeProxy||(e.exposeProxy=new Proxy(Ii(En(e.exposed)),{get(t,n){if(n in t)return t[n];if(n in Wt)return Wt[n](e)},has(t,n){return n in t||n in Wt}})):e.proxy}function Jc(e,t=!0){return q(e)?e.displayName||e.name:e.name||t&&e.__name}function zc(e){return q(e)&&"__vccOpts"in e}const ie=(e,t)=>kl(e,t,It);function Ns(e,t,n){const s=arguments.length;return s===2?se(t)&&!K(t)?Zt(t)?ce(e,null,[t]):ce(e,t):ce(e,null,t):(s>3?n=Array.prototype.slice.call(arguments,2):s===3&&Zt(n)&&(n=[n]),ce(e,t,n))}const Qc="3.5.13";/** -* @vue/runtime-dom v3.5.13 -* (c) 2018-present Yuxi (Evan) You and Vue contributors -* @license MIT -**/let Fs;const Lr=typeof 
window<"u"&&window.trustedTypes;if(Lr)try{Fs=Lr.createPolicy("vue",{createHTML:e=>e})}catch{}const Ro=Fs?e=>Fs.createHTML(e):e=>e,Zc="http://www.w3.org/2000/svg",ea="http://www.w3.org/1998/Math/MathML",qe=typeof document<"u"?document:null,Pr=qe&&qe.createElement("template"),ta={insert:(e,t,n)=>{t.insertBefore(e,n||null)},remove:e=>{const t=e.parentNode;t&&t.removeChild(e)},createElement:(e,t,n,s)=>{const r=t==="svg"?qe.createElementNS(Zc,e):t==="mathml"?qe.createElementNS(ea,e):n?qe.createElement(e,{is:n}):qe.createElement(e);return e==="select"&&s&&s.multiple!=null&&r.setAttribute("multiple",s.multiple),r},createText:e=>qe.createTextNode(e),createComment:e=>qe.createComment(e),setText:(e,t)=>{e.nodeValue=t},setElementText:(e,t)=>{e.textContent=t},parentNode:e=>e.parentNode,nextSibling:e=>e.nextSibling,querySelector:e=>qe.querySelector(e),setScopeId(e,t){e.setAttribute(t,"")},insertStaticContent(e,t,n,s,r,i){const o=n?n.previousSibling:t.lastChild;if(r&&(r===i||r.nextSibling))for(;t.insertBefore(r.cloneNode(!0),n),!(r===i||!(r=r.nextSibling)););else{Pr.innerHTML=Ro(s==="svg"?`${e}`:s==="mathml"?`${e}`:e);const l=Pr.content;if(s==="svg"||s==="mathml"){const c=l.firstChild;for(;c.firstChild;)l.appendChild(c.firstChild);l.removeChild(c)}t.insertBefore(l,n)}return[o?o.nextSibling:t.firstChild,n?n.previousSibling:t.lastChild]}},Qe="transition",$t="animation",en=Symbol("_vtc"),Mo={name:String,type:String,css:{type:Boolean,default:!0},duration:[String,Number,Object],enterFromClass:String,enterActiveClass:String,enterToClass:String,appearFromClass:String,appearActiveClass:String,appearToClass:String,leaveFromClass:String,leaveActiveClass:String,leaveToClass:String},na=pe({},Ui,Mo),sa=e=>(e.displayName="Transition",e.props=na,e),Wf=sa((e,{slots:t})=>Ns(zl,ra(e),t)),ft=(e,t=[])=>{K(e)?e.forEach(n=>n(...t)):e&&e(...t)},Ir=e=>e?K(e)?e.some(t=>t.length>1):e.length>1:!1;function ra(e){const t={};for(const w in e)w in Mo||(t[w]=e[w]);if(e.css===!1)return 
t;const{name:n="v",type:s,duration:r,enterFromClass:i=`${n}-enter-from`,enterActiveClass:o=`${n}-enter-active`,enterToClass:l=`${n}-enter-to`,appearFromClass:c=i,appearActiveClass:f=o,appearToClass:a=l,leaveFromClass:d=`${n}-leave-from`,leaveActiveClass:m=`${n}-leave-active`,leaveToClass:y=`${n}-leave-to`}=e,v=ia(r),b=v&&v[0],D=v&&v[1],{onBeforeEnter:L,onEnter:$,onEnterCancelled:p,onLeave:g,onLeaveCancelled:O,onBeforeAppear:j=L,onAppear:R=$,onAppearCancelled:V=p}=t,E=(w,F,Y,re)=>{w._enterCancelled=re,ut(w,F?a:l),ut(w,F?f:o),Y&&Y()},M=(w,F)=>{w._isLeaving=!1,ut(w,d),ut(w,y),ut(w,m),F&&F()},A=w=>(F,Y)=>{const re=w?R:$,U=()=>E(F,w,Y);ft(re,[F,U]),Nr(()=>{ut(F,w?c:i),Ke(F,w?a:l),Ir(re)||Fr(F,s,b,U)})};return pe(t,{onBeforeEnter(w){ft(L,[w]),Ke(w,i),Ke(w,o)},onBeforeAppear(w){ft(j,[w]),Ke(w,c),Ke(w,f)},onEnter:A(!1),onAppear:A(!0),onLeave(w,F){w._isLeaving=!0;const Y=()=>M(w,F);Ke(w,d),w._enterCancelled?(Ke(w,m),$r()):($r(),Ke(w,m)),Nr(()=>{w._isLeaving&&(ut(w,d),Ke(w,y),Ir(g)||Fr(w,s,D,Y))}),ft(g,[w,Y])},onEnterCancelled(w){E(w,!1,void 0,!0),ft(p,[w])},onAppearCancelled(w){E(w,!0,void 0,!0),ft(V,[w])},onLeaveCancelled(w){M(w),ft(O,[w])}})}function ia(e){if(e==null)return null;if(se(e))return[ls(e.enter),ls(e.leave)];{const t=ls(e);return[t,t]}}function ls(e){return nl(e)}function Ke(e,t){t.split(/\s+/).forEach(n=>n&&e.classList.add(n)),(e[en]||(e[en]=new Set)).add(t)}function ut(e,t){t.split(/\s+/).forEach(s=>s&&e.classList.remove(s));const n=e[en];n&&(n.delete(t),n.size||(e[en]=void 0))}function Nr(e){requestAnimationFrame(()=>{requestAnimationFrame(e)})}let oa=0;function Fr(e,t,n,s){const r=e._endId=++oa,i=()=>{r===e._endId&&s()};if(n!=null)return setTimeout(i,n);const{type:o,timeout:l,propCount:c}=la(e,t);if(!o)return s();const f=o+"end";let a=0;const d=()=>{e.removeEventListener(f,m),i()},m=y=>{y.target===e&&++a>=c&&d()};setTimeout(()=>{a(n[v]||"").split(", "),r=s(`${Qe}Delay`),i=s(`${Qe}Duration`),o=Hr(r,i),l=s(`${$t}Delay`),c=s(`${$t}Duration`),f=Hr(l,c);let 
a=null,d=0,m=0;t===Qe?o>0&&(a=Qe,d=o,m=i.length):t===$t?f>0&&(a=$t,d=f,m=c.length):(d=Math.max(o,f),a=d>0?o>f?Qe:$t:null,m=a?a===Qe?i.length:c.length:0);const y=a===Qe&&/\b(transform|all)(,|$)/.test(s(`${Qe}Property`).toString());return{type:a,timeout:d,propCount:m,hasTransform:y}}function Hr(e,t){for(;e.lengthDr(n)+Dr(e[s])))}function Dr(e){return e==="auto"?0:Number(e.slice(0,-1).replace(",","."))*1e3}function $r(){return document.body.offsetHeight}function ca(e,t,n){const s=e[en];s&&(t=(t?[t,...s]:[...s]).join(" ")),t==null?e.removeAttribute("class"):n?e.setAttribute("class",t):e.className=t}const jr=Symbol("_vod"),aa=Symbol("_vsh"),fa=Symbol(""),ua=/(^|;)\s*display\s*:/;function da(e,t,n){const s=e.style,r=oe(n);let i=!1;if(n&&!r){if(t)if(oe(t))for(const o of t.split(";")){const l=o.slice(0,o.indexOf(":")).trim();n[l]==null&&Cn(s,l,"")}else for(const o in t)n[o]==null&&Cn(s,o,"");for(const o in n)o==="display"&&(i=!0),Cn(s,o,n[o])}else if(r){if(t!==n){const o=s[fa];o&&(n+=";"+o),s.cssText=n,i=ua.test(n)}}else t&&e.removeAttribute("style");jr in e&&(e[jr]=i?s.display:"",e[aa]&&(s.display="none"))}const Vr=/\s*!important$/;function Cn(e,t,n){if(K(n))n.forEach(s=>Cn(e,t,s));else if(n==null&&(n=""),t.startsWith("--"))e.setProperty(t,n);else{const s=ha(e,t);Vr.test(n)?e.setProperty(it(s),n.replace(Vr,""),"important"):e[s]=n}}const kr=["Webkit","Moz","ms"],cs={};function ha(e,t){const n=cs[t];if(n)return n;let s=Ne(t);if(s!=="filter"&&s in e)return cs[t]=s;s=Dn(s);for(let r=0;ras||(ya.then(()=>as=0),as=Date.now());function ba(e,t){const n=s=>{if(!s._vts)s._vts=Date.now();else if(s._vts<=n.attached)return;De(va(s,n.value),t,5,[s])};return n.value=e,n.attached=_a(),n}function va(e,t){if(K(t)){const n=e.stopImmediatePropagation;return e.stopImmediatePropagation=()=>{n.call(e),e._stopped=!0},t.map(s=>r=>!r._stopped&&s&&s(r))}else return t}const Gr=e=>e.charCodeAt(0)===111&&e.charCodeAt(1)===110&&e.charCodeAt(2)>96&&e.charCodeAt(2)<123,wa=(e,t,n,s,r,i)=>{const 
o=r==="svg";t==="class"?ca(e,s,o):t==="style"?da(e,n,s):tn(t)?Ds(t)||ga(e,t,n,s,i):(t[0]==="."?(t=t.slice(1),!0):t[0]==="^"?(t=t.slice(1),!1):Sa(e,t,s,o))?(Br(e,t,s),!e.tagName.includes("-")&&(t==="value"||t==="checked"||t==="selected")&&Wr(e,t,s,o,i,t!=="value")):e._isVueCE&&(/[A-Z]/.test(t)||!oe(s))?Br(e,Ne(t),s,i,t):(t==="true-value"?e._trueValue=s:t==="false-value"&&(e._falseValue=s),Wr(e,t,s,o))};function Sa(e,t,n,s){if(s)return!!(t==="innerHTML"||t==="textContent"||t in e&&Gr(t)&&q(n));if(t==="spellcheck"||t==="draggable"||t==="translate"||t==="form"||t==="list"&&e.tagName==="INPUT"||t==="type"&&e.tagName==="TEXTAREA")return!1;if(t==="width"||t==="height"){const r=e.tagName;if(r==="IMG"||r==="VIDEO"||r==="CANVAS"||r==="SOURCE")return!1}return Gr(t)&&oe(n)?!1:t in e}const Xr=e=>{const t=e.props["onUpdate:modelValue"]||!1;return K(t)?n=>Sn(t,n):t};function Ea(e){e.target.composing=!0}function Yr(e){const t=e.target;t.composing&&(t.composing=!1,t.dispatchEvent(new Event("input")))}const fs=Symbol("_assign"),Bf={created(e,{modifiers:{lazy:t,trim:n,number:s}},r){e[fs]=Xr(r);const i=s||r.props&&r.props.type==="number";Ct(e,t?"change":"input",o=>{if(o.target.composing)return;let l=e.value;n&&(l=l.trim()),i&&(l=vs(l)),e[fs](l)}),n&&Ct(e,"change",()=>{e.value=e.value.trim()}),t||(Ct(e,"compositionstart",Ea),Ct(e,"compositionend",Yr),Ct(e,"change",Yr))},mounted(e,{value:t}){e.value=t??""},beforeUpdate(e,{value:t,oldValue:n,modifiers:{lazy:s,trim:r,number:i}},o){if(e[fs]=Xr(o),e.composing)return;const l=(i||e.type==="number")&&!/^0\d/.test(e.value)?vs(e.value):e.value,c=t??"";l!==c&&(document.activeElement===e&&e.type!=="range"&&(s&&t===n||r&&e.value.trim()===c)||(e.value=c))}},Ta=["ctrl","shift","alt","meta"],xa={stop:e=>e.stopPropagation(),prevent:e=>e.preventDefault(),self:e=>e.target!==e.currentTarget,ctrl:e=>!e.ctrlKey,shift:e=>!e.shiftKey,alt:e=>!e.altKey,meta:e=>!e.metaKey,left:e=>"button"in e&&e.button!==0,middle:e=>"button"in e&&e.button!==1,right:e=>"button"in 
e&&e.button!==2,exact:(e,t)=>Ta.some(n=>e[`${n}Key`]&&!t.includes(n))},Kf=(e,t)=>{const n=e._withMods||(e._withMods={}),s=t.join(".");return n[s]||(n[s]=(r,...i)=>{for(let o=0;o{const n=e._withKeys||(e._withKeys={}),s=t.join(".");return n[s]||(n[s]=r=>{if(!("key"in r))return;const i=it(r.key);if(t.some(o=>o===i||Ca[o]===i))return e(r)})},Oo=pe({patchProp:wa},ta);let qt,Jr=!1;function Aa(){return qt||(qt=Mc(Oo))}function Ra(){return qt=Jr?qt:Oc(Oo),Jr=!0,qt}const Gf=(...e)=>{const t=Aa().createApp(...e),{mount:n}=t;return t.mount=s=>{const r=Po(s);if(!r)return;const i=t._component;!q(i)&&!i.render&&!i.template&&(i.template=r.innerHTML),r.nodeType===1&&(r.textContent="");const o=n(r,!1,Lo(r));return r instanceof Element&&(r.removeAttribute("v-cloak"),r.setAttribute("data-v-app","")),o},t},Xf=(...e)=>{const t=Ra().createApp(...e),{mount:n}=t;return t.mount=s=>{const r=Po(s);if(r)return n(r,!0,Lo(r))},t};function Lo(e){if(e instanceof SVGElement)return"svg";if(typeof MathMLElement=="function"&&e instanceof MathMLElement)return"mathml"}function Po(e){return oe(e)?document.querySelector(e):e}const Ma=window.__VP_SITE_DATA__;function nr(e){return mi()?(ul(e),!0):!1}const us=new WeakMap,Oa=(...e)=>{var t;const n=e[0],s=(t=on())==null?void 0:t.proxy;if(s==null&&!no())throw new Error("injectLocal must be called in setup");return s&&us.has(s)&&n in us.get(s)?us.get(s)[n]:_t(...e)},Io=typeof window<"u"&&typeof document<"u";typeof WorkerGlobalScope<"u"&&globalThis instanceof WorkerGlobalScope;const Yf=e=>e!=null,La=Object.prototype.toString,Pa=e=>La.call(e)==="[object Object]",Ye=()=>{},zr=Ia();function Ia(){var e,t;return Io&&((e=window==null?void 0:window.navigator)==null?void 0:e.userAgent)&&(/iP(?:ad|hone|od)/.test(window.navigator.userAgent)||((t=window==null?void 0:window.navigator)==null?void 0:t.maxTouchPoints)>2&&/iPad|Macintosh/.test(window==null?void 0:window.navigator.userAgent))}function sr(e,t){function n(...s){return new 
Promise((r,i)=>{Promise.resolve(e(()=>t.apply(this,s),{fn:t,thisArg:this,args:s})).then(r).catch(i)})}return n}const No=e=>e();function Fo(e,t={}){let n,s,r=Ye;const i=l=>{clearTimeout(l),r(),r=Ye};return l=>{const c=ue(e),f=ue(t.maxWait);return n&&i(n),c<=0||f!==void 0&&f<=0?(s&&(i(s),s=null),Promise.resolve(l())):new Promise((a,d)=>{r=t.rejectOnCancel?d:a,f&&!s&&(s=setTimeout(()=>{n&&i(n),s=null,a(l())},f)),n=setTimeout(()=>{s&&i(s),s=null,a(l())},c)})}}function Na(...e){let t=0,n,s=!0,r=Ye,i,o,l,c,f;!fe(e[0])&&typeof e[0]=="object"?{delay:o,trailing:l=!0,leading:c=!0,rejectOnCancel:f=!1}=e[0]:[o,l=!0,c=!0,f=!1]=e;const a=()=>{n&&(clearTimeout(n),n=void 0,r(),r=Ye)};return m=>{const y=ue(o),v=Date.now()-t,b=()=>i=m();return a(),y<=0?(t=Date.now(),b()):(v>y&&(c||!s)?(t=Date.now(),b()):l&&(i=new Promise((D,L)=>{r=f?L:D,n=setTimeout(()=>{t=Date.now(),s=!0,D(b()),a()},Math.max(0,y-v))})),!c&&!n&&(n=setTimeout(()=>s=!0,y)),s=!1,i)}}function Fa(e=No){const t=le(!0);function n(){t.value=!1}function s(){t.value=!0}const r=(...i)=>{t.value&&e(...i)};return{isActive:kn(t),pause:n,resume:s,eventFilter:r}}function Qr(e){return e.endsWith("rem")?Number.parseFloat(e)*16:Number.parseFloat(e)}function Ha(e){return on()}function Zr(e){return Array.isArray(e)?e:[e]}function Ho(...e){if(e.length!==1)return $l(...e);const t=e[0];return typeof t=="function"?kn(Fl(()=>({get:t,set:Ye}))):le(t)}function Da(e,t=200,n={}){return sr(Fo(t,n),e)}function $a(e,t=200,n=!1,s=!0,r=!1){return sr(Na(t,n,s,r),e)}function Do(e,t,n={}){const{eventFilter:s=No,...r}=n;return Ie(e,sr(s,t),r)}function ja(e,t,n={}){const{eventFilter:s,...r}=n,{eventFilter:i,pause:o,resume:l,isActive:c}=Fa(s);return{stop:Do(e,t,{...r,eventFilter:i}),pause:o,resume:l,isActive:c}}function Xn(e,t=!0,n){Ha()?Nt(e,n):t?e():Un(e)}function Jf(e,t,n={}){const{debounce:s=0,maxWait:r=void 0,...i}=n;return Do(e,t,{...i,eventFilter:Fo(s,{maxWait:r})})}function zf(e,t,n){let 
s;fe(n)?s={evaluating:n}:s={};const{lazy:r=!1,evaluating:i=void 0,shallow:o=!0,onError:l=Ye}=s,c=le(!r),f=o?Xs(t):le(t);let a=0;return er(async d=>{if(!c.value)return;a++;const m=a;let y=!1;i&&Promise.resolve().then(()=>{i.value=!0});try{const v=await e(b=>{d(()=>{i&&(i.value=!1),y||b()})});m===a&&(f.value=v)}catch(v){l(v)}finally{i&&m===a&&(i.value=!1),y=!0}}),r?ie(()=>(c.value=!0,f.value)):f}const $e=Io?window:void 0;function rr(e){var t;const n=ue(e);return(t=n==null?void 0:n.$el)!=null?t:n}function rt(...e){let t,n,s,r;if(typeof e[0]=="string"||Array.isArray(e[0])?([n,s,r]=e,t=$e):[t,n,s,r]=e,!t)return Ye;n=Zr(n),s=Zr(s);const i=[],o=()=>{i.forEach(a=>a()),i.length=0},l=(a,d,m,y)=>(a.addEventListener(d,m,y),()=>a.removeEventListener(d,m,y)),c=Ie(()=>[rr(t),ue(r)],([a,d])=>{if(o(),!a)return;const m=Pa(d)?{...d}:d;i.push(...n.flatMap(y=>s.map(v=>l(a,y,v,m))))},{immediate:!0,flush:"post"}),f=()=>{c(),o()};return nr(f),f}function Va(){const e=le(!1),t=on();return t&&Nt(()=>{e.value=!0},t),e}function ka(e){const t=Va();return ie(()=>(t.value,!!e()))}function Ua(e){return typeof e=="function"?e:typeof e=="string"?t=>t.key===e:Array.isArray(e)?t=>e.includes(t.key):()=>!0}function Qf(...e){let t,n,s={};e.length===3?(t=e[0],n=e[1],s=e[2]):e.length===2?typeof e[1]=="object"?(t=!0,n=e[0],s=e[1]):(t=e[0],n=e[1]):(t=!0,n=e[0]);const{target:r=$e,eventName:i="keydown",passive:o=!1,dedupe:l=!1}=s,c=Ua(t);return rt(r,i,a=>{a.repeat&&ue(l)||c(a)&&n(a)},o)}const Wa=Symbol("vueuse-ssr-width");function Ba(){const e=no()?Oa(Wa,null):null;return typeof e=="number"?e:void 0}function $o(e,t={}){const{window:n=$e,ssrWidth:s=Ba()}=t,r=ka(()=>n&&"matchMedia"in n&&typeof n.matchMedia=="function"),i=le(typeof s=="number");let o;const l=le(!1),c=d=>{l.value=d.matches},f=()=>{o&&("removeEventListener"in o?o.removeEventListener("change",c):o.removeListener(c))},a=er(()=>{if(i.value){i.value=!r.value;const d=ue(e).split(",");l.value=d.some(m=>{const y=m.includes("not 
all"),v=m.match(/\(\s*min-width:\s*(-?\d+(?:\.\d*)?[a-z]+\s*)\)/),b=m.match(/\(\s*max-width:\s*(-?\d+(?:\.\d*)?[a-z]+\s*)\)/);let D=!!(v||b);return v&&D&&(D=s>=Qr(v[1])),b&&D&&(D=s<=Qr(b[1])),y?!D:D});return}r.value&&(f(),o=n.matchMedia(ue(e)),"addEventListener"in o?o.addEventListener("change",c):o.addListener(c),l.value=o.matches)});return nr(()=>{a(),f(),o=void 0}),ie(()=>l.value)}const yn=typeof globalThis<"u"?globalThis:typeof window<"u"?window:typeof global<"u"?global:typeof self<"u"?self:{},_n="__vueuse_ssr_handlers__",Ka=qa();function qa(){return _n in yn||(yn[_n]=yn[_n]||{}),yn[_n]}function jo(e,t){return Ka[e]||t}function Vo(e){return $o("(prefers-color-scheme: dark)",e)}function Ga(e){return e==null?"any":e instanceof Set?"set":e instanceof Map?"map":e instanceof Date?"date":typeof e=="boolean"?"boolean":typeof e=="string"?"string":typeof e=="object"?"object":Number.isNaN(e)?"any":"number"}const Xa={boolean:{read:e=>e==="true",write:e=>String(e)},object:{read:e=>JSON.parse(e),write:e=>JSON.stringify(e)},number:{read:e=>Number.parseFloat(e),write:e=>String(e)},any:{read:e=>e,write:e=>String(e)},string:{read:e=>e,write:e=>String(e)},map:{read:e=>new Map(JSON.parse(e)),write:e=>JSON.stringify(Array.from(e.entries()))},set:{read:e=>new Set(JSON.parse(e)),write:e=>JSON.stringify(Array.from(e))},date:{read:e=>new Date(e),write:e=>e.toISOString()}},ei="vueuse-storage";function ir(e,t,n,s={}){var r;const{flush:i="pre",deep:o=!0,listenToStorageChanges:l=!0,writeDefaults:c=!0,mergeDefaults:f=!1,shallow:a,window:d=$e,eventFilter:m,onError:y=A=>{console.error(A)},initOnMounted:v}=s,b=(a?Xs:le)(typeof t=="function"?t():t),D=ie(()=>ue(e));if(!n)try{n=jo("getDefaultStorage",()=>{var A;return(A=$e)==null?void 0:A.localStorage})()}catch(A){y(A)}if(!n)return b;const L=ue(t),$=Ga(L),p=(r=s.serializer)!=null?r:Xa[$],{pause:g,resume:O}=ja(b,()=>R(b.value),{flush:i,deep:o,eventFilter:m});Ie(D,()=>E(),{flush:i}),d&&l&&Xn(()=>{n instanceof 
Storage?rt(d,"storage",E,{passive:!0}):rt(d,ei,M),v&&E()}),v||E();function j(A,w){if(d){const F={key:D.value,oldValue:A,newValue:w,storageArea:n};d.dispatchEvent(n instanceof Storage?new StorageEvent("storage",F):new CustomEvent(ei,{detail:F}))}}function R(A){try{const w=n.getItem(D.value);if(A==null)j(w,null),n.removeItem(D.value);else{const F=p.write(A);w!==F&&(n.setItem(D.value,F),j(w,F))}}catch(w){y(w)}}function V(A){const w=A?A.newValue:n.getItem(D.value);if(w==null)return c&&L!=null&&n.setItem(D.value,p.write(L)),L;if(!A&&f){const F=p.read(w);return typeof f=="function"?f(F,L):$==="object"&&!Array.isArray(F)?{...L,...F}:F}else return typeof w!="string"?w:p.read(w)}function E(A){if(!(A&&A.storageArea!==n)){if(A&&A.key==null){b.value=L;return}if(!(A&&A.key!==D.value)){g();try{(A==null?void 0:A.newValue)!==p.write(b.value)&&(b.value=V(A))}catch(w){y(w)}finally{A?Un(O):O()}}}}function M(A){E(A.detail)}return b}const Ya="*,*::before,*::after{-webkit-transition:none!important;-moz-transition:none!important;-o-transition:none!important;-ms-transition:none!important;transition:none!important}";function Ja(e={}){const{selector:t="html",attribute:n="class",initialValue:s="auto",window:r=$e,storage:i,storageKey:o="vueuse-color-scheme",listenToStorageChanges:l=!0,storageRef:c,emitAuto:f,disableTransition:a=!0}=e,d={auto:"",light:"light",dark:"dark",...e.modes||{}},m=Vo({window:r}),y=ie(()=>m.value?"dark":"light"),v=c||(o==null?Ho(s):ir(o,s,i,{window:r,listenToStorageChanges:l})),b=ie(()=>v.value==="auto"?y.value:v.value),D=jo("updateHTMLAttrs",(g,O,j)=>{const R=typeof g=="string"?r==null?void 0:r.document.querySelector(g):rr(g);if(!R)return;const V=new Set,E=new Set;let M=null;if(O==="class"){const w=j.split(/\s/g);Object.values(d).flatMap(F=>(F||"").split(/\s/g)).filter(Boolean).forEach(F=>{w.includes(F)?V.add(F):E.add(F)})}else M={key:O,value:j};if(V.size===0&&E.size===0&&M===null)return;let 
A;a&&(A=r.document.createElement("style"),A.appendChild(document.createTextNode(Ya)),r.document.head.appendChild(A));for(const w of V)R.classList.add(w);for(const w of E)R.classList.remove(w);M&&R.setAttribute(M.key,M.value),a&&(r.getComputedStyle(A).opacity,document.head.removeChild(A))});function L(g){var O;D(t,n,(O=d[g])!=null?O:g)}function $(g){e.onChanged?e.onChanged(g,L):L(g)}Ie(b,$,{flush:"post",immediate:!0}),Xn(()=>$(b.value));const p=ie({get(){return f?v.value:b.value},set(g){v.value=g}});return Object.assign(p,{store:v,system:y,state:b})}function za(e={}){const{valueDark:t="dark",valueLight:n=""}=e,s=Ja({...e,onChanged:(o,l)=>{var c;e.onChanged?(c=e.onChanged)==null||c.call(e,o==="dark",l,o):l(o)},modes:{dark:t,light:n}}),r=ie(()=>s.system.value);return ie({get(){return s.value==="dark"},set(o){const l=o?"dark":"light";r.value===l?s.value="auto":s.value=l}})}function ds(e){return typeof Window<"u"&&e instanceof Window?e.document.documentElement:typeof Document<"u"&&e instanceof Document?e.documentElement:e}const ti=1;function Qa(e,t={}){const{throttle:n=0,idle:s=200,onStop:r=Ye,onScroll:i=Ye,offset:o={left:0,right:0,top:0,bottom:0},eventListenerOptions:l={capture:!1,passive:!0},behavior:c="auto",window:f=$e,onError:a=R=>{console.error(R)}}=t,d=le(0),m=le(0),y=ie({get(){return d.value},set(R){b(R,void 0)}}),v=ie({get(){return m.value},set(R){b(void 0,R)}});function b(R,V){var E,M,A,w;if(!f)return;const F=ue(e);if(!F)return;(A=F instanceof Document?f.document.body:F)==null||A.scrollTo({top:(E=ue(V))!=null?E:v.value,left:(M=ue(R))!=null?M:y.value,behavior:ue(c)});const Y=((w=F==null?void 0:F.document)==null?void 0:w.documentElement)||(F==null?void 0:F.documentElement)||F;y!=null&&(d.value=Y.scrollLeft),v!=null&&(m.value=Y.scrollTop)}const D=le(!1),L=Pt({left:!0,right:!1,top:!0,bottom:!1}),$=Pt({left:!1,right:!1,top:!1,bottom:!1}),p=R=>{D.value&&(D.value=!1,$.left=!1,$.right=!1,$.top=!1,$.bottom=!1,r(R))},g=Da(p,n+s),O=R=>{var V;if(!f)return;const 
E=((V=R==null?void 0:R.document)==null?void 0:V.documentElement)||(R==null?void 0:R.documentElement)||rr(R),{display:M,flexDirection:A,direction:w}=getComputedStyle(E),F=w==="rtl"?-1:1,Y=E.scrollLeft;$.left=Yd.value;const re=Y*F<=(o.left||0),U=Y*F+E.clientWidth>=E.scrollWidth-(o.right||0)-ti;M==="flex"&&A==="row-reverse"?(L.left=U,L.right=re):(L.left=re,L.right=U),d.value=Y;let X=E.scrollTop;R===f.document&&!X&&(X=f.document.body.scrollTop),$.top=Xm.value;const k=X<=(o.top||0),ae=X+E.clientHeight>=E.scrollHeight-(o.bottom||0)-ti;M==="flex"&&A==="column-reverse"?(L.top=ae,L.bottom=k):(L.top=k,L.bottom=ae),m.value=X},j=R=>{var V;if(!f)return;const E=(V=R.target.documentElement)!=null?V:R.target;O(E),D.value=!0,g(R),i(R)};return rt(e,"scroll",n?$a(j,n,!0,!1):j,l),Xn(()=>{try{const R=ue(e);if(!R)return;O(R)}catch(R){a(R)}}),rt(e,"scrollend",p,l),{x:y,y:v,isScrolling:D,arrivedState:L,directions:$,measure(){const R=ue(e);f&&R&&O(R)}}}function Zf(e,t,n={}){const{window:s=$e}=n;return ir(e,t,s==null?void 0:s.localStorage,n)}function ko(e){const t=window.getComputedStyle(e);if(t.overflowX==="scroll"||t.overflowY==="scroll"||t.overflowX==="auto"&&e.clientWidth1?!0:(t.preventDefault&&t.preventDefault(),!1)}const hs=new WeakMap;function eu(e,t=!1){const n=le(t);let s=null,r="";Ie(Ho(e),l=>{const c=ds(ue(l));if(c){const f=c;if(hs.get(f)||hs.set(f,f.style.overflow),f.style.overflow!=="hidden"&&(r=f.style.overflow),f.style.overflow==="hidden")return n.value=!0;if(n.value)return f.style.overflow="hidden"}},{immediate:!0});const i=()=>{const l=ds(ue(e));!l||n.value||(zr&&(s=rt(l,"touchmove",c=>{Za(c)},{passive:!1})),l.style.overflow="hidden",n.value=!0)},o=()=>{const l=ds(ue(e));!l||!n.value||(zr&&(s==null||s()),l.style.overflow=r,hs.delete(l),n.value=!1)};return nr(o),ie({get(){return n.value},set(l){l?i():o()}})}function tu(e,t,n={}){const{window:s=$e}=n;return ir(e,t,s==null?void 0:s.sessionStorage,n)}function nu(e={}){const{window:t=$e,...n}=e;return Qa(t,n)}function 
su(e={}){const{window:t=$e,initialWidth:n=Number.POSITIVE_INFINITY,initialHeight:s=Number.POSITIVE_INFINITY,listenOrientation:r=!0,includeScrollbar:i=!0,type:o="inner"}=e,l=le(n),c=le(s),f=()=>{if(t)if(o==="outer")l.value=t.outerWidth,c.value=t.outerHeight;else if(o==="visual"&&t.visualViewport){const{width:d,height:m,scale:y}=t.visualViewport;l.value=Math.round(d*y),c.value=Math.round(m*y)}else i?(l.value=t.innerWidth,c.value=t.innerHeight):(l.value=t.document.documentElement.clientWidth,c.value=t.document.documentElement.clientHeight)};f(),Xn(f);const a={passive:!0};if(rt("resize",f,a),t&&o==="visual"&&t.visualViewport&&rt(t.visualViewport,"resize",f,a),r){const d=$o("(orientation: portrait)");Ie(d,()=>f())}return{width:l,height:c}}const ps={BASE_URL:"/dev/",DEV:!1,MODE:"production",PROD:!0,SSR:!1};var gs={};const Uo=/^(?:[a-z]+:|\/\/)/i,ef="vitepress-theme-appearance",tf=/#.*$/,nf=/[?#].*$/,sf=/(?:(^|\/)index)?\.(?:md|html)$/,me=typeof document<"u",Wo={relativePath:"404.md",filePath:"",title:"404",description:"Not Found",headers:[],frontmatter:{sidebar:!1,layout:"page"},lastUpdated:0,isNotFound:!0};function rf(e,t,n=!1){if(t===void 0)return!1;if(e=ni(`/${e}`),n)return new RegExp(t).test(e);if(ni(t)!==e)return!1;const s=t.match(tf);return s?(me?location.hash:"")===s[0]:!0}function ni(e){return decodeURI(e).replace(nf,"").replace(sf,"$1")}function of(e){return Uo.test(e)}function lf(e,t){return Object.keys((e==null?void 0:e.locales)||{}).find(n=>n!=="root"&&!of(n)&&rf(t,`/${n}/`,!0))||"root"}function cf(e,t){var s,r,i,o,l,c,f;const n=lf(e,t);return Object.assign({},e,{localeIndex:n,lang:((s=e.locales[n])==null?void 0:s.lang)??e.lang,dir:((r=e.locales[n])==null?void 0:r.dir)??e.dir,title:((i=e.locales[n])==null?void 0:i.title)??e.title,titleTemplate:((o=e.locales[n])==null?void 0:o.titleTemplate)??e.titleTemplate,description:((l=e.locales[n])==null?void 0:l.description)??e.description,head:Ko(e.head,((c=e.locales[n])==null?void 
0:c.head)??[]),themeConfig:{...e.themeConfig,...(f=e.locales[n])==null?void 0:f.themeConfig}})}function Bo(e,t){const n=t.title||e.title,s=t.titleTemplate??e.titleTemplate;if(typeof s=="string"&&s.includes(":title"))return s.replace(/:title/g,n);const r=af(e.title,s);return n===r.slice(3)?n:`${n}${r}`}function af(e,t){return t===!1?"":t===!0||t===void 0?` | ${e}`:e===t?"":` | ${t}`}function ff(e,t){const[n,s]=t;if(n!=="meta")return!1;const r=Object.entries(s)[0];return r==null?!1:e.some(([i,o])=>i===n&&o[r[0]]===r[1])}function Ko(e,t){return[...e.filter(n=>!ff(t,n)),...t]}const uf=/[\u0000-\u001F"#$&*+,:;<=>?[\]^`{|}\u007F]/g,df=/^[a-z]:/i;function si(e){const t=df.exec(e),n=t?t[0]:"";return n+e.slice(n.length).replace(uf,"_").replace(/(^|\/)_+(?=[^/]*$)/,"$1")}const ms=new Set;function hf(e){if(ms.size===0){const n=typeof process=="object"&&(gs==null?void 0:gs.VITE_EXTRA_EXTENSIONS)||(ps==null?void 0:ps.VITE_EXTRA_EXTENSIONS)||"";("3g2,3gp,aac,ai,apng,au,avif,bin,bmp,cer,class,conf,crl,css,csv,dll,doc,eps,epub,exe,gif,gz,ics,ief,jar,jpe,jpeg,jpg,js,json,jsonld,m4a,man,mid,midi,mjs,mov,mp2,mp3,mp4,mpe,mpeg,mpg,mpp,oga,ogg,ogv,ogx,opus,otf,p10,p7c,p7m,p7s,pdf,png,ps,qt,roff,rtf,rtx,ser,svg,t,tif,tiff,tr,ts,tsv,ttf,txt,vtt,wav,weba,webm,webp,woff,woff2,xhtml,xml,yaml,yml,zip"+(n&&typeof n=="string"?","+n:"")).split(",").forEach(s=>ms.add(s))}const t=e.split(".").pop();return t==null||!ms.has(t.toLowerCase())}function ru(e){return e.replace(/[|\\{}()[\]^$+*?.]/g,"\\$&").replace(/-/g,"\\x2d")}const pf=Symbol(),vt=Xs(Ma);function iu(e){const t=ie(()=>cf(vt.value,e.data.relativePath)),n=t.value.appearance,s=n==="force-dark"?le(!0):n==="force-auto"?Vo():n?za({storageKey:ef,initialValue:()=>n==="dark"?"dark":"auto",...typeof n=="object"?n:{}}):le(!1),r=le(me?location.hash:"");return 
me&&window.addEventListener("hashchange",()=>{r.value=location.hash}),Ie(()=>e.data,()=>{r.value=me?location.hash:""}),{site:t,theme:ie(()=>t.value.themeConfig),page:ie(()=>e.data),frontmatter:ie(()=>e.data.frontmatter),params:ie(()=>e.data.params),lang:ie(()=>t.value.lang),dir:ie(()=>e.data.frontmatter.dir||t.value.dir),localeIndex:ie(()=>t.value.localeIndex||"root"),title:ie(()=>Bo(t.value,e.data)),description:ie(()=>e.data.description||t.value.description),isDark:s,hash:ie(()=>r.value)}}function gf(){const e=_t(pf);if(!e)throw new Error("vitepress data not properly injected in app");return e}function mf(e,t){return`${e}${t}`.replace(/\/+/g,"/")}function ri(e){return Uo.test(e)||!e.startsWith("/")?e:mf(vt.value.base,e)}function yf(e){let t=e.replace(/\.html$/,"");if(t=decodeURIComponent(t),t=t.replace(/\/$/,"/index"),me){const n="/dev/";t=si(t.slice(n.length).replace(/\//g,"_")||"index")+".md";let s=__VP_HASH_MAP__[t.toLowerCase()];if(s||(t=t.endsWith("_index.md")?t.slice(0,-9)+".md":t.slice(0,-3)+"_index.md",s=__VP_HASH_MAP__[t.toLowerCase()]),!s)return null;t=`${n}assets/${t}.${s}.js`}else t=`./${si(t.slice(1).replace(/\//g,"_"))}.md.js`;return t}let An=[];function ou(e){An.push(e),Bn(()=>{An=An.filter(t=>t!==e)})}function _f(){let e=vt.value.scrollOffset,t=0,n=24;if(typeof e=="object"&&"padding"in e&&(n=e.padding,e=e.selector),typeof e=="number")t=e;else if(typeof e=="string")t=ii(e,n);else if(Array.isArray(e))for(const s of e){const r=ii(s,n);if(r){t=r;break}}return t}function ii(e,t){const n=document.querySelector(e);if(!n)return 0;const s=n.getBoundingClientRect().bottom;return s<0?0:s+t}const bf=Symbol(),qo="http://a.com",vf=()=>({path:"/",component:null,data:Wo});function lu(e,t){const n=Pt(vf()),s={route:n,go:r};async function r(l=me?location.href:"/"){var c,f;l=ys(l),await((c=s.onBeforeRouteChange)==null?void 0:c.call(s,l))!==!1&&(me&&l!==ys(location.href)&&(history.replaceState({scrollPosition:window.scrollY},""),history.pushState({},"",l)),await 
o(l),await((f=s.onAfterRouteChange??s.onAfterRouteChanged)==null?void 0:f(l)))}let i=null;async function o(l,c=0,f=!1){var m,y;if(await((m=s.onBeforePageLoad)==null?void 0:m.call(s,l))===!1)return;const a=new URL(l,qo),d=i=a.pathname;try{let v=await e(d);if(!v)throw new Error(`Page not found: ${d}`);if(i===d){i=null;const{default:b,__pageData:D}=v;if(!b)throw new Error(`Invalid route component: ${b}`);await((y=s.onAfterPageLoad)==null?void 0:y.call(s,l)),n.path=me?d:ri(d),n.component=En(b),n.data=En(D),me&&Un(()=>{let L=vt.value.base+D.relativePath.replace(/(?:(^|\/)index)?\.md$/,"$1");if(!vt.value.cleanUrls&&!L.endsWith("/")&&(L+=".html"),L!==a.pathname&&(a.pathname=L,l=L+a.search+a.hash,history.replaceState({},"",l)),a.hash&&!c){let $=null;try{$=document.getElementById(decodeURIComponent(a.hash).slice(1))}catch(p){console.warn(p)}if($){oi($,a.hash);return}}window.scrollTo(0,c)})}}catch(v){if(!/fetch|Page not found/.test(v.message)&&!/^\/404(\.html|\/)?$/.test(l)&&console.error(v),!f)try{const b=await fetch(vt.value.base+"hashmap.json");window.__VP_HASH_MAP__=await b.json(),await o(l,c,!0);return}catch{}if(i===d){i=null,n.path=me?d:ri(d),n.component=t?En(t):null;const b=me?d.replace(/(^|\/)$/,"$1index").replace(/(\.html)?$/,".md").replace(/^\//,""):"404.md";n.data={...Wo,relativePath:b}}}}return me&&(history.state===null&&history.replaceState({},""),window.addEventListener("click",l=>{if(l.defaultPrevented||!(l.target instanceof Element)||l.target.closest("button")||l.button!==0||l.ctrlKey||l.shiftKey||l.altKey||l.metaKey)return;const c=l.target.closest("a");if(!c||c.closest(".vp-raw")||c.hasAttribute("download")||c.hasAttribute("target"))return;const f=c.getAttribute("href")??(c instanceof SVGAElement?c.getAttribute("xlink:href"):null);if(f==null)return;const{href:a,origin:d,pathname:m,hash:y,search:v}=new URL(f,c.baseURI),b=new 
URL(location.href);d===b.origin&&hf(m)&&(l.preventDefault(),m===b.pathname&&v===b.search?(y!==b.hash&&(history.pushState({},"",a),window.dispatchEvent(new HashChangeEvent("hashchange",{oldURL:b.href,newURL:a}))),y?oi(c,y,c.classList.contains("header-anchor")):window.scrollTo(0,0)):r(a))},{capture:!0}),window.addEventListener("popstate",async l=>{var f;if(l.state===null)return;const c=ys(location.href);await o(c,l.state&&l.state.scrollPosition||0),await((f=s.onAfterRouteChange??s.onAfterRouteChanged)==null?void 0:f(c))}),window.addEventListener("hashchange",l=>{l.preventDefault()})),s}function wf(){const e=_t(bf);if(!e)throw new Error("useRouter() is called without provider.");return e}function Go(){return wf().route}function oi(e,t,n=!1){let s=null;try{s=e.classList.contains("header-anchor")?e:document.getElementById(decodeURIComponent(t).slice(1))}catch(r){console.warn(r)}if(s){let r=function(){!n||Math.abs(o-window.scrollY)>window.innerHeight?window.scrollTo(0,o):window.scrollTo({left:0,top:o,behavior:"smooth"})};const i=parseInt(window.getComputedStyle(s).paddingTop,10),o=window.scrollY+s.getBoundingClientRect().top-_f()+i;requestAnimationFrame(r)}}function ys(e){const t=new URL(e,qo);return t.pathname=t.pathname.replace(/(^|\/)index(\.html)?$/,"$1"),vt.value.cleanUrls?t.pathname=t.pathname.replace(/\.html$/,""):!t.pathname.endsWith("/")&&!t.pathname.endsWith(".html")&&(t.pathname+=".html"),t.pathname+t.search+t.hash}const bn=()=>An.forEach(e=>e()),cu=Js({name:"VitePressContent",props:{as:{type:[Object,String],default:"div"}},setup(e){const t=Go(),{frontmatter:n,site:s}=gf();return Ie(n,bn,{deep:!0,flush:"post"}),()=>Ns(e.as,s.value.contentProps??{style:{position:"relative"}},[t.component?Ns(t.component,{onVnodeMounted:bn,onVnodeUpdated:bn,onVnodeUnmounted:bn}):"404 Page Not Found"])}}),au=(e,t)=>{const n=e.__vccOpts||e;for(const[s,r]of t)n[s]=r;return n},Sf="modulepreload",Ef=function(e){return"/dev/"+e},li={},fu=function(t,n,s){let 
r=Promise.resolve();if(n&&n.length>0){document.getElementsByTagName("link");const o=document.querySelector("meta[property=csp-nonce]"),l=(o==null?void 0:o.nonce)||(o==null?void 0:o.getAttribute("nonce"));r=Promise.allSettled(n.map(c=>{if(c=Ef(c),c in li)return;li[c]=!0;const f=c.endsWith(".css"),a=f?'[rel="stylesheet"]':"";if(document.querySelector(`link[href="${c}"]${a}`))return;const d=document.createElement("link");if(d.rel=f?"stylesheet":Sf,f||(d.as="script"),d.crossOrigin="",d.href=c,l&&d.setAttribute("nonce",l),document.head.appendChild(d),f)return new Promise((m,y)=>{d.addEventListener("load",m),d.addEventListener("error",()=>y(new Error(`Unable to preload CSS for ${c}`)))})}))}function i(o){const l=new Event("vite:preloadError",{cancelable:!0});if(l.payload=o,window.dispatchEvent(l),!l.defaultPrevented)throw o}return r.then(o=>{for(const l of o||[])l.status==="rejected"&&i(l.reason);return t().catch(i)})},uu=Js({setup(e,{slots:t}){const n=le(!1);return Nt(()=>{n.value=!0}),()=>n.value&&t.default?t.default():null}});function du(){me&&window.addEventListener("click",e=>{var n;const t=e.target;if(t.matches(".vp-code-group input")){const s=(n=t.parentElement)==null?void 0:n.parentElement;if(!s)return;const r=Array.from(s.querySelectorAll("input")).indexOf(t);if(r<0)return;const i=s.querySelector(".blocks");if(!i)return;const o=Array.from(i.children).find(f=>f.classList.contains("active"));if(!o)return;const l=i.children[r];if(!l||o===l)return;o.classList.remove("active"),l.classList.add("active");const c=s==null?void 0:s.querySelector(`label[for="${t.id}"]`);c==null||c.scrollIntoView({block:"nearest"})}})}function hu(){if(me){const e=new WeakMap;window.addEventListener("click",t=>{var s;const n=t.target;if(n.matches('div[class*="language-"] > button.copy')){const r=n.parentElement,i=(s=n.nextElementSibling)==null?void 0:s.nextElementSibling;if(!r||!i)return;const 
o=/language-(shellscript|shell|bash|sh|zsh)/.test(r.className),l=[".vp-copy-ignore",".diff.remove"],c=i.cloneNode(!0);c.querySelectorAll(l.join(",")).forEach(a=>a.remove());let f=c.textContent||"";o&&(f=f.replace(/^ *(\$|>) /gm,"").trim()),Tf(f).then(()=>{n.classList.add("copied"),clearTimeout(e.get(n));const a=setTimeout(()=>{n.classList.remove("copied"),n.blur(),e.delete(n)},2e3);e.set(n,a)})}})}}async function Tf(e){try{return navigator.clipboard.writeText(e)}catch{const t=document.createElement("textarea"),n=document.activeElement;t.value=e,t.setAttribute("readonly",""),t.style.contain="strict",t.style.position="absolute",t.style.left="-9999px",t.style.fontSize="12pt";const s=document.getSelection(),r=s?s.rangeCount>0&&s.getRangeAt(0):null;document.body.appendChild(t),t.select(),t.selectionStart=0,t.selectionEnd=e.length,document.execCommand("copy"),document.body.removeChild(t),r&&(s.removeAllRanges(),s.addRange(r)),n&&n.focus()}}function pu(e,t){let n=!0,s=[];const r=i=>{if(n){n=!1,i.forEach(l=>{const c=_s(l);for(const f of document.head.children)if(f.isEqualNode(c)){s.push(f);return}});return}const o=i.map(_s);s.forEach((l,c)=>{const f=o.findIndex(a=>a==null?void 0:a.isEqualNode(l??null));f!==-1?delete o[f]:(l==null||l.remove(),delete s[c])}),o.forEach(l=>l&&document.head.appendChild(l)),s=[...s,...o].filter(Boolean)};er(()=>{const i=e.data,o=t.value,l=i&&i.description,c=i&&i.frontmatter.head||[],f=Bo(o,i);f!==document.title&&(document.title=f);const a=l||o.description;let d=document.querySelector("meta[name=description]");d?d.getAttribute("content")!==a&&d.setAttribute("content",a):_s(["meta",{name:"description",content:a}]),r(Ko(o.head,Cf(c)))})}function _s([e,t,n]){const s=document.createElement(e);for(const r in t)s.setAttribute(r,t[r]);return n&&(s.innerHTML=n),e==="script"&&t.async==null&&(s.async=!1),s}function xf(e){return e[0]==="meta"&&e[1]&&e[1].name==="description"}function Cf(e){return e.filter(t=>!xf(t))}const bs=new 
Set,Xo=()=>document.createElement("link"),Af=e=>{const t=Xo();t.rel="prefetch",t.href=e,document.head.appendChild(t)},Rf=e=>{const t=new XMLHttpRequest;t.open("GET",e,t.withCredentials=!0),t.send()};let vn;const Mf=me&&(vn=Xo())&&vn.relList&&vn.relList.supports&&vn.relList.supports("prefetch")?Af:Rf;function gu(){if(!me||!window.IntersectionObserver)return;let e;if((e=navigator.connection)&&(e.saveData||/2g/.test(e.effectiveType)))return;const t=window.requestIdleCallback||setTimeout;let n=null;const s=()=>{n&&n.disconnect(),n=new IntersectionObserver(i=>{i.forEach(o=>{if(o.isIntersecting){const l=o.target;n.unobserve(l);const{pathname:c}=l;if(!bs.has(c)){bs.add(c);const f=yf(c);f&&Mf(f)}}})}),t(()=>{document.querySelectorAll("#app a").forEach(i=>{const{hostname:o,pathname:l}=new URL(i.href instanceof SVGAnimatedString?i.href.animVal:i.href,i.baseURI),c=l.match(/\.\w+$/);c&&c[0]!==".html"||i.target!=="_blank"&&o===location.hostname&&(l!==location.pathname?n.observe(i):bs.add(l))})})};Nt(s);const r=Go();Ie(()=>r.path,s),Bn(()=>{n&&n.disconnect()})}export{Xi as $,_f as A,If as B,Ff as C,Xs as D,ou as E,Te as F,ce as G,Nf as H,Uo as I,Go as J,Wc as K,_t as L,su as M,Vs as N,Qf as O,Un as P,nu as Q,me as R,kn as S,Wf as T,Pf as U,fu as V,eu as W,Sc as X,qf as Y,Df as Z,au as _,To as a,Kf as a0,$f as a1,kf as a2,Pt as a3,$l as a4,Ns as a5,pu as a6,bf as a7,iu as a8,pf as a9,ru as aA,cu as aa,uu as ab,vt as ac,Xf as ad,lu as ae,yf as af,gu as ag,hu as ah,du as ai,ue as aj,Zr as ak,rr as al,Yf as am,nr as an,zf as ao,tu as ap,Zf as aq,Jf as ar,wf as as,rt as at,Of as au,Bf as av,fe as aw,Lf as ax,En as ay,Gf as az,Ps as b,Vf as c,Js as d,Uf as e,hf as f,ri as g,ie as h,of as i,Eo as j,Pi as k,rf as l,$o as m,ks as n,Ls as o,le as p,Ie as q,Hf as r,er as s,al as t,gf as u,Nt as v,Gl as w,Bn as x,jf as y,cc as z}; diff --git a/dev/assets/chunks/theme.Dw8Jqbck.js b/dev/assets/chunks/theme.Cah_qJpF.js similarity index 99% rename from dev/assets/chunks/theme.Dw8Jqbck.js rename 
to dev/assets/chunks/theme.Cah_qJpF.js index c69ea0a4f5..5f6b0ca78d 100644 --- a/dev/assets/chunks/theme.Dw8Jqbck.js +++ b/dev/assets/chunks/theme.Cah_qJpF.js @@ -1,2 +1,2 @@ -const __vite__mapDeps=(i,m=__vite__mapDeps,d=(m.f||(m.f=["assets/chunks/VPLocalSearchBox.DSrXxC9l.js","assets/chunks/framework.I-x9Gl6h.js"])))=>i.map(i=>d[i]); -import{d as b,o as a,c as d,r as u,n as I,a as G,t as N,b as k,w as f,e as _,T as de,_ as $,u as Te,i as Ke,f as We,g as pe,h as P,j as v,k as r,l as z,m as re,p as T,q as D,s as Z,v as F,x as ve,y as fe,z as qe,A as Je,B as K,F as M,C as E,D as we,E as x,G as g,H,I as Ne,J as ee,K as j,L as q,M as Ye,N as Ie,O as ie,P as he,Q as Me,R as te,S as Xe,U as Qe,V as Ze,W as Ce,X as me,Y as xe,Z as et,$ as tt,a0 as nt,a1 as Ae,a2 as st,a3 as ot,a4 as at,a5 as Pe}from"./framework.I-x9Gl6h.js";const rt=b({__name:"VPBadge",props:{text:{},type:{default:"tip"}},setup(o){return(e,t)=>(a(),d("span",{class:I(["VPBadge",e.type])},[u(e.$slots,"default",{},()=>[G(N(e.text),1)])],2))}}),it={key:0,class:"VPBackdrop"},lt=b({__name:"VPBackdrop",props:{show:{type:Boolean}},setup(o){return(e,t)=>(a(),k(de,{name:"fade"},{default:f(()=>[e.show?(a(),d("div",it)):_("",!0)]),_:1}))}}),ct=$(lt,[["__scopeId","data-v-b06cdb19"]]),L=Te;function ut(o,e){let t,s=!1;return()=>{t&&clearTimeout(t),s?t=setTimeout(o,e):(o(),(s=!0)&&setTimeout(()=>s=!1,e))}}function le(o){return o.startsWith("/")?o:`/${o}`}function _e(o){const{pathname:e,search:t,hash:s,protocol:n}=new URL(o,"http://a.com");if(Ke(o)||o.startsWith("#")||!n.startsWith("http")||!We(e))return o;const{site:i}=L(),l=e.endsWith("/")||e.endsWith(".html")?o:o.replace(/(?:(^\.+)\/)?.*$/,`$1${e.replace(/(\.md)?$/,i.value.cleanUrls?"":".html")}${t}${s}`);return pe(l)}function Y({correspondingLink:o=!1}={}){const{site:e,localeIndex:t,page:s,theme:n,hash:i}=L(),l=P(()=>{var c,h;return{label:(c=e.value.locales[t.value])==null?void 0:c.label,link:((h=e.value.locales[t.value])==null?void 
0:h.link)||(t.value==="root"?"/":`/${t.value}/`)}});return{localeLinks:P(()=>Object.entries(e.value.locales).flatMap(([c,h])=>l.value.label===h.label?[]:{text:h.label,link:dt(h.link||(c==="root"?"/":`/${c}/`),n.value.i18nRouting!==!1&&o,s.value.relativePath.slice(l.value.link.length-1),!e.value.cleanUrls)+i.value})),currentLang:l}}function dt(o,e,t,s){return e?o.replace(/\/$/,"")+le(t.replace(/(^|\/)index\.md$/,"$1").replace(/\.md$/,s?".html":"")):o}const pt={class:"NotFound"},vt={class:"code"},ft={class:"title"},ht={class:"quote"},mt={class:"action"},_t=["href","aria-label"],bt=b({__name:"NotFound",setup(o){const{theme:e}=L(),{currentLang:t}=Y();return(s,n)=>{var i,l,p,c,h;return a(),d("div",pt,[v("p",vt,N(((i=r(e).notFound)==null?void 0:i.code)??"404"),1),v("h1",ft,N(((l=r(e).notFound)==null?void 0:l.title)??"PAGE NOT FOUND"),1),n[0]||(n[0]=v("div",{class:"divider"},null,-1)),v("blockquote",ht,N(((p=r(e).notFound)==null?void 0:p.quote)??"But if you don't change your direction, and if you keep looking, you may end up where you are heading."),1),v("div",mt,[v("a",{class:"link",href:r(pe)(r(t).link),"aria-label":((c=r(e).notFound)==null?void 0:c.linkLabel)??"go to home"},N(((h=r(e).notFound)==null?void 0:h.linkText)??"Take me home"),9,_t)])])}}}),kt=$(bt,[["__scopeId","data-v-951cab6c"]]);function Ee(o,e){if(Array.isArray(o))return X(o);if(o==null)return[];e=le(e);const t=Object.keys(o).sort((n,i)=>i.split("/").length-n.split("/").length).find(n=>e.startsWith(le(n))),s=t?o[t]:[];return Array.isArray(s)?X(s):X(s.items,s.base)}function gt(o){const e=[];let t=0;for(const s in o){const n=o[s];if(n.items){t=e.push(n);continue}e[t]||e.push({items:[]}),e[t].items.push(n)}return e}function $t(o){const e=[];function t(s){for(const n of s)n.text&&n.link&&e.push({text:n.text,link:n.link,docFooterText:n.docFooterText}),n.items&&t(n.items)}return t(o),e}function ce(o,e){return Array.isArray(e)?e.some(t=>ce(o,t)):z(o,e.link)?!0:e.items?ce(o,e.items):!1}function 
X(o,e){return[...o].map(t=>{const s={...t},n=s.base||e;return n&&s.link&&(s.link=n+s.link),s.items&&(s.items=X(s.items,n)),s})}function R(){const{frontmatter:o,page:e,theme:t}=L(),s=re("(min-width: 960px)"),n=T(!1),i=P(()=>{const A=t.value.sidebar,w=e.value.relativePath;return A?Ee(A,w):[]}),l=T(i.value);D(i,(A,w)=>{JSON.stringify(A)!==JSON.stringify(w)&&(l.value=i.value)});const p=P(()=>o.value.sidebar!==!1&&l.value.length>0&&o.value.layout!=="home"),c=P(()=>h?o.value.aside==null?t.value.aside==="left":o.value.aside==="left":!1),h=P(()=>o.value.layout==="home"?!1:o.value.aside!=null?!!o.value.aside:t.value.aside!==!1),y=P(()=>p.value&&s.value),m=P(()=>p.value?gt(l.value):[]);function S(){n.value=!0}function V(){n.value=!1}function C(){n.value?V():S()}return{isOpen:n,sidebar:l,sidebarGroups:m,hasSidebar:p,hasAside:h,leftAside:c,isSidebarEnabled:y,open:S,close:V,toggle:C}}function yt(o,e){let t;Z(()=>{t=o.value?document.activeElement:void 0}),F(()=>{window.addEventListener("keyup",s)}),ve(()=>{window.removeEventListener("keyup",s)});function s(n){n.key==="Escape"&&o.value&&(e(),t==null||t.focus())}}function Pt(o){const{page:e,hash:t}=L(),s=T(!1),n=P(()=>o.value.collapsed!=null),i=P(()=>!!o.value.link),l=T(!1),p=()=>{l.value=z(e.value.relativePath,o.value.link)};D([e,o,t],p),F(p);const c=P(()=>l.value?!0:o.value.items?ce(e.value.relativePath,o.value.items):!1),h=P(()=>!!(o.value.items&&o.value.items.length));Z(()=>{s.value=!!(n.value&&o.value.collapsed)}),fe(()=>{(l.value||c.value)&&(s.value=!1)});function y(){n.value&&(s.value=!s.value)}return{collapsed:s,collapsible:n,isLink:i,isActiveLink:l,hasActiveLink:c,hasChildren:h,toggle:y}}function St(){const{hasSidebar:o}=R(),e=re("(min-width: 960px)"),t=re("(min-width: 1280px)");return{isAsideEnabled:P(()=>!t.value&&!e.value?!1:o.value?t.value:e.value)}}const Vt=/\b(?:VPBadge|header-anchor|footnote-ref|ignore-header)\b/,ue=[];function Be(o){return typeof 
o.outline=="object"&&!Array.isArray(o.outline)&&o.outline.label||o.outlineTitle||"On this page"}function be(o){const e=[...document.querySelectorAll(".VPDoc :where(h1,h2,h3,h4,h5,h6)")].filter(t=>t.id&&t.hasChildNodes()).map(t=>{const s=Number(t.tagName[1]);return{element:t,title:Lt(t),link:"#"+t.id,level:s}});return Tt(e,o)}function Lt(o){let e="";for(const t of o.childNodes)if(t.nodeType===1){if(Vt.test(t.className))continue;e+=t.textContent}else t.nodeType===3&&(e+=t.textContent);return e.trim()}function Tt(o,e){if(e===!1)return[];const t=(typeof e=="object"&&!Array.isArray(e)?e.level:e)||2,[s,n]=typeof t=="number"?[t,t]:t==="deep"?[2,6]:t;return It(o,s,n)}function wt(o,e){const{isAsideEnabled:t}=St(),s=ut(i,100);let n=null;F(()=>{requestAnimationFrame(i),window.addEventListener("scroll",s)}),qe(()=>{l(location.hash)}),ve(()=>{window.removeEventListener("scroll",s)});function i(){if(!t.value)return;const p=window.scrollY,c=window.innerHeight,h=document.body.offsetHeight,y=Math.abs(p+c-h)<1,m=ue.map(({element:V,link:C})=>({link:C,top:Nt(V)})).filter(({top:V})=>!Number.isNaN(V)).sort((V,C)=>V.top-C.top);if(!m.length){l(null);return}if(p<1){l(null);return}if(y){l(m[m.length-1].link);return}let S=null;for(const{link:V,top:C}of m){if(C>p+Je()+4)break;S=V}l(S)}function l(p){n&&n.classList.remove("active"),p==null?n=null:n=o.value.querySelector(`a[href="${decodeURIComponent(p)}"]`);const c=n;c?(c.classList.add("active"),e.value.style.top=c.offsetTop+39+"px",e.value.style.opacity="1"):(e.value.style.top="33px",e.value.style.opacity="0")}}function Nt(o){let e=0;for(;o!==document.body;){if(o===null)return NaN;e+=o.offsetTop,o=o.offsetParent}return e}function It(o,e,t){ue.length=0;const s=[],n=[];return o.forEach(i=>{const l={...i,children:[]};let p=n[n.length-1];for(;p&&p.level>=l.level;)n.pop(),p=n[n.length-1];if(l.element.classList.contains("ignore-header")||p&&"shouldIgnore"in p){n.push({level:l.level,shouldIgnore:!0});return}l.level>t||l.level{const 
n=K("VPDocOutlineItem",!0);return a(),d("ul",{class:I(["VPDocOutlineItem",t.root?"root":"nested"])},[(a(!0),d(M,null,E(t.headers,({children:i,link:l,title:p})=>(a(),d("li",null,[v("a",{class:"outline-link",href:l,onClick:e,title:p},N(p),9,Mt),i!=null&&i.length?(a(),k(n,{key:0,headers:i},null,8,["headers"])):_("",!0)]))),256))],2)}}}),He=$(Ct,[["__scopeId","data-v-3f927ebe"]]),At={class:"content"},Et={"aria-level":"2",class:"outline-title",id:"doc-outline-aria-label",role:"heading"},Bt=b({__name:"VPDocAsideOutline",setup(o){const{frontmatter:e,theme:t}=L(),s=we([]);x(()=>{s.value=be(e.value.outline??t.value.outline)});const n=T(),i=T();return wt(n,i),(l,p)=>(a(),d("nav",{"aria-labelledby":"doc-outline-aria-label",class:I(["VPDocAsideOutline",{"has-outline":s.value.length>0}]),ref_key:"container",ref:n},[v("div",At,[v("div",{class:"outline-marker",ref_key:"marker",ref:i},null,512),v("div",Et,N(r(Be)(r(t))),1),g(He,{headers:s.value,root:!0},null,8,["headers"])])],2))}}),Ht=$(Bt,[["__scopeId","data-v-b38bf2ff"]]),Ot={class:"VPDocAsideCarbonAds"},Dt=b({__name:"VPDocAsideCarbonAds",props:{carbonAds:{}},setup(o){const e=()=>null;return(t,s)=>(a(),d("div",Ot,[g(r(e),{"carbon-ads":t.carbonAds},null,8,["carbon-ads"])]))}}),Ft={class:"VPDocAside"},Rt=b({__name:"VPDocAside",setup(o){const{theme:e}=L();return(t,s)=>(a(),d("div",Ft,[u(t.$slots,"aside-top",{},void 0,!0),u(t.$slots,"aside-outline-before",{},void 0,!0),g(Ht),u(t.$slots,"aside-outline-after",{},void 0,!0),s[0]||(s[0]=v("div",{class:"spacer"},null,-1)),u(t.$slots,"aside-ads-before",{},void 0,!0),r(e).carbonAds?(a(),k(Dt,{key:0,"carbon-ads":r(e).carbonAds},null,8,["carbon-ads"])):_("",!0),u(t.$slots,"aside-ads-after",{},void 0,!0),u(t.$slots,"aside-bottom",{},void 0,!0)]))}}),Ut=$(Rt,[["__scopeId","data-v-6d7b3c46"]]);function jt(){const{theme:o,page:e}=L();return P(()=>{const{text:t="Edit this page",pattern:s=""}=o.value.editLink||{};let n;return typeof 
s=="function"?n=s(e.value):n=s.replace(/:path/g,e.value.filePath),{url:n,text:t}})}function Gt(){const{page:o,theme:e,frontmatter:t}=L();return P(()=>{var h,y,m,S,V,C,A,w;const s=Ee(e.value.sidebar,o.value.relativePath),n=$t(s),i=zt(n,B=>B.link.replace(/[?#].*$/,"")),l=i.findIndex(B=>z(o.value.relativePath,B.link)),p=((h=e.value.docFooter)==null?void 0:h.prev)===!1&&!t.value.prev||t.value.prev===!1,c=((y=e.value.docFooter)==null?void 0:y.next)===!1&&!t.value.next||t.value.next===!1;return{prev:p?void 0:{text:(typeof t.value.prev=="string"?t.value.prev:typeof t.value.prev=="object"?t.value.prev.text:void 0)??((m=i[l-1])==null?void 0:m.docFooterText)??((S=i[l-1])==null?void 0:S.text),link:(typeof t.value.prev=="object"?t.value.prev.link:void 0)??((V=i[l-1])==null?void 0:V.link)},next:c?void 0:{text:(typeof t.value.next=="string"?t.value.next:typeof t.value.next=="object"?t.value.next.text:void 0)??((C=i[l+1])==null?void 0:C.docFooterText)??((A=i[l+1])==null?void 0:A.text),link:(typeof t.value.next=="object"?t.value.next.link:void 0)??((w=i[l+1])==null?void 0:w.link)}}})}function zt(o,e){const t=new Set;return o.filter(s=>{const n=e(s);return t.has(n)?!1:t.add(n)})}const O=b({__name:"VPLink",props:{tag:{},href:{},noIcon:{type:Boolean},target:{},rel:{}},setup(o){const e=o,t=P(()=>e.tag??(e.href?"a":"span")),s=P(()=>e.href&&Ne.test(e.href)||e.target==="_blank");return(n,i)=>(a(),k(H(t.value),{class:I(["VPLink",{link:n.href,"vp-external-link-icon":s.value,"no-icon":n.noIcon}]),href:n.href?r(_e)(n.href):void 0,target:n.target??(s.value?"_blank":void 0),rel:n.rel??(s.value?"noreferrer":void 0)},{default:f(()=>[u(n.$slots,"default")]),_:3},8,["class","href","target","rel"]))}}),Kt={class:"VPLastUpdated"},Wt=["datetime"],qt=b({__name:"VPDocFooterLastUpdated",setup(o){const{theme:e,page:t,lang:s}=L(),n=P(()=>new Date(t.value.lastUpdated)),i=P(()=>n.value.toISOString()),l=T("");return F(()=>{Z(()=>{var p,c,h;l.value=new Intl.DateTimeFormat((c=(p=e.value.lastUpdated)==null?void 
0:p.formatOptions)!=null&&c.forceLocale?s.value:void 0,((h=e.value.lastUpdated)==null?void 0:h.formatOptions)??{dateStyle:"short",timeStyle:"short"}).format(n.value)})}),(p,c)=>{var h;return a(),d("p",Kt,[G(N(((h=r(e).lastUpdated)==null?void 0:h.text)||r(e).lastUpdatedText||"Last updated")+": ",1),v("time",{datetime:i.value},N(l.value),9,Wt)])}}}),Jt=$(qt,[["__scopeId","data-v-475f71b8"]]),Yt={key:0,class:"VPDocFooter"},Xt={key:0,class:"edit-info"},Qt={key:0,class:"edit-link"},Zt={key:1,class:"last-updated"},xt={key:1,class:"prev-next","aria-labelledby":"doc-footer-aria-label"},en={class:"pager"},tn=["innerHTML"],nn=["innerHTML"],sn={class:"pager"},on=["innerHTML"],an=["innerHTML"],rn=b({__name:"VPDocFooter",setup(o){const{theme:e,page:t,frontmatter:s}=L(),n=jt(),i=Gt(),l=P(()=>e.value.editLink&&s.value.editLink!==!1),p=P(()=>t.value.lastUpdated),c=P(()=>l.value||p.value||i.value.prev||i.value.next);return(h,y)=>{var m,S,V,C;return c.value?(a(),d("footer",Yt,[u(h.$slots,"doc-footer-before",{},void 0,!0),l.value||p.value?(a(),d("div",Xt,[l.value?(a(),d("div",Qt,[g(O,{class:"edit-link-button",href:r(n).url,"no-icon":!0},{default:f(()=>[y[0]||(y[0]=v("span",{class:"vpi-square-pen edit-link-icon"},null,-1)),G(" "+N(r(n).text),1)]),_:1},8,["href"])])):_("",!0),p.value?(a(),d("div",Zt,[g(Jt)])):_("",!0)])):_("",!0),(m=r(i).prev)!=null&&m.link||(S=r(i).next)!=null&&S.link?(a(),d("nav",xt,[y[1]||(y[1]=v("span",{class:"visually-hidden",id:"doc-footer-aria-label"},"Pager",-1)),v("div",en,[(V=r(i).prev)!=null&&V.link?(a(),k(O,{key:0,class:"pager-link prev",href:r(i).prev.link},{default:f(()=>{var A;return[v("span",{class:"desc",innerHTML:((A=r(e).docFooter)==null?void 0:A.prev)||"Previous page"},null,8,tn),v("span",{class:"title",innerHTML:r(i).prev.text},null,8,nn)]}),_:1},8,["href"])):_("",!0)]),v("div",sn,[(C=r(i).next)!=null&&C.link?(a(),k(O,{key:0,class:"pager-link next",href:r(i).next.link},{default:f(()=>{var 
A;return[v("span",{class:"desc",innerHTML:((A=r(e).docFooter)==null?void 0:A.next)||"Next page"},null,8,on),v("span",{class:"title",innerHTML:r(i).next.text},null,8,an)]}),_:1},8,["href"])):_("",!0)])])):_("",!0)])):_("",!0)}}}),ln=$(rn,[["__scopeId","data-v-4f9813fa"]]),cn={class:"container"},un={class:"aside-container"},dn={class:"aside-content"},pn={class:"content"},vn={class:"content-container"},fn={class:"main"},hn=b({__name:"VPDoc",setup(o){const{theme:e}=L(),t=ee(),{hasSidebar:s,hasAside:n,leftAside:i}=R(),l=P(()=>t.path.replace(/[./]+/g,"_").replace(/_html$/,""));return(p,c)=>{const h=K("Content");return a(),d("div",{class:I(["VPDoc",{"has-sidebar":r(s),"has-aside":r(n)}])},[u(p.$slots,"doc-top",{},void 0,!0),v("div",cn,[r(n)?(a(),d("div",{key:0,class:I(["aside",{"left-aside":r(i)}])},[c[0]||(c[0]=v("div",{class:"aside-curtain"},null,-1)),v("div",un,[v("div",dn,[g(Ut,null,{"aside-top":f(()=>[u(p.$slots,"aside-top",{},void 0,!0)]),"aside-bottom":f(()=>[u(p.$slots,"aside-bottom",{},void 0,!0)]),"aside-outline-before":f(()=>[u(p.$slots,"aside-outline-before",{},void 0,!0)]),"aside-outline-after":f(()=>[u(p.$slots,"aside-outline-after",{},void 0,!0)]),"aside-ads-before":f(()=>[u(p.$slots,"aside-ads-before",{},void 0,!0)]),"aside-ads-after":f(()=>[u(p.$slots,"aside-ads-after",{},void 0,!0)]),_:3})])])],2)):_("",!0),v("div",pn,[v("div",vn,[u(p.$slots,"doc-before",{},void 0,!0),v("main",fn,[g(h,{class:I(["vp-doc",[l.value,r(e).externalLinkIcon&&"external-link-icon-enabled"]])},null,8,["class"])]),g(ln,null,{"doc-footer-before":f(()=>[u(p.$slots,"doc-footer-before",{},void 0,!0)]),_:3}),u(p.$slots,"doc-after",{},void 0,!0)])])]),u(p.$slots,"doc-bottom",{},void 0,!0)],2)}}}),mn=$(hn,[["__scopeId","data-v-83890dd9"]]),_n=b({__name:"VPButton",props:{tag:{},size:{default:"medium"},theme:{default:"brand"},text:{},href:{},target:{},rel:{}},setup(o){const 
e=o,t=P(()=>e.href&&Ne.test(e.href)),s=P(()=>e.tag||(e.href?"a":"button"));return(n,i)=>(a(),k(H(s.value),{class:I(["VPButton",[n.size,n.theme]]),href:n.href?r(_e)(n.href):void 0,target:e.target??(t.value?"_blank":void 0),rel:e.rel??(t.value?"noreferrer":void 0)},{default:f(()=>[G(N(n.text),1)]),_:1},8,["class","href","target","rel"]))}}),bn=$(_n,[["__scopeId","data-v-906d7fb4"]]),kn=["src","alt"],gn=b({inheritAttrs:!1,__name:"VPImage",props:{image:{},alt:{}},setup(o){return(e,t)=>{const s=K("VPImage",!0);return e.image?(a(),d(M,{key:0},[typeof e.image=="string"||"src"in e.image?(a(),d("img",j({key:0,class:"VPImage"},typeof e.image=="string"?e.$attrs:{...e.image,...e.$attrs},{src:r(pe)(typeof e.image=="string"?e.image:e.image.src),alt:e.alt??(typeof e.image=="string"?"":e.image.alt||"")}),null,16,kn)):(a(),d(M,{key:1},[g(s,j({class:"dark",image:e.image.dark,alt:e.image.alt},e.$attrs),null,16,["image","alt"]),g(s,j({class:"light",image:e.image.light,alt:e.image.alt},e.$attrs),null,16,["image","alt"])],64))],64)):_("",!0)}}}),Q=$(gn,[["__scopeId","data-v-35a7d0b8"]]),$n={class:"container"},yn={class:"main"},Pn={class:"heading"},Sn=["innerHTML"],Vn=["innerHTML"],Ln=["innerHTML"],Tn={key:0,class:"actions"},wn={key:0,class:"image"},Nn={class:"image-container"},In=b({__name:"VPHero",props:{name:{},text:{},tagline:{},image:{},actions:{}},setup(o){const e=q("hero-image-slot-exists");return(t,s)=>(a(),d("div",{class:I(["VPHero",{"has-image":t.image||r(e)}])},[v("div",$n,[v("div",yn,[u(t.$slots,"home-hero-info-before",{},void 0,!0),u(t.$slots,"home-hero-info",{},()=>[v("h1",Pn,[t.name?(a(),d("span",{key:0,innerHTML:t.name,class:"name clip"},null,8,Sn)):_("",!0),t.text?(a(),d("span",{key:1,innerHTML:t.text,class:"text"},null,8,Vn)):_("",!0)]),t.tagline?(a(),d("p",{key:0,innerHTML:t.tagline,class:"tagline"},null,8,Ln)):_("",!0)],!0),u(t.$slots,"home-hero-info-after",{},void 
0,!0),t.actions?(a(),d("div",Tn,[(a(!0),d(M,null,E(t.actions,n=>(a(),d("div",{key:n.link,class:"action"},[g(bn,{tag:"a",size:"medium",theme:n.theme,text:n.text,href:n.link,target:n.target,rel:n.rel},null,8,["theme","text","href","target","rel"])]))),128))])):_("",!0),u(t.$slots,"home-hero-actions-after",{},void 0,!0)]),t.image||r(e)?(a(),d("div",wn,[v("div",Nn,[s[0]||(s[0]=v("div",{class:"image-bg"},null,-1)),u(t.$slots,"home-hero-image",{},()=>[t.image?(a(),k(Q,{key:0,class:"image-src",image:t.image},null,8,["image"])):_("",!0)],!0)])])):_("",!0)])],2))}}),Mn=$(In,[["__scopeId","data-v-3d256e5e"]]),Cn=b({__name:"VPHomeHero",setup(o){const{frontmatter:e}=L();return(t,s)=>r(e).hero?(a(),k(Mn,{key:0,class:"VPHomeHero",name:r(e).hero.name,text:r(e).hero.text,tagline:r(e).hero.tagline,image:r(e).hero.image,actions:r(e).hero.actions},{"home-hero-info-before":f(()=>[u(t.$slots,"home-hero-info-before")]),"home-hero-info":f(()=>[u(t.$slots,"home-hero-info")]),"home-hero-info-after":f(()=>[u(t.$slots,"home-hero-info-after")]),"home-hero-actions-after":f(()=>[u(t.$slots,"home-hero-actions-after")]),"home-hero-image":f(()=>[u(t.$slots,"home-hero-image")]),_:3},8,["name","text","tagline","image","actions"])):_("",!0)}}),An={class:"box"},En={key:0,class:"icon"},Bn=["innerHTML"],Hn=["innerHTML"],On=["innerHTML"],Dn={key:4,class:"link-text"},Fn={class:"link-text-value"},Rn=b({__name:"VPFeature",props:{icon:{},title:{},details:{},link:{},linkText:{},rel:{},target:{}},setup(o){return(e,t)=>(a(),k(O,{class:"VPFeature",href:e.link,rel:e.rel,target:e.target,"no-icon":!0,tag:e.link?"a":"div"},{default:f(()=>[v("article",An,[typeof e.icon=="object"&&e.icon.wrap?(a(),d("div",En,[g(Q,{image:e.icon,alt:e.icon.alt,height:e.icon.height||48,width:e.icon.width||48},null,8,["image","alt","height","width"])])):typeof 
e.icon=="object"?(a(),k(Q,{key:1,image:e.icon,alt:e.icon.alt,height:e.icon.height||48,width:e.icon.width||48},null,8,["image","alt","height","width"])):e.icon?(a(),d("div",{key:2,class:"icon",innerHTML:e.icon},null,8,Bn)):_("",!0),v("h2",{class:"title",innerHTML:e.title},null,8,Hn),e.details?(a(),d("p",{key:3,class:"details",innerHTML:e.details},null,8,On)):_("",!0),e.linkText?(a(),d("div",Dn,[v("p",Fn,[G(N(e.linkText)+" ",1),t[0]||(t[0]=v("span",{class:"vpi-arrow-right link-text-icon"},null,-1))])])):_("",!0)])]),_:1},8,["href","rel","target","tag"]))}}),Un=$(Rn,[["__scopeId","data-v-f5e9645b"]]),jn={key:0,class:"VPFeatures"},Gn={class:"container"},zn={class:"items"},Kn=b({__name:"VPFeatures",props:{features:{}},setup(o){const e=o,t=P(()=>{const s=e.features.length;if(s){if(s===2)return"grid-2";if(s===3)return"grid-3";if(s%3===0)return"grid-6";if(s>3)return"grid-4"}else return});return(s,n)=>s.features?(a(),d("div",jn,[v("div",Gn,[v("div",zn,[(a(!0),d(M,null,E(s.features,i=>(a(),d("div",{key:i.title,class:I(["item",[t.value]])},[g(Un,{icon:i.icon,title:i.title,details:i.details,link:i.link,"link-text":i.linkText,rel:i.rel,target:i.target},null,8,["icon","title","details","link","link-text","rel","target"])],2))),128))])])])):_("",!0)}}),Wn=$(Kn,[["__scopeId","data-v-d0a190d7"]]),qn=b({__name:"VPHomeFeatures",setup(o){const{frontmatter:e}=L();return(t,s)=>r(e).features?(a(),k(Wn,{key:0,class:"VPHomeFeatures",features:r(e).features},null,8,["features"])):_("",!0)}}),Jn=b({__name:"VPHomeContent",setup(o){const{width:e}=Ye({initialWidth:0,includeScrollbar:!1});return(t,s)=>(a(),d("div",{class:"vp-doc container",style:Ie(r(e)?{"--vp-offset":`calc(50% - ${r(e)/2}px)`}:{})},[u(t.$slots,"default",{},void 0,!0)],4))}}),Yn=$(Jn,[["__scopeId","data-v-7a48a447"]]),Xn=b({__name:"VPHome",setup(o){const{frontmatter:e,theme:t}=L();return(s,n)=>{const i=K("Content");return 
a(),d("div",{class:I(["VPHome",{"external-link-icon-enabled":r(t).externalLinkIcon}])},[u(s.$slots,"home-hero-before",{},void 0,!0),g(Cn,null,{"home-hero-info-before":f(()=>[u(s.$slots,"home-hero-info-before",{},void 0,!0)]),"home-hero-info":f(()=>[u(s.$slots,"home-hero-info",{},void 0,!0)]),"home-hero-info-after":f(()=>[u(s.$slots,"home-hero-info-after",{},void 0,!0)]),"home-hero-actions-after":f(()=>[u(s.$slots,"home-hero-actions-after",{},void 0,!0)]),"home-hero-image":f(()=>[u(s.$slots,"home-hero-image",{},void 0,!0)]),_:3}),u(s.$slots,"home-hero-after",{},void 0,!0),u(s.$slots,"home-features-before",{},void 0,!0),g(qn),u(s.$slots,"home-features-after",{},void 0,!0),r(e).markdownStyles!==!1?(a(),k(Yn,{key:0},{default:f(()=>[g(i)]),_:1})):(a(),k(i,{key:1}))],2)}}}),Qn=$(Xn,[["__scopeId","data-v-e40e30de"]]),Zn={},xn={class:"VPPage"};function es(o,e){const t=K("Content");return a(),d("div",xn,[u(o.$slots,"page-top"),g(t),u(o.$slots,"page-bottom")])}const ts=$(Zn,[["render",es]]),ns=b({__name:"VPContent",setup(o){const{page:e,frontmatter:t}=L(),{hasSidebar:s}=R();return(n,i)=>(a(),d("div",{class:I(["VPContent",{"has-sidebar":r(s),"is-home":r(t).layout==="home"}]),id:"VPContent"},[r(e).isNotFound?u(n.$slots,"not-found",{key:0},()=>[g(kt)],!0):r(t).layout==="page"?(a(),k(ts,{key:1},{"page-top":f(()=>[u(n.$slots,"page-top",{},void 0,!0)]),"page-bottom":f(()=>[u(n.$slots,"page-bottom",{},void 0,!0)]),_:3})):r(t).layout==="home"?(a(),k(Qn,{key:2},{"home-hero-before":f(()=>[u(n.$slots,"home-hero-before",{},void 0,!0)]),"home-hero-info-before":f(()=>[u(n.$slots,"home-hero-info-before",{},void 0,!0)]),"home-hero-info":f(()=>[u(n.$slots,"home-hero-info",{},void 0,!0)]),"home-hero-info-after":f(()=>[u(n.$slots,"home-hero-info-after",{},void 0,!0)]),"home-hero-actions-after":f(()=>[u(n.$slots,"home-hero-actions-after",{},void 0,!0)]),"home-hero-image":f(()=>[u(n.$slots,"home-hero-image",{},void 0,!0)]),"home-hero-after":f(()=>[u(n.$slots,"home-hero-after",{},void 
0,!0)]),"home-features-before":f(()=>[u(n.$slots,"home-features-before",{},void 0,!0)]),"home-features-after":f(()=>[u(n.$slots,"home-features-after",{},void 0,!0)]),_:3})):r(t).layout&&r(t).layout!=="doc"?(a(),k(H(r(t).layout),{key:3})):(a(),k(mn,{key:4},{"doc-top":f(()=>[u(n.$slots,"doc-top",{},void 0,!0)]),"doc-bottom":f(()=>[u(n.$slots,"doc-bottom",{},void 0,!0)]),"doc-footer-before":f(()=>[u(n.$slots,"doc-footer-before",{},void 0,!0)]),"doc-before":f(()=>[u(n.$slots,"doc-before",{},void 0,!0)]),"doc-after":f(()=>[u(n.$slots,"doc-after",{},void 0,!0)]),"aside-top":f(()=>[u(n.$slots,"aside-top",{},void 0,!0)]),"aside-outline-before":f(()=>[u(n.$slots,"aside-outline-before",{},void 0,!0)]),"aside-outline-after":f(()=>[u(n.$slots,"aside-outline-after",{},void 0,!0)]),"aside-ads-before":f(()=>[u(n.$slots,"aside-ads-before",{},void 0,!0)]),"aside-ads-after":f(()=>[u(n.$slots,"aside-ads-after",{},void 0,!0)]),"aside-bottom":f(()=>[u(n.$slots,"aside-bottom",{},void 0,!0)]),_:3}))],2))}}),ss=$(ns,[["__scopeId","data-v-91765379"]]),os={class:"container"},as=["innerHTML"],rs=["innerHTML"],is=b({__name:"VPFooter",setup(o){const{theme:e,frontmatter:t}=L(),{hasSidebar:s}=R();return(n,i)=>r(e).footer&&r(t).footer!==!1?(a(),d("footer",{key:0,class:I(["VPFooter",{"has-sidebar":r(s)}])},[v("div",os,[r(e).footer.message?(a(),d("p",{key:0,class:"message",innerHTML:r(e).footer.message},null,8,as)):_("",!0),r(e).footer.copyright?(a(),d("p",{key:1,class:"copyright",innerHTML:r(e).footer.copyright},null,8,rs)):_("",!0)])],2)):_("",!0)}}),ls=$(is,[["__scopeId","data-v-c970a860"]]);function cs(){const{theme:o,frontmatter:e}=L(),t=we([]),s=P(()=>t.value.length>0);return x(()=>{t.value=be(e.value.outline??o.value.outline)}),{headers:t,hasLocalNav:s}}const us={class:"menu-text"},ds={class:"header"},ps={class:"outline"},vs=b({__name:"VPLocalNavOutlineDropdown",props:{headers:{},navHeight:{}},setup(o){const e=o,{theme:t}=L(),s=T(!1),n=T(0),i=T(),l=T();function p(m){var 
S;(S=i.value)!=null&&S.contains(m.target)||(s.value=!1)}D(s,m=>{if(m){document.addEventListener("click",p);return}document.removeEventListener("click",p)}),ie("Escape",()=>{s.value=!1}),x(()=>{s.value=!1});function c(){s.value=!s.value,n.value=window.innerHeight+Math.min(window.scrollY-e.navHeight,0)}function h(m){m.target.classList.contains("outline-link")&&(l.value&&(l.value.style.transition="none"),he(()=>{s.value=!1}))}function y(){s.value=!1,window.scrollTo({top:0,left:0,behavior:"smooth"})}return(m,S)=>(a(),d("div",{class:"VPLocalNavOutlineDropdown",style:Ie({"--vp-vh":n.value+"px"}),ref_key:"main",ref:i},[m.headers.length>0?(a(),d("button",{key:0,onClick:c,class:I({open:s.value})},[v("span",us,N(r(Be)(r(t))),1),S[0]||(S[0]=v("span",{class:"vpi-chevron-right icon"},null,-1))],2)):(a(),d("button",{key:1,onClick:y},N(r(t).returnToTopLabel||"Return to top"),1)),g(de,{name:"flyout"},{default:f(()=>[s.value?(a(),d("div",{key:0,ref_key:"items",ref:l,class:"items",onClick:h},[v("div",ds,[v("a",{class:"top-link",href:"#",onClick:y},N(r(t).returnToTopLabel||"Return to top"),1)]),v("div",ps,[g(He,{headers:m.headers},null,8,["headers"])])],512)):_("",!0)]),_:1})],4))}}),fs=$(vs,[["__scopeId","data-v-168ddf5d"]]),hs={class:"container"},ms=["aria-expanded"],_s={class:"menu-text"},bs=b({__name:"VPLocalNav",props:{open:{type:Boolean}},emits:["open-menu"],setup(o){const{theme:e,frontmatter:t}=L(),{hasSidebar:s}=R(),{headers:n}=cs(),{y:i}=Me(),l=T(0);F(()=>{l.value=parseInt(getComputedStyle(document.documentElement).getPropertyValue("--vp-nav-height"))}),x(()=>{n.value=be(t.value.outline??e.value.outline)});const 
p=P(()=>n.value.length===0),c=P(()=>p.value&&!s.value),h=P(()=>({VPLocalNav:!0,"has-sidebar":s.value,empty:p.value,fixed:c.value}));return(y,m)=>r(t).layout!=="home"&&(!c.value||r(i)>=l.value)?(a(),d("div",{key:0,class:I(h.value)},[v("div",hs,[r(s)?(a(),d("button",{key:0,class:"menu","aria-expanded":y.open,"aria-controls":"VPSidebarNav",onClick:m[0]||(m[0]=S=>y.$emit("open-menu"))},[m[1]||(m[1]=v("span",{class:"vpi-align-left menu-icon"},null,-1)),v("span",_s,N(r(e).sidebarMenuLabel||"Menu"),1)],8,ms)):_("",!0),g(fs,{headers:r(n),navHeight:l.value},null,8,["headers","navHeight"])])],2)):_("",!0)}}),ks=$(bs,[["__scopeId","data-v-070ab83d"]]);function gs(){const o=T(!1);function e(){o.value=!0,window.addEventListener("resize",n)}function t(){o.value=!1,window.removeEventListener("resize",n)}function s(){o.value?t():e()}function n(){window.outerWidth>=768&&t()}const i=ee();return D(()=>i.path,t),{isScreenOpen:o,openScreen:e,closeScreen:t,toggleScreen:s}}const $s={},ys={class:"VPSwitch",type:"button",role:"switch"},Ps={class:"check"},Ss={key:0,class:"icon"};function Vs(o,e){return a(),d("button",ys,[v("span",Ps,[o.$slots.default?(a(),d("span",Ss,[u(o.$slots,"default",{},void 0,!0)])):_("",!0)])])}const Ls=$($s,[["render",Vs],["__scopeId","data-v-4a1c76db"]]),Ts=b({__name:"VPSwitchAppearance",setup(o){const{isDark:e,theme:t}=L(),s=q("toggle-appearance",()=>{e.value=!e.value}),n=T("");return fe(()=>{n.value=e.value?t.value.lightModeSwitchTitle||"Switch to light theme":t.value.darkModeSwitchTitle||"Switch to dark theme"}),(i,l)=>(a(),k(Ls,{title:n.value,class:"VPSwitchAppearance","aria-checked":r(e),onClick:r(s)},{default:f(()=>l[0]||(l[0]=[v("span",{class:"vpi-sun sun"},null,-1),v("span",{class:"vpi-moon 
moon"},null,-1)])),_:1},8,["title","aria-checked","onClick"]))}}),ke=$(Ts,[["__scopeId","data-v-e40a8bb6"]]),ws={key:0,class:"VPNavBarAppearance"},Ns=b({__name:"VPNavBarAppearance",setup(o){const{site:e}=L();return(t,s)=>r(e).appearance&&r(e).appearance!=="force-dark"&&r(e).appearance!=="force-auto"?(a(),d("div",ws,[g(ke)])):_("",!0)}}),Is=$(Ns,[["__scopeId","data-v-af096f4a"]]),ge=T();let Oe=!1,ae=0;function Ms(o){const e=T(!1);if(te){!Oe&&Cs(),ae++;const t=D(ge,s=>{var n,i,l;s===o.el.value||(n=o.el.value)!=null&&n.contains(s)?(e.value=!0,(i=o.onFocus)==null||i.call(o)):(e.value=!1,(l=o.onBlur)==null||l.call(o))});ve(()=>{t(),ae--,ae||As()})}return Xe(e)}function Cs(){document.addEventListener("focusin",De),Oe=!0,ge.value=document.activeElement}function As(){document.removeEventListener("focusin",De)}function De(){ge.value=document.activeElement}const Es={class:"VPMenuLink"},Bs=["innerHTML"],Hs=b({__name:"VPMenuLink",props:{item:{}},setup(o){const{page:e}=L();return(t,s)=>(a(),d("div",Es,[g(O,{class:I({active:r(z)(r(e).relativePath,t.item.activeMatch||t.item.link,!!t.item.activeMatch)}),href:t.item.link,target:t.item.target,rel:t.item.rel,"no-icon":t.item.noIcon},{default:f(()=>[v("span",{innerHTML:t.item.text},null,8,Bs)]),_:1},8,["class","href","target","rel","no-icon"])]))}}),ne=$(Hs,[["__scopeId","data-v-acbfed09"]]),Os={class:"VPMenuGroup"},Ds={key:0,class:"title"},Fs=b({__name:"VPMenuGroup",props:{text:{},items:{}},setup(o){return(e,t)=>(a(),d("div",Os,[e.text?(a(),d("p",Ds,N(e.text),1)):_("",!0),(a(!0),d(M,null,E(e.items,s=>(a(),d(M,null,["link"in s?(a(),k(ne,{key:0,item:s},null,8,["item"])):_("",!0)],64))),256))]))}}),Rs=$(Fs,[["__scopeId","data-v-48c802d0"]]),Us={class:"VPMenu"},js={key:0,class:"items"},Gs=b({__name:"VPMenu",props:{items:{}},setup(o){return(e,t)=>(a(),d("div",Us,[e.items?(a(),d("div",js,[(a(!0),d(M,null,E(e.items,s=>(a(),d(M,{key:JSON.stringify(s)},["link"in s?(a(),k(ne,{key:0,item:s},null,8,["item"])):"component"in 
s?(a(),k(H(s.component),j({key:1,ref_for:!0},s.props),null,16)):(a(),k(Rs,{key:2,text:s.text,items:s.items},null,8,["text","items"]))],64))),128))])):_("",!0),u(e.$slots,"default",{},void 0,!0)]))}}),zs=$(Gs,[["__scopeId","data-v-7dd3104a"]]),Ks=["aria-expanded","aria-label"],Ws={key:0,class:"text"},qs=["innerHTML"],Js={key:1,class:"vpi-more-horizontal icon"},Ys={class:"menu"},Xs=b({__name:"VPFlyout",props:{icon:{},button:{},label:{},items:{}},setup(o){const e=T(!1),t=T();Ms({el:t,onBlur:s});function s(){e.value=!1}return(n,i)=>(a(),d("div",{class:"VPFlyout",ref_key:"el",ref:t,onMouseenter:i[1]||(i[1]=l=>e.value=!0),onMouseleave:i[2]||(i[2]=l=>e.value=!1)},[v("button",{type:"button",class:"button","aria-haspopup":"true","aria-expanded":e.value,"aria-label":n.label,onClick:i[0]||(i[0]=l=>e.value=!e.value)},[n.button||n.icon?(a(),d("span",Ws,[n.icon?(a(),d("span",{key:0,class:I([n.icon,"option-icon"])},null,2)):_("",!0),n.button?(a(),d("span",{key:1,innerHTML:n.button},null,8,qs)):_("",!0),i[3]||(i[3]=v("span",{class:"vpi-chevron-down text-icon"},null,-1))])):(a(),d("span",Js))],8,Ks),v("div",Ys,[g(zs,{items:n.items},{default:f(()=>[u(n.$slots,"default",{},void 0,!0)]),_:3},8,["items"])])],544))}}),$e=$(Xs,[["__scopeId","data-v-04f5c5e9"]]),Qs=["href","aria-label","innerHTML"],Zs=b({__name:"VPSocialLink",props:{icon:{},link:{},ariaLabel:{}},setup(o){const e=o,t=T();F(async()=>{var i;await he();const n=(i=t.value)==null?void 0:i.children[0];n instanceof HTMLElement&&n.className.startsWith("vpi-social-")&&(getComputedStyle(n).maskImage||getComputedStyle(n).webkitMaskImage)==="none"&&n.style.setProperty("--icon",`url('https://api.iconify.design/simple-icons/${e.icon}.svg')`)});const s=P(()=>typeof e.icon=="object"?e.icon.svg:``);return(n,i)=>(a(),d("a",{ref_key:"el",ref:t,class:"VPSocialLink no-icon",href:n.link,"aria-label":n.ariaLabel??(typeof 
n.icon=="string"?n.icon:""),target:"_blank",rel:"noopener",innerHTML:s.value},null,8,Qs))}}),xs=$(Zs,[["__scopeId","data-v-d26d30cb"]]),eo={class:"VPSocialLinks"},to=b({__name:"VPSocialLinks",props:{links:{}},setup(o){return(e,t)=>(a(),d("div",eo,[(a(!0),d(M,null,E(e.links,({link:s,icon:n,ariaLabel:i})=>(a(),k(xs,{key:s,icon:n,link:s,ariaLabel:i},null,8,["icon","link","ariaLabel"]))),128))]))}}),ye=$(to,[["__scopeId","data-v-ee7a9424"]]),no={key:0,class:"group translations"},so={class:"trans-title"},oo={key:1,class:"group"},ao={class:"item appearance"},ro={class:"label"},io={class:"appearance-action"},lo={key:2,class:"group"},co={class:"item social-links"},uo=b({__name:"VPNavBarExtra",setup(o){const{site:e,theme:t}=L(),{localeLinks:s,currentLang:n}=Y({correspondingLink:!0}),i=P(()=>s.value.length&&n.value.label||e.value.appearance||t.value.socialLinks);return(l,p)=>i.value?(a(),k($e,{key:0,class:"VPNavBarExtra",label:"extra navigation"},{default:f(()=>[r(s).length&&r(n).label?(a(),d("div",no,[v("p",so,N(r(n).label),1),(a(!0),d(M,null,E(r(s),c=>(a(),k(ne,{key:c.link,item:c},null,8,["item"]))),128))])):_("",!0),r(e).appearance&&r(e).appearance!=="force-dark"&&r(e).appearance!=="force-auto"?(a(),d("div",oo,[v("div",ao,[v("p",ro,N(r(t).darkModeSwitchLabel||"Appearance"),1),v("div",io,[g(ke)])])])):_("",!0),r(t).socialLinks?(a(),d("div",lo,[v("div",co,[g(ye,{class:"social-links-list",links:r(t).socialLinks},null,8,["links"])])])):_("",!0)]),_:1})):_("",!0)}}),po=$(uo,[["__scopeId","data-v-925effce"]]),vo=["aria-expanded"],fo=b({__name:"VPNavBarHamburger",props:{active:{type:Boolean}},emits:["click"],setup(o){return(e,t)=>(a(),d("button",{type:"button",class:I(["VPNavBarHamburger",{active:e.active}]),"aria-label":"mobile 
navigation","aria-expanded":e.active,"aria-controls":"VPNavScreen",onClick:t[0]||(t[0]=s=>e.$emit("click"))},t[1]||(t[1]=[v("span",{class:"container"},[v("span",{class:"top"}),v("span",{class:"middle"}),v("span",{class:"bottom"})],-1)]),10,vo))}}),ho=$(fo,[["__scopeId","data-v-5dea55bf"]]),mo=["innerHTML"],_o=b({__name:"VPNavBarMenuLink",props:{item:{}},setup(o){const{page:e}=L();return(t,s)=>(a(),k(O,{class:I({VPNavBarMenuLink:!0,active:r(z)(r(e).relativePath,t.item.activeMatch||t.item.link,!!t.item.activeMatch)}),href:t.item.link,target:t.item.target,rel:t.item.rel,"no-icon":t.item.noIcon,tabindex:"0"},{default:f(()=>[v("span",{innerHTML:t.item.text},null,8,mo)]),_:1},8,["class","href","target","rel","no-icon"]))}}),bo=$(_o,[["__scopeId","data-v-956ec74c"]]),Fe=b({__name:"VPNavBarMenuGroup",props:{item:{}},setup(o){const e=o,{page:t}=L(),s=i=>"component"in i?!1:"link"in i?z(t.value.relativePath,i.link,!!e.item.activeMatch):i.items.some(s),n=P(()=>s(e.item));return(i,l)=>(a(),k($e,{class:I({VPNavBarMenuGroup:!0,active:r(z)(r(t).relativePath,i.item.activeMatch,!!i.item.activeMatch)||n.value}),button:i.item.text,items:i.item.items},null,8,["class","button","items"]))}}),ko={key:0,"aria-labelledby":"main-nav-aria-label",class:"VPNavBarMenu"},go=b({__name:"VPNavBarMenu",setup(o){const{theme:e}=L();return(t,s)=>r(e).nav?(a(),d("nav",ko,[s[0]||(s[0]=v("span",{id:"main-nav-aria-label",class:"visually-hidden"}," Main Navigation ",-1)),(a(!0),d(M,null,E(r(e).nav,n=>(a(),d(M,{key:JSON.stringify(n)},["link"in n?(a(),k(bo,{key:0,item:n},null,8,["item"])):"component"in n?(a(),k(H(n.component),j({key:1,ref_for:!0},n.props),null,16)):(a(),k(Fe,{key:2,item:n},null,8,["item"]))],64))),128))])):_("",!0)}}),$o=$(go,[["__scopeId","data-v-e6d46098"]]);function yo(o){const{localeIndex:e,theme:t}=L();function s(n){var C,A,w;const i=n.split("."),l=(C=t.value.search)==null?void 0:C.options,p=l&&typeof l=="object",c=p&&((w=(A=l.locales)==null?void 0:A[e.value])==null?void 
0:w.translations)||null,h=p&&l.translations||null;let y=c,m=h,S=o;const V=i.pop();for(const B of i){let U=null;const W=S==null?void 0:S[B];W&&(U=S=W);const se=m==null?void 0:m[B];se&&(U=m=se);const oe=y==null?void 0:y[B];oe&&(U=y=oe),W||(S=U),se||(m=U),oe||(y=U)}return(y==null?void 0:y[V])??(m==null?void 0:m[V])??(S==null?void 0:S[V])??""}return s}const Po=["aria-label"],So={class:"DocSearch-Button-Container"},Vo={class:"DocSearch-Button-Placeholder"},Se=b({__name:"VPNavBarSearchButton",setup(o){const t=yo({button:{buttonText:"Search",buttonAriaLabel:"Search"}});return(s,n)=>(a(),d("button",{type:"button",class:"DocSearch DocSearch-Button","aria-label":r(t)("button.buttonAriaLabel")},[v("span",So,[n[0]||(n[0]=v("span",{class:"vp-icon DocSearch-Search-Icon"},null,-1)),v("span",Vo,N(r(t)("button.buttonText")),1)]),n[1]||(n[1]=v("span",{class:"DocSearch-Button-Keys"},[v("kbd",{class:"DocSearch-Button-Key"}),v("kbd",{class:"DocSearch-Button-Key"},"K")],-1))],8,Po))}}),Lo={class:"VPNavBarSearch"},To={id:"local-search"},wo={key:1,id:"docsearch"},No=b({__name:"VPNavBarSearch",setup(o){const e=Qe(()=>Ze(()=>import("./VPLocalSearchBox.DSrXxC9l.js"),__vite__mapDeps([0,1]))),t=()=>null,{theme:s}=L(),n=T(!1),i=T(!1);F(()=>{});function l(){n.value||(n.value=!0,setTimeout(p,16))}function p(){const m=new Event("keydown");m.key="k",m.metaKey=!0,window.dispatchEvent(m),setTimeout(()=>{document.querySelector(".DocSearch-Modal")||p()},16)}function c(m){const S=m.target,V=S.tagName;return S.isContentEditable||V==="INPUT"||V==="SELECT"||V==="TEXTAREA"}const h=T(!1);ie("k",m=>{(m.ctrlKey||m.metaKey)&&(m.preventDefault(),h.value=!0)}),ie("/",m=>{c(m)||(m.preventDefault(),h.value=!0)});const y="local";return(m,S)=>{var V;return 
a(),d("div",Lo,[r(y)==="local"?(a(),d(M,{key:0},[h.value?(a(),k(r(e),{key:0,onClose:S[0]||(S[0]=C=>h.value=!1)})):_("",!0),v("div",To,[g(Se,{onClick:S[1]||(S[1]=C=>h.value=!0)})])],64)):r(y)==="algolia"?(a(),d(M,{key:1},[n.value?(a(),k(r(t),{key:0,algolia:((V=r(s).search)==null?void 0:V.options)??r(s).algolia,onVnodeBeforeMount:S[2]||(S[2]=C=>i.value=!0)},null,8,["algolia"])):_("",!0),i.value?_("",!0):(a(),d("div",wo,[g(Se,{onClick:l})]))],64)):_("",!0)])}}}),Io=b({__name:"VPNavBarSocialLinks",setup(o){const{theme:e}=L();return(t,s)=>r(e).socialLinks?(a(),k(ye,{key:0,class:"VPNavBarSocialLinks",links:r(e).socialLinks},null,8,["links"])):_("",!0)}}),Mo=$(Io,[["__scopeId","data-v-164c457f"]]),Co=["href","rel","target"],Ao=["innerHTML"],Eo={key:2},Bo=b({__name:"VPNavBarTitle",setup(o){const{site:e,theme:t}=L(),{hasSidebar:s}=R(),{currentLang:n}=Y(),i=P(()=>{var c;return typeof t.value.logoLink=="string"?t.value.logoLink:(c=t.value.logoLink)==null?void 0:c.link}),l=P(()=>{var c;return typeof t.value.logoLink=="string"||(c=t.value.logoLink)==null?void 0:c.rel}),p=P(()=>{var c;return typeof t.value.logoLink=="string"||(c=t.value.logoLink)==null?void 0:c.target});return(c,h)=>(a(),d("div",{class:I(["VPNavBarTitle",{"has-sidebar":r(s)}])},[v("a",{class:"title",href:i.value??r(_e)(r(n).link),rel:l.value,target:p.value},[u(c.$slots,"nav-bar-title-before",{},void 0,!0),r(t).logo?(a(),k(Q,{key:0,class:"logo",image:r(t).logo},null,8,["image"])):_("",!0),r(t).siteTitle?(a(),d("span",{key:1,innerHTML:r(t).siteTitle},null,8,Ao)):r(t).siteTitle===void 0?(a(),d("span",Eo,N(r(e).title),1)):_("",!0),u(c.$slots,"nav-bar-title-after",{},void 
0,!0)],8,Co)],2))}}),Ho=$(Bo,[["__scopeId","data-v-0f4f798b"]]),Oo={class:"items"},Do={class:"title"},Fo=b({__name:"VPNavBarTranslations",setup(o){const{theme:e}=L(),{localeLinks:t,currentLang:s}=Y({correspondingLink:!0});return(n,i)=>r(t).length&&r(s).label?(a(),k($e,{key:0,class:"VPNavBarTranslations",icon:"vpi-languages",label:r(e).langMenuLabel||"Change language"},{default:f(()=>[v("div",Oo,[v("p",Do,N(r(s).label),1),(a(!0),d(M,null,E(r(t),l=>(a(),k(ne,{key:l.link,item:l},null,8,["item"]))),128))])]),_:1},8,["label"])):_("",!0)}}),Ro=$(Fo,[["__scopeId","data-v-c80d9ad0"]]),Uo={class:"wrapper"},jo={class:"container"},Go={class:"title"},zo={class:"content"},Ko={class:"content-body"},Wo=b({__name:"VPNavBar",props:{isScreenOpen:{type:Boolean}},emits:["toggle-screen"],setup(o){const e=o,{y:t}=Me(),{hasSidebar:s}=R(),{frontmatter:n}=L(),i=T({});return fe(()=>{i.value={"has-sidebar":s.value,home:n.value.layout==="home",top:t.value===0,"screen-open":e.isScreenOpen}}),(l,p)=>(a(),d("div",{class:I(["VPNavBar",i.value])},[v("div",Uo,[v("div",jo,[v("div",Go,[g(Ho,null,{"nav-bar-title-before":f(()=>[u(l.$slots,"nav-bar-title-before",{},void 0,!0)]),"nav-bar-title-after":f(()=>[u(l.$slots,"nav-bar-title-after",{},void 0,!0)]),_:3})]),v("div",zo,[v("div",Ko,[u(l.$slots,"nav-bar-content-before",{},void 0,!0),g(No,{class:"search"}),g($o,{class:"menu"}),g(Ro,{class:"translations"}),g(Is,{class:"appearance"}),g(Mo,{class:"social-links"}),g(po,{class:"extra"}),u(l.$slots,"nav-bar-content-after",{},void 
0,!0),g(ho,{class:"hamburger",active:l.isScreenOpen,onClick:p[0]||(p[0]=c=>l.$emit("toggle-screen"))},null,8,["active"])])])])]),p[1]||(p[1]=v("div",{class:"divider"},[v("div",{class:"divider-line"})],-1))],2))}}),qo=$(Wo,[["__scopeId","data-v-822684d1"]]),Jo={key:0,class:"VPNavScreenAppearance"},Yo={class:"text"},Xo=b({__name:"VPNavScreenAppearance",setup(o){const{site:e,theme:t}=L();return(s,n)=>r(e).appearance&&r(e).appearance!=="force-dark"&&r(e).appearance!=="force-auto"?(a(),d("div",Jo,[v("p",Yo,N(r(t).darkModeSwitchLabel||"Appearance"),1),g(ke)])):_("",!0)}}),Qo=$(Xo,[["__scopeId","data-v-ffb44008"]]),Zo=["innerHTML"],xo=b({__name:"VPNavScreenMenuLink",props:{item:{}},setup(o){const e=q("close-screen");return(t,s)=>(a(),k(O,{class:"VPNavScreenMenuLink",href:t.item.link,target:t.item.target,rel:t.item.rel,"no-icon":t.item.noIcon,onClick:r(e)},{default:f(()=>[v("span",{innerHTML:t.item.text},null,8,Zo)]),_:1},8,["href","target","rel","no-icon","onClick"]))}}),ea=$(xo,[["__scopeId","data-v-735512b8"]]),ta=["innerHTML"],na=b({__name:"VPNavScreenMenuGroupLink",props:{item:{}},setup(o){const 
e=q("close-screen");return(t,s)=>(a(),k(O,{class:"VPNavScreenMenuGroupLink",href:t.item.link,target:t.item.target,rel:t.item.rel,"no-icon":t.item.noIcon,onClick:r(e)},{default:f(()=>[v("span",{innerHTML:t.item.text},null,8,ta)]),_:1},8,["href","target","rel","no-icon","onClick"]))}}),Re=$(na,[["__scopeId","data-v-372ae7c0"]]),sa={class:"VPNavScreenMenuGroupSection"},oa={key:0,class:"title"},aa=b({__name:"VPNavScreenMenuGroupSection",props:{text:{},items:{}},setup(o){return(e,t)=>(a(),d("div",sa,[e.text?(a(),d("p",oa,N(e.text),1)):_("",!0),(a(!0),d(M,null,E(e.items,s=>(a(),k(Re,{key:s.text,item:s},null,8,["item"]))),128))]))}}),ra=$(aa,[["__scopeId","data-v-4b8941ac"]]),ia=["aria-controls","aria-expanded"],la=["innerHTML"],ca=["id"],ua={key:0,class:"item"},da={key:1,class:"item"},pa={key:2,class:"group"},va=b({__name:"VPNavScreenMenuGroup",props:{text:{},items:{}},setup(o){const e=o,t=T(!1),s=P(()=>`NavScreenGroup-${e.text.replace(" ","-").toLowerCase()}`);function n(){t.value=!t.value}return(i,l)=>(a(),d("div",{class:I(["VPNavScreenMenuGroup",{open:t.value}])},[v("button",{class:"button","aria-controls":s.value,"aria-expanded":t.value,onClick:n},[v("span",{class:"button-text",innerHTML:i.text},null,8,la),l[0]||(l[0]=v("span",{class:"vpi-plus button-icon"},null,-1))],8,ia),v("div",{id:s.value,class:"items"},[(a(!0),d(M,null,E(i.items,p=>(a(),d(M,{key:JSON.stringify(p)},["link"in p?(a(),d("div",ua,[g(Re,{item:p},null,8,["item"])])):"component"in p?(a(),d("div",da,[(a(),k(H(p.component),j({ref_for:!0},p.props,{"screen-menu":""}),null,16))])):(a(),d("div",pa,[g(ra,{text:p.text,items:p.items},null,8,["text","items"])]))],64))),128))],8,ca)],2))}}),Ue=$(va,[["__scopeId","data-v-875057a5"]]),fa={key:0,class:"VPNavScreenMenu"},ha=b({__name:"VPNavScreenMenu",setup(o){const{theme:e}=L();return(t,s)=>r(e).nav?(a(),d("nav",fa,[(a(!0),d(M,null,E(r(e).nav,n=>(a(),d(M,{key:JSON.stringify(n)},["link"in n?(a(),k(ea,{key:0,item:n},null,8,["item"])):"component"in 
n?(a(),k(H(n.component),j({key:1,ref_for:!0},n.props,{"screen-menu":""}),null,16)):(a(),k(Ue,{key:2,text:n.text||"",items:n.items},null,8,["text","items"]))],64))),128))])):_("",!0)}}),ma=b({__name:"VPNavScreenSocialLinks",setup(o){const{theme:e}=L();return(t,s)=>r(e).socialLinks?(a(),k(ye,{key:0,class:"VPNavScreenSocialLinks",links:r(e).socialLinks},null,8,["links"])):_("",!0)}}),_a={class:"list"},ba=b({__name:"VPNavScreenTranslations",setup(o){const{localeLinks:e,currentLang:t}=Y({correspondingLink:!0}),s=T(!1);function n(){s.value=!s.value}return(i,l)=>r(e).length&&r(t).label?(a(),d("div",{key:0,class:I(["VPNavScreenTranslations",{open:s.value}])},[v("button",{class:"title",onClick:n},[l[0]||(l[0]=v("span",{class:"vpi-languages icon lang"},null,-1)),G(" "+N(r(t).label)+" ",1),l[1]||(l[1]=v("span",{class:"vpi-chevron-down icon chevron"},null,-1))]),v("ul",_a,[(a(!0),d(M,null,E(r(e),p=>(a(),d("li",{key:p.link,class:"item"},[g(O,{class:"link",href:p.link},{default:f(()=>[G(N(p.text),1)]),_:2},1032,["href"])]))),128))])],2)):_("",!0)}}),ka=$(ba,[["__scopeId","data-v-362991c2"]]),ga={class:"container"},$a=b({__name:"VPNavScreen",props:{open:{type:Boolean}},setup(o){const e=T(null),t=Ce(te?document.body:null);return(s,n)=>(a(),k(de,{name:"fade",onEnter:n[0]||(n[0]=i=>t.value=!0),onAfterLeave:n[1]||(n[1]=i=>t.value=!1)},{default:f(()=>[s.open?(a(),d("div",{key:0,class:"VPNavScreen",ref_key:"screen",ref:e,id:"VPNavScreen"},[v("div",ga,[u(s.$slots,"nav-screen-content-before",{},void 0,!0),g(ha,{class:"menu"}),g(ka,{class:"translations"}),g(Qo,{class:"appearance"}),g(ma,{class:"social-links"}),u(s.$slots,"nav-screen-content-after",{},void 0,!0)])],512)):_("",!0)]),_:3}))}}),ya=$($a,[["__scopeId","data-v-833aabba"]]),Pa={key:0,class:"VPNav"},Sa=b({__name:"VPNav",setup(o){const{isScreenOpen:e,closeScreen:t,toggleScreen:s}=gs(),{frontmatter:n}=L(),i=P(()=>n.value.navbar!==!1);return 
me("close-screen",t),Z(()=>{te&&document.documentElement.classList.toggle("hide-nav",!i.value)}),(l,p)=>i.value?(a(),d("header",Pa,[g(qo,{"is-screen-open":r(e),onToggleScreen:r(s)},{"nav-bar-title-before":f(()=>[u(l.$slots,"nav-bar-title-before",{},void 0,!0)]),"nav-bar-title-after":f(()=>[u(l.$slots,"nav-bar-title-after",{},void 0,!0)]),"nav-bar-content-before":f(()=>[u(l.$slots,"nav-bar-content-before",{},void 0,!0)]),"nav-bar-content-after":f(()=>[u(l.$slots,"nav-bar-content-after",{},void 0,!0)]),_:3},8,["is-screen-open","onToggleScreen"]),g(ya,{open:r(e)},{"nav-screen-content-before":f(()=>[u(l.$slots,"nav-screen-content-before",{},void 0,!0)]),"nav-screen-content-after":f(()=>[u(l.$slots,"nav-screen-content-after",{},void 0,!0)]),_:3},8,["open"])])):_("",!0)}}),Va=$(Sa,[["__scopeId","data-v-f1e365da"]]),La=["role","tabindex"],Ta={key:1,class:"items"},wa=b({__name:"VPSidebarItem",props:{item:{},depth:{}},setup(o){const e=o,{collapsed:t,collapsible:s,isLink:n,isActiveLink:i,hasActiveLink:l,hasChildren:p,toggle:c}=Pt(P(()=>e.item)),h=P(()=>p.value?"section":"div"),y=P(()=>n.value?"a":"div"),m=P(()=>p.value?e.depth+2===7?"p":`h${e.depth+2}`:"p"),S=P(()=>n.value?void 0:"button"),V=P(()=>[[`level-${e.depth}`],{collapsible:s.value},{collapsed:t.value},{"is-link":n.value},{"is-active":i.value},{"has-active":l.value}]);function C(w){"key"in w&&w.key!=="Enter"||!e.item.link&&c()}function A(){e.item.link&&c()}return(w,B)=>{const U=K("VPSidebarItem",!0);return 
a(),k(H(h.value),{class:I(["VPSidebarItem",V.value])},{default:f(()=>[w.item.text?(a(),d("div",j({key:0,class:"item",role:S.value},et(w.item.items?{click:C,keydown:C}:{},!0),{tabindex:w.item.items&&0}),[B[1]||(B[1]=v("div",{class:"indicator"},null,-1)),w.item.link?(a(),k(O,{key:0,tag:y.value,class:"link",href:w.item.link,rel:w.item.rel,target:w.item.target},{default:f(()=>[(a(),k(H(m.value),{class:"text",innerHTML:w.item.text},null,8,["innerHTML"]))]),_:1},8,["tag","href","rel","target"])):(a(),k(H(m.value),{key:1,class:"text",innerHTML:w.item.text},null,8,["innerHTML"])),w.item.collapsed!=null&&w.item.items&&w.item.items.length?(a(),d("div",{key:2,class:"caret",role:"button","aria-label":"toggle section",onClick:A,onKeydown:xe(A,["enter"]),tabindex:"0"},B[0]||(B[0]=[v("span",{class:"vpi-chevron-right caret-icon"},null,-1)]),32)):_("",!0)],16,La)):_("",!0),w.item.items&&w.item.items.length?(a(),d("div",Ta,[w.depth<5?(a(!0),d(M,{key:0},E(w.item.items,W=>(a(),k(U,{key:W.text,item:W,depth:w.depth+1},null,8,["item","depth"]))),128)):_("",!0)])):_("",!0)]),_:1},8,["class"])}}}),Na=$(wa,[["__scopeId","data-v-a4b0d9bf"]]),Ia=b({__name:"VPSidebarGroup",props:{items:{}},setup(o){const e=T(!0);let t=null;return F(()=>{t=setTimeout(()=>{t=null,e.value=!1},300)}),tt(()=>{t!=null&&(clearTimeout(t),t=null)}),(s,n)=>(a(!0),d(M,null,E(s.items,i=>(a(),d("div",{key:i.text,class:I(["group",{"no-transition":e.value}])},[g(Na,{item:i,depth:0},null,8,["item"])],2))),128))}}),Ma=$(Ia,[["__scopeId","data-v-9e426adc"]]),Ca={class:"nav",id:"VPSidebarNav","aria-labelledby":"sidebar-aria-label",tabindex:"-1"},Aa=b({__name:"VPSidebar",props:{open:{type:Boolean}},setup(o){const{sidebarGroups:e,hasSidebar:t}=R(),s=o,n=T(null),i=Ce(te?document.body:null);D([s,n],()=>{var p;s.open?(i.value=!0,(p=n.value)==null||p.focus()):i.value=!1},{immediate:!0,flush:"post"});const l=T(0);return 
D(e,()=>{l.value+=1},{deep:!0}),(p,c)=>r(t)?(a(),d("aside",{key:0,class:I(["VPSidebar",{open:p.open}]),ref_key:"navEl",ref:n,onClick:c[0]||(c[0]=nt(()=>{},["stop"]))},[c[2]||(c[2]=v("div",{class:"curtain"},null,-1)),v("nav",Ca,[c[1]||(c[1]=v("span",{class:"visually-hidden",id:"sidebar-aria-label"}," Sidebar Navigation ",-1)),u(p.$slots,"sidebar-nav-before",{},void 0,!0),(a(),k(Ma,{items:r(e),key:l.value},null,8,["items"])),u(p.$slots,"sidebar-nav-after",{},void 0,!0)])],2)):_("",!0)}}),Ea=$(Aa,[["__scopeId","data-v-18756405"]]),Ba=b({__name:"VPSkipLink",setup(o){const{theme:e}=L(),t=ee(),s=T();D(()=>t.path,()=>s.value.focus());function n({target:i}){const l=document.getElementById(decodeURIComponent(i.hash).slice(1));if(l){const p=()=>{l.removeAttribute("tabindex"),l.removeEventListener("blur",p)};l.setAttribute("tabindex","-1"),l.addEventListener("blur",p),l.focus(),window.scrollTo(0,0)}}return(i,l)=>(a(),d(M,null,[v("span",{ref_key:"backToTop",ref:s,tabindex:"-1"},null,512),v("a",{href:"#VPContent",class:"VPSkipLink visually-hidden",onClick:n},N(r(e).skipToContentLabel||"Skip to content"),1)],64))}}),Ha=$(Ba,[["__scopeId","data-v-492508fc"]]),Oa=b({__name:"Layout",setup(o){const{isOpen:e,open:t,close:s}=R(),n=ee();D(()=>n.path,s),yt(e,s);const{frontmatter:i}=L(),l=Ae(),p=P(()=>!!l["home-hero-image"]);return me("hero-image-slot-exists",p),(c,h)=>{const y=K("Content");return r(i).layout!==!1?(a(),d("div",{key:0,class:I(["Layout",r(i).pageClass])},[u(c.$slots,"layout-top",{},void 0,!0),g(Ha),g(ct,{class:"backdrop",show:r(e),onClick:r(s)},null,8,["show","onClick"]),g(Va,null,{"nav-bar-title-before":f(()=>[u(c.$slots,"nav-bar-title-before",{},void 0,!0)]),"nav-bar-title-after":f(()=>[u(c.$slots,"nav-bar-title-after",{},void 0,!0)]),"nav-bar-content-before":f(()=>[u(c.$slots,"nav-bar-content-before",{},void 0,!0)]),"nav-bar-content-after":f(()=>[u(c.$slots,"nav-bar-content-after",{},void 
0,!0)]),"nav-screen-content-before":f(()=>[u(c.$slots,"nav-screen-content-before",{},void 0,!0)]),"nav-screen-content-after":f(()=>[u(c.$slots,"nav-screen-content-after",{},void 0,!0)]),_:3}),g(ks,{open:r(e),onOpenMenu:r(t)},null,8,["open","onOpenMenu"]),g(Ea,{open:r(e)},{"sidebar-nav-before":f(()=>[u(c.$slots,"sidebar-nav-before",{},void 0,!0)]),"sidebar-nav-after":f(()=>[u(c.$slots,"sidebar-nav-after",{},void 0,!0)]),_:3},8,["open"]),g(ss,null,{"page-top":f(()=>[u(c.$slots,"page-top",{},void 0,!0)]),"page-bottom":f(()=>[u(c.$slots,"page-bottom",{},void 0,!0)]),"not-found":f(()=>[u(c.$slots,"not-found",{},void 0,!0)]),"home-hero-before":f(()=>[u(c.$slots,"home-hero-before",{},void 0,!0)]),"home-hero-info-before":f(()=>[u(c.$slots,"home-hero-info-before",{},void 0,!0)]),"home-hero-info":f(()=>[u(c.$slots,"home-hero-info",{},void 0,!0)]),"home-hero-info-after":f(()=>[u(c.$slots,"home-hero-info-after",{},void 0,!0)]),"home-hero-actions-after":f(()=>[u(c.$slots,"home-hero-actions-after",{},void 0,!0)]),"home-hero-image":f(()=>[u(c.$slots,"home-hero-image",{},void 0,!0)]),"home-hero-after":f(()=>[u(c.$slots,"home-hero-after",{},void 0,!0)]),"home-features-before":f(()=>[u(c.$slots,"home-features-before",{},void 0,!0)]),"home-features-after":f(()=>[u(c.$slots,"home-features-after",{},void 0,!0)]),"doc-footer-before":f(()=>[u(c.$slots,"doc-footer-before",{},void 0,!0)]),"doc-before":f(()=>[u(c.$slots,"doc-before",{},void 0,!0)]),"doc-after":f(()=>[u(c.$slots,"doc-after",{},void 0,!0)]),"doc-top":f(()=>[u(c.$slots,"doc-top",{},void 0,!0)]),"doc-bottom":f(()=>[u(c.$slots,"doc-bottom",{},void 0,!0)]),"aside-top":f(()=>[u(c.$slots,"aside-top",{},void 0,!0)]),"aside-bottom":f(()=>[u(c.$slots,"aside-bottom",{},void 0,!0)]),"aside-outline-before":f(()=>[u(c.$slots,"aside-outline-before",{},void 0,!0)]),"aside-outline-after":f(()=>[u(c.$slots,"aside-outline-after",{},void 0,!0)]),"aside-ads-before":f(()=>[u(c.$slots,"aside-ads-before",{},void 
0,!0)]),"aside-ads-after":f(()=>[u(c.$slots,"aside-ads-after",{},void 0,!0)]),_:3}),g(ls),u(c.$slots,"layout-bottom",{},void 0,!0)],2)):(a(),k(y,{key:1}))}}}),Da=$(Oa,[["__scopeId","data-v-a9a9e638"]]),Ve={Layout:Da,enhanceApp:({app:o})=>{o.component("Badge",rt)}},Fa={};function Ra(o,e){return e[0]||(e[0]=st('

    Trusted by

    Scientific Computing

    SciML.ai

    Machine Learning

    ',3))}const Ua=$(Fa,[["render",Ra]]),ja=b({__name:"VersionPicker",props:{screenMenu:{type:Boolean}},setup(o){const e=T([]),t=T("Versions"),s=T(!1);Te();const n=()=>typeof window<"u"&&(window.location.hostname==="localhost"||window.location.hostname==="127.0.0.1"),i=()=>{if(typeof window>"u")return"";const{origin:c,pathname:h}=window.location;if(c.includes("github.io")){const y=h.split("/").filter(Boolean),m=y.length>0?`/${y[0]}/`:"/";return`${c}${m}`}else return c},l=()=>new Promise(c=>{if(n()){c(!1);return}const h=setInterval(()=>{window.DOC_VERSIONS&&window.DOCUMENTER_CURRENT_VERSION&&(clearInterval(h),c(!0))},100);setTimeout(()=>{clearInterval(h),c(!1)},5e3)});return F(async()=>{if(!(typeof window>"u")){try{if(n()){const c=["dev"];e.value=c.map(h=>({text:h,link:"/"})),t.value="dev"}else{const c=await l(),h=P(()=>i());if(c&&window.DOC_VERSIONS&&window.DOCUMENTER_CURRENT_VERSION)e.value=window.DOC_VERSIONS.map(y=>({text:y,link:`${h.value}/${y}/`})),t.value=window.DOCUMENTER_CURRENT_VERSION;else{const y=["dev"];e.value=y.map(m=>({text:m,link:`${h.value}/${m}/`})),t.value="dev"}}}catch(c){console.warn("Error loading versions:",c);const h=["dev"],y=P(()=>i());e.value=h.map(m=>({text:m,link:`${y.value}/${m}/`})),t.value="dev"}s.value=!0}}),(c,h)=>s.value?(a(),d(M,{key:0},[!c.screenMenu&&e.value.length>0?(a(),k(Fe,{key:0,item:{text:t.value,items:e.value},class:"VPVersionPicker"},null,8,["item"])):c.screenMenu&&e.value.length>0?(a(),k(Ue,{key:1,text:t.value,items:e.value,class:"VPVersionPicker"},null,8,["text","items"])):_("",!0)],64)):_("",!0)}}),Ga=$(ja,[["__scopeId","data-v-d483b3a6"]]),za=o=>{if(typeof document>"u")return{stabilizeScrollPosition:n=>async(...i)=>n(...i)};const e=document.documentElement;return{stabilizeScrollPosition:s=>async(...n)=>{const i=s(...n),l=o.value;if(!l)return i;const p=l.offsetTop-e.scrollTop;return await he(),e.scrollTop=l.offsetTop-p,i}}},je="vitepress:tabSharedState",J=typeof 
localStorage<"u"?localStorage:null,Ge="vitepress:tabsSharedState",Ka=()=>{const o=J==null?void 0:J.getItem(Ge);if(o)try{return JSON.parse(o)}catch{}return{}},Wa=o=>{J&&J.setItem(Ge,JSON.stringify(o))},qa=o=>{const e=ot({});D(()=>e.content,(t,s)=>{t&&s&&Wa(t)},{deep:!0}),o.provide(je,e)},Ja=(o,e)=>{const t=q(je);if(!t)throw new Error("[vitepress-plugin-tabs] TabsSharedState should be injected");F(()=>{t.content||(t.content=Ka())});const s=T(),n=P({get(){var c;const l=e.value,p=o.value;if(l){const h=(c=t.content)==null?void 0:c[l];if(h&&p.includes(h))return h}else{const h=s.value;if(h)return h}return p[0]},set(l){const p=e.value;p?t.content&&(t.content[p]=l):s.value=l}});return{selected:n,select:l=>{n.value=l}}};let Le=0;const Ya=()=>(Le++,""+Le);function Xa(){const o=Ae();return P(()=>{var s;const t=(s=o.default)==null?void 0:s.call(o);return t?t.filter(n=>typeof n.type=="object"&&"__name"in n.type&&n.type.__name==="PluginTabsTab"&&n.props).map(n=>{var i;return(i=n.props)==null?void 0:i.label}):[]})}const ze="vitepress:tabSingleState",Qa=o=>{me(ze,o)},Za=()=>{const o=q(ze);if(!o)throw new Error("[vitepress-plugin-tabs] TabsSingleState should be injected");return o},xa={class:"plugin-tabs"},er=["id","aria-selected","aria-controls","tabindex","onClick"],tr=b({__name:"PluginTabs",props:{sharedStateKey:{}},setup(o){const e=o,t=Xa(),{selected:s,select:n}=Ja(t,at(e,"sharedStateKey")),i=T(),{stabilizeScrollPosition:l}=za(i),p=l(n),c=T([]),h=m=>{var C;const S=t.value.indexOf(s.value);let 
V;m.key==="ArrowLeft"?V=S>=1?S-1:t.value.length-1:m.key==="ArrowRight"&&(V=S(a(),d("div",xa,[v("div",{ref_key:"tablist",ref:i,class:"plugin-tabs--tab-list",role:"tablist",onKeydown:h},[(a(!0),d(M,null,E(r(t),V=>(a(),d("button",{id:`tab-${V}-${r(y)}`,ref_for:!0,ref_key:"buttonRefs",ref:c,key:V,role:"tab",class:"plugin-tabs--tab","aria-selected":V===r(s),"aria-controls":`panel-${V}-${r(y)}`,tabindex:V===r(s)?0:-1,onClick:()=>r(p)(V)},N(V),9,er))),128))],544),u(m.$slots,"default")]))}}),nr=["id","aria-labelledby"],sr=b({__name:"PluginTabsTab",props:{label:{}},setup(o){const{uid:e,selected:t}=Za();return(s,n)=>r(t)===s.label?(a(),d("div",{key:0,id:`panel-${s.label}-${r(e)}`,class:"plugin-tabs--content",role:"tabpanel",tabindex:"0","aria-labelledby":`tab-${s.label}-${r(e)}`},[u(s.$slots,"default",{},void 0,!0)],8,nr)):_("",!0)}}),or=$(sr,[["__scopeId","data-v-9b0d03d2"]]),ar=o=>{qa(o),o.component("PluginTabs",tr),o.component("PluginTabsTab",or)},ir={extends:Ve,Layout(){return Pe(Ve.Layout,null,{"aside-ads-before":()=>Pe(Ua)})},enhanceApp({app:o}){ar(o),o.component("VersionPicker",Ga)}};export{ir as R,yo as c,L as u}; +const __vite__mapDeps=(i,m=__vite__mapDeps,d=(m.f||(m.f=["assets/chunks/VPLocalSearchBox.DM1AnBlx.js","assets/chunks/framework.BetCMmtc.js"])))=>i.map(i=>d[i]); +import{d as b,o as a,c as d,r as u,n as I,a as G,t as N,b as k,w as f,e as _,T as de,_ as $,u as Te,i as Ke,f as We,g as pe,h as P,j as v,k as r,l as z,m as re,p as T,q as D,s as Z,v as F,x as ve,y as fe,z as qe,A as Je,B as K,F as M,C as E,D as we,E as x,G as g,H,I as Ne,J as ee,K as j,L as q,M as Ye,N as Ie,O as ie,P as he,Q as Me,R as te,S as Xe,U as Qe,V as Ze,W as Ce,X as me,Y as xe,Z as et,$ as tt,a0 as nt,a1 as Ae,a2 as st,a3 as ot,a4 as at,a5 as Pe}from"./framework.BetCMmtc.js";const 
rt=b({__name:"VPBadge",props:{text:{},type:{default:"tip"}},setup(o){return(e,t)=>(a(),d("span",{class:I(["VPBadge",e.type])},[u(e.$slots,"default",{},()=>[G(N(e.text),1)])],2))}}),it={key:0,class:"VPBackdrop"},lt=b({__name:"VPBackdrop",props:{show:{type:Boolean}},setup(o){return(e,t)=>(a(),k(de,{name:"fade"},{default:f(()=>[e.show?(a(),d("div",it)):_("",!0)]),_:1}))}}),ct=$(lt,[["__scopeId","data-v-b06cdb19"]]),L=Te;function ut(o,e){let t,s=!1;return()=>{t&&clearTimeout(t),s?t=setTimeout(o,e):(o(),(s=!0)&&setTimeout(()=>s=!1,e))}}function le(o){return o.startsWith("/")?o:`/${o}`}function _e(o){const{pathname:e,search:t,hash:s,protocol:n}=new URL(o,"http://a.com");if(Ke(o)||o.startsWith("#")||!n.startsWith("http")||!We(e))return o;const{site:i}=L(),l=e.endsWith("/")||e.endsWith(".html")?o:o.replace(/(?:(^\.+)\/)?.*$/,`$1${e.replace(/(\.md)?$/,i.value.cleanUrls?"":".html")}${t}${s}`);return pe(l)}function Y({correspondingLink:o=!1}={}){const{site:e,localeIndex:t,page:s,theme:n,hash:i}=L(),l=P(()=>{var c,h;return{label:(c=e.value.locales[t.value])==null?void 0:c.label,link:((h=e.value.locales[t.value])==null?void 0:h.link)||(t.value==="root"?"/":`/${t.value}/`)}});return{localeLinks:P(()=>Object.entries(e.value.locales).flatMap(([c,h])=>l.value.label===h.label?[]:{text:h.label,link:dt(h.link||(c==="root"?"/":`/${c}/`),n.value.i18nRouting!==!1&&o,s.value.relativePath.slice(l.value.link.length-1),!e.value.cleanUrls)+i.value})),currentLang:l}}function dt(o,e,t,s){return e?o.replace(/\/$/,"")+le(t.replace(/(^|\/)index\.md$/,"$1").replace(/\.md$/,s?".html":"")):o}const pt={class:"NotFound"},vt={class:"code"},ft={class:"title"},ht={class:"quote"},mt={class:"action"},_t=["href","aria-label"],bt=b({__name:"NotFound",setup(o){const{theme:e}=L(),{currentLang:t}=Y();return(s,n)=>{var i,l,p,c,h;return a(),d("div",pt,[v("p",vt,N(((i=r(e).notFound)==null?void 0:i.code)??"404"),1),v("h1",ft,N(((l=r(e).notFound)==null?void 0:l.title)??"PAGE NOT 
FOUND"),1),n[0]||(n[0]=v("div",{class:"divider"},null,-1)),v("blockquote",ht,N(((p=r(e).notFound)==null?void 0:p.quote)??"But if you don't change your direction, and if you keep looking, you may end up where you are heading."),1),v("div",mt,[v("a",{class:"link",href:r(pe)(r(t).link),"aria-label":((c=r(e).notFound)==null?void 0:c.linkLabel)??"go to home"},N(((h=r(e).notFound)==null?void 0:h.linkText)??"Take me home"),9,_t)])])}}}),kt=$(bt,[["__scopeId","data-v-951cab6c"]]);function Ee(o,e){if(Array.isArray(o))return X(o);if(o==null)return[];e=le(e);const t=Object.keys(o).sort((n,i)=>i.split("/").length-n.split("/").length).find(n=>e.startsWith(le(n))),s=t?o[t]:[];return Array.isArray(s)?X(s):X(s.items,s.base)}function gt(o){const e=[];let t=0;for(const s in o){const n=o[s];if(n.items){t=e.push(n);continue}e[t]||e.push({items:[]}),e[t].items.push(n)}return e}function $t(o){const e=[];function t(s){for(const n of s)n.text&&n.link&&e.push({text:n.text,link:n.link,docFooterText:n.docFooterText}),n.items&&t(n.items)}return t(o),e}function ce(o,e){return Array.isArray(e)?e.some(t=>ce(o,t)):z(o,e.link)?!0:e.items?ce(o,e.items):!1}function X(o,e){return[...o].map(t=>{const s={...t},n=s.base||e;return n&&s.link&&(s.link=n+s.link),s.items&&(s.items=X(s.items,n)),s})}function R(){const{frontmatter:o,page:e,theme:t}=L(),s=re("(min-width: 960px)"),n=T(!1),i=P(()=>{const A=t.value.sidebar,w=e.value.relativePath;return A?Ee(A,w):[]}),l=T(i.value);D(i,(A,w)=>{JSON.stringify(A)!==JSON.stringify(w)&&(l.value=i.value)});const p=P(()=>o.value.sidebar!==!1&&l.value.length>0&&o.value.layout!=="home"),c=P(()=>h?o.value.aside==null?t.value.aside==="left":o.value.aside==="left":!1),h=P(()=>o.value.layout==="home"?!1:o.value.aside!=null?!!o.value.aside:t.value.aside!==!1),y=P(()=>p.value&&s.value),m=P(()=>p.value?gt(l.value):[]);function S(){n.value=!0}function V(){n.value=!1}function 
C(){n.value?V():S()}return{isOpen:n,sidebar:l,sidebarGroups:m,hasSidebar:p,hasAside:h,leftAside:c,isSidebarEnabled:y,open:S,close:V,toggle:C}}function yt(o,e){let t;Z(()=>{t=o.value?document.activeElement:void 0}),F(()=>{window.addEventListener("keyup",s)}),ve(()=>{window.removeEventListener("keyup",s)});function s(n){n.key==="Escape"&&o.value&&(e(),t==null||t.focus())}}function Pt(o){const{page:e,hash:t}=L(),s=T(!1),n=P(()=>o.value.collapsed!=null),i=P(()=>!!o.value.link),l=T(!1),p=()=>{l.value=z(e.value.relativePath,o.value.link)};D([e,o,t],p),F(p);const c=P(()=>l.value?!0:o.value.items?ce(e.value.relativePath,o.value.items):!1),h=P(()=>!!(o.value.items&&o.value.items.length));Z(()=>{s.value=!!(n.value&&o.value.collapsed)}),fe(()=>{(l.value||c.value)&&(s.value=!1)});function y(){n.value&&(s.value=!s.value)}return{collapsed:s,collapsible:n,isLink:i,isActiveLink:l,hasActiveLink:c,hasChildren:h,toggle:y}}function St(){const{hasSidebar:o}=R(),e=re("(min-width: 960px)"),t=re("(min-width: 1280px)");return{isAsideEnabled:P(()=>!t.value&&!e.value?!1:o.value?t.value:e.value)}}const Vt=/\b(?:VPBadge|header-anchor|footnote-ref|ignore-header)\b/,ue=[];function Be(o){return typeof o.outline=="object"&&!Array.isArray(o.outline)&&o.outline.label||o.outlineTitle||"On this page"}function be(o){const e=[...document.querySelectorAll(".VPDoc :where(h1,h2,h3,h4,h5,h6)")].filter(t=>t.id&&t.hasChildNodes()).map(t=>{const s=Number(t.tagName[1]);return{element:t,title:Lt(t),link:"#"+t.id,level:s}});return Tt(e,o)}function Lt(o){let e="";for(const t of o.childNodes)if(t.nodeType===1){if(Vt.test(t.className))continue;e+=t.textContent}else t.nodeType===3&&(e+=t.textContent);return e.trim()}function Tt(o,e){if(e===!1)return[];const t=(typeof e=="object"&&!Array.isArray(e)?e.level:e)||2,[s,n]=typeof t=="number"?[t,t]:t==="deep"?[2,6]:t;return It(o,s,n)}function wt(o,e){const{isAsideEnabled:t}=St(),s=ut(i,100);let 
n=null;F(()=>{requestAnimationFrame(i),window.addEventListener("scroll",s)}),qe(()=>{l(location.hash)}),ve(()=>{window.removeEventListener("scroll",s)});function i(){if(!t.value)return;const p=window.scrollY,c=window.innerHeight,h=document.body.offsetHeight,y=Math.abs(p+c-h)<1,m=ue.map(({element:V,link:C})=>({link:C,top:Nt(V)})).filter(({top:V})=>!Number.isNaN(V)).sort((V,C)=>V.top-C.top);if(!m.length){l(null);return}if(p<1){l(null);return}if(y){l(m[m.length-1].link);return}let S=null;for(const{link:V,top:C}of m){if(C>p+Je()+4)break;S=V}l(S)}function l(p){n&&n.classList.remove("active"),p==null?n=null:n=o.value.querySelector(`a[href="${decodeURIComponent(p)}"]`);const c=n;c?(c.classList.add("active"),e.value.style.top=c.offsetTop+39+"px",e.value.style.opacity="1"):(e.value.style.top="33px",e.value.style.opacity="0")}}function Nt(o){let e=0;for(;o!==document.body;){if(o===null)return NaN;e+=o.offsetTop,o=o.offsetParent}return e}function It(o,e,t){ue.length=0;const s=[],n=[];return o.forEach(i=>{const l={...i,children:[]};let p=n[n.length-1];for(;p&&p.level>=l.level;)n.pop(),p=n[n.length-1];if(l.element.classList.contains("ignore-header")||p&&"shouldIgnore"in p){n.push({level:l.level,shouldIgnore:!0});return}l.level>t||l.level{const n=K("VPDocOutlineItem",!0);return a(),d("ul",{class:I(["VPDocOutlineItem",t.root?"root":"nested"])},[(a(!0),d(M,null,E(t.headers,({children:i,link:l,title:p})=>(a(),d("li",null,[v("a",{class:"outline-link",href:l,onClick:e,title:p},N(p),9,Mt),i!=null&&i.length?(a(),k(n,{key:0,headers:i},null,8,["headers"])):_("",!0)]))),256))],2)}}}),He=$(Ct,[["__scopeId","data-v-3f927ebe"]]),At={class:"content"},Et={"aria-level":"2",class:"outline-title",id:"doc-outline-aria-label",role:"heading"},Bt=b({__name:"VPDocAsideOutline",setup(o){const{frontmatter:e,theme:t}=L(),s=we([]);x(()=>{s.value=be(e.value.outline??t.value.outline)});const n=T(),i=T();return 
wt(n,i),(l,p)=>(a(),d("nav",{"aria-labelledby":"doc-outline-aria-label",class:I(["VPDocAsideOutline",{"has-outline":s.value.length>0}]),ref_key:"container",ref:n},[v("div",At,[v("div",{class:"outline-marker",ref_key:"marker",ref:i},null,512),v("div",Et,N(r(Be)(r(t))),1),g(He,{headers:s.value,root:!0},null,8,["headers"])])],2))}}),Ht=$(Bt,[["__scopeId","data-v-b38bf2ff"]]),Ot={class:"VPDocAsideCarbonAds"},Dt=b({__name:"VPDocAsideCarbonAds",props:{carbonAds:{}},setup(o){const e=()=>null;return(t,s)=>(a(),d("div",Ot,[g(r(e),{"carbon-ads":t.carbonAds},null,8,["carbon-ads"])]))}}),Ft={class:"VPDocAside"},Rt=b({__name:"VPDocAside",setup(o){const{theme:e}=L();return(t,s)=>(a(),d("div",Ft,[u(t.$slots,"aside-top",{},void 0,!0),u(t.$slots,"aside-outline-before",{},void 0,!0),g(Ht),u(t.$slots,"aside-outline-after",{},void 0,!0),s[0]||(s[0]=v("div",{class:"spacer"},null,-1)),u(t.$slots,"aside-ads-before",{},void 0,!0),r(e).carbonAds?(a(),k(Dt,{key:0,"carbon-ads":r(e).carbonAds},null,8,["carbon-ads"])):_("",!0),u(t.$slots,"aside-ads-after",{},void 0,!0),u(t.$slots,"aside-bottom",{},void 0,!0)]))}}),Ut=$(Rt,[["__scopeId","data-v-6d7b3c46"]]);function jt(){const{theme:o,page:e}=L();return P(()=>{const{text:t="Edit this page",pattern:s=""}=o.value.editLink||{};let n;return typeof s=="function"?n=s(e.value):n=s.replace(/:path/g,e.value.filePath),{url:n,text:t}})}function Gt(){const{page:o,theme:e,frontmatter:t}=L();return P(()=>{var h,y,m,S,V,C,A,w;const s=Ee(e.value.sidebar,o.value.relativePath),n=$t(s),i=zt(n,B=>B.link.replace(/[?#].*$/,"")),l=i.findIndex(B=>z(o.value.relativePath,B.link)),p=((h=e.value.docFooter)==null?void 0:h.prev)===!1&&!t.value.prev||t.value.prev===!1,c=((y=e.value.docFooter)==null?void 0:y.next)===!1&&!t.value.next||t.value.next===!1;return{prev:p?void 0:{text:(typeof t.value.prev=="string"?t.value.prev:typeof t.value.prev=="object"?t.value.prev.text:void 0)??((m=i[l-1])==null?void 0:m.docFooterText)??((S=i[l-1])==null?void 0:S.text),link:(typeof 
t.value.prev=="object"?t.value.prev.link:void 0)??((V=i[l-1])==null?void 0:V.link)},next:c?void 0:{text:(typeof t.value.next=="string"?t.value.next:typeof t.value.next=="object"?t.value.next.text:void 0)??((C=i[l+1])==null?void 0:C.docFooterText)??((A=i[l+1])==null?void 0:A.text),link:(typeof t.value.next=="object"?t.value.next.link:void 0)??((w=i[l+1])==null?void 0:w.link)}}})}function zt(o,e){const t=new Set;return o.filter(s=>{const n=e(s);return t.has(n)?!1:t.add(n)})}const O=b({__name:"VPLink",props:{tag:{},href:{},noIcon:{type:Boolean},target:{},rel:{}},setup(o){const e=o,t=P(()=>e.tag??(e.href?"a":"span")),s=P(()=>e.href&&Ne.test(e.href)||e.target==="_blank");return(n,i)=>(a(),k(H(t.value),{class:I(["VPLink",{link:n.href,"vp-external-link-icon":s.value,"no-icon":n.noIcon}]),href:n.href?r(_e)(n.href):void 0,target:n.target??(s.value?"_blank":void 0),rel:n.rel??(s.value?"noreferrer":void 0)},{default:f(()=>[u(n.$slots,"default")]),_:3},8,["class","href","target","rel"]))}}),Kt={class:"VPLastUpdated"},Wt=["datetime"],qt=b({__name:"VPDocFooterLastUpdated",setup(o){const{theme:e,page:t,lang:s}=L(),n=P(()=>new Date(t.value.lastUpdated)),i=P(()=>n.value.toISOString()),l=T("");return F(()=>{Z(()=>{var p,c,h;l.value=new Intl.DateTimeFormat((c=(p=e.value.lastUpdated)==null?void 0:p.formatOptions)!=null&&c.forceLocale?s.value:void 0,((h=e.value.lastUpdated)==null?void 0:h.formatOptions)??{dateStyle:"short",timeStyle:"short"}).format(n.value)})}),(p,c)=>{var h;return a(),d("p",Kt,[G(N(((h=r(e).lastUpdated)==null?void 0:h.text)||r(e).lastUpdatedText||"Last updated")+": 
",1),v("time",{datetime:i.value},N(l.value),9,Wt)])}}}),Jt=$(qt,[["__scopeId","data-v-475f71b8"]]),Yt={key:0,class:"VPDocFooter"},Xt={key:0,class:"edit-info"},Qt={key:0,class:"edit-link"},Zt={key:1,class:"last-updated"},xt={key:1,class:"prev-next","aria-labelledby":"doc-footer-aria-label"},en={class:"pager"},tn=["innerHTML"],nn=["innerHTML"],sn={class:"pager"},on=["innerHTML"],an=["innerHTML"],rn=b({__name:"VPDocFooter",setup(o){const{theme:e,page:t,frontmatter:s}=L(),n=jt(),i=Gt(),l=P(()=>e.value.editLink&&s.value.editLink!==!1),p=P(()=>t.value.lastUpdated),c=P(()=>l.value||p.value||i.value.prev||i.value.next);return(h,y)=>{var m,S,V,C;return c.value?(a(),d("footer",Yt,[u(h.$slots,"doc-footer-before",{},void 0,!0),l.value||p.value?(a(),d("div",Xt,[l.value?(a(),d("div",Qt,[g(O,{class:"edit-link-button",href:r(n).url,"no-icon":!0},{default:f(()=>[y[0]||(y[0]=v("span",{class:"vpi-square-pen edit-link-icon"},null,-1)),G(" "+N(r(n).text),1)]),_:1},8,["href"])])):_("",!0),p.value?(a(),d("div",Zt,[g(Jt)])):_("",!0)])):_("",!0),(m=r(i).prev)!=null&&m.link||(S=r(i).next)!=null&&S.link?(a(),d("nav",xt,[y[1]||(y[1]=v("span",{class:"visually-hidden",id:"doc-footer-aria-label"},"Pager",-1)),v("div",en,[(V=r(i).prev)!=null&&V.link?(a(),k(O,{key:0,class:"pager-link prev",href:r(i).prev.link},{default:f(()=>{var A;return[v("span",{class:"desc",innerHTML:((A=r(e).docFooter)==null?void 0:A.prev)||"Previous page"},null,8,tn),v("span",{class:"title",innerHTML:r(i).prev.text},null,8,nn)]}),_:1},8,["href"])):_("",!0)]),v("div",sn,[(C=r(i).next)!=null&&C.link?(a(),k(O,{key:0,class:"pager-link next",href:r(i).next.link},{default:f(()=>{var A;return[v("span",{class:"desc",innerHTML:((A=r(e).docFooter)==null?void 0:A.next)||"Next 
page"},null,8,on),v("span",{class:"title",innerHTML:r(i).next.text},null,8,an)]}),_:1},8,["href"])):_("",!0)])])):_("",!0)])):_("",!0)}}}),ln=$(rn,[["__scopeId","data-v-4f9813fa"]]),cn={class:"container"},un={class:"aside-container"},dn={class:"aside-content"},pn={class:"content"},vn={class:"content-container"},fn={class:"main"},hn=b({__name:"VPDoc",setup(o){const{theme:e}=L(),t=ee(),{hasSidebar:s,hasAside:n,leftAside:i}=R(),l=P(()=>t.path.replace(/[./]+/g,"_").replace(/_html$/,""));return(p,c)=>{const h=K("Content");return a(),d("div",{class:I(["VPDoc",{"has-sidebar":r(s),"has-aside":r(n)}])},[u(p.$slots,"doc-top",{},void 0,!0),v("div",cn,[r(n)?(a(),d("div",{key:0,class:I(["aside",{"left-aside":r(i)}])},[c[0]||(c[0]=v("div",{class:"aside-curtain"},null,-1)),v("div",un,[v("div",dn,[g(Ut,null,{"aside-top":f(()=>[u(p.$slots,"aside-top",{},void 0,!0)]),"aside-bottom":f(()=>[u(p.$slots,"aside-bottom",{},void 0,!0)]),"aside-outline-before":f(()=>[u(p.$slots,"aside-outline-before",{},void 0,!0)]),"aside-outline-after":f(()=>[u(p.$slots,"aside-outline-after",{},void 0,!0)]),"aside-ads-before":f(()=>[u(p.$slots,"aside-ads-before",{},void 0,!0)]),"aside-ads-after":f(()=>[u(p.$slots,"aside-ads-after",{},void 0,!0)]),_:3})])])],2)):_("",!0),v("div",pn,[v("div",vn,[u(p.$slots,"doc-before",{},void 0,!0),v("main",fn,[g(h,{class:I(["vp-doc",[l.value,r(e).externalLinkIcon&&"external-link-icon-enabled"]])},null,8,["class"])]),g(ln,null,{"doc-footer-before":f(()=>[u(p.$slots,"doc-footer-before",{},void 0,!0)]),_:3}),u(p.$slots,"doc-after",{},void 0,!0)])])]),u(p.$slots,"doc-bottom",{},void 0,!0)],2)}}}),mn=$(hn,[["__scopeId","data-v-83890dd9"]]),_n=b({__name:"VPButton",props:{tag:{},size:{default:"medium"},theme:{default:"brand"},text:{},href:{},target:{},rel:{}},setup(o){const e=o,t=P(()=>e.href&&Ne.test(e.href)),s=P(()=>e.tag||(e.href?"a":"button"));return(n,i)=>(a(),k(H(s.value),{class:I(["VPButton",[n.size,n.theme]]),href:n.href?r(_e)(n.href):void 
0,target:e.target??(t.value?"_blank":void 0),rel:e.rel??(t.value?"noreferrer":void 0)},{default:f(()=>[G(N(n.text),1)]),_:1},8,["class","href","target","rel"]))}}),bn=$(_n,[["__scopeId","data-v-906d7fb4"]]),kn=["src","alt"],gn=b({inheritAttrs:!1,__name:"VPImage",props:{image:{},alt:{}},setup(o){return(e,t)=>{const s=K("VPImage",!0);return e.image?(a(),d(M,{key:0},[typeof e.image=="string"||"src"in e.image?(a(),d("img",j({key:0,class:"VPImage"},typeof e.image=="string"?e.$attrs:{...e.image,...e.$attrs},{src:r(pe)(typeof e.image=="string"?e.image:e.image.src),alt:e.alt??(typeof e.image=="string"?"":e.image.alt||"")}),null,16,kn)):(a(),d(M,{key:1},[g(s,j({class:"dark",image:e.image.dark,alt:e.image.alt},e.$attrs),null,16,["image","alt"]),g(s,j({class:"light",image:e.image.light,alt:e.image.alt},e.$attrs),null,16,["image","alt"])],64))],64)):_("",!0)}}}),Q=$(gn,[["__scopeId","data-v-35a7d0b8"]]),$n={class:"container"},yn={class:"main"},Pn={class:"heading"},Sn=["innerHTML"],Vn=["innerHTML"],Ln=["innerHTML"],Tn={key:0,class:"actions"},wn={key:0,class:"image"},Nn={class:"image-container"},In=b({__name:"VPHero",props:{name:{},text:{},tagline:{},image:{},actions:{}},setup(o){const e=q("hero-image-slot-exists");return(t,s)=>(a(),d("div",{class:I(["VPHero",{"has-image":t.image||r(e)}])},[v("div",$n,[v("div",yn,[u(t.$slots,"home-hero-info-before",{},void 0,!0),u(t.$slots,"home-hero-info",{},()=>[v("h1",Pn,[t.name?(a(),d("span",{key:0,innerHTML:t.name,class:"name clip"},null,8,Sn)):_("",!0),t.text?(a(),d("span",{key:1,innerHTML:t.text,class:"text"},null,8,Vn)):_("",!0)]),t.tagline?(a(),d("p",{key:0,innerHTML:t.tagline,class:"tagline"},null,8,Ln)):_("",!0)],!0),u(t.$slots,"home-hero-info-after",{},void 
0,!0),t.actions?(a(),d("div",Tn,[(a(!0),d(M,null,E(t.actions,n=>(a(),d("div",{key:n.link,class:"action"},[g(bn,{tag:"a",size:"medium",theme:n.theme,text:n.text,href:n.link,target:n.target,rel:n.rel},null,8,["theme","text","href","target","rel"])]))),128))])):_("",!0),u(t.$slots,"home-hero-actions-after",{},void 0,!0)]),t.image||r(e)?(a(),d("div",wn,[v("div",Nn,[s[0]||(s[0]=v("div",{class:"image-bg"},null,-1)),u(t.$slots,"home-hero-image",{},()=>[t.image?(a(),k(Q,{key:0,class:"image-src",image:t.image},null,8,["image"])):_("",!0)],!0)])])):_("",!0)])],2))}}),Mn=$(In,[["__scopeId","data-v-3d256e5e"]]),Cn=b({__name:"VPHomeHero",setup(o){const{frontmatter:e}=L();return(t,s)=>r(e).hero?(a(),k(Mn,{key:0,class:"VPHomeHero",name:r(e).hero.name,text:r(e).hero.text,tagline:r(e).hero.tagline,image:r(e).hero.image,actions:r(e).hero.actions},{"home-hero-info-before":f(()=>[u(t.$slots,"home-hero-info-before")]),"home-hero-info":f(()=>[u(t.$slots,"home-hero-info")]),"home-hero-info-after":f(()=>[u(t.$slots,"home-hero-info-after")]),"home-hero-actions-after":f(()=>[u(t.$slots,"home-hero-actions-after")]),"home-hero-image":f(()=>[u(t.$slots,"home-hero-image")]),_:3},8,["name","text","tagline","image","actions"])):_("",!0)}}),An={class:"box"},En={key:0,class:"icon"},Bn=["innerHTML"],Hn=["innerHTML"],On=["innerHTML"],Dn={key:4,class:"link-text"},Fn={class:"link-text-value"},Rn=b({__name:"VPFeature",props:{icon:{},title:{},details:{},link:{},linkText:{},rel:{},target:{}},setup(o){return(e,t)=>(a(),k(O,{class:"VPFeature",href:e.link,rel:e.rel,target:e.target,"no-icon":!0,tag:e.link?"a":"div"},{default:f(()=>[v("article",An,[typeof e.icon=="object"&&e.icon.wrap?(a(),d("div",En,[g(Q,{image:e.icon,alt:e.icon.alt,height:e.icon.height||48,width:e.icon.width||48},null,8,["image","alt","height","width"])])):typeof 
e.icon=="object"?(a(),k(Q,{key:1,image:e.icon,alt:e.icon.alt,height:e.icon.height||48,width:e.icon.width||48},null,8,["image","alt","height","width"])):e.icon?(a(),d("div",{key:2,class:"icon",innerHTML:e.icon},null,8,Bn)):_("",!0),v("h2",{class:"title",innerHTML:e.title},null,8,Hn),e.details?(a(),d("p",{key:3,class:"details",innerHTML:e.details},null,8,On)):_("",!0),e.linkText?(a(),d("div",Dn,[v("p",Fn,[G(N(e.linkText)+" ",1),t[0]||(t[0]=v("span",{class:"vpi-arrow-right link-text-icon"},null,-1))])])):_("",!0)])]),_:1},8,["href","rel","target","tag"]))}}),Un=$(Rn,[["__scopeId","data-v-f5e9645b"]]),jn={key:0,class:"VPFeatures"},Gn={class:"container"},zn={class:"items"},Kn=b({__name:"VPFeatures",props:{features:{}},setup(o){const e=o,t=P(()=>{const s=e.features.length;if(s){if(s===2)return"grid-2";if(s===3)return"grid-3";if(s%3===0)return"grid-6";if(s>3)return"grid-4"}else return});return(s,n)=>s.features?(a(),d("div",jn,[v("div",Gn,[v("div",zn,[(a(!0),d(M,null,E(s.features,i=>(a(),d("div",{key:i.title,class:I(["item",[t.value]])},[g(Un,{icon:i.icon,title:i.title,details:i.details,link:i.link,"link-text":i.linkText,rel:i.rel,target:i.target},null,8,["icon","title","details","link","link-text","rel","target"])],2))),128))])])])):_("",!0)}}),Wn=$(Kn,[["__scopeId","data-v-d0a190d7"]]),qn=b({__name:"VPHomeFeatures",setup(o){const{frontmatter:e}=L();return(t,s)=>r(e).features?(a(),k(Wn,{key:0,class:"VPHomeFeatures",features:r(e).features},null,8,["features"])):_("",!0)}}),Jn=b({__name:"VPHomeContent",setup(o){const{width:e}=Ye({initialWidth:0,includeScrollbar:!1});return(t,s)=>(a(),d("div",{class:"vp-doc container",style:Ie(r(e)?{"--vp-offset":`calc(50% - ${r(e)/2}px)`}:{})},[u(t.$slots,"default",{},void 0,!0)],4))}}),Yn=$(Jn,[["__scopeId","data-v-7a48a447"]]),Xn=b({__name:"VPHome",setup(o){const{frontmatter:e,theme:t}=L();return(s,n)=>{const i=K("Content");return 
a(),d("div",{class:I(["VPHome",{"external-link-icon-enabled":r(t).externalLinkIcon}])},[u(s.$slots,"home-hero-before",{},void 0,!0),g(Cn,null,{"home-hero-info-before":f(()=>[u(s.$slots,"home-hero-info-before",{},void 0,!0)]),"home-hero-info":f(()=>[u(s.$slots,"home-hero-info",{},void 0,!0)]),"home-hero-info-after":f(()=>[u(s.$slots,"home-hero-info-after",{},void 0,!0)]),"home-hero-actions-after":f(()=>[u(s.$slots,"home-hero-actions-after",{},void 0,!0)]),"home-hero-image":f(()=>[u(s.$slots,"home-hero-image",{},void 0,!0)]),_:3}),u(s.$slots,"home-hero-after",{},void 0,!0),u(s.$slots,"home-features-before",{},void 0,!0),g(qn),u(s.$slots,"home-features-after",{},void 0,!0),r(e).markdownStyles!==!1?(a(),k(Yn,{key:0},{default:f(()=>[g(i)]),_:1})):(a(),k(i,{key:1}))],2)}}}),Qn=$(Xn,[["__scopeId","data-v-e40e30de"]]),Zn={},xn={class:"VPPage"};function es(o,e){const t=K("Content");return a(),d("div",xn,[u(o.$slots,"page-top"),g(t),u(o.$slots,"page-bottom")])}const ts=$(Zn,[["render",es]]),ns=b({__name:"VPContent",setup(o){const{page:e,frontmatter:t}=L(),{hasSidebar:s}=R();return(n,i)=>(a(),d("div",{class:I(["VPContent",{"has-sidebar":r(s),"is-home":r(t).layout==="home"}]),id:"VPContent"},[r(e).isNotFound?u(n.$slots,"not-found",{key:0},()=>[g(kt)],!0):r(t).layout==="page"?(a(),k(ts,{key:1},{"page-top":f(()=>[u(n.$slots,"page-top",{},void 0,!0)]),"page-bottom":f(()=>[u(n.$slots,"page-bottom",{},void 0,!0)]),_:3})):r(t).layout==="home"?(a(),k(Qn,{key:2},{"home-hero-before":f(()=>[u(n.$slots,"home-hero-before",{},void 0,!0)]),"home-hero-info-before":f(()=>[u(n.$slots,"home-hero-info-before",{},void 0,!0)]),"home-hero-info":f(()=>[u(n.$slots,"home-hero-info",{},void 0,!0)]),"home-hero-info-after":f(()=>[u(n.$slots,"home-hero-info-after",{},void 0,!0)]),"home-hero-actions-after":f(()=>[u(n.$slots,"home-hero-actions-after",{},void 0,!0)]),"home-hero-image":f(()=>[u(n.$slots,"home-hero-image",{},void 0,!0)]),"home-hero-after":f(()=>[u(n.$slots,"home-hero-after",{},void 
0,!0)]),"home-features-before":f(()=>[u(n.$slots,"home-features-before",{},void 0,!0)]),"home-features-after":f(()=>[u(n.$slots,"home-features-after",{},void 0,!0)]),_:3})):r(t).layout&&r(t).layout!=="doc"?(a(),k(H(r(t).layout),{key:3})):(a(),k(mn,{key:4},{"doc-top":f(()=>[u(n.$slots,"doc-top",{},void 0,!0)]),"doc-bottom":f(()=>[u(n.$slots,"doc-bottom",{},void 0,!0)]),"doc-footer-before":f(()=>[u(n.$slots,"doc-footer-before",{},void 0,!0)]),"doc-before":f(()=>[u(n.$slots,"doc-before",{},void 0,!0)]),"doc-after":f(()=>[u(n.$slots,"doc-after",{},void 0,!0)]),"aside-top":f(()=>[u(n.$slots,"aside-top",{},void 0,!0)]),"aside-outline-before":f(()=>[u(n.$slots,"aside-outline-before",{},void 0,!0)]),"aside-outline-after":f(()=>[u(n.$slots,"aside-outline-after",{},void 0,!0)]),"aside-ads-before":f(()=>[u(n.$slots,"aside-ads-before",{},void 0,!0)]),"aside-ads-after":f(()=>[u(n.$slots,"aside-ads-after",{},void 0,!0)]),"aside-bottom":f(()=>[u(n.$slots,"aside-bottom",{},void 0,!0)]),_:3}))],2))}}),ss=$(ns,[["__scopeId","data-v-91765379"]]),os={class:"container"},as=["innerHTML"],rs=["innerHTML"],is=b({__name:"VPFooter",setup(o){const{theme:e,frontmatter:t}=L(),{hasSidebar:s}=R();return(n,i)=>r(e).footer&&r(t).footer!==!1?(a(),d("footer",{key:0,class:I(["VPFooter",{"has-sidebar":r(s)}])},[v("div",os,[r(e).footer.message?(a(),d("p",{key:0,class:"message",innerHTML:r(e).footer.message},null,8,as)):_("",!0),r(e).footer.copyright?(a(),d("p",{key:1,class:"copyright",innerHTML:r(e).footer.copyright},null,8,rs)):_("",!0)])],2)):_("",!0)}}),ls=$(is,[["__scopeId","data-v-c970a860"]]);function cs(){const{theme:o,frontmatter:e}=L(),t=we([]),s=P(()=>t.value.length>0);return x(()=>{t.value=be(e.value.outline??o.value.outline)}),{headers:t,hasLocalNav:s}}const us={class:"menu-text"},ds={class:"header"},ps={class:"outline"},vs=b({__name:"VPLocalNavOutlineDropdown",props:{headers:{},navHeight:{}},setup(o){const e=o,{theme:t}=L(),s=T(!1),n=T(0),i=T(),l=T();function p(m){var 
S;(S=i.value)!=null&&S.contains(m.target)||(s.value=!1)}D(s,m=>{if(m){document.addEventListener("click",p);return}document.removeEventListener("click",p)}),ie("Escape",()=>{s.value=!1}),x(()=>{s.value=!1});function c(){s.value=!s.value,n.value=window.innerHeight+Math.min(window.scrollY-e.navHeight,0)}function h(m){m.target.classList.contains("outline-link")&&(l.value&&(l.value.style.transition="none"),he(()=>{s.value=!1}))}function y(){s.value=!1,window.scrollTo({top:0,left:0,behavior:"smooth"})}return(m,S)=>(a(),d("div",{class:"VPLocalNavOutlineDropdown",style:Ie({"--vp-vh":n.value+"px"}),ref_key:"main",ref:i},[m.headers.length>0?(a(),d("button",{key:0,onClick:c,class:I({open:s.value})},[v("span",us,N(r(Be)(r(t))),1),S[0]||(S[0]=v("span",{class:"vpi-chevron-right icon"},null,-1))],2)):(a(),d("button",{key:1,onClick:y},N(r(t).returnToTopLabel||"Return to top"),1)),g(de,{name:"flyout"},{default:f(()=>[s.value?(a(),d("div",{key:0,ref_key:"items",ref:l,class:"items",onClick:h},[v("div",ds,[v("a",{class:"top-link",href:"#",onClick:y},N(r(t).returnToTopLabel||"Return to top"),1)]),v("div",ps,[g(He,{headers:m.headers},null,8,["headers"])])],512)):_("",!0)]),_:1})],4))}}),fs=$(vs,[["__scopeId","data-v-168ddf5d"]]),hs={class:"container"},ms=["aria-expanded"],_s={class:"menu-text"},bs=b({__name:"VPLocalNav",props:{open:{type:Boolean}},emits:["open-menu"],setup(o){const{theme:e,frontmatter:t}=L(),{hasSidebar:s}=R(),{headers:n}=cs(),{y:i}=Me(),l=T(0);F(()=>{l.value=parseInt(getComputedStyle(document.documentElement).getPropertyValue("--vp-nav-height"))}),x(()=>{n.value=be(t.value.outline??e.value.outline)});const 
p=P(()=>n.value.length===0),c=P(()=>p.value&&!s.value),h=P(()=>({VPLocalNav:!0,"has-sidebar":s.value,empty:p.value,fixed:c.value}));return(y,m)=>r(t).layout!=="home"&&(!c.value||r(i)>=l.value)?(a(),d("div",{key:0,class:I(h.value)},[v("div",hs,[r(s)?(a(),d("button",{key:0,class:"menu","aria-expanded":y.open,"aria-controls":"VPSidebarNav",onClick:m[0]||(m[0]=S=>y.$emit("open-menu"))},[m[1]||(m[1]=v("span",{class:"vpi-align-left menu-icon"},null,-1)),v("span",_s,N(r(e).sidebarMenuLabel||"Menu"),1)],8,ms)):_("",!0),g(fs,{headers:r(n),navHeight:l.value},null,8,["headers","navHeight"])])],2)):_("",!0)}}),ks=$(bs,[["__scopeId","data-v-070ab83d"]]);function gs(){const o=T(!1);function e(){o.value=!0,window.addEventListener("resize",n)}function t(){o.value=!1,window.removeEventListener("resize",n)}function s(){o.value?t():e()}function n(){window.outerWidth>=768&&t()}const i=ee();return D(()=>i.path,t),{isScreenOpen:o,openScreen:e,closeScreen:t,toggleScreen:s}}const $s={},ys={class:"VPSwitch",type:"button",role:"switch"},Ps={class:"check"},Ss={key:0,class:"icon"};function Vs(o,e){return a(),d("button",ys,[v("span",Ps,[o.$slots.default?(a(),d("span",Ss,[u(o.$slots,"default",{},void 0,!0)])):_("",!0)])])}const Ls=$($s,[["render",Vs],["__scopeId","data-v-4a1c76db"]]),Ts=b({__name:"VPSwitchAppearance",setup(o){const{isDark:e,theme:t}=L(),s=q("toggle-appearance",()=>{e.value=!e.value}),n=T("");return fe(()=>{n.value=e.value?t.value.lightModeSwitchTitle||"Switch to light theme":t.value.darkModeSwitchTitle||"Switch to dark theme"}),(i,l)=>(a(),k(Ls,{title:n.value,class:"VPSwitchAppearance","aria-checked":r(e),onClick:r(s)},{default:f(()=>l[0]||(l[0]=[v("span",{class:"vpi-sun sun"},null,-1),v("span",{class:"vpi-moon 
moon"},null,-1)])),_:1},8,["title","aria-checked","onClick"]))}}),ke=$(Ts,[["__scopeId","data-v-e40a8bb6"]]),ws={key:0,class:"VPNavBarAppearance"},Ns=b({__name:"VPNavBarAppearance",setup(o){const{site:e}=L();return(t,s)=>r(e).appearance&&r(e).appearance!=="force-dark"&&r(e).appearance!=="force-auto"?(a(),d("div",ws,[g(ke)])):_("",!0)}}),Is=$(Ns,[["__scopeId","data-v-af096f4a"]]),ge=T();let Oe=!1,ae=0;function Ms(o){const e=T(!1);if(te){!Oe&&Cs(),ae++;const t=D(ge,s=>{var n,i,l;s===o.el.value||(n=o.el.value)!=null&&n.contains(s)?(e.value=!0,(i=o.onFocus)==null||i.call(o)):(e.value=!1,(l=o.onBlur)==null||l.call(o))});ve(()=>{t(),ae--,ae||As()})}return Xe(e)}function Cs(){document.addEventListener("focusin",De),Oe=!0,ge.value=document.activeElement}function As(){document.removeEventListener("focusin",De)}function De(){ge.value=document.activeElement}const Es={class:"VPMenuLink"},Bs=["innerHTML"],Hs=b({__name:"VPMenuLink",props:{item:{}},setup(o){const{page:e}=L();return(t,s)=>(a(),d("div",Es,[g(O,{class:I({active:r(z)(r(e).relativePath,t.item.activeMatch||t.item.link,!!t.item.activeMatch)}),href:t.item.link,target:t.item.target,rel:t.item.rel,"no-icon":t.item.noIcon},{default:f(()=>[v("span",{innerHTML:t.item.text},null,8,Bs)]),_:1},8,["class","href","target","rel","no-icon"])]))}}),ne=$(Hs,[["__scopeId","data-v-acbfed09"]]),Os={class:"VPMenuGroup"},Ds={key:0,class:"title"},Fs=b({__name:"VPMenuGroup",props:{text:{},items:{}},setup(o){return(e,t)=>(a(),d("div",Os,[e.text?(a(),d("p",Ds,N(e.text),1)):_("",!0),(a(!0),d(M,null,E(e.items,s=>(a(),d(M,null,["link"in s?(a(),k(ne,{key:0,item:s},null,8,["item"])):_("",!0)],64))),256))]))}}),Rs=$(Fs,[["__scopeId","data-v-48c802d0"]]),Us={class:"VPMenu"},js={key:0,class:"items"},Gs=b({__name:"VPMenu",props:{items:{}},setup(o){return(e,t)=>(a(),d("div",Us,[e.items?(a(),d("div",js,[(a(!0),d(M,null,E(e.items,s=>(a(),d(M,{key:JSON.stringify(s)},["link"in s?(a(),k(ne,{key:0,item:s},null,8,["item"])):"component"in 
s?(a(),k(H(s.component),j({key:1,ref_for:!0},s.props),null,16)):(a(),k(Rs,{key:2,text:s.text,items:s.items},null,8,["text","items"]))],64))),128))])):_("",!0),u(e.$slots,"default",{},void 0,!0)]))}}),zs=$(Gs,[["__scopeId","data-v-7dd3104a"]]),Ks=["aria-expanded","aria-label"],Ws={key:0,class:"text"},qs=["innerHTML"],Js={key:1,class:"vpi-more-horizontal icon"},Ys={class:"menu"},Xs=b({__name:"VPFlyout",props:{icon:{},button:{},label:{},items:{}},setup(o){const e=T(!1),t=T();Ms({el:t,onBlur:s});function s(){e.value=!1}return(n,i)=>(a(),d("div",{class:"VPFlyout",ref_key:"el",ref:t,onMouseenter:i[1]||(i[1]=l=>e.value=!0),onMouseleave:i[2]||(i[2]=l=>e.value=!1)},[v("button",{type:"button",class:"button","aria-haspopup":"true","aria-expanded":e.value,"aria-label":n.label,onClick:i[0]||(i[0]=l=>e.value=!e.value)},[n.button||n.icon?(a(),d("span",Ws,[n.icon?(a(),d("span",{key:0,class:I([n.icon,"option-icon"])},null,2)):_("",!0),n.button?(a(),d("span",{key:1,innerHTML:n.button},null,8,qs)):_("",!0),i[3]||(i[3]=v("span",{class:"vpi-chevron-down text-icon"},null,-1))])):(a(),d("span",Js))],8,Ks),v("div",Ys,[g(zs,{items:n.items},{default:f(()=>[u(n.$slots,"default",{},void 0,!0)]),_:3},8,["items"])])],544))}}),$e=$(Xs,[["__scopeId","data-v-04f5c5e9"]]),Qs=["href","aria-label","innerHTML"],Zs=b({__name:"VPSocialLink",props:{icon:{},link:{},ariaLabel:{}},setup(o){const e=o,t=T();F(async()=>{var i;await he();const n=(i=t.value)==null?void 0:i.children[0];n instanceof HTMLElement&&n.className.startsWith("vpi-social-")&&(getComputedStyle(n).maskImage||getComputedStyle(n).webkitMaskImage)==="none"&&n.style.setProperty("--icon",`url('https://api.iconify.design/simple-icons/${e.icon}.svg')`)});const s=P(()=>typeof e.icon=="object"?e.icon.svg:``);return(n,i)=>(a(),d("a",{ref_key:"el",ref:t,class:"VPSocialLink no-icon",href:n.link,"aria-label":n.ariaLabel??(typeof 
n.icon=="string"?n.icon:""),target:"_blank",rel:"noopener",innerHTML:s.value},null,8,Qs))}}),xs=$(Zs,[["__scopeId","data-v-d26d30cb"]]),eo={class:"VPSocialLinks"},to=b({__name:"VPSocialLinks",props:{links:{}},setup(o){return(e,t)=>(a(),d("div",eo,[(a(!0),d(M,null,E(e.links,({link:s,icon:n,ariaLabel:i})=>(a(),k(xs,{key:s,icon:n,link:s,ariaLabel:i},null,8,["icon","link","ariaLabel"]))),128))]))}}),ye=$(to,[["__scopeId","data-v-ee7a9424"]]),no={key:0,class:"group translations"},so={class:"trans-title"},oo={key:1,class:"group"},ao={class:"item appearance"},ro={class:"label"},io={class:"appearance-action"},lo={key:2,class:"group"},co={class:"item social-links"},uo=b({__name:"VPNavBarExtra",setup(o){const{site:e,theme:t}=L(),{localeLinks:s,currentLang:n}=Y({correspondingLink:!0}),i=P(()=>s.value.length&&n.value.label||e.value.appearance||t.value.socialLinks);return(l,p)=>i.value?(a(),k($e,{key:0,class:"VPNavBarExtra",label:"extra navigation"},{default:f(()=>[r(s).length&&r(n).label?(a(),d("div",no,[v("p",so,N(r(n).label),1),(a(!0),d(M,null,E(r(s),c=>(a(),k(ne,{key:c.link,item:c},null,8,["item"]))),128))])):_("",!0),r(e).appearance&&r(e).appearance!=="force-dark"&&r(e).appearance!=="force-auto"?(a(),d("div",oo,[v("div",ao,[v("p",ro,N(r(t).darkModeSwitchLabel||"Appearance"),1),v("div",io,[g(ke)])])])):_("",!0),r(t).socialLinks?(a(),d("div",lo,[v("div",co,[g(ye,{class:"social-links-list",links:r(t).socialLinks},null,8,["links"])])])):_("",!0)]),_:1})):_("",!0)}}),po=$(uo,[["__scopeId","data-v-925effce"]]),vo=["aria-expanded"],fo=b({__name:"VPNavBarHamburger",props:{active:{type:Boolean}},emits:["click"],setup(o){return(e,t)=>(a(),d("button",{type:"button",class:I(["VPNavBarHamburger",{active:e.active}]),"aria-label":"mobile 
navigation","aria-expanded":e.active,"aria-controls":"VPNavScreen",onClick:t[0]||(t[0]=s=>e.$emit("click"))},t[1]||(t[1]=[v("span",{class:"container"},[v("span",{class:"top"}),v("span",{class:"middle"}),v("span",{class:"bottom"})],-1)]),10,vo))}}),ho=$(fo,[["__scopeId","data-v-5dea55bf"]]),mo=["innerHTML"],_o=b({__name:"VPNavBarMenuLink",props:{item:{}},setup(o){const{page:e}=L();return(t,s)=>(a(),k(O,{class:I({VPNavBarMenuLink:!0,active:r(z)(r(e).relativePath,t.item.activeMatch||t.item.link,!!t.item.activeMatch)}),href:t.item.link,target:t.item.target,rel:t.item.rel,"no-icon":t.item.noIcon,tabindex:"0"},{default:f(()=>[v("span",{innerHTML:t.item.text},null,8,mo)]),_:1},8,["class","href","target","rel","no-icon"]))}}),bo=$(_o,[["__scopeId","data-v-956ec74c"]]),Fe=b({__name:"VPNavBarMenuGroup",props:{item:{}},setup(o){const e=o,{page:t}=L(),s=i=>"component"in i?!1:"link"in i?z(t.value.relativePath,i.link,!!e.item.activeMatch):i.items.some(s),n=P(()=>s(e.item));return(i,l)=>(a(),k($e,{class:I({VPNavBarMenuGroup:!0,active:r(z)(r(t).relativePath,i.item.activeMatch,!!i.item.activeMatch)||n.value}),button:i.item.text,items:i.item.items},null,8,["class","button","items"]))}}),ko={key:0,"aria-labelledby":"main-nav-aria-label",class:"VPNavBarMenu"},go=b({__name:"VPNavBarMenu",setup(o){const{theme:e}=L();return(t,s)=>r(e).nav?(a(),d("nav",ko,[s[0]||(s[0]=v("span",{id:"main-nav-aria-label",class:"visually-hidden"}," Main Navigation ",-1)),(a(!0),d(M,null,E(r(e).nav,n=>(a(),d(M,{key:JSON.stringify(n)},["link"in n?(a(),k(bo,{key:0,item:n},null,8,["item"])):"component"in n?(a(),k(H(n.component),j({key:1,ref_for:!0},n.props),null,16)):(a(),k(Fe,{key:2,item:n},null,8,["item"]))],64))),128))])):_("",!0)}}),$o=$(go,[["__scopeId","data-v-e6d46098"]]);function yo(o){const{localeIndex:e,theme:t}=L();function s(n){var C,A,w;const i=n.split("."),l=(C=t.value.search)==null?void 0:C.options,p=l&&typeof l=="object",c=p&&((w=(A=l.locales)==null?void 0:A[e.value])==null?void 
0:w.translations)||null,h=p&&l.translations||null;let y=c,m=h,S=o;const V=i.pop();for(const B of i){let U=null;const W=S==null?void 0:S[B];W&&(U=S=W);const se=m==null?void 0:m[B];se&&(U=m=se);const oe=y==null?void 0:y[B];oe&&(U=y=oe),W||(S=U),se||(m=U),oe||(y=U)}return(y==null?void 0:y[V])??(m==null?void 0:m[V])??(S==null?void 0:S[V])??""}return s}const Po=["aria-label"],So={class:"DocSearch-Button-Container"},Vo={class:"DocSearch-Button-Placeholder"},Se=b({__name:"VPNavBarSearchButton",setup(o){const t=yo({button:{buttonText:"Search",buttonAriaLabel:"Search"}});return(s,n)=>(a(),d("button",{type:"button",class:"DocSearch DocSearch-Button","aria-label":r(t)("button.buttonAriaLabel")},[v("span",So,[n[0]||(n[0]=v("span",{class:"vp-icon DocSearch-Search-Icon"},null,-1)),v("span",Vo,N(r(t)("button.buttonText")),1)]),n[1]||(n[1]=v("span",{class:"DocSearch-Button-Keys"},[v("kbd",{class:"DocSearch-Button-Key"}),v("kbd",{class:"DocSearch-Button-Key"},"K")],-1))],8,Po))}}),Lo={class:"VPNavBarSearch"},To={id:"local-search"},wo={key:1,id:"docsearch"},No=b({__name:"VPNavBarSearch",setup(o){const e=Qe(()=>Ze(()=>import("./VPLocalSearchBox.DM1AnBlx.js"),__vite__mapDeps([0,1]))),t=()=>null,{theme:s}=L(),n=T(!1),i=T(!1);F(()=>{});function l(){n.value||(n.value=!0,setTimeout(p,16))}function p(){const m=new Event("keydown");m.key="k",m.metaKey=!0,window.dispatchEvent(m),setTimeout(()=>{document.querySelector(".DocSearch-Modal")||p()},16)}function c(m){const S=m.target,V=S.tagName;return S.isContentEditable||V==="INPUT"||V==="SELECT"||V==="TEXTAREA"}const h=T(!1);ie("k",m=>{(m.ctrlKey||m.metaKey)&&(m.preventDefault(),h.value=!0)}),ie("/",m=>{c(m)||(m.preventDefault(),h.value=!0)});const y="local";return(m,S)=>{var V;return 
a(),d("div",Lo,[r(y)==="local"?(a(),d(M,{key:0},[h.value?(a(),k(r(e),{key:0,onClose:S[0]||(S[0]=C=>h.value=!1)})):_("",!0),v("div",To,[g(Se,{onClick:S[1]||(S[1]=C=>h.value=!0)})])],64)):r(y)==="algolia"?(a(),d(M,{key:1},[n.value?(a(),k(r(t),{key:0,algolia:((V=r(s).search)==null?void 0:V.options)??r(s).algolia,onVnodeBeforeMount:S[2]||(S[2]=C=>i.value=!0)},null,8,["algolia"])):_("",!0),i.value?_("",!0):(a(),d("div",wo,[g(Se,{onClick:l})]))],64)):_("",!0)])}}}),Io=b({__name:"VPNavBarSocialLinks",setup(o){const{theme:e}=L();return(t,s)=>r(e).socialLinks?(a(),k(ye,{key:0,class:"VPNavBarSocialLinks",links:r(e).socialLinks},null,8,["links"])):_("",!0)}}),Mo=$(Io,[["__scopeId","data-v-164c457f"]]),Co=["href","rel","target"],Ao=["innerHTML"],Eo={key:2},Bo=b({__name:"VPNavBarTitle",setup(o){const{site:e,theme:t}=L(),{hasSidebar:s}=R(),{currentLang:n}=Y(),i=P(()=>{var c;return typeof t.value.logoLink=="string"?t.value.logoLink:(c=t.value.logoLink)==null?void 0:c.link}),l=P(()=>{var c;return typeof t.value.logoLink=="string"||(c=t.value.logoLink)==null?void 0:c.rel}),p=P(()=>{var c;return typeof t.value.logoLink=="string"||(c=t.value.logoLink)==null?void 0:c.target});return(c,h)=>(a(),d("div",{class:I(["VPNavBarTitle",{"has-sidebar":r(s)}])},[v("a",{class:"title",href:i.value??r(_e)(r(n).link),rel:l.value,target:p.value},[u(c.$slots,"nav-bar-title-before",{},void 0,!0),r(t).logo?(a(),k(Q,{key:0,class:"logo",image:r(t).logo},null,8,["image"])):_("",!0),r(t).siteTitle?(a(),d("span",{key:1,innerHTML:r(t).siteTitle},null,8,Ao)):r(t).siteTitle===void 0?(a(),d("span",Eo,N(r(e).title),1)):_("",!0),u(c.$slots,"nav-bar-title-after",{},void 
0,!0)],8,Co)],2))}}),Ho=$(Bo,[["__scopeId","data-v-0f4f798b"]]),Oo={class:"items"},Do={class:"title"},Fo=b({__name:"VPNavBarTranslations",setup(o){const{theme:e}=L(),{localeLinks:t,currentLang:s}=Y({correspondingLink:!0});return(n,i)=>r(t).length&&r(s).label?(a(),k($e,{key:0,class:"VPNavBarTranslations",icon:"vpi-languages",label:r(e).langMenuLabel||"Change language"},{default:f(()=>[v("div",Oo,[v("p",Do,N(r(s).label),1),(a(!0),d(M,null,E(r(t),l=>(a(),k(ne,{key:l.link,item:l},null,8,["item"]))),128))])]),_:1},8,["label"])):_("",!0)}}),Ro=$(Fo,[["__scopeId","data-v-c80d9ad0"]]),Uo={class:"wrapper"},jo={class:"container"},Go={class:"title"},zo={class:"content"},Ko={class:"content-body"},Wo=b({__name:"VPNavBar",props:{isScreenOpen:{type:Boolean}},emits:["toggle-screen"],setup(o){const e=o,{y:t}=Me(),{hasSidebar:s}=R(),{frontmatter:n}=L(),i=T({});return fe(()=>{i.value={"has-sidebar":s.value,home:n.value.layout==="home",top:t.value===0,"screen-open":e.isScreenOpen}}),(l,p)=>(a(),d("div",{class:I(["VPNavBar",i.value])},[v("div",Uo,[v("div",jo,[v("div",Go,[g(Ho,null,{"nav-bar-title-before":f(()=>[u(l.$slots,"nav-bar-title-before",{},void 0,!0)]),"nav-bar-title-after":f(()=>[u(l.$slots,"nav-bar-title-after",{},void 0,!0)]),_:3})]),v("div",zo,[v("div",Ko,[u(l.$slots,"nav-bar-content-before",{},void 0,!0),g(No,{class:"search"}),g($o,{class:"menu"}),g(Ro,{class:"translations"}),g(Is,{class:"appearance"}),g(Mo,{class:"social-links"}),g(po,{class:"extra"}),u(l.$slots,"nav-bar-content-after",{},void 
0,!0),g(ho,{class:"hamburger",active:l.isScreenOpen,onClick:p[0]||(p[0]=c=>l.$emit("toggle-screen"))},null,8,["active"])])])])]),p[1]||(p[1]=v("div",{class:"divider"},[v("div",{class:"divider-line"})],-1))],2))}}),qo=$(Wo,[["__scopeId","data-v-822684d1"]]),Jo={key:0,class:"VPNavScreenAppearance"},Yo={class:"text"},Xo=b({__name:"VPNavScreenAppearance",setup(o){const{site:e,theme:t}=L();return(s,n)=>r(e).appearance&&r(e).appearance!=="force-dark"&&r(e).appearance!=="force-auto"?(a(),d("div",Jo,[v("p",Yo,N(r(t).darkModeSwitchLabel||"Appearance"),1),g(ke)])):_("",!0)}}),Qo=$(Xo,[["__scopeId","data-v-ffb44008"]]),Zo=["innerHTML"],xo=b({__name:"VPNavScreenMenuLink",props:{item:{}},setup(o){const e=q("close-screen");return(t,s)=>(a(),k(O,{class:"VPNavScreenMenuLink",href:t.item.link,target:t.item.target,rel:t.item.rel,"no-icon":t.item.noIcon,onClick:r(e)},{default:f(()=>[v("span",{innerHTML:t.item.text},null,8,Zo)]),_:1},8,["href","target","rel","no-icon","onClick"]))}}),ea=$(xo,[["__scopeId","data-v-735512b8"]]),ta=["innerHTML"],na=b({__name:"VPNavScreenMenuGroupLink",props:{item:{}},setup(o){const 
e=q("close-screen");return(t,s)=>(a(),k(O,{class:"VPNavScreenMenuGroupLink",href:t.item.link,target:t.item.target,rel:t.item.rel,"no-icon":t.item.noIcon,onClick:r(e)},{default:f(()=>[v("span",{innerHTML:t.item.text},null,8,ta)]),_:1},8,["href","target","rel","no-icon","onClick"]))}}),Re=$(na,[["__scopeId","data-v-372ae7c0"]]),sa={class:"VPNavScreenMenuGroupSection"},oa={key:0,class:"title"},aa=b({__name:"VPNavScreenMenuGroupSection",props:{text:{},items:{}},setup(o){return(e,t)=>(a(),d("div",sa,[e.text?(a(),d("p",oa,N(e.text),1)):_("",!0),(a(!0),d(M,null,E(e.items,s=>(a(),k(Re,{key:s.text,item:s},null,8,["item"]))),128))]))}}),ra=$(aa,[["__scopeId","data-v-4b8941ac"]]),ia=["aria-controls","aria-expanded"],la=["innerHTML"],ca=["id"],ua={key:0,class:"item"},da={key:1,class:"item"},pa={key:2,class:"group"},va=b({__name:"VPNavScreenMenuGroup",props:{text:{},items:{}},setup(o){const e=o,t=T(!1),s=P(()=>`NavScreenGroup-${e.text.replace(" ","-").toLowerCase()}`);function n(){t.value=!t.value}return(i,l)=>(a(),d("div",{class:I(["VPNavScreenMenuGroup",{open:t.value}])},[v("button",{class:"button","aria-controls":s.value,"aria-expanded":t.value,onClick:n},[v("span",{class:"button-text",innerHTML:i.text},null,8,la),l[0]||(l[0]=v("span",{class:"vpi-plus button-icon"},null,-1))],8,ia),v("div",{id:s.value,class:"items"},[(a(!0),d(M,null,E(i.items,p=>(a(),d(M,{key:JSON.stringify(p)},["link"in p?(a(),d("div",ua,[g(Re,{item:p},null,8,["item"])])):"component"in p?(a(),d("div",da,[(a(),k(H(p.component),j({ref_for:!0},p.props,{"screen-menu":""}),null,16))])):(a(),d("div",pa,[g(ra,{text:p.text,items:p.items},null,8,["text","items"])]))],64))),128))],8,ca)],2))}}),Ue=$(va,[["__scopeId","data-v-875057a5"]]),fa={key:0,class:"VPNavScreenMenu"},ha=b({__name:"VPNavScreenMenu",setup(o){const{theme:e}=L();return(t,s)=>r(e).nav?(a(),d("nav",fa,[(a(!0),d(M,null,E(r(e).nav,n=>(a(),d(M,{key:JSON.stringify(n)},["link"in n?(a(),k(ea,{key:0,item:n},null,8,["item"])):"component"in 
n?(a(),k(H(n.component),j({key:1,ref_for:!0},n.props,{"screen-menu":""}),null,16)):(a(),k(Ue,{key:2,text:n.text||"",items:n.items},null,8,["text","items"]))],64))),128))])):_("",!0)}}),ma=b({__name:"VPNavScreenSocialLinks",setup(o){const{theme:e}=L();return(t,s)=>r(e).socialLinks?(a(),k(ye,{key:0,class:"VPNavScreenSocialLinks",links:r(e).socialLinks},null,8,["links"])):_("",!0)}}),_a={class:"list"},ba=b({__name:"VPNavScreenTranslations",setup(o){const{localeLinks:e,currentLang:t}=Y({correspondingLink:!0}),s=T(!1);function n(){s.value=!s.value}return(i,l)=>r(e).length&&r(t).label?(a(),d("div",{key:0,class:I(["VPNavScreenTranslations",{open:s.value}])},[v("button",{class:"title",onClick:n},[l[0]||(l[0]=v("span",{class:"vpi-languages icon lang"},null,-1)),G(" "+N(r(t).label)+" ",1),l[1]||(l[1]=v("span",{class:"vpi-chevron-down icon chevron"},null,-1))]),v("ul",_a,[(a(!0),d(M,null,E(r(e),p=>(a(),d("li",{key:p.link,class:"item"},[g(O,{class:"link",href:p.link},{default:f(()=>[G(N(p.text),1)]),_:2},1032,["href"])]))),128))])],2)):_("",!0)}}),ka=$(ba,[["__scopeId","data-v-362991c2"]]),ga={class:"container"},$a=b({__name:"VPNavScreen",props:{open:{type:Boolean}},setup(o){const e=T(null),t=Ce(te?document.body:null);return(s,n)=>(a(),k(de,{name:"fade",onEnter:n[0]||(n[0]=i=>t.value=!0),onAfterLeave:n[1]||(n[1]=i=>t.value=!1)},{default:f(()=>[s.open?(a(),d("div",{key:0,class:"VPNavScreen",ref_key:"screen",ref:e,id:"VPNavScreen"},[v("div",ga,[u(s.$slots,"nav-screen-content-before",{},void 0,!0),g(ha,{class:"menu"}),g(ka,{class:"translations"}),g(Qo,{class:"appearance"}),g(ma,{class:"social-links"}),u(s.$slots,"nav-screen-content-after",{},void 0,!0)])],512)):_("",!0)]),_:3}))}}),ya=$($a,[["__scopeId","data-v-833aabba"]]),Pa={key:0,class:"VPNav"},Sa=b({__name:"VPNav",setup(o){const{isScreenOpen:e,closeScreen:t,toggleScreen:s}=gs(),{frontmatter:n}=L(),i=P(()=>n.value.navbar!==!1);return 
me("close-screen",t),Z(()=>{te&&document.documentElement.classList.toggle("hide-nav",!i.value)}),(l,p)=>i.value?(a(),d("header",Pa,[g(qo,{"is-screen-open":r(e),onToggleScreen:r(s)},{"nav-bar-title-before":f(()=>[u(l.$slots,"nav-bar-title-before",{},void 0,!0)]),"nav-bar-title-after":f(()=>[u(l.$slots,"nav-bar-title-after",{},void 0,!0)]),"nav-bar-content-before":f(()=>[u(l.$slots,"nav-bar-content-before",{},void 0,!0)]),"nav-bar-content-after":f(()=>[u(l.$slots,"nav-bar-content-after",{},void 0,!0)]),_:3},8,["is-screen-open","onToggleScreen"]),g(ya,{open:r(e)},{"nav-screen-content-before":f(()=>[u(l.$slots,"nav-screen-content-before",{},void 0,!0)]),"nav-screen-content-after":f(()=>[u(l.$slots,"nav-screen-content-after",{},void 0,!0)]),_:3},8,["open"])])):_("",!0)}}),Va=$(Sa,[["__scopeId","data-v-f1e365da"]]),La=["role","tabindex"],Ta={key:1,class:"items"},wa=b({__name:"VPSidebarItem",props:{item:{},depth:{}},setup(o){const e=o,{collapsed:t,collapsible:s,isLink:n,isActiveLink:i,hasActiveLink:l,hasChildren:p,toggle:c}=Pt(P(()=>e.item)),h=P(()=>p.value?"section":"div"),y=P(()=>n.value?"a":"div"),m=P(()=>p.value?e.depth+2===7?"p":`h${e.depth+2}`:"p"),S=P(()=>n.value?void 0:"button"),V=P(()=>[[`level-${e.depth}`],{collapsible:s.value},{collapsed:t.value},{"is-link":n.value},{"is-active":i.value},{"has-active":l.value}]);function C(w){"key"in w&&w.key!=="Enter"||!e.item.link&&c()}function A(){e.item.link&&c()}return(w,B)=>{const U=K("VPSidebarItem",!0);return 
a(),k(H(h.value),{class:I(["VPSidebarItem",V.value])},{default:f(()=>[w.item.text?(a(),d("div",j({key:0,class:"item",role:S.value},et(w.item.items?{click:C,keydown:C}:{},!0),{tabindex:w.item.items&&0}),[B[1]||(B[1]=v("div",{class:"indicator"},null,-1)),w.item.link?(a(),k(O,{key:0,tag:y.value,class:"link",href:w.item.link,rel:w.item.rel,target:w.item.target},{default:f(()=>[(a(),k(H(m.value),{class:"text",innerHTML:w.item.text},null,8,["innerHTML"]))]),_:1},8,["tag","href","rel","target"])):(a(),k(H(m.value),{key:1,class:"text",innerHTML:w.item.text},null,8,["innerHTML"])),w.item.collapsed!=null&&w.item.items&&w.item.items.length?(a(),d("div",{key:2,class:"caret",role:"button","aria-label":"toggle section",onClick:A,onKeydown:xe(A,["enter"]),tabindex:"0"},B[0]||(B[0]=[v("span",{class:"vpi-chevron-right caret-icon"},null,-1)]),32)):_("",!0)],16,La)):_("",!0),w.item.items&&w.item.items.length?(a(),d("div",Ta,[w.depth<5?(a(!0),d(M,{key:0},E(w.item.items,W=>(a(),k(U,{key:W.text,item:W,depth:w.depth+1},null,8,["item","depth"]))),128)):_("",!0)])):_("",!0)]),_:1},8,["class"])}}}),Na=$(wa,[["__scopeId","data-v-a4b0d9bf"]]),Ia=b({__name:"VPSidebarGroup",props:{items:{}},setup(o){const e=T(!0);let t=null;return F(()=>{t=setTimeout(()=>{t=null,e.value=!1},300)}),tt(()=>{t!=null&&(clearTimeout(t),t=null)}),(s,n)=>(a(!0),d(M,null,E(s.items,i=>(a(),d("div",{key:i.text,class:I(["group",{"no-transition":e.value}])},[g(Na,{item:i,depth:0},null,8,["item"])],2))),128))}}),Ma=$(Ia,[["__scopeId","data-v-9e426adc"]]),Ca={class:"nav",id:"VPSidebarNav","aria-labelledby":"sidebar-aria-label",tabindex:"-1"},Aa=b({__name:"VPSidebar",props:{open:{type:Boolean}},setup(o){const{sidebarGroups:e,hasSidebar:t}=R(),s=o,n=T(null),i=Ce(te?document.body:null);D([s,n],()=>{var p;s.open?(i.value=!0,(p=n.value)==null||p.focus()):i.value=!1},{immediate:!0,flush:"post"});const l=T(0);return 
D(e,()=>{l.value+=1},{deep:!0}),(p,c)=>r(t)?(a(),d("aside",{key:0,class:I(["VPSidebar",{open:p.open}]),ref_key:"navEl",ref:n,onClick:c[0]||(c[0]=nt(()=>{},["stop"]))},[c[2]||(c[2]=v("div",{class:"curtain"},null,-1)),v("nav",Ca,[c[1]||(c[1]=v("span",{class:"visually-hidden",id:"sidebar-aria-label"}," Sidebar Navigation ",-1)),u(p.$slots,"sidebar-nav-before",{},void 0,!0),(a(),k(Ma,{items:r(e),key:l.value},null,8,["items"])),u(p.$slots,"sidebar-nav-after",{},void 0,!0)])],2)):_("",!0)}}),Ea=$(Aa,[["__scopeId","data-v-18756405"]]),Ba=b({__name:"VPSkipLink",setup(o){const{theme:e}=L(),t=ee(),s=T();D(()=>t.path,()=>s.value.focus());function n({target:i}){const l=document.getElementById(decodeURIComponent(i.hash).slice(1));if(l){const p=()=>{l.removeAttribute("tabindex"),l.removeEventListener("blur",p)};l.setAttribute("tabindex","-1"),l.addEventListener("blur",p),l.focus(),window.scrollTo(0,0)}}return(i,l)=>(a(),d(M,null,[v("span",{ref_key:"backToTop",ref:s,tabindex:"-1"},null,512),v("a",{href:"#VPContent",class:"VPSkipLink visually-hidden",onClick:n},N(r(e).skipToContentLabel||"Skip to content"),1)],64))}}),Ha=$(Ba,[["__scopeId","data-v-492508fc"]]),Oa=b({__name:"Layout",setup(o){const{isOpen:e,open:t,close:s}=R(),n=ee();D(()=>n.path,s),yt(e,s);const{frontmatter:i}=L(),l=Ae(),p=P(()=>!!l["home-hero-image"]);return me("hero-image-slot-exists",p),(c,h)=>{const y=K("Content");return r(i).layout!==!1?(a(),d("div",{key:0,class:I(["Layout",r(i).pageClass])},[u(c.$slots,"layout-top",{},void 0,!0),g(Ha),g(ct,{class:"backdrop",show:r(e),onClick:r(s)},null,8,["show","onClick"]),g(Va,null,{"nav-bar-title-before":f(()=>[u(c.$slots,"nav-bar-title-before",{},void 0,!0)]),"nav-bar-title-after":f(()=>[u(c.$slots,"nav-bar-title-after",{},void 0,!0)]),"nav-bar-content-before":f(()=>[u(c.$slots,"nav-bar-content-before",{},void 0,!0)]),"nav-bar-content-after":f(()=>[u(c.$slots,"nav-bar-content-after",{},void 
0,!0)]),"nav-screen-content-before":f(()=>[u(c.$slots,"nav-screen-content-before",{},void 0,!0)]),"nav-screen-content-after":f(()=>[u(c.$slots,"nav-screen-content-after",{},void 0,!0)]),_:3}),g(ks,{open:r(e),onOpenMenu:r(t)},null,8,["open","onOpenMenu"]),g(Ea,{open:r(e)},{"sidebar-nav-before":f(()=>[u(c.$slots,"sidebar-nav-before",{},void 0,!0)]),"sidebar-nav-after":f(()=>[u(c.$slots,"sidebar-nav-after",{},void 0,!0)]),_:3},8,["open"]),g(ss,null,{"page-top":f(()=>[u(c.$slots,"page-top",{},void 0,!0)]),"page-bottom":f(()=>[u(c.$slots,"page-bottom",{},void 0,!0)]),"not-found":f(()=>[u(c.$slots,"not-found",{},void 0,!0)]),"home-hero-before":f(()=>[u(c.$slots,"home-hero-before",{},void 0,!0)]),"home-hero-info-before":f(()=>[u(c.$slots,"home-hero-info-before",{},void 0,!0)]),"home-hero-info":f(()=>[u(c.$slots,"home-hero-info",{},void 0,!0)]),"home-hero-info-after":f(()=>[u(c.$slots,"home-hero-info-after",{},void 0,!0)]),"home-hero-actions-after":f(()=>[u(c.$slots,"home-hero-actions-after",{},void 0,!0)]),"home-hero-image":f(()=>[u(c.$slots,"home-hero-image",{},void 0,!0)]),"home-hero-after":f(()=>[u(c.$slots,"home-hero-after",{},void 0,!0)]),"home-features-before":f(()=>[u(c.$slots,"home-features-before",{},void 0,!0)]),"home-features-after":f(()=>[u(c.$slots,"home-features-after",{},void 0,!0)]),"doc-footer-before":f(()=>[u(c.$slots,"doc-footer-before",{},void 0,!0)]),"doc-before":f(()=>[u(c.$slots,"doc-before",{},void 0,!0)]),"doc-after":f(()=>[u(c.$slots,"doc-after",{},void 0,!0)]),"doc-top":f(()=>[u(c.$slots,"doc-top",{},void 0,!0)]),"doc-bottom":f(()=>[u(c.$slots,"doc-bottom",{},void 0,!0)]),"aside-top":f(()=>[u(c.$slots,"aside-top",{},void 0,!0)]),"aside-bottom":f(()=>[u(c.$slots,"aside-bottom",{},void 0,!0)]),"aside-outline-before":f(()=>[u(c.$slots,"aside-outline-before",{},void 0,!0)]),"aside-outline-after":f(()=>[u(c.$slots,"aside-outline-after",{},void 0,!0)]),"aside-ads-before":f(()=>[u(c.$slots,"aside-ads-before",{},void 
0,!0)]),"aside-ads-after":f(()=>[u(c.$slots,"aside-ads-after",{},void 0,!0)]),_:3}),g(ls),u(c.$slots,"layout-bottom",{},void 0,!0)],2)):(a(),k(y,{key:1}))}}}),Da=$(Oa,[["__scopeId","data-v-a9a9e638"]]),Ve={Layout:Da,enhanceApp:({app:o})=>{o.component("Badge",rt)}},Fa={};function Ra(o,e){return e[0]||(e[0]=st('

    Trusted by

    Scientific Computing

    SciML.ai

    Machine Learning

    ',3))}const Ua=$(Fa,[["render",Ra]]),ja=b({__name:"VersionPicker",props:{screenMenu:{type:Boolean}},setup(o){const e=T([]),t=T("Versions"),s=T(!1);Te();const n=()=>typeof window<"u"&&(window.location.hostname==="localhost"||window.location.hostname==="127.0.0.1"),i=()=>{if(typeof window>"u")return"";const{origin:c,pathname:h}=window.location;if(c.includes("github.io")){const y=h.split("/").filter(Boolean),m=y.length>0?`/${y[0]}/`:"/";return`${c}${m}`}else return c},l=()=>new Promise(c=>{if(n()){c(!1);return}const h=setInterval(()=>{window.DOC_VERSIONS&&window.DOCUMENTER_CURRENT_VERSION&&(clearInterval(h),c(!0))},100);setTimeout(()=>{clearInterval(h),c(!1)},5e3)});return F(async()=>{if(!(typeof window>"u")){try{if(n()){const c=["dev"];e.value=c.map(h=>({text:h,link:"/"})),t.value="dev"}else{const c=await l(),h=P(()=>i());if(c&&window.DOC_VERSIONS&&window.DOCUMENTER_CURRENT_VERSION)e.value=window.DOC_VERSIONS.map(y=>({text:y,link:`${h.value}/${y}/`})),t.value=window.DOCUMENTER_CURRENT_VERSION;else{const y=["dev"];e.value=y.map(m=>({text:m,link:`${h.value}/${m}/`})),t.value="dev"}}}catch(c){console.warn("Error loading versions:",c);const h=["dev"],y=P(()=>i());e.value=h.map(m=>({text:m,link:`${y.value}/${m}/`})),t.value="dev"}s.value=!0}}),(c,h)=>s.value?(a(),d(M,{key:0},[!c.screenMenu&&e.value.length>0?(a(),k(Fe,{key:0,item:{text:t.value,items:e.value},class:"VPVersionPicker"},null,8,["item"])):c.screenMenu&&e.value.length>0?(a(),k(Ue,{key:1,text:t.value,items:e.value,class:"VPVersionPicker"},null,8,["text","items"])):_("",!0)],64)):_("",!0)}}),Ga=$(ja,[["__scopeId","data-v-d483b3a6"]]),za=o=>{if(typeof document>"u")return{stabilizeScrollPosition:n=>async(...i)=>n(...i)};const e=document.documentElement;return{stabilizeScrollPosition:s=>async(...n)=>{const i=s(...n),l=o.value;if(!l)return i;const p=l.offsetTop-e.scrollTop;return await he(),e.scrollTop=l.offsetTop-p,i}}},je="vitepress:tabSharedState",J=typeof 
localStorage<"u"?localStorage:null,Ge="vitepress:tabsSharedState",Ka=()=>{const o=J==null?void 0:J.getItem(Ge);if(o)try{return JSON.parse(o)}catch{}return{}},Wa=o=>{J&&J.setItem(Ge,JSON.stringify(o))},qa=o=>{const e=ot({});D(()=>e.content,(t,s)=>{t&&s&&Wa(t)},{deep:!0}),o.provide(je,e)},Ja=(o,e)=>{const t=q(je);if(!t)throw new Error("[vitepress-plugin-tabs] TabsSharedState should be injected");F(()=>{t.content||(t.content=Ka())});const s=T(),n=P({get(){var c;const l=e.value,p=o.value;if(l){const h=(c=t.content)==null?void 0:c[l];if(h&&p.includes(h))return h}else{const h=s.value;if(h)return h}return p[0]},set(l){const p=e.value;p?t.content&&(t.content[p]=l):s.value=l}});return{selected:n,select:l=>{n.value=l}}};let Le=0;const Ya=()=>(Le++,""+Le);function Xa(){const o=Ae();return P(()=>{var s;const t=(s=o.default)==null?void 0:s.call(o);return t?t.filter(n=>typeof n.type=="object"&&"__name"in n.type&&n.type.__name==="PluginTabsTab"&&n.props).map(n=>{var i;return(i=n.props)==null?void 0:i.label}):[]})}const ze="vitepress:tabSingleState",Qa=o=>{me(ze,o)},Za=()=>{const o=q(ze);if(!o)throw new Error("[vitepress-plugin-tabs] TabsSingleState should be injected");return o},xa={class:"plugin-tabs"},er=["id","aria-selected","aria-controls","tabindex","onClick"],tr=b({__name:"PluginTabs",props:{sharedStateKey:{}},setup(o){const e=o,t=Xa(),{selected:s,select:n}=Ja(t,at(e,"sharedStateKey")),i=T(),{stabilizeScrollPosition:l}=za(i),p=l(n),c=T([]),h=m=>{var C;const S=t.value.indexOf(s.value);let 
V;m.key==="ArrowLeft"?V=S>=1?S-1:t.value.length-1:m.key==="ArrowRight"&&(V=S(a(),d("div",xa,[v("div",{ref_key:"tablist",ref:i,class:"plugin-tabs--tab-list",role:"tablist",onKeydown:h},[(a(!0),d(M,null,E(r(t),V=>(a(),d("button",{id:`tab-${V}-${r(y)}`,ref_for:!0,ref_key:"buttonRefs",ref:c,key:V,role:"tab",class:"plugin-tabs--tab","aria-selected":V===r(s),"aria-controls":`panel-${V}-${r(y)}`,tabindex:V===r(s)?0:-1,onClick:()=>r(p)(V)},N(V),9,er))),128))],544),u(m.$slots,"default")]))}}),nr=["id","aria-labelledby"],sr=b({__name:"PluginTabsTab",props:{label:{}},setup(o){const{uid:e,selected:t}=Za();return(s,n)=>r(t)===s.label?(a(),d("div",{key:0,id:`panel-${s.label}-${r(e)}`,class:"plugin-tabs--content",role:"tabpanel",tabindex:"0","aria-labelledby":`tab-${s.label}-${r(e)}`},[u(s.$slots,"default",{},void 0,!0)],8,nr)):_("",!0)}}),or=$(sr,[["__scopeId","data-v-9b0d03d2"]]),ar=o=>{qa(o),o.component("PluginTabs",tr),o.component("PluginTabsTab",or)},ir={extends:Ve,Layout(){return Pe(Ve.Layout,null,{"aside-ads-before":()=>Pe(Ua)})},enhanceApp({app:o}){ar(o),o.component("VersionPicker",Ga)}};export{ir as R,yo as c,L as u}; diff --git a/dev/assets/index.md.CnfhmVsi.lean.js b/dev/assets/index.md.CnfhmVsi.lean.js deleted file mode 100644 index a83bab0cce..0000000000 --- a/dev/assets/index.md.CnfhmVsi.lean.js +++ /dev/null @@ -1,27 +0,0 @@ -import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.I-x9Gl6h.js";const o=JSON.parse('{"title":"","description":"","frontmatter":{"layout":"home","hero":{"name":"LuxDL Docs","text":"Elegant & Performant Scientific Machine Learning in JuliaLang","tagline":"A Pure Julia Deep Learning Framework designed for Scientific Machine Learning","actions":[{"theme":"brand","text":"Tutorials","link":"/tutorials"},{"theme":"alt","text":"API Reference 📚","link":"/api/Lux/layers"},{"theme":"alt","text":"View on GitHub","link":"https://github.com/LuxDL/Lux.jl"}],"image":{"src":"/lux-logo.svg","alt":"Lux.jl"}},"features":[{"icon":"🚀","title":"Fast & 
Extendable","details":"Lux.jl is written in Julia itself, making it extremely extendable. CUDA and AMDGPU are supported first-class, with experimental support for Metal and Intel GPUs.","link":"/introduction"},{"icon":"🐎","title":"Powered by the XLA Compiler","details":"Lux.jl seamlessly integrates with Reactant.jl to compile models to run on CPU, GPU, TPU, and more.","link":"/manual/compiling_lux_models"},{"icon":"🧑‍🔬","title":"SciML ❤️ Lux","details":"Lux is the default choice for all SciML packages, including DiffEqFlux.jl, NeuralPDE.jl, and more.","link":"https://sciml.ai/"},{"icon":"🧩","title":"Uniquely Composable","details":"Lux.jl natively supports Arbitrary Parameter Types, making it uniquely composable with other Julia packages (and even Non-Julia packages).","link":"/api/Lux/contrib#Training"}]},"headers":[],"relativePath":"index.md","filePath":"index.md","lastUpdated":null}'),l={name:"index.md"};function e(p,s,h,k,d,r){return n(),a("div",null,s[0]||(s[0]=[t(`

    How to Install Lux.jl?

    It's easy to install Lux.jl. Since Lux.jl is registered in the Julia General registry, you can simply run the following command in the Julia REPL:

    julia
    julia> using Pkg
    -julia> Pkg.add("Lux")

    If you want to use the latest unreleased version of Lux.jl, you can run the following command (in most cases the released version will be the same as the version on GitHub):

    julia
    julia> using Pkg
    -julia> Pkg.add(url="https://github.com/LuxDL/Lux.jl")

    Want GPU Support?

    Install the following package(s):

    julia
    using Pkg
    -Pkg.add("LuxCUDA")
    -# or
    -Pkg.add(["CUDA", "cuDNN"])
    julia
    using Pkg
    -Pkg.add("AMDGPU")
    julia
    using Pkg
    -Pkg.add("Metal")
    julia
    using Pkg
    -Pkg.add("oneAPI")

    Run the following to access a device:

    julia
    using Lux, LuxCUDA
    -
    -const dev = gpu_device()
    julia
    using Lux, AMDGPU
    -
    -const dev = gpu_device()
    julia
    using Lux, Metal
    -
    -const dev = gpu_device()
    julia
    using Lux, oneAPI
    -
    -const dev = gpu_device()

    Want Reactant (XLA) Support?

    Install the following package:

    julia
    using Pkg;
    -Pkg.add("Reactant")

    Run the following to access a device (Reactant automatically selects the best backend by default):

    julia
    using Reactant, Lux
    -Reactant.set_default_backend("cpu")
    -
    -const dev = reactant_device()
    julia
    using Reactant, Lux
    -Reactant.set_default_backend("gpu")
    -
    -const dev = reactant_device()
    julia
    using Reactant, Lux
    -Reactant.set_default_backend("tpu")
    -
    -const dev = reactant_device()
    `,15)]))}const c=i(l,[["render",e]]);export{o as __pageData,c as default}; diff --git a/dev/assets/index.md.CnfhmVsi.js b/dev/assets/index.md.d5k3UIjW.js similarity index 91% rename from dev/assets/index.md.CnfhmVsi.js rename to dev/assets/index.md.d5k3UIjW.js index a83bab0cce..f7775b5013 100644 --- a/dev/assets/index.md.CnfhmVsi.js +++ b/dev/assets/index.md.d5k3UIjW.js @@ -1,12 +1,12 @@ -import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.I-x9Gl6h.js";const o=JSON.parse('{"title":"","description":"","frontmatter":{"layout":"home","hero":{"name":"LuxDL Docs","text":"Elegant & Performant Scientific Machine Learning in JuliaLang","tagline":"A Pure Julia Deep Learning Framework designed for Scientific Machine Learning","actions":[{"theme":"brand","text":"Tutorials","link":"/tutorials"},{"theme":"alt","text":"API Reference 📚","link":"/api/Lux/layers"},{"theme":"alt","text":"View on GitHub","link":"https://github.com/LuxDL/Lux.jl"}],"image":{"src":"/lux-logo.svg","alt":"Lux.jl"}},"features":[{"icon":"🚀","title":"Fast & Extendable","details":"Lux.jl is written in Julia itself, making it extremely extendable. 
CUDA and AMDGPU are supported first-class, with experimental support for Metal and Intel GPUs.","link":"/introduction"},{"icon":"🐎","title":"Powered by the XLA Compiler","details":"Lux.jl seamlessly integrates with Reactant.jl to compile models to run on CPU, GPU, TPU, and more.","link":"/manual/compiling_lux_models"},{"icon":"🧑‍🔬","title":"SciML ❤️ Lux","details":"Lux is the default choice for all SciML packages, including DiffEqFlux.jl, NeuralPDE.jl, and more.","link":"https://sciml.ai/"},{"icon":"🧩","title":"Uniquely Composable","details":"Lux.jl natively supports Arbitrary Parameter Types, making it uniquely composable with other Julia packages (and even Non-Julia packages).","link":"/api/Lux/contrib#Training"}]},"headers":[],"relativePath":"index.md","filePath":"index.md","lastUpdated":null}'),l={name:"index.md"};function e(p,s,h,k,d,r){return n(),a("div",null,s[0]||(s[0]=[t(`

    How to Install Lux.jl?

    It's easy to install Lux.jl. Since Lux.jl is registered in the Julia General registry, you can simply run the following command in the Julia REPL:

    julia
    julia> using Pkg
    +import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.BetCMmtc.js";const g=JSON.parse('{"title":"","description":"","frontmatter":{"layout":"home","hero":{"name":"LuxDL Docs","text":"Elegant & Performant Scientific Machine Learning in JuliaLang","tagline":"A Pure Julia Deep Learning Framework designed for Scientific Machine Learning","actions":[{"theme":"brand","text":"Tutorials","link":"/tutorials"},{"theme":"alt","text":"API Reference 📚","link":"/api/Lux/layers"},{"theme":"alt","text":"View on GitHub","link":"https://github.com/LuxDL/Lux.jl"}],"image":{"src":"/lux-logo.svg","alt":"Lux.jl"}},"features":[{"icon":"🚀","title":"Fast & Extendable","details":"Lux.jl is written in Julia itself, making it extremely extendable. CUDA and AMDGPU are supported first-class, with experimental support for Metal and Intel GPUs.","link":"/introduction"},{"icon":"🐎","title":"Powered by the XLA Compiler","details":"Lux.jl seamlessly integrates with Reactant.jl to compile models to run on CPU, GPU, TPU, and more.","link":"/manual/compiling_lux_models"},{"icon":"🧑‍🔬","title":"SciML ❤️ Lux","details":"Lux is the default choice for all SciML packages, including DiffEqFlux.jl, NeuralPDE.jl, and more.","link":"https://sciml.ai/"},{"icon":"🧩","title":"Uniquely Composable","details":"Lux.jl natively supports Arbitrary Parameter Types, making it uniquely composable with other Julia packages (and even Non-Julia packages).","link":"/api/Lux/contrib#Training"}]},"headers":[],"relativePath":"index.md","filePath":"index.md","lastUpdated":null}'),l={name:"index.md"};function e(p,s,h,k,d,r){return n(),a("div",null,s[0]||(s[0]=[t(`

    How to Install Lux.jl?

    It's easy to install Lux.jl. Since Lux.jl is registered in the Julia General registry, you can simply run the following command in the Julia REPL:

    julia
    julia> using Pkg
     julia> Pkg.add("Lux")

    If you want to use the latest unreleased version of Lux.jl, you can run the following command (in most cases the released version will be the same as the version on GitHub):

    julia
    julia> using Pkg
    -julia> Pkg.add(url="https://github.com/LuxDL/Lux.jl")

    Want GPU Support?

    Install the following package(s):

    julia
    using Pkg
    +julia> Pkg.add(url="https://github.com/LuxDL/Lux.jl")

    Want GPU Support?

    Install the following package(s):

    julia
    using Pkg
     Pkg.add("LuxCUDA")
     # or
     Pkg.add(["CUDA", "cuDNN"])
    julia
    using Pkg
     Pkg.add("AMDGPU")
    julia
    using Pkg
     Pkg.add("Metal")
    julia
    using Pkg
    -Pkg.add("oneAPI")

    Run the following to access a device:

    julia
    using Lux, LuxCUDA
    +Pkg.add("oneAPI")

    Run the following to access a device:

    julia
    using Lux, LuxCUDA
     
     const dev = gpu_device()
    julia
    using Lux, AMDGPU
     
    @@ -15,7 +15,7 @@ import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.I-x9Gl6h.js";const o
     const dev = gpu_device()
    julia
    using Lux, oneAPI
     
     const dev = gpu_device()

    Want Reactant (XLA) Support?

    Install the following package:

    julia
    using Pkg;
    -Pkg.add("Reactant")

    Run the following to access a device (Reactant automatically selects the best backend by default):

    julia
    using Reactant, Lux
    +Pkg.add("Reactant")

    Run the following to access a device (Reactant automatically selects the best backend by default):

    julia
    using Reactant, Lux
     Reactant.set_default_backend("cpu")
     
     const dev = reactant_device()
    julia
    using Reactant, Lux
    @@ -24,4 +24,4 @@ import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.I-x9Gl6h.js";const o
     const dev = reactant_device()
    julia
    using Reactant, Lux
     Reactant.set_default_backend("tpu")
     
    -const dev = reactant_device()
    `,15)]))}const c=i(l,[["render",e]]);export{o as __pageData,c as default}; +const dev = reactant_device()
    `,15)]))}const E=i(l,[["render",e]]);export{g as __pageData,E as default}; diff --git a/dev/assets/index.md.d5k3UIjW.lean.js b/dev/assets/index.md.d5k3UIjW.lean.js new file mode 100644 index 0000000000..11d6338972 --- /dev/null +++ b/dev/assets/index.md.d5k3UIjW.lean.js @@ -0,0 +1 @@ +import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.BetCMmtc.js";const g=JSON.parse('{"title":"","description":"","frontmatter":{"layout":"home","hero":{"name":"LuxDL Docs","text":"Elegant & Performant Scientific Machine Learning in JuliaLang","tagline":"A Pure Julia Deep Learning Framework designed for Scientific Machine Learning","actions":[{"theme":"brand","text":"Tutorials","link":"/tutorials"},{"theme":"alt","text":"API Reference 📚","link":"/api/Lux/layers"},{"theme":"alt","text":"View on GitHub","link":"https://github.com/LuxDL/Lux.jl"}],"image":{"src":"/lux-logo.svg","alt":"Lux.jl"}},"features":[{"icon":"🚀","title":"Fast & Extendable","details":"Lux.jl is written in Julia itself, making it extremely extendable. 
CUDA and AMDGPU are supported first-class, with experimental support for Metal and Intel GPUs.","link":"/introduction"},{"icon":"🐎","title":"Powered by the XLA Compiler","details":"Lux.jl seamlessly integrates with Reactant.jl to compile models to run on CPU, GPU, TPU, and more.","link":"/manual/compiling_lux_models"},{"icon":"🧑‍🔬","title":"SciML ❤️ Lux","details":"Lux is the default choice for all SciML packages, including DiffEqFlux.jl, NeuralPDE.jl, and more.","link":"https://sciml.ai/"},{"icon":"🧩","title":"Uniquely Composable","details":"Lux.jl natively supports Arbitrary Parameter Types, making it uniquely composable with other Julia packages (and even Non-Julia packages).","link":"/api/Lux/contrib#Training"}]},"headers":[],"relativePath":"index.md","filePath":"index.md","lastUpdated":null}'),l={name:"index.md"};function e(p,s,h,k,d,r){return n(),a("div",null,s[0]||(s[0]=[t("",15)]))}const E=i(l,[["render",e]]);export{g as __pageData,E as default}; diff --git a/dev/assets/introduction_citation.md.Cyg9oVHB.lean.js b/dev/assets/introduction_citation.md.Cyg9oVHB.lean.js deleted file mode 100644 index 794cce5f5b..0000000000 --- a/dev/assets/introduction_citation.md.Cyg9oVHB.lean.js +++ /dev/null @@ -1,16 +0,0 @@ -import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.I-x9Gl6h.js";const F=JSON.parse('{"title":"Citation","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/citation.md","filePath":"introduction/citation.md","lastUpdated":null}'),h={name:"introduction/citation.md"};function k(l,s,p,e,E,r){return t(),a("div",null,s[0]||(s[0]=[n(`

    Citation

    If you found this library to be useful in academic work, then please cite:

    bibtex
    @software{pal2023lux,
    -  author    = {Pal, Avik},
    -  title     = {{Lux: Explicit Parameterization of Deep Neural Networks in Julia}},
    -  month     = {April},
    -  year      = 2023,
    -  note      = {If you use this software, please cite it as below.},
    -  publisher = {Zenodo},
    -  version   = {v0.5.0},
    -  doi       = {10.5281/zenodo.7808904},
    -  url       = {https://doi.org/10.5281/zenodo.7808904}
    -}
    bibtex
    @thesis{pal2023efficient,
    -  title     = {{On Efficient Training \\& Inference of Neural Differential Equations}},
    -  author    = {Pal, Avik},
    -  year      = {2023},
    -  school    = {Massachusetts Institute of Technology}
    -}
    `,4)]))}const g=i(h,[["render",k]]);export{F as __pageData,g as default}; diff --git a/dev/assets/introduction_citation.md.Cyg9oVHB.js b/dev/assets/introduction_citation.md.oktAg9dE.js similarity index 98% rename from dev/assets/introduction_citation.md.Cyg9oVHB.js rename to dev/assets/introduction_citation.md.oktAg9dE.js index 794cce5f5b..645220aca3 100644 --- a/dev/assets/introduction_citation.md.Cyg9oVHB.js +++ b/dev/assets/introduction_citation.md.oktAg9dE.js @@ -1,4 +1,4 @@ -import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.I-x9Gl6h.js";const F=JSON.parse('{"title":"Citation","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/citation.md","filePath":"introduction/citation.md","lastUpdated":null}'),h={name:"introduction/citation.md"};function k(l,s,p,e,E,r){return t(),a("div",null,s[0]||(s[0]=[n(`

    Citation

    If you found this library to be useful in academic work, then please cite:

    bibtex
    @software{pal2023lux,
    +import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.BetCMmtc.js";const F=JSON.parse('{"title":"Citation","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/citation.md","filePath":"introduction/citation.md","lastUpdated":null}'),h={name:"introduction/citation.md"};function k(l,s,p,e,E,r){return t(),a("div",null,s[0]||(s[0]=[n(`

    Citation

    If you found this library to be useful in academic work, then please cite:

    bibtex
    @software{pal2023lux,
       author    = {Pal, Avik},
       title     = {{Lux: Explicit Parameterization of Deep Neural Networks in Julia}},
       month     = {April},
    diff --git a/dev/assets/introduction_citation.md.oktAg9dE.lean.js b/dev/assets/introduction_citation.md.oktAg9dE.lean.js
    new file mode 100644
    index 0000000000..ef55e12e5b
    --- /dev/null
    +++ b/dev/assets/introduction_citation.md.oktAg9dE.lean.js
    @@ -0,0 +1 @@
    +import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.BetCMmtc.js";const F=JSON.parse('{"title":"Citation","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/citation.md","filePath":"introduction/citation.md","lastUpdated":null}'),h={name:"introduction/citation.md"};function k(l,s,p,e,E,r){return t(),a("div",null,s[0]||(s[0]=[n("",4)]))}const g=i(h,[["render",k]]);export{F as __pageData,g as default};
    diff --git a/dev/assets/introduction_index.md.hL9b0OC7.js b/dev/assets/introduction_index.md.A6L_dCHU.js
    similarity index 99%
    rename from dev/assets/introduction_index.md.hL9b0OC7.js
    rename to dev/assets/introduction_index.md.A6L_dCHU.js
    index 224c9fd085..b1f06223aa 100644
    --- a/dev/assets/introduction_index.md.hL9b0OC7.js
    +++ b/dev/assets/introduction_index.md.A6L_dCHU.js
    @@ -1,4 +1,4 @@
    -import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Getting Started","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/index.md","filePath":"introduction/index.md","lastUpdated":null}'),l={name:"introduction/index.md"};function e(p,s,h,k,r,d){return t(),a("div",null,s[0]||(s[0]=[n(`

    Getting Started

    Installation

    Install Julia v1.10 or above. Lux.jl is available through the Julia package manager. You can enter the package manager by pressing ] in the REPL and then typing add Lux. Alternatively, you can also do

    julia
    import Pkg
    +import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.BetCMmtc.js";const g=JSON.parse('{"title":"Getting Started","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/index.md","filePath":"introduction/index.md","lastUpdated":null}'),l={name:"introduction/index.md"};function e(p,s,h,k,r,d){return t(),a("div",null,s[0]||(s[0]=[n(`

    Getting Started

    Installation

    Install Julia v1.10 or above. Lux.jl is available through the Julia package manager. You can enter the package manager by pressing ] in the REPL and then typing add Lux. Alternatively, you can also do

    julia
    import Pkg
     Pkg.add("Lux")

    Update to v1

    If you are using a pre-v1 version of Lux.jl, please see the Updating to v1 section for instructions on how to update.

    Quickstart

    Pre-Requisites

    You need to install Optimisers and Zygote if not done already. Pkg.add(["Optimisers", "Zygote"])

    julia
    using Lux, Random, Optimisers, Zygote
     # using LuxCUDA, AMDGPU, Metal, oneAPI # Optional packages for GPU support

    We take randomness very seriously

    julia
    # Seeding
     rng = Random.default_rng()
    @@ -104,7 +104,7 @@ import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.I-x9Gl6h.js";const g
         return model, ps, st
     end
     
    -train_model!(model, ps, st, x_data, y_data)
    2025-01-20 23:36:47.182300: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 17363501086534973873
    +train_model!(model, ps, st, x_data, y_data)
    2025-01-24 05:15:12.020926: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 310851790191412072
     Iteration: 0001 	 Loss: 2.08073235
     Iteration: 0101 	 Loss: 0.142574623
     Iteration: 0201 	 Loss: 0.0051055951
    diff --git a/dev/assets/introduction_index.md.A6L_dCHU.lean.js b/dev/assets/introduction_index.md.A6L_dCHU.lean.js
    new file mode 100644
    index 0000000000..2d1db7caff
    --- /dev/null
    +++ b/dev/assets/introduction_index.md.A6L_dCHU.lean.js
    @@ -0,0 +1 @@
    +import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.BetCMmtc.js";const g=JSON.parse('{"title":"Getting Started","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/index.md","filePath":"introduction/index.md","lastUpdated":null}'),l={name:"introduction/index.md"};function e(p,s,h,k,r,d){return t(),a("div",null,s[0]||(s[0]=[n("",36)]))}const o=i(l,[["render",e]]);export{g as __pageData,o as default};
    diff --git a/dev/assets/introduction_index.md.hL9b0OC7.lean.js b/dev/assets/introduction_index.md.hL9b0OC7.lean.js
    deleted file mode 100644
    index 224c9fd085..0000000000
    --- a/dev/assets/introduction_index.md.hL9b0OC7.lean.js
    +++ /dev/null
    @@ -1,118 +0,0 @@
    -import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Getting Started","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/index.md","filePath":"introduction/index.md","lastUpdated":null}'),l={name:"introduction/index.md"};function e(p,s,h,k,r,d){return t(),a("div",null,s[0]||(s[0]=[n(`

    Getting Started

    Installation

    Install Julia v1.10 or above. Lux.jl is available through the Julia package manager. You can enter the package manager by pressing ] in the REPL and then typing add Lux. Alternatively, you can also do

    julia
    import Pkg
    -Pkg.add("Lux")

    Update to v1

    If you are using a pre-v1 version of Lux.jl, please see the Updating to v1 section for instructions on how to update.

    Quickstart

    Pre-Requisites

    You need to install Optimisers and Zygote if not done already. Pkg.add(["Optimisers", "Zygote"])

    julia
    using Lux, Random, Optimisers, Zygote
    -# using LuxCUDA, AMDGPU, Metal, oneAPI # Optional packages for GPU support

    We take randomness very seriously

    julia
    # Seeding
    -rng = Random.default_rng()
    -Random.seed!(rng, 0)
    Random.TaskLocalRNG()

    Build the model

    julia
    # Construct the layer
    -model = Chain(Dense(128, 256, tanh), Chain(Dense(256, 1, tanh), Dense(1, 10)))
    Chain(
    -    layer_1 = Dense(128 => 256, tanh),  # 33_024 parameters
    -    layer_2 = Chain(
    -        layer_1 = Dense(256 => 1, tanh),  # 257 parameters
    -        layer_2 = Dense(1 => 10),       # 20 parameters
    -    ),
    -)         # Total: 33_301 parameters,
    -          #        plus 0 states.

    Models don't hold parameters and states, so we initialize them separately. From there on, we can just use our standard AD and Optimisers API. However, here we will show how to use Lux's Training API, which provides a uniform API over all supported AD systems.

    julia
    # Get the device determined by Lux
    -dev = gpu_device()
    -
    -# Parameter and State Variables
    -ps, st = Lux.setup(rng, model) |> dev
    -
    -# Dummy Input
    -x = rand(rng, Float32, 128, 2) |> dev
    -
    -# Run the model
    -y, st = Lux.apply(model, x, ps, st)
    -
    -# Gradients
    -## First construct a TrainState
    -train_state = Lux.Training.TrainState(model, ps, st, Adam(0.0001f0))
    -
    -## We can compute the gradients using Training.compute_gradients
    -gs, loss, stats, train_state = Lux.Training.compute_gradients(
    -    AutoZygote(), MSELoss(),
    -    (x, dev(rand(rng, Float32, 10, 2))), train_state
    -)
    -
    -## Optimization
    -train_state = Training.apply_gradients!(train_state, gs) # or Training.apply_gradients (no \`!\` at the end)
    -
    -# Both these steps can be combined into a single call
    -gs, loss, stats, train_state = Training.single_train_step!(
    -    AutoZygote(), MSELoss(),
    -    (x, dev(rand(rng, Float32, 10, 2))), train_state
    -)
    ((layer_1 = (weight = Float32[0.0017983615 0.006062332 … 0.0053392933 0.0056276177; 0.0011292367 0.0041270256 … 0.003585879 0.0038155357; … ; -0.0008762945 -0.0031371699 … -0.0027350332 -0.0029033197; 0.0011154839 0.002197485 … 0.0021741025 0.0021157824], bias = Float32[0.006656272, 0.004425203, 0.0028994146, -0.0116051175, 0.0031301186, 0.0037318026, 0.0136483535, 0.013969757, -0.015173428, -0.005173992  …  -0.0018621369, -0.0015270555, -0.007873881, -0.0076395273, -0.0022123815, 0.0039605754, 0.0034407252, -0.0045406874, -0.003383829, 0.0029306945]), layer_2 = (layer_1 = (weight = Float32[0.04993449 0.03202845 … -0.059382 0.07701616], bias = Float32[0.08797912]), layer_2 = (weight = Float32[-0.094527975; -0.11476975; … ; -0.016841749; -0.0698748;;], bias = Float32[-0.21608135, -0.26255828, -0.23534852, -0.21524015, -0.055711076, -0.20314303, -0.1895644, 0.03666526, -0.03937737, -0.15905891]))), 0.8455785f0, NamedTuple(), Lux.Training.TrainState{Nothing, Nothing, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}}, Nothing}, @NamedTuple{layer_1::@NamedTuple{weight::Matrix{Float32}, bias::Vector{Float32}}, layer_2::@NamedTuple{layer_1::@NamedTuple{weight::Matrix{Float32}, bias::Vector{Float32}}, layer_2::@NamedTuple{weight::Matrix{Float32}, bias::Vector{Float32}}}}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}}}, Adam, @NamedTuple{layer_1::@NamedTuple{weight::Optimisers.Leaf{Adam, Tuple{Matrix{Float32}, Matrix{Float32}, Tuple{Float32, Float32}}}, bias::Optimisers.Leaf{Adam, Tuple{Vector{Float32}, Vector{Float32}, Tuple{Float32, Float32}}}}, layer_2::@NamedTuple{layer_1::@NamedTuple{weight::Optimisers.Leaf{Adam, Tuple{Matrix{Float32}, Matrix{Float32}, Tuple{Float32, 
Float32}}}, bias::Optimisers.Leaf{Adam, Tuple{Vector{Float32}, Vector{Float32}, Tuple{Float32, Float32}}}}, layer_2::@NamedTuple{weight::Optimisers.Leaf{Adam, Tuple{Matrix{Float32}, Matrix{Float32}, Tuple{Float32, Float32}}}, bias::Optimisers.Leaf{Adam, Tuple{Vector{Float32}, Vector{Float32}, Tuple{Float32, Float32}}}}}}}(nothing, nothing, Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}}, Nothing}((layer_1 = Dense(128 => 256, tanh), layer_2 = Chain{@NamedTuple{layer_1::Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(256 => 1, tanh), layer_2 = Dense(1 => 10)), nothing)), nothing), (layer_1 = (weight = Float32[-0.22542597 0.22379348 … 0.1997513 -0.018708104; -0.023026714 0.15451026 … -0.065325744 0.18120264; … ; 0.038037397 -0.07125516 … -0.03306083 0.039138064; -0.18810266 -0.09693537 … -0.18102062 0.019230088], bias = Float32[0.030937059, -0.060276944, 0.084569596, 0.00040024254, -0.065509446, -0.08527214, -0.026523968, 0.06347208, 0.042247728, 0.027705256  …  -0.06052852, 0.03504307, -0.028244259, 0.06788022, 0.0027464977, -0.06942153, 0.0064240773, 0.0141069945, -0.029283267, 0.01174226]), layer_2 = (layer_1 = (weight = Float32[0.12008221 0.06026435 … -0.070576 0.1577647], bias = Float32[0.026844418]), layer_2 = (weight = Float32[0.5345728; -0.28288874; … ; -0.32983455; -0.45298168;;], bias = Float32[-0.59751064, -0.7033041, -0.8457602, -0.53789175, -0.31473723, 0.17461234, -0.82945836, 0.67841595, 0.35837248, -0.14941788]))), (layer_1 = NamedTuple(), layer_2 = (layer_1 = NamedTuple(), layer_2 = NamedTuple())), Adam(0.0001, (0.9, 0.999), 1.0e-8), (layer_1 = (weight = Leaf(Adam(0.0001, (0.9, 0.999), 1.0e-8), 
(Float32[0.000926728 0.000860063 … 0.00110328 0.000908301; 0.000480834 0.000574605 … 0.000665883 0.000584197; … ; -0.000391039 -0.000438617 … -0.000520651 -0.000449867; 0.00106235 0.000365587 … 0.000813131 0.000495484], Float32[7.20343f-8 4.46976f-8 … 6.84867f-8 4.63952f-8; 1.79691f-8 2.02649f-8 … 2.45046f-8 1.96227f-8; … ; 1.21215f-8 1.17657f-8 … 1.50136f-8 1.15681f-8; 1.12738f-7 7.45199f-9 … 4.8495f-8 1.44173f-8], (0.729, 0.997003))), bias = Leaf(Adam(0.0001, (0.9, 0.999), 1.0e-8), (Float32[0.00169459, 0.000977637, 0.00103866, -0.00234933, 0.000659175, 0.000868318, 0.00303222, 0.00271383, -0.00326585, -0.0014993  …  -0.000480712, -0.000501535, -0.00174489, -0.00160158, -0.000470662, 0.00127967, 0.000618911, -0.00103705, -0.000773079, 0.00146704], Float32[1.74884f-7, 5.48983f-8, 7.75433f-8, 3.08981f-7, 2.45763f-8, 4.41623f-8, 5.29156f-7, 4.09021f-7, 6.07287f-7, 1.45678f-7  …  1.4164f-8, 1.73391f-8, 1.7507f-7, 1.44894f-7, 1.25673f-8, 1.1198f-7, 2.11545f-8, 6.25338f-8, 3.4755f-8, 1.78565f-7], (0.729, 0.997003)))), layer_2 = (layer_1 = (weight = Leaf(Adam(0.0001, (0.9, 0.999), 1.0e-8), (Float32[0.00443555 0.00163654 … -0.0124978 0.0123434], Float32[2.53181f-6 1.32838f-6 … 8.83289f-6 8.58873f-6], (0.729, 0.997003))), bias = Leaf(Adam(0.0001, (0.9, 0.999), 1.0e-8), (Float32[0.0191175], Float32[2.08743f-5], (0.729, 0.997003)))), layer_2 = (weight = Leaf(Adam(0.0001, (0.9, 0.999), 1.0e-8), (Float32[-0.0172084; -0.0213176; … ; -0.00376332; -0.0116419;;], Float32[1.63537f-5; 2.51152f-5; … ; 8.16783f-7; 7.55419f-6;;], (0.729, 0.997003))), bias = Leaf(Adam(0.0001, (0.9, 0.999), 1.0e-8), (Float32[-0.0365001, -0.045083, -0.0507623, -0.0390298, -0.0242259, -0.0404982, -0.0358925, 0.0114351, -0.00803444, -0.0248332], Float32[7.40417f-5, 0.000112652, 0.000146818, 8.41229f-5, 4.60234f-5, 9.15105f-5, 7.13093f-5, 8.78741f-6, 3.62043f-6, 3.51285f-5], (0.729, 0.997003)))))), 2))

    Defining Custom Layers

    We can train our model using the above code, but let's go ahead and see how to use Reactant. Reactant is a Julia frontend that generates MLIR and then compiles it using XLA (after running aggressive optimizations). It is currently the recommended way to train large models in Lux. For more details on using Reactant, see the manual.

    julia
    using Lux, Random, Optimisers, Reactant, Enzyme
    -using Printf # For pretty printing
    -
    -dev = reactant_device()
    (::ReactantDevice{Missing, Missing}) (generic function with 1 method)

    We will define a custom MLP using the @compact macro. The macro takes in a list of parameters, layers and states, and a function defining the forward pass of the neural network.

    julia
    n_in = 1
    -n_out = 1
    -nlayers = 3
    -
    -model = @compact(
    -    w1=Dense(n_in => 32),
    -    w2=[Dense(32 => 32) for i in 1:nlayers],
    -    w3=Dense(32 => n_out),
    -    act=relu
    -) do x
    -    embed = act(w1(x))
    -    for w in w2
    -        embed = act(w(embed))
    -    end
    -    out = w3(embed)
    -    @return out
    -end
    @compact(
    -    w1 = Dense(1 => 32),                # 64 parameters
    -    w2 = NamedTuple(
    -        1 = Dense(32 => 32),            # 1_056 parameters
    -        2 = Dense(32 => 32),            # 1_056 parameters
    -        3 = Dense(32 => 32),            # 1_056 parameters
    -    ),
    -    w3 = Dense(32 => 1),                # 33 parameters
    -    act = relu,
    -) do x 
    -    embed = act(w1(x))
    -    for w = w2
    -        embed = act(w(embed))
    -    end
    -    out = w3(embed)
    -    return out
    -end       # Total: 3_265 parameters,
    -          #        plus 1 states.

    We can initialize the model and train it with the same code as before!

    julia
    rng = Random.default_rng()
    -Random.seed!(rng, 0)
    -
    -ps, st = Lux.setup(rng, model) |> dev
    -
    -x = rand(rng, Float32, n_in, 32) |> dev
    -
    -@jit model(x, ps, st)  # 1×32 Matrix and updated state as output.
    -
    -x_data = reshape(collect(-2.0f0:0.1f0:2.0f0), 1, :)
    -y_data = 2 .* x_data .- x_data .^ 3
    -x_data, y_data = dev(x_data), dev(y_data)
    -
    -function train_model!(model, ps, st, x_data, y_data)
    -    train_state = Lux.Training.TrainState(model, ps, st, Adam(0.001f0))
    -
    -    for iter in 1:1000
    -        _, loss, _, train_state = Lux.Training.single_train_step!(
    -            AutoEnzyme(), MSELoss(),
    -            (x_data, y_data), train_state
    -        )
    -        if iter % 100 == 1 || iter == 1000
    -            @printf "Iteration: %04d \\t Loss: %10.9g\\n" iter loss
    -        end
    -    end
    -
    -    return model, ps, st
    -end
    -
    -train_model!(model, ps, st, x_data, y_data)
    2025-01-20 23:36:47.182300: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 17363501086534973873
    -Iteration: 0001 	 Loss: 2.08073235
    -Iteration: 0101 	 Loss: 0.142574623
    -Iteration: 0201 	 Loss: 0.0051055951
    -Iteration: 0301 	 Loss: 0.00118357129
    -Iteration: 0401 	 Loss: 0.000504208321
    -Iteration: 0501 	 Loss: 0.000281832268
    -Iteration: 0601 	 Loss: 0.000203011135
    -Iteration: 0701 	 Loss: 0.000126347542
    -Iteration: 0801 	 Loss: 0.00201115524
    -Iteration: 0901 	 Loss: 9.70276451e-05
    -Iteration: 1000 	 Loss: 7.81012277e-05

    Training with Optimization.jl

    If you are coming from the SciML ecosystem and want to use Optimization.jl, please refer to the Optimization.jl Tutorial.

    Additional Packages

    LuxDL hosts various packages that provide additional functionality for Lux.jl. All packages mentioned in this documentation are available via the Julia General Registry.

    You can install all of these packages via import Pkg; Pkg.add("<package name>").
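    For example, several of them can be added in one call (the package names below are just illustrative picks from this documentation):

```julia
import Pkg

# Add a few LuxDL packages from the Julia General Registry at once
Pkg.add(["Lux", "Reactant", "Optimisers"])
```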

    XLA (CPU/GPU/TPU) Support

    Lux.jl supports XLA compilation for CPU, GPU, and TPU using Reactant.jl.

    GPU Support

    GPU Support for Lux.jl requires loading additional packages:

    `,36)]))}const o=i(l,[["render",e]]);export{g as __pageData,o as default}; diff --git a/dev/assets/introduction_overview.md.BvBZ09ef.js b/dev/assets/introduction_overview.md.B53X8gve.js similarity index 97% rename from dev/assets/introduction_overview.md.BvBZ09ef.js rename to dev/assets/introduction_overview.md.B53X8gve.js index b4a5f7aacf..c4c729bf18 100644 --- a/dev/assets/introduction_overview.md.BvBZ09ef.js +++ b/dev/assets/introduction_overview.md.B53X8gve.js @@ -1 +1 @@ -import{_ as t,c as r,a2 as a,o as s}from"./chunks/framework.I-x9Gl6h.js";const m=JSON.parse('{"title":"Why we wrote Lux?","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/overview.md","filePath":"introduction/overview.md","lastUpdated":null}'),o={name:"introduction/overview.md"};function i(n,e,l,u,d,p){return s(),r("div",null,e[0]||(e[0]=[a('

    Why we wrote Lux?

    Julia already has quite a few well-established neural network frameworks – Flux & KNet. However, certain design elements – Coupled Model and Parameters & Internal Mutations – associated with these frameworks make them less compiler and user friendly. Making changes to address these problems in the respective frameworks would be too disruptive for users. Enter Lux: a neural network framework built completely out of pure functions to make it both compiler and autodiff friendly.

    Design Principles

    • Layers must be immutable – cannot store any parameter/state but rather store the information to construct them

    • Layers are pure functions

    • Layers return a Tuple containing the result and the updated state

    • Given the same inputs, the outputs must be the same – yes, this must hold even for stochastic functions. Randomness must be controlled via rngs passed in the state.

    • Easily extensible

    • Extensive Testing – All layers and features are tested across all supported AD backends across all supported hardware backends.
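    A minimal sketch of these principles in action, using only the standard Lux API shown throughout these docs (Dense carries parameters; Dropout carries state, including its rng):

```julia
using Lux, Random

rng = Random.default_rng()
Random.seed!(rng, 0)

# A model mixing a parameterized layer and a stateful, stochastic layer
model = Chain(Dense(4 => 4, tanh), Dropout(0.5f0))
ps, st = Lux.setup(rng, model)

x = rand(rng, Float32, 4, 2)
y1, st1 = model(x, ps, st)  # layers are pure: they return (output, updated state)
y2, st2 = model(x, ps, st)  # same inputs and same state => same outputs
```

Note that the randomness of Dropout lives entirely in `st`; rerunning with the updated state `st1` would produce a different mask.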

    Why use Lux over Flux?

    • Neural Networks for SciML: For SciML applications (Neural ODEs, Deep Equilibrium Models) solvers typically expect a monolithic parameter vector. Flux enables this via its destructure mechanism, but destructure comes with various edge cases and limitations. Lux forces users to make an explicit distinction between state variables and parameter variables to avoid these issues. It also comes batteries-included for distributed training.

    • Sensible display of Custom Layers – Ever wanted to see Pytorch-like network printouts, or wondered how to extend the pretty printing of Flux's layers? Lux handles all of that by default.

    • Truly immutable models - No unexpected internal mutations since all layers are implemented as pure functions. All layers are also deterministic given the parameters and state: if a layer is supposed to be stochastic (say Dropout), the state must contain a seed which is then updated after the function call.

    • Easy Parameter Manipulation – By separating parameter data and layer structures, Lux makes implementing WeightNorm, SpectralNorm, etc. downright trivial. Without this separation, it is much harder to pass such parameters around without mutations which AD systems don't like.

    • Wider AD Support – Lux has extensive support for most AD systems in Julia, while Flux is mostly tied to Zygote (with some initial support for Enzyme).

    • Small Neural Networks on CPU – Lux is developed for training large neural networks. For smaller architectures, we recommend using SimpleChains.jl or, even better, using it in conjunction with Lux via ToSimpleChainsAdaptor.

    • Reliability – We have learned from the mistakes of the past with Flux and everything in our core framework is extensively tested, along with downstream CI to ensure that everything works as expected.

    Revising Previous Recommendation about Large Models

    Previously we recommended not using Lux for very large models. But we have been making a lot of headway with Reactant.jl, and it is now worthwhile to test larger models with Lux. See compiling Lux models for more information.
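    As a quick sketch of what this looks like today (mirroring the Reactant usage shown earlier in these docs):

```julia
using Lux, Reactant, Random

dev = reactant_device()

model = Dense(2 => 3, tanh)
ps, st = Lux.setup(Random.default_rng(), model) |> dev
x = rand(Float32, 2, 5) |> dev

y, _ = @jit model(x, ps, st)  # traced to MLIR and compiled via XLA
```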

    ',7)]))}const h=t(o,[["render",i]]);export{m as __pageData,h as default}; +import{_ as t,c as r,a2 as a,o as s}from"./chunks/framework.BetCMmtc.js";const m=JSON.parse('{"title":"Why we wrote Lux?","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/overview.md","filePath":"introduction/overview.md","lastUpdated":null}'),o={name:"introduction/overview.md"};function i(n,e,l,u,d,p){return s(),r("div",null,e[0]||(e[0]=[a('

    ',7)]))}const h=t(o,[["render",i]]);export{m as __pageData,h as default}; diff --git a/dev/assets/introduction_overview.md.B53X8gve.lean.js b/dev/assets/introduction_overview.md.B53X8gve.lean.js new file mode 100644 index 0000000000..6c49e8b93e --- /dev/null +++ b/dev/assets/introduction_overview.md.B53X8gve.lean.js @@ -0,0 +1 @@ +import{_ as t,c as r,a2 as a,o as s}from"./chunks/framework.BetCMmtc.js";const m=JSON.parse('{"title":"Why we wrote Lux?","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/overview.md","filePath":"introduction/overview.md","lastUpdated":null}'),o={name:"introduction/overview.md"};function i(n,e,l,u,d,p){return s(),r("div",null,e[0]||(e[0]=[a("",7)]))}const h=t(o,[["render",i]]);export{m as __pageData,h as default}; diff --git a/dev/assets/introduction_overview.md.BvBZ09ef.lean.js b/dev/assets/introduction_overview.md.BvBZ09ef.lean.js deleted file mode 100644 index b4a5f7aacf..0000000000 --- a/dev/assets/introduction_overview.md.BvBZ09ef.lean.js +++ /dev/null @@ -1 +0,0 @@ -import{_ as t,c as r,a2 as a,o as s}from"./chunks/framework.I-x9Gl6h.js";const m=JSON.parse('{"title":"Why we wrote Lux?","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/overview.md","filePath":"introduction/overview.md","lastUpdated":null}'),o={name:"introduction/overview.md"};function i(n,e,l,u,d,p){return s(),r("div",null,e[0]||(e[0]=[a('

    ',7)]))}const h=t(o,[["render",i]]);export{m as __pageData,h as default}; diff --git a/dev/assets/introduction_resources.md.21ZkndKX.lean.js b/dev/assets/introduction_resources.md.21ZkndKX.lean.js deleted file mode 100644 index 9d2ee1f0f0..0000000000 --- a/dev/assets/introduction_resources.md.21ZkndKX.lean.js +++ /dev/null @@ -1 +0,0 @@ -import{_ as t,c as r,a2 as s,o as a}from"./chunks/framework.I-x9Gl6h.js";const p=JSON.parse('{"title":"Resources to Get Started","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/resources.md","filePath":"introduction/resources.md","lastUpdated":null}'),o={name:"introduction/resources.md"};function i(u,e,n,l,d,c){return a(),r("div",null,e[0]||(e[0]=[s('

    Resources to Get Started

    • Go through the Quickstart Example.

    • Read the introductory tutorials on Julia and Lux.

    • Go through the examples sorted based on their complexity in the documentation.

    Have More Questions?

    For usage related questions, please use Github Discussions which allows questions and answers to be indexed. To report bugs use Github Issues or even better send in a Pull Request.

    ',3)]))}const g=t(o,[["render",i]]);export{p as __pageData,g as default}; diff --git a/dev/assets/introduction_resources.md.21ZkndKX.js b/dev/assets/introduction_resources.md.CaXbRHi2.js similarity index 84% rename from dev/assets/introduction_resources.md.21ZkndKX.js rename to dev/assets/introduction_resources.md.CaXbRHi2.js index 9d2ee1f0f0..48e98efe80 100644 --- a/dev/assets/introduction_resources.md.21ZkndKX.js +++ b/dev/assets/introduction_resources.md.CaXbRHi2.js @@ -1 +1 @@ -import{_ as t,c as r,a2 as s,o as a}from"./chunks/framework.I-x9Gl6h.js";const p=JSON.parse('{"title":"Resources to Get Started","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/resources.md","filePath":"introduction/resources.md","lastUpdated":null}'),o={name:"introduction/resources.md"};function i(u,e,n,l,d,c){return a(),r("div",null,e[0]||(e[0]=[s('

    ',3)]))}const g=t(o,[["render",i]]);export{p as __pageData,g as default}; +import{_ as t,c as r,a2 as s,o as a}from"./chunks/framework.BetCMmtc.js";const p=JSON.parse('{"title":"Resources to Get Started","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/resources.md","filePath":"introduction/resources.md","lastUpdated":null}'),o={name:"introduction/resources.md"};function i(u,e,n,l,d,c){return a(),r("div",null,e[0]||(e[0]=[s('

    ',3)]))}const _=t(o,[["render",i]]);export{p as __pageData,_ as default}; diff --git a/dev/assets/introduction_resources.md.CaXbRHi2.lean.js b/dev/assets/introduction_resources.md.CaXbRHi2.lean.js new file mode 100644 index 0000000000..60077cec57 --- /dev/null +++ b/dev/assets/introduction_resources.md.CaXbRHi2.lean.js @@ -0,0 +1 @@ +import{_ as t,c as r,a2 as s,o as a}from"./chunks/framework.BetCMmtc.js";const p=JSON.parse('{"title":"Resources to Get Started","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/resources.md","filePath":"introduction/resources.md","lastUpdated":null}'),o={name:"introduction/resources.md"};function i(u,e,n,l,d,c){return a(),r("div",null,e[0]||(e[0]=[s("",3)]))}const _=t(o,[["render",i]]);export{p as __pageData,_ as default}; diff --git a/dev/assets/introduction_updating_to_v1.md.Ck5XPGhV.lean.js b/dev/assets/introduction_updating_to_v1.md.Ck5XPGhV.lean.js deleted file mode 100644 index f164ad6c49..0000000000 --- a/dev/assets/introduction_updating_to_v1.md.Ck5XPGhV.lean.js +++ /dev/null @@ -1 +0,0 @@ -import{_ as a,c as o,a2 as i,o as t}from"./chunks/framework.I-x9Gl6h.js";const p=JSON.parse('{"title":"Updating to Lux v1","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/updating_to_v1.md","filePath":"introduction/updating_to_v1.md","lastUpdated":null}'),r={name:"introduction/updating_to_v1.md"};function n(d,e,l,s,c,u){return t(),o("div",null,e[0]||(e[0]=[i('

    Updating to Lux v1

    Lux v1 is a Major Release, mostly to signify the stability of the API. On this page, we list the concrete set of changes that need to be made to your code to update to Lux v1. We also list some exciting new features that were added as part of this release.

    LuxLib.jl

    Breaking Changes

    • Old deprecated API with keyword arguments has been removed. See the new docs in LuxLib API for more details.

    • Default for layernorm dims has been changed to exclude the batch dimension.

    New Major Features

    • Dense layers now support CUDA backend for Enzyme (starting v1.1). Wider support for other operations with Enzyme + CUDA is being actively worked on.

    LuxCore.jl

    Breaking Changes

    • AbstractExplicitLayer has been renamed to AbstractLuxLayer.

    • AbstractExplicitContainerLayer behaviour

      • This has been renamed to AbstractLuxContainerLayer.

      • Previously, AbstractExplicitContainerLayer{(:a,)} (i.e. singleton containers) would produce default initial parameters and states without wrapping them in a NamedTuple{(:a,)}. This was inconsistent with non-singleton containers and was a source of confusion. With v1 we return (; a = <parameters>) and (; a = <states>) by default. See AbstractLuxWrapperLayer for a replacement of this functionality.

    • inputsize has been removed since it was ambiguous and not used anywhere.

    • Changes to outputsize:

      • Single argument version has been removed. See LuxCore.jl Pull Request 43 for more details on the rationale behind this change.

      • Fallback implementation has been moved to Lux.jl (i.e. users of Lux shouldn't see a difference, but if Lux.jl isn't loaded, this function will error.)

        • Internally this uses a NilArray that is able to compute sizes without actually running the computation.
    • Functors and Setfield have been made into optional dependencies. Certain LuxCore functionality that relies on these packages will throw an error if they are not loaded.

    New Major Features

    • Introduction of AbstractLuxWrapperLayer. This behaves exactly like the old singleton container. For example, the old AbstractExplicitContainerLayer{(:a,)} is equivalent to AbstractLuxWrapperLayer{:a}.
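    A hypothetical sketch of the new wrapper type (MyWrapper is illustrative, not part of Lux):

```julia
using Lux, Random

# Old (pre-v1): struct MyWrapper{L} <: AbstractExplicitContainerLayer{(:a,)}
# New (v1):
struct MyWrapper{L} <: Lux.AbstractLuxWrapperLayer{:a}
    a::L
end

rng = Random.default_rng()
m = MyWrapper(Dense(2 => 3))
ps, st = Lux.setup(rng, m)  # the inner layer's ps/st, not wrapped in (; a = ...)
y, st_new = m(rand(rng, Float32, 2, 4), ps, st)  # calls forward to the :a field
```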

    WeightInitializers.jl

    This was a major release to signify the stability of the API. There were no breaking changes. We do support a wider range of RNG types, see Supported RNG Types for more details.

    MLDataDevices.jl

    This is the most aggressive change that was made. We renamed the LuxDeviceUtils.jl package to MLDataDevices.jl, to allow for non-Lux packages to use this shared device management abstraction.

    Deprecation of LuxDeviceUtils.jl

    This also marks the deprecation of the LuxDeviceUtils.jl package. We won't be making any updates to that package, including fixing any bugs. All users should switch to MLDataDevices.jl instead.

    Breaking Changes

    • Lux(___)Device objects have been renamed to (___)Device. For example, LuxCUDADevice has been renamed to CUDADevice.

    • Lux(___)Adaptor objects have been removed. The corresponding Device objects should be used directly instead.

    New Major Features

    • DeviceIterator provides a generalization of CUDA.CuIterator and works for all backends and more data types (using Functors.jl). MLUtils.DataLoader |> gdev now returns a DeviceIterator instead of being a no-op.
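    A sketch of the new iterator behavior, assuming MLUtils.jl is available:

```julia
using MLDataDevices, MLUtils

gdev = gpu_device()  # falls back to cpu_device() when no GPU backend is functional

x = rand(Float32, 4, 128)
y = rand(Float32, 1, 128)
loader = DataLoader((x, y); batchsize=32) |> gdev  # a DeviceIterator, not a no-op

for (xb, yb) in loader
    # each batch is transferred to the device as it is consumed
end
```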

    Lux.jl

    Breaking Changes (Removed Functionality)

    • Direct reexport of NNlib has been removed. We still reexport selected functionality from NNlib; load NNlib directly if you need the other functions.

    • Flattening of Chain layers has been removed, and the corresponding disable_optimizations kwarg has been removed.

    • Some layers overloaded Base.keys; these overloads have been removed. They were mostly undocumented and weren't supposed to be used outside of the Lux.jl package.

    • Training.TrainState construction with rng has been removed.

    • Older versions of Preferences have been removed.

    • disable_stacktrace_truncation! has been removed. From Julia 1.9 onwards, stacktrace truncation is enabled by default.

    • Certain Experimental features were present outside the Lux.Experimental module. These have been removed; use them via Lux.Experimental instead. Run Julia with depwarn set to error on Lux v0.5 to see the deprecations.

    • Lux.Experimental.@layer_map is no longer needed and has been removed. The variable name is no longer prepended to the KeyPath, since it prevented writing generic functions. See the docstring of Lux.Experimental.layer_map for more details.

    • allow_fast_activation kwarg has been removed completely. Pass an anonymous function as the activation to prevent internal modifications to the activation function.

    Breaking Changes (Moved Functionality)

    • Lux.Experimental.Training has been moved to Lux.Training. We guarantee SemVer on this new module.

    • Lux.cpu and Lux.gpu have been removed. Use cpu_device and gpu_device instead.

    • Experimental.@compact can be directly used via @compact now.

    • Experimental.StatefulLuxLayer has been moved to Lux.StatefulLuxLayer.

    • st_fixed_path kwarg has been removed from Lux.StatefulLuxLayer; instead, use it as StatefulLuxLayer{st_fixed_path}(...).

    • Strings as inputs to Lux.Experimental.layer_map and Lux.Experimental.@debug_mode have been removed; use Functors.KeyPath instead.

    • CrossCor has been removed. Use Conv(args...; kwargs..., cross_correlation=true) instead.
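    Two of the moves above, sketched side by side (old forms shown as comments):

```julia
using Lux

# Old: CrossCor((3, 3), 1 => 8)
conv = Conv((3, 3), 1 => 8; cross_correlation=true)

# Old: Lux.cpu(x) / Lux.gpu(x)
cdev = cpu_device()
gdev = gpu_device()  # selects a functional GPU backend, else falls back to CPU
x = rand(Float32, 16, 16, 1, 4) |> gdev
```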

    Breaking Changes (Changes in Defaults)

    • Conv and ConvTranspose use an initialization based on the activation function, taken from Pytorch. Pytorch assumes the activation function is leakyrelu when computing the gain; we instead compute the gain based on the activation function passed into the layer.

    • Upsample now has an align_corners keyword argument, which defaults to false. Previously this was always true.

    • Dense and Bilinear have updated default initializations to align with the defaults from Pytorch. See the documentation for more details.

    • InstanceNorm now defaults to affine=false instead of affine=true.

    • Embedding now defaults to init_weight=rand32 instead of init_weight=randn32.

    • Recurrent Cells - RNNCell, LSTMCell, and GRUCell now have different default initializations. See the documentation for more details.

    New Features

    • InstanceNorm now supports tracking statistics.

    • RNNCell and LSTMCell add bias_ih and bias_hh to the parameters to align with Pytorch. Both are controlled using init_bias and use_bias.

    • ConvTranspose allows flipkernel=true via cross_correlation=true. This makes it efficient for MIOpen.

    • ConvTranspose now has an outpad keyword argument, which is used to increase the size of the output in the desired dimensions.

    • Pooling Layers based on lpnorm have been added – LPPool, GlobalLPPool, and AdaptiveLPPool.

    ',30)]))}const L=a(r,[["render",n]]);export{p as __pageData,L as default}; diff --git a/dev/assets/introduction_updating_to_v1.md.Ck5XPGhV.js b/dev/assets/introduction_updating_to_v1.md.DYJFRVTZ.js similarity index 99% rename from dev/assets/introduction_updating_to_v1.md.Ck5XPGhV.js rename to dev/assets/introduction_updating_to_v1.md.DYJFRVTZ.js index f164ad6c49..2071001546 100644 --- a/dev/assets/introduction_updating_to_v1.md.Ck5XPGhV.js +++ b/dev/assets/introduction_updating_to_v1.md.DYJFRVTZ.js @@ -1 +1 @@ -import{_ as a,c as o,a2 as i,o as t}from"./chunks/framework.I-x9Gl6h.js";const p=JSON.parse('{"title":"Updating to Lux v1","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/updating_to_v1.md","filePath":"introduction/updating_to_v1.md","lastUpdated":null}'),r={name:"introduction/updating_to_v1.md"};function n(d,e,l,s,c,u){return t(),o("div",null,e[0]||(e[0]=[i('

    Updating to Lux v1

    Lux v1 is a Major Release, mostly to signify the stability of the API. On this page, we list the concrete set of changes that need to be made to your code to update to Lux v1. We also list some exciting new features that were added as part of this release.

    LuxLib.jl

    Breaking Changes

    • Old deprecated API with keyword arguments has been removed. See the new docs in LuxLib API for more details.

    • Default for layernorm dims has been changed to exclude the batch dimension.

    New Major Features

    • Dense layers now support CUDA backend for Enzyme (starting v1.1). Wider support for other operations with Enzyme + CUDA is being actively worked on.

    LuxCore.jl

    Breaking Changes

    • AbstractExplicitLayer has been renamed to AbstractLuxLayer.

    • AbstractExplicitContainerLayer behaviour

      • This has been renamed to AbstractLuxContainerLayer.

      • Previously, AbstractExplicitContainerLayer{(:a,)} (i.e. singleton containers) would produce default initial parameters and states without wrapping them in a NamedTuple{(:a,)}. This was inconsistent with non-singleton containers, and was a source of confusion. With v1 we return (; a = <parameters>) and (; a = <states>) by default. See AbstractLuxWrapperLayer for a replacement of this functionality.

    • inputsize has been removed since it was ambiguous and not used anywhere.

    • Changes to outputsize:

      • Single argument version has been removed. See LuxCore.jl Pull Request 43 for more details on the rationale behind this change.

      • Fallback implementation has been moved to Lux.jl. (i.e. users using Lux shouldn't see a difference, but if Lux.jl isn't loaded, this function will error.)

        • Internally this uses a NilArray that is able to compute sizes without actually running the computation.
    • Functors and Setfield have been made into optional dependencies. Certain LuxCore functionality that relies on these packages will throw an error if they are not loaded.

    New Major Features

    • Introduction of AbstractLuxWrapperLayer. This behaves exactly like the old singleton container. For example, the old AbstractExplicitContainerLayer{(:a,)} is equivalent to AbstractLuxWrapperLayer{:a}.

    WeightInitializers.jl

    This was a major release to signify the stability of the API. There were no breaking changes. We do support a wider range of RNG types, see Supported RNG Types for more details.

    MLDataDevices.jl

    This is the most aggressive change that was made. We renamed the LuxDeviceUtils.jl package to MLDataDevices.jl, to allow for non-Lux packages to use this shared device management abstraction.

    Deprecation of LuxDeviceUtils.jl

    This also marks the deprecation of the LuxDeviceUtils.jl package. We won't be making any updates to that package, including fixing any bugs. All users should switch to MLDataDevices.jl instead.

    Breaking Changes

    • Lux(___)Device objects have been renamed to (___)Device. For example, LuxCUDADevice has been renamed to CUDADevice.

    • Lux(___)Adaptor objects have been removed. The corresponding Device objects should be used directly instead.

    New Major Features

    • DeviceIterator provides a generalization of CUDA.CuIterator and works for all backends and more data types (using Functors.jl). MLUtils.DataLoader |> gdev now returns a DeviceIterator instead of being a no-op.

    Lux.jl

    Breaking Changes (Removed Functionality)

    • Direct reexport of NNlib has been removed. We reexport selected functionality from NNlib. Directly load NNlib if you need to use the other functions.

    • Flattening of Chain layers has been removed, and the corresponding disable_optimizations kwarg has been removed.

    • Some layers overloaded Base.keys; these overloads have been removed. They were mostly undocumented and weren't supposed to be used outside of the Lux.jl package.

    • Training.TrainState construction with rng has been removed.

    • Older versions of Preferences have been removed.

    • disable_stacktrace_truncation! has been removed. From Julia 1.9 onwards, stacktrace truncation is enabled by default.

    • Certain Experimental features were present outside the Lux.Experimental module. These have been removed; use them via Lux.Experimental instead. Run Julia with depwarn set to error and Lux v0.5 to see the deprecations.

    • Lux.Experimental.@layer_map is no longer needed and has been removed. The name of the variable prevents writing generic functions and is no longer prepended to the KeyPath. See the docstring of Lux.Experimental.layer_map for more details.

    • allow_fast_activation kwarg has been removed completely. Pass an anonymous function as the activation to prevent internal modifications to the activation function.

    Breaking Changes (Moved Functionality)

    • Lux.Experimental.Training has been moved to Lux.Training. We guarantee SemVer on this new module.

    • Lux.cpu and Lux.gpu have been removed. Use cpu_device and gpu_device instead.

    • Experimental.@compact can be directly used via @compact now.

    • Experimental.StatefulLuxLayer has been moved to Lux.StatefulLuxLayer.

    • st_fixed_path kwarg has been removed from Lux.StatefulLuxLayer; instead, use it as StatefulLuxLayer{st_fixed_path}(...).

    • Strings as inputs to Lux.Experimental.layer_map and Lux.Experimental.@debug_mode have been removed; use Functors.KeyPath instead.

    • CrossCor has been removed. Use Conv(args...; kwargs..., cross_correlation=true) instead.

    Breaking Changes (Changes in Defaults)

    • Conv and ConvTranspose use an initialization based on the activation function, taken from Pytorch. Pytorch assumes the activation function is leakyrelu to compute the gain, however, we compute the gain based on the activation function passed in to the layer.

    • Upsample now has an align_corners keyword argument, which defaults to false. Previously this was always true.

    • Dense and Bilinear have updated default initializations to align with the defaults from Pytorch. See the documentation for more details.

    • InstanceNorm now defaults to affine=false instead of affine=true.

    • Embedding now defaults to init_weight=rand32 instead of init_weight=randn32.

    • Recurrent Cells - RNNCell, LSTMCell, and GRUCell now have different default initializations. See the documentation for more details.

    New Features

    • InstanceNorm now supports tracking statistics.

    • RNNCell and LSTMCell add bias_ih and bias_hh to the parameters to align with Pytorch. Both are controlled using init_bias and use_bias.

    • ConvTranspose allows flipkernel=true via cross_correlation=true. This makes it efficient for MIOpen.

    • ConvTranspose now has an outpad keyword argument, which is used to increase the size of the output in the desired dimensions.

    • Pooling Layers based on lpnorm have been added – LPPool, GlobalLPPool, and AdaptiveLPPool.

    ',30)]))}const L=a(r,[["render",n]]);export{p as __pageData,L as default}; +import{_ as a,c as o,a2 as i,o as t}from"./chunks/framework.BetCMmtc.js";const p=JSON.parse('{"title":"Updating to Lux v1","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/updating_to_v1.md","filePath":"introduction/updating_to_v1.md","lastUpdated":null}'),r={name:"introduction/updating_to_v1.md"};function n(d,e,l,s,c,u){return t(),o("div",null,e[0]||(e[0]=[i('

    Updating to Lux v1

    Lux v1 is a Major Release, mostly to signify the stability of the API. On this page, we list out a concrete set of changes that need to be made to your code to update to Lux v1. We also list out some new exciting features that were added as part of this release.

    LuxLib.jl

    Breaking Changes

    • Old deprecated API with keyword arguments has been removed. See the new docs in LuxLib API for more details.

    • Default for layernorm dims has been changed to exclude the batch dimension.

    New Major Features

    • Dense layers now support CUDA backend for Enzyme (starting v1.1). Wider support for other operations with Enzyme + CUDA is being actively worked on.

    LuxCore.jl

    Breaking Changes

    • AbstractExplicitLayer has been renamed to AbstractLuxLayer.

    • AbstractExplicitContainerLayer behaviour

      • This has been renamed to AbstractLuxContainerLayer.

      • Previously, AbstractExplicitContainerLayer{(:a,)} (i.e. singleton containers) would produce default initial parameters and states without wrapping them in a NamedTuple{(:a,)}. This was inconsistent with non-singleton containers, and was a source of confusion. With v1 we return (; a = <parameters>) and (; a = <states>) by default. See AbstractLuxWrapperLayer for a replacement of this functionality.

    • inputsize has been removed since it was ambiguous and not used anywhere.

    • Changes to outputsize:

      • Single argument version has been removed. See LuxCore.jl Pull Request 43 for more details on the rationale behind this change.

      • Fallback implementation has been moved to Lux.jl. (i.e. users using Lux shouldn't see a difference, but if Lux.jl isn't loaded, this function will error.)

        • Internally this uses a NilArray that is able to compute sizes without actually running the computation.
    • Functors and Setfield have been made into optional dependencies. Certain LuxCore functionality that relies on these packages will throw an error if they are not loaded.

    New Major Features

    • Introduction of AbstractLuxWrapperLayer. This behaves exactly like the old singleton container. For example, the old AbstractExplicitContainerLayer{(:a,)} is equivalent to AbstractLuxWrapperLayer{:a}.

    WeightInitializers.jl

    This was a major release to signify the stability of the API. There were no breaking changes. We do support a wider range of RNG types, see Supported RNG Types for more details.

    MLDataDevices.jl

    This is the most aggressive change that was made. We renamed the LuxDeviceUtils.jl package to MLDataDevices.jl, to allow for non-Lux packages to use this shared device management abstraction.

    Deprecation of LuxDeviceUtils.jl

    This also marks the deprecation of the LuxDeviceUtils.jl package. We won't be making any updates to that package, including fixing any bugs. All users should switch to MLDataDevices.jl instead.

    Breaking Changes

    • Lux(___)Device objects have been renamed to (___)Device. For example, LuxCUDADevice has been renamed to CUDADevice.

    • Lux(___)Adaptor objects have been removed. The corresponding Device objects should be used directly instead.

    New Major Features

    • DeviceIterator provides a generalization of CUDA.CuIterator and works for all backends and more data types (using Functors.jl). MLUtils.DataLoader |> gdev now returns a DeviceIterator instead of being a no-op.
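    A minimal sketch of the new behavior (the array shapes and batch size here are illustrative, and this assumes a functional GPU backend is loaded; otherwise gpu_device() falls back to cpu_device()):

    ```julia
    using MLDataDevices, MLUtils

    gdev = gpu_device()
    loader = DataLoader((rand(Float32, 2, 128), rand(Float32, 1, 128)); batchsize=16)

    # In v1 this returns a DeviceIterator that transfers each batch lazily,
    # instead of being a no-op as in LuxDeviceUtils.jl.
    for (x, y) in gdev(loader)
        # x and y now live on the selected device
    end
    ```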

    Lux.jl

    Breaking Changes (Removed Functionality)

    • Direct reexport of NNlib has been removed. We reexport selected functionality from NNlib. Directly load NNlib if you need to use the other functions.

    • Flattening of Chain layers has been removed, and the corresponding disable_optimizations kwarg has been removed.

    • Some layers overloaded Base.keys; these overloads have been removed. They were mostly undocumented and weren't supposed to be used outside of the Lux.jl package.

    • Training.TrainState construction with rng has been removed.

    • Older versions of Preferences have been removed.

    • disable_stacktrace_truncation! has been removed. From Julia 1.9 onwards, stacktrace truncation is enabled by default.

    • Certain Experimental features were present outside the Lux.Experimental module. These have been removed; use them via Lux.Experimental instead. Run Julia with depwarn set to error and Lux v0.5 to see the deprecations.

    • Lux.Experimental.@layer_map is no longer needed and has been removed. The name of the variable prevents writing generic functions and is no longer prepended to the KeyPath. See the docstring of Lux.Experimental.layer_map for more details.

    • allow_fast_activation kwarg has been removed completely. Pass an anonymous function as the activation to prevent internal modifications to the activation function.

    Breaking Changes (Moved Functionality)

    • Lux.Experimental.Training has been moved to Lux.Training. We guarantee SemVer on this new module.

    • Lux.cpu and Lux.gpu have been removed. Use cpu_device and gpu_device instead.

    • Experimental.@compact can be directly used via @compact now.

    • Experimental.StatefulLuxLayer has been moved to Lux.StatefulLuxLayer.

    • st_fixed_path kwarg has been removed from Lux.StatefulLuxLayer; instead, use it as StatefulLuxLayer{st_fixed_path}(...).

    • Strings as inputs to Lux.Experimental.layer_map and Lux.Experimental.@debug_mode have been removed; use Functors.KeyPath instead.

    • CrossCor has been removed. Use Conv(args...; kwargs..., cross_correlation=true) instead.
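    A hypothetical migration sketch (the kernel size and channel counts are placeholders, not from the original docs):

    ```julia
    using Lux

    # v0.5: layer = CrossCor((3, 3), 3 => 16, relu)
    # v1: the same layer is spelled as a Conv with cross_correlation=true
    layer = Conv((3, 3), 3 => 16, relu; cross_correlation=true)
    ```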

    Breaking Changes (Changes in Defaults)

    • Conv and ConvTranspose use an initialization based on the activation function, taken from Pytorch. Pytorch assumes the activation function is leakyrelu to compute the gain, however, we compute the gain based on the activation function passed in to the layer.

    • Upsample now has an align_corners keyword argument, which defaults to false. Previously this was always true.

    • Dense and Bilinear have updated default initializations to align with the defaults from Pytorch. See the documentation for more details.

    • InstanceNorm now defaults to affine=false instead of affine=true.

    • Embedding now defaults to init_weight=rand32 instead of init_weight=randn32.

    • Recurrent Cells - RNNCell, LSTMCell, and GRUCell now have different default initializations. See the documentation for more details.

    New Features

    • InstanceNorm now supports tracking statistics.

    • RNNCell and LSTMCell add bias_ih and bias_hh to the parameters to align with Pytorch. Both are controlled using init_bias and use_bias.

    • ConvTranspose allows flipkernel=true via cross_correlation=true. This makes it efficient for MIOpen.

    • ConvTranspose now has an outpad keyword argument, which is used to increase the size of the output in the desired dimensions.

    • Pooling Layers based on lpnorm have been added – LPPool, GlobalLPPool, and AdaptiveLPPool.

    ',30)]))}const L=a(r,[["render",n]]);export{p as __pageData,L as default}; diff --git a/dev/assets/introduction_updating_to_v1.md.DYJFRVTZ.lean.js b/dev/assets/introduction_updating_to_v1.md.DYJFRVTZ.lean.js new file mode 100644 index 0000000000..680deb9bdf --- /dev/null +++ b/dev/assets/introduction_updating_to_v1.md.DYJFRVTZ.lean.js @@ -0,0 +1 @@ +import{_ as a,c as o,a2 as i,o as t}from"./chunks/framework.BetCMmtc.js";const p=JSON.parse('{"title":"Updating to Lux v1","description":"","frontmatter":{},"headers":[],"relativePath":"introduction/updating_to_v1.md","filePath":"introduction/updating_to_v1.md","lastUpdated":null}'),r={name:"introduction/updating_to_v1.md"};function n(d,e,l,s,c,u){return t(),o("div",null,e[0]||(e[0]=[i("",30)]))}const L=a(r,[["render",n]]);export{p as __pageData,L as default}; diff --git a/dev/assets/manual_autodiff.md.D-CoZgun.lean.js b/dev/assets/manual_autodiff.md.D-CoZgun.lean.js deleted file mode 100644 index c2971acb8d..0000000000 --- a/dev/assets/manual_autodiff.md.D-CoZgun.lean.js +++ /dev/null @@ -1 +0,0 @@ -import{_ as e,c as l,a2 as a,o}from"./chunks/framework.I-x9Gl6h.js";const p=JSON.parse('{"title":"Automatic Differentiation","description":"","frontmatter":{},"headers":[],"relativePath":"manual/autodiff.md","filePath":"manual/autodiff.md","lastUpdated":null}'),s={name:"manual/autodiff.md"};function r(n,t,i,f,d,c){return o(),l("div",null,t[0]||(t[0]=[a('

    Automatic Differentiation

    Lux is not an AD package, but it composes well with most of the AD packages available in the Julia ecosystem. This document lists the current level of support for various AD packages in Lux. Additionally, we provide some convenience functions for working with AD.

    Overview

    AD PackageModeCPUGPUTPUNested 2nd Order ADSupport Class
    Reactant.jl[1] + Enzyme.jlReverse✔️✔️✔️✔️Tier I
    ChainRules.jl[2]Reverse✔️✔️✔️Tier I
    Enzyme.jlReverse✔️[3][3:1]Tier I[4]
    Zygote.jlReverse✔️✔️✔️Tier I
    ForwardDiff.jlForward✔️✔️✔️Tier I
    ReverseDiff.jlReverse✔️Tier II
    Tracker.jlReverse✔️✔️Tier II
    Mooncake.jlReverse[3:2]Tier III
    Diffractor.jlForward[3:3][3:4][3:5]Tier III

    Recommendations

    • For CPU Use Cases:

      1. Use Reactant.jl + Enzyme.jl for the best performance as well as mutation support. When available, this is the most reliable and fastest option.

      2. Use Zygote.jl for the best performance without Reactant.jl. This is the most reliable and fastest option for CPU for the time being. (We are working on faster Enzyme support for CPU.)

      3. Use Enzyme.jl, if there are mutations in the code and/or Zygote.jl fails.

      4. If Enzyme.jl fails for some reason, (open an issue and) try ReverseDiff.jl (possibly with compiled mode).

    • For GPU Use Cases:

      1. Use Reactant.jl + Enzyme.jl for the best performance. This is the most reliable and fastest option, but it presently only supports NVIDIA GPUs; AMD GPUs are currently not supported.

      2. Use Zygote.jl for the best performance on non-NVIDIA GPUs. This is the most reliable and fastest non-Reactant.jl option for GPU for the time being. We are working on supporting Enzyme.jl without Reactant.jl on GPU as well.

    • For TPU Use Cases:

      1. Use Reactant.jl. This is the only supported (and fastest) option.

    Support Class

    1. Tier I: These packages are fully supported and have been tested extensively. Often have special rules to enhance performance. Issues for these backends take the highest priority.

    2. Tier II: These packages are supported and extensively tested but often don't have the best performance. Issues against these backends are less critical, but we fix them when possible. (Some specific edge cases, especially with AMDGPU, are known to fail here)

    3. Tier III: We don't know if these packages currently work with Lux. We'd love to add tests for these backends, but currently these are not our priority.

    Footnotes


    1. Note that Reactant.jl is not really an AD package, but a tool for compiling functions, including the use of EnzymeMLIR for AD via Enzyme.jl. We have first-class support for the usage of Reactant.jl for inference and training when using Enzyme.jl for differentiation. ↩︎

    2. Note that ChainRules.jl is not really an AD package, but we have first-class support for packages that use rrules. ↩︎

    3. This feature is supported downstream, but we don't extensively test it to ensure that it works with Lux. ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎

    4. Currently Enzyme outperforms other AD packages in terms of CPU performance. However, there are some edge cases where it might not work with Lux when not using Reactant. We are working on improving the compatibility. Please report any issues you encounter and try Reactant if something fails. ↩︎

    ',11)]))}const u=e(s,[["render",r]]);export{p as __pageData,u as default}; diff --git a/dev/assets/manual_autodiff.md.D-CoZgun.js b/dev/assets/manual_autodiff.md.E3SgD0p5.js similarity index 99% rename from dev/assets/manual_autodiff.md.D-CoZgun.js rename to dev/assets/manual_autodiff.md.E3SgD0p5.js index c2971acb8d..7f0410ae06 100644 --- a/dev/assets/manual_autodiff.md.D-CoZgun.js +++ b/dev/assets/manual_autodiff.md.E3SgD0p5.js @@ -1 +1 @@ -import{_ as e,c as l,a2 as a,o}from"./chunks/framework.I-x9Gl6h.js";const p=JSON.parse('{"title":"Automatic Differentiation","description":"","frontmatter":{},"headers":[],"relativePath":"manual/autodiff.md","filePath":"manual/autodiff.md","lastUpdated":null}'),s={name:"manual/autodiff.md"};function r(n,t,i,f,d,c){return o(),l("div",null,t[0]||(t[0]=[a('

    Automatic Differentiation

    Lux is not an AD package, but it composes well with most of the AD packages available in the Julia ecosystem. This document lists the current level of support for various AD packages in Lux. Additionally, we provide some convenience functions for working with AD.

    Overview

    AD PackageModeCPUGPUTPUNested 2nd Order ADSupport Class
    Reactant.jl[1] + Enzyme.jlReverse✔️✔️✔️✔️Tier I
    ChainRules.jl[2]Reverse✔️✔️✔️Tier I
    Enzyme.jlReverse✔️[3][3:1]Tier I[4]
    Zygote.jlReverse✔️✔️✔️Tier I
    ForwardDiff.jlForward✔️✔️✔️Tier I
    ReverseDiff.jlReverse✔️Tier II
    Tracker.jlReverse✔️✔️Tier II
    Mooncake.jlReverse[3:2]Tier III
    Diffractor.jlForward[3:3][3:4][3:5]Tier III

    Recommendations

    • For CPU Use Cases:

      1. Use Reactant.jl + Enzyme.jl for the best performance as well as mutation support. When available, this is the most reliable and fastest option.

      2. Use Zygote.jl for the best performance without Reactant.jl. This is the most reliable and fastest option for CPU for the time being. (We are working on faster Enzyme support for CPU.)

      3. Use Enzyme.jl, if there are mutations in the code and/or Zygote.jl fails.

      4. If Enzyme.jl fails for some reason, (open an issue and) try ReverseDiff.jl (possibly with compiled mode).

    • For GPU Use Cases:

      1. Use Reactant.jl + Enzyme.jl for the best performance. This is the most reliable and fastest option, but it presently only supports NVIDIA GPUs; AMD GPUs are currently not supported.

      2. Use Zygote.jl for the best performance on non-NVIDIA GPUs. This is the most reliable and fastest non-Reactant.jl option for GPU for the time being. We are working on supporting Enzyme.jl without Reactant.jl on GPU as well.

    • For TPU Use Cases:

      1. Use Reactant.jl. This is the only supported (and fastest) option.

    Support Class

    1. Tier I: These packages are fully supported and have been tested extensively. Often have special rules to enhance performance. Issues for these backends take the highest priority.

    2. Tier II: These packages are supported and extensively tested but often don't have the best performance. Issues against these backends are less critical, but we fix them when possible. (Some specific edge cases, especially with AMDGPU, are known to fail here)

    3. Tier III: We don't know if these packages currently work with Lux. We'd love to add tests for these backends, but currently these are not our priority.

    Footnotes


    1. Note that Reactant.jl is not really an AD package, but a tool for compiling functions, including the use of EnzymeMLIR for AD via Enzyme.jl. We have first-class support for the usage of Reactant.jl for inference and training when using Enzyme.jl for differentiation. ↩︎

    2. Note that ChainRules.jl is not really an AD package, but we have first-class support for packages that use rrules. ↩︎

    3. This feature is supported downstream, but we don't extensively test it to ensure that it works with Lux. ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎

    4. Currently Enzyme outperforms other AD packages in terms of CPU performance. However, there are some edge cases where it might not work with Lux when not using Reactant. We are working on improving the compatibility. Please report any issues you encounter and try Reactant if something fails. ↩︎

    ',11)]))}const u=e(s,[["render",r]]);export{p as __pageData,u as default}; +import{_ as e,c as l,a2 as a,o}from"./chunks/framework.BetCMmtc.js";const p=JSON.parse('{"title":"Automatic Differentiation","description":"","frontmatter":{},"headers":[],"relativePath":"manual/autodiff.md","filePath":"manual/autodiff.md","lastUpdated":null}'),s={name:"manual/autodiff.md"};function r(n,t,i,f,d,c){return o(),l("div",null,t[0]||(t[0]=[a('

    Automatic Differentiation

    Lux is not an AD package, but it composes well with most of the AD packages available in the Julia ecosystem. This document lists the current level of support for various AD packages in Lux. Additionally, we provide some convenience functions for working with AD.

    Overview

    AD PackageModeCPUGPUTPUNested 2nd Order ADSupport Class
    Reactant.jl[1] + Enzyme.jlReverse✔️✔️✔️✔️Tier I
    ChainRules.jl[2]Reverse✔️✔️✔️Tier I
    Enzyme.jlReverse✔️[3][3:1]Tier I[4]
    Zygote.jlReverse✔️✔️✔️Tier I
    ForwardDiff.jlForward✔️✔️✔️Tier I
    ReverseDiff.jlReverse✔️Tier II
    Tracker.jlReverse✔️✔️Tier II
    Mooncake.jlReverse[3:2]Tier III
    Diffractor.jlForward[3:3][3:4][3:5]Tier III

    Recommendations

    • For CPU Use Cases:

      1. Use Reactant.jl + Enzyme.jl for the best performance as well as mutation support. When available, this is the most reliable and fastest option.

      2. Use Zygote.jl for the best performance without Reactant.jl. This is the most reliable and fastest option for CPU for the time being. (We are working on faster Enzyme support for CPU.)

      3. Use Enzyme.jl, if there are mutations in the code and/or Zygote.jl fails.

      4. If Enzyme.jl fails for some reason, (open an issue and) try ReverseDiff.jl (possibly with compiled mode).

    • For GPU Use Cases:

      1. Use Reactant.jl + Enzyme.jl for the best performance. This is the most reliable and fastest option, but it presently only supports NVIDIA GPUs; AMD GPUs are currently not supported.

      2. Use Zygote.jl for the best performance on non-NVIDIA GPUs. This is the most reliable and fastest non-Reactant.jl option for GPU for the time being. We are working on supporting Enzyme.jl without Reactant.jl on GPU as well.

    • For TPU Use Cases:

      1. Use Reactant.jl. This is the only supported (and fastest) option.
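    As a minimal sketch of the Tier I reverse-mode path recommended above, here is how a gradient through a small Lux model is typically taken with Zygote.jl (the model and loss are illustrative, not from the original docs):

    ```julia
    using Lux, Random, Zygote

    rng = Random.default_rng()
    model = Dense(2 => 1)
    ps, st = Lux.setup(rng, model)
    x = rand(rng, Float32, 2, 4)

    # first(...) extracts the prediction; model also returns the updated state
    loss(ps) = sum(abs2, first(model(x, ps, st)))
    gs = only(Zygote.gradient(loss, ps))  # gradients with the same structure as ps
    ```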

    Support Class

    1. Tier I: These packages are fully supported and have been tested extensively. Often have special rules to enhance performance. Issues for these backends take the highest priority.

    2. Tier II: These packages are supported and extensively tested but often don't have the best performance. Issues against these backends are less critical, but we fix them when possible. (Some specific edge cases, especially with AMDGPU, are known to fail here)

    3. Tier III: We don't know if these packages currently work with Lux. We'd love to add tests for these backends, but currently these are not our priority.

    Footnotes


    1. Note that Reactant.jl is not really an AD package, but a tool for compiling functions, including the use of EnzymeMLIR for AD via Enzyme.jl. We have first-class support for the usage of Reactant.jl for inference and training when using Enzyme.jl for differentiation. ↩︎

    2. Note that ChainRules.jl is not really an AD package, but we have first-class support for packages that use rrules. ↩︎

    3. This feature is supported downstream, but we don't extensively test it to ensure that it works with Lux. ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎

    4. Currently Enzyme outperforms other AD packages in terms of CPU performance. However, there are some edge cases where it might not work with Lux when not using Reactant. We are working on improving the compatibility. Please report any issues you encounter and try Reactant if something fails. ↩︎

    ',11)]))}const u=e(s,[["render",r]]);export{p as __pageData,u as default}; diff --git a/dev/assets/manual_autodiff.md.E3SgD0p5.lean.js b/dev/assets/manual_autodiff.md.E3SgD0p5.lean.js new file mode 100644 index 0000000000..f6072aa506 --- /dev/null +++ b/dev/assets/manual_autodiff.md.E3SgD0p5.lean.js @@ -0,0 +1 @@ +import{_ as e,c as l,a2 as a,o}from"./chunks/framework.BetCMmtc.js";const p=JSON.parse('{"title":"Automatic Differentiation","description":"","frontmatter":{},"headers":[],"relativePath":"manual/autodiff.md","filePath":"manual/autodiff.md","lastUpdated":null}'),s={name:"manual/autodiff.md"};function r(n,t,i,f,d,c){return o(),l("div",null,t[0]||(t[0]=[a("",11)]))}const u=e(s,[["render",r]]);export{p as __pageData,u as default}; diff --git a/dev/assets/manual_compiling_lux_models.md.BQYERvZe.js b/dev/assets/manual_compiling_lux_models.md.73Iw5CoK.js similarity index 97% rename from dev/assets/manual_compiling_lux_models.md.BQYERvZe.js rename to dev/assets/manual_compiling_lux_models.md.73Iw5CoK.js index 63478cf1e5..5585bc04ef 100644 --- a/dev/assets/manual_compiling_lux_models.md.BQYERvZe.js +++ b/dev/assets/manual_compiling_lux_models.md.73Iw5CoK.js @@ -1,4 +1,4 @@ -import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Compiling Lux Models using Reactant.jl","description":"","frontmatter":{},"headers":[],"relativePath":"manual/compiling_lux_models.md","filePath":"manual/compiling_lux_models.md","lastUpdated":null}'),e={name:"manual/compiling_lux_models.md"};function l(p,s,h,k,d,r){return n(),a("div",null,s[0]||(s[0]=[t(`

    Compiling Lux Models using Reactant.jl

    Quoting the Reactant.jl Readme:

    Reactant takes Julia function and compile it into MLIR and run fancy optimizations on top of it, including using EnzymeMLIR for automatic differentiation, and create relevant executables for CPU/GPU/TPU via XLA. It presently operates as a tracing system. Compiled functions will assume the same control flow pattern as was original taken by objects used at compile time, and control flow (e.g. if, for) as well as any type instabilities will be removed. The benefits of this approach is immediately making all such code available for advanced optimization with little developer effort.

    Experimental

    Reactant compilation is a very new feature and is currently experimental. Certain models might not be compilable yet, but we are actively working on it. Open an issue if you encounter any problems.

    julia
    using Lux, Reactant, Enzyme, Random, Zygote
    +import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.BetCMmtc.js";const g=JSON.parse('{"title":"Compiling Lux Models using Reactant.jl","description":"","frontmatter":{},"headers":[],"relativePath":"manual/compiling_lux_models.md","filePath":"manual/compiling_lux_models.md","lastUpdated":null}'),e={name:"manual/compiling_lux_models.md"};function l(p,s,h,k,d,r){return n(),a("div",null,s[0]||(s[0]=[t(`

    Compiling Lux Models using Reactant.jl

    Quoting the Reactant.jl Readme:

    Reactant takes Julia function and compile it into MLIR and run fancy optimizations on top of it, including using EnzymeMLIR for automatic differentiation, and create relevant executables for CPU/GPU/TPU via XLA. It presently operates as a tracing system. Compiled functions will assume the same control flow pattern as was original taken by objects used at compile time, and control flow (e.g. if, for) as well as any type instabilities will be removed. The benefits of this approach is immediately making all such code available for advanced optimization with little developer effort.

    julia
    using Lux, Reactant, Enzyme, Random, Zygote
     using Functors, Optimisers, Printf

    Running on alternate accelerators

    Reactant.set_default_backend("gpu") sets the default backend to CUDA and Reactant.set_default_backend("tpu") sets the default backend to TPU.

    Using the TrainState API

    If you are using the Training.TrainState API, skip to the bottom of this page to see how to train the model without any of this boilerplate.

    We start by defining a simple MLP model:

    julia
    model = Chain(
         Dense(2 => 32, gelu),
         Dense(32 => 32, gelu),
    @@ -13,7 +13,7 @@ import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.I-x9Gl6h.js";const g
     y_ra = y |> xdev
     ps_ra = ps |> xdev
     st_ra = st |> xdev
    -nothing

    First let's run the model as we would normally:

    julia
    pred_lux, _ = model(x, ps, Lux.testmode(st))
    (Float32[0.015869793 0.010564294 … -0.4137662 0.018748894; 0.07865399 0.06953073 … -0.23402624 0.21624334], (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))

    To run it using XLA we need to compile the model. We can do this using the Reactant.@compile macro. Note that the inputs need to be moved to the device using reactant_device first.

    julia
    model_compiled = @compile model(x_ra, ps_ra, Lux.testmode(st_ra))
    Reactant.Compiler.Thunk{Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Symbol("##Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(2 => 32, gelu), layer_2 = Dense(32 => 32, gelu), layer_3 = Dense(32 => 2)), nothing)_reactant#328235"), Tuple{Reactant.ConcreteRArray{Float32, 2}, @NamedTuple{layer_1::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}, layer_2::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}, layer_3::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, true}(Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(2 => 32, gelu), layer_2 = Dense(32 => 32, gelu), layer_3 = Dense(32 => 2)), nothing))

    Now we can test the difference between the results:

    julia
    pred_compiled, _ = model_compiled(x_ra, ps_ra, Lux.testmode(st_ra))
    +nothing

    First let's run the model as we would normally:

    julia
    pred_lux, _ = model(x, ps, Lux.testmode(st))
    (Float32[0.015869793 0.010564294 … -0.4137662 0.018748894; 0.07865399 0.06953073 … -0.23402624 0.21624334], (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))

    To run it using XLA we need to compile the model. We can do this using the Reactant.@compile macro. Note that the inputs need to be moved to the device using reactant_device first.

    julia
    model_compiled = @compile model(x_ra, ps_ra, Lux.testmode(st_ra))
    Reactant.Compiler.Thunk{Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Symbol("##Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(2 => 32, gelu), layer_2 = Dense(32 => 32, gelu), layer_3 = Dense(32 => 2)), nothing)_reactant#328268"), Tuple{Reactant.ConcreteRArray{Float32, 2}, @NamedTuple{layer_1::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}, layer_2::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}, layer_3::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, true}(Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(2 => 32, gelu), layer_2 = Dense(32 => 32, gelu), layer_3 = Dense(32 => 2)), nothing))

    Now we can test the difference between the results:

    julia
    pred_compiled, _ = model_compiled(x_ra, ps_ra, Lux.testmode(st_ra))
     
     pred_lux .- Array(pred_compiled)
    2×32 Matrix{Float32}:
      1.35005f-5  -2.00942f-5  5.03547f-5  …  0.000276357  -1.31615f-5
    @@ -69,4 +69,4 @@ import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.I-x9Gl6h.js";const g
     Iter: [ 700/1000]	Loss: 0.01910517
     Iter: [ 800/1000]	Loss: 0.01325738
     Iter: [ 900/1000]	Loss: 0.01003141
    -Iter: [1000/1000]	Loss: 0.00775477
    `,42)]))}const c=i(e,[["render",l]]);export{g as __pageData,c as default}; +Iter: [1000/1000] Loss: 0.00775477
    `,41)]))}const c=i(e,[["render",l]]);export{g as __pageData,c as default}; diff --git a/dev/assets/manual_compiling_lux_models.md.73Iw5CoK.lean.js b/dev/assets/manual_compiling_lux_models.md.73Iw5CoK.lean.js new file mode 100644 index 0000000000..450f27b3f2 --- /dev/null +++ b/dev/assets/manual_compiling_lux_models.md.73Iw5CoK.lean.js @@ -0,0 +1 @@ +import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.BetCMmtc.js";const g=JSON.parse('{"title":"Compiling Lux Models using Reactant.jl","description":"","frontmatter":{},"headers":[],"relativePath":"manual/compiling_lux_models.md","filePath":"manual/compiling_lux_models.md","lastUpdated":null}'),e={name:"manual/compiling_lux_models.md"};function l(p,s,h,k,d,r){return n(),a("div",null,s[0]||(s[0]=[t("",41)]))}const c=i(e,[["render",l]]);export{g as __pageData,c as default}; diff --git a/dev/assets/manual_compiling_lux_models.md.BQYERvZe.lean.js b/dev/assets/manual_compiling_lux_models.md.BQYERvZe.lean.js deleted file mode 100644 index 63478cf1e5..0000000000 --- a/dev/assets/manual_compiling_lux_models.md.BQYERvZe.lean.js +++ /dev/null @@ -1,72 +0,0 @@ -import{_ as i,c as a,a2 as t,o as n}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Compiling Lux Models using Reactant.jl","description":"","frontmatter":{},"headers":[],"relativePath":"manual/compiling_lux_models.md","filePath":"manual/compiling_lux_models.md","lastUpdated":null}'),e={name:"manual/compiling_lux_models.md"};function l(p,s,h,k,d,r){return n(),a("div",null,s[0]||(s[0]=[t(`

    Compiling Lux Models using Reactant.jl

    Quoting the Reactant.jl Readme:

    Reactant takes a Julia function and compiles it into MLIR, running fancy optimizations on top of it, including using EnzymeMLIR for automatic differentiation, and creating relevant executables for CPU/GPU/TPU via XLA. It presently operates as a tracing system: compiled functions will assume the same control flow pattern as was originally taken by the objects used at compile time, and control flow (e.g. if, for) as well as any type instabilities will be removed. The benefit of this approach is that it immediately makes all such code available for advanced optimization with little developer effort.

    Experimental

    Reactant compilation is a very new feature and is currently experimental. Certain models might not be compilable yet, but we are actively working on it. Open an issue if you encounter any problems.

    julia
    using Lux, Reactant, Enzyme, Random, Zygote
    -using Functors, Optimisers, Printf

    Running on alternate accelerators

    Reactant.set_default_backend("gpu") sets the default backend to CUDA and Reactant.set_default_backend("tpu") sets the default backend to TPU.
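    The backend can be pinned before any device is requested. A minimal sketch (assuming `"cpu"` is also accepted as a backend name, which is convenient on machines without an accelerator):

    julia
    using Reactant
    # Pin the backend before constructing any devices or compiling any functions.
    Reactant.set_default_backend("cpu")  # or "gpu" / "tpu" on capable machines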

    Using the TrainState API

    If you are using the Training.TrainState API, skip to the bottom of this page to see how to train the model without any of this boilerplate.

    We start by defining a simple MLP model:

    julia
    model = Chain(
    -    Dense(2 => 32, gelu),
    -    Dense(32 => 32, gelu),
    -    Dense(32 => 2)
    -)
    -ps, st = Lux.setup(Random.default_rng(), model)
    ((layer_1 = (weight = Float32[0.9670442 -0.36027783; 0.078672916 0.92788666; … ; -0.65058047 -0.47006413; -0.48801818 -0.6615898], bias = Float32[-0.28780195, -0.23392133, 0.084573634, -0.59277534, -0.6795253, 0.47792822, -0.64850235, -0.55131584, -0.33091125, 0.47174177  …  0.07477753, -0.10521463, -0.45745936, 0.19031122, 0.41613227, 0.47329637, -0.68522483, -0.2834571, 0.0235815, 0.61977077]), layer_2 = (weight = Float32[-0.057887085 -0.14646342 … 0.1019723 0.14663221; 0.10022328 -0.09659223 … 0.25911948 -0.008825431; … ; -0.014519578 -0.01100632 … -0.30112675 -0.17886546; 0.21983564 -0.026677115 … -0.030971587 -0.28283697], bias = Float32[0.095548995, 0.10995198, 0.12209795, -0.14433007, 0.11754602, -0.152131, -0.10584956, 0.09469124, 0.09255884, 0.10044085  …  0.07444663, 0.11096934, 0.13462374, 0.15048876, 0.061646424, 0.004753132, 0.08162795, -0.15708117, 0.029835312, 0.005353872]), layer_3 = (weight = Float32[0.005372945 -0.18356045 … 0.052086722 0.07186686; 0.0067291846 0.020219602 … 0.0688707 -0.1961357], bias = Float32[-0.03542879, -0.041368797])), (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))

    We then create a random input and output data:

    julia
    x = randn(Float32, 2, 32)
    -y = x .^ 2
    2×32 Matrix{Float32}:
    - 0.0667989  0.081931  0.337936  2.41127   …  0.926518   1.24719    0.0574222
    - 0.125342   0.126324  0.105     0.885355     0.0636671  0.0058369  0.28691

    We will use reactant_device, similar to gpu_device, to move the arrays to Reactant.

    julia
    const xdev = reactant_device()
    -
    -x_ra = x |> xdev
    -y_ra = y |> xdev
    -ps_ra = ps |> xdev
    -st_ra = st |> xdev
    -nothing

    First let's run the model as we would normally:

    julia
    pred_lux, _ = model(x, ps, Lux.testmode(st))
    (Float32[0.015869793 0.010564294 … -0.4137662 0.018748894; 0.07865399 0.06953073 … -0.23402624 0.21624334], (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))

    To run it using XLA we need to compile the model. We can do this using the Reactant.@compile macro. Note that the inputs need to be moved to the device using reactant_device first.

    julia
    model_compiled = @compile model(x_ra, ps_ra, Lux.testmode(st_ra))
    Reactant.Compiler.Thunk{Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Symbol("##Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(2 => 32, gelu), layer_2 = Dense(32 => 32, gelu), layer_3 = Dense(32 => 2)), nothing)_reactant#328235"), Tuple{Reactant.ConcreteRArray{Float32, 2}, @NamedTuple{layer_1::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}, layer_2::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}, layer_3::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, true}(Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(2 => 32, gelu), layer_2 = Dense(32 => 32, gelu), layer_3 = Dense(32 => 2)), nothing))

    Now we can test the difference between the results:

    julia
    pred_compiled, _ = model_compiled(x_ra, ps_ra, Lux.testmode(st_ra))
    -
    -pred_lux .- Array(pred_compiled)
    2×32 Matrix{Float32}:
    - 1.35005f-5  -2.00942f-5  5.03547f-5  …  0.000276357  -1.31615f-5
    - 7.97957f-6  -5.80922f-5  4.60148f-5     0.000196472  -8.9705f-5

    The difference is very small as we would expect. Now, let's try to differentiate the output of the model. We need to use Enzyme.jl to do this.

    julia
    function loss_function(model, ps, st, x, y)
    -    pred, _ = model(x, ps, st)
    -    return MSELoss()(pred, y)
    -end
    loss_function (generic function with 1 method)

    We will use Zygote.jl to compute the gradient of the loss function for the vanilla model.

    julia
    loss_function(model, ps, st, x, y)
    -
    -∂ps_zyg = only(Zygote.gradient(ps -> loss_function(model, ps, st, x, y), ps))
    (layer_1 = (weight = Float32[0.2601667 -0.09287714; -0.028029898 -0.013659927; … ; -0.07384164 -0.06003239; 0.042984415 0.051605415], bias = Float32[0.12923913, -0.009405821, -0.026362807, -0.014524898, 0.013915376, 0.093436174, 0.08193636, 0.007762789, 0.0010442184, 0.018755732  …  0.06704105, -0.043209497, 0.104868725, 0.014353451, 0.024228822, -0.06582927, 0.010303013, 0.09878272, 0.06784943, -0.08268406]), layer_2 = (weight = Float32[-0.004457889 0.00021804248 … -0.0033274868 -0.008014375; 0.13051206 -0.004689035 … 0.038353663 0.093302496; … ; -0.041253403 0.00088798604 … 0.00074701244 -0.02034735; 0.06339173 -0.0021197717 … 0.012145694 0.06877028], bias = Float32[-0.006240552, 0.13950492, -0.22439213, -0.11326964, -0.02316084, 0.14702773, 0.035196126, 0.1398194, -0.23715453, 0.3266256  …  -0.014224287, 0.009401777, 0.18295963, 0.13164552, 0.16955197, -0.110567965, -0.007434898, 0.118868664, -0.026588852, 0.031815775]), layer_3 = (weight = Float32[-0.677237 -0.19355828 … 0.092198014 -0.33821836; -0.2986417 -0.09485077 … 0.022576151 -0.17590503], bias = Float32[-1.1515998, -0.556467]))

    Now we will compile the gradient function using Reactant.@compile.

    julia
    function enzyme_gradient(model, ps, st, x, y)
    -    return Enzyme.gradient(Enzyme.Reverse, Const(loss_function), Const(model),
    -        ps, Const(st), Const(x), Const(y))[2]
    -end
    -
    -enzyme_gradient_compiled = @compile enzyme_gradient(model, ps_ra, st_ra, x_ra, y_ra)
    -
    -∂ps_enzyme = enzyme_gradient_compiled(model, ps_ra, st_ra, x_ra, y_ra)
    (layer_1 = (weight = Reactant.ConcreteRArray{Float32, 2}(Float32[0.26012868 -0.092793465; -0.028036328 -0.013654154; … ; -0.07376325 -0.059965573; 0.042965017 0.051536214]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[0.12918535, -0.009416734, -0.026323957, -0.014522501, 0.013927094, 0.0934494, 0.081932, 0.0077551096, 0.0010140468, 0.018774498  …  0.06697127, -0.04319424, 0.104844354, 0.014333963, 0.024163416, -0.06574188, 0.010303006, 0.09875466, 0.06776868, -0.082605906])), layer_2 = (weight = Reactant.ConcreteRArray{Float32, 2}(Float32[-0.0044614216 0.00021798005 … -0.003325696 -0.008014115; 0.13048346 -0.0046932907 … 0.03828844 0.09321641; … ; -0.041269857 0.00088779256 … 0.0007467641 -0.020335387; 0.063393466 -0.0021180161 … 0.012151958 0.06875219]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[-0.006239144, 0.13940203, -0.22421768, -0.113252416, -0.023147244, 0.14692664, 0.035161175, 0.13971032, -0.23703608, 0.32649475  …  -0.014197593, 0.009397319, 0.18292144, 0.1315112, 0.16951457, -0.11050306, -0.0074208006, 0.11883854, -0.026577514, 0.03178858])), layer_3 = (weight = Reactant.ConcreteRArray{Float32, 2}(Float32[-0.6766358 -0.19343676 … 0.0921624 -0.33802274; -0.29872745 -0.094833545 … 0.022579487 -0.17570448]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[-1.1515675, -0.5565324])))

    Now we check the difference:

    julia
    fmap(Broadcast.BroadcastFunction(-), ∂ps_zyg, ∂ps_enzyme |> cpu_device())
    (layer_1 = (weight = Float32[3.8027763f-5 -8.367747f-5; 6.429851f-6 -5.7732686f-6; … ; -7.838756f-5 -6.681681f-5; 1.9397587f-5 6.920099f-5], bias = Float32[5.377829f-5, 1.0913238f-5, -3.884919f-5, -2.396293f-6, -1.1717901f-5, -1.3224781f-5, 4.3585896f-6, 7.67922f-6, 3.0171592f-5, -1.876615f-5  …  6.977469f-5, -1.5255064f-5, 2.437085f-5, 1.9487925f-5, 6.5406784f-5, -8.738786f-5, 7.450581f-9, 2.8058887f-5, 8.074939f-5, -7.815659f-5]), layer_2 = (weight = Float32[3.5325065f-6 6.2427716f-8 … -1.7907005f-6 -2.6077032f-7; 2.8595328f-5 4.2556785f-6 … 6.522238f-5 8.608401f-5; … ; 1.6454607f-5 1.9348226f-7 … 2.4831388f-7 -1.1961907f-5; -1.7359853f-6 -1.755543f-6 … -6.2640756f-6 1.809001f-5], bias = Float32[-1.4076941f-6, 0.00010289252, -0.0001744479, -1.7225742f-5, -1.3595447f-5, 0.00010108948, 3.4950674f-5, 0.0001090765, -0.00011844933, 0.0001308322  …  -2.6694499f-5, 4.458241f-6, 3.8191676f-5, 0.00013431907, 3.7401915f-5, -6.490201f-5, -1.409743f-5, 3.0122697f-5, -1.1337921f-5, 2.719462f-5]), layer_3 = (weight = Float32[-0.00060117245 -0.00012151897 … 3.5613775f-5 -0.00019562244; 8.574128f-5 -1.7225742f-5 … -3.3359975f-6 -0.00020055473], bias = Float32[-3.2305717f-5, 6.5386295f-5]))
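    The elementwise differences above are hard to eyeball. A sketch that collapses them to a single number, using fleaves from Functors.jl (already loaded above) to iterate over all parameter arrays:

    julia
    diff_tree = fmap(Broadcast.BroadcastFunction(-), ∂ps_zyg, ∂ps_enzyme |> cpu_device())
    # Largest absolute discrepancy across every weight and bias array
    max_abs_diff = maximum(maximum(abs, leaf) for leaf in fleaves(diff_tree))
    @assert max_abs_diff < 1f-2  # both backends agree to within Float32 noise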

    Using the TrainState API

    Debugging TrainState API Failures

    If the code fails to compile with Reactant, it is useful to dump the HLO. Starting the Julia session with the LUX_DUMP_REACTANT_HLO_OPTIMIZE environment variable set to no_enzyme, false, or true will dump the HLO to a file (the filename will be displayed). This is useful information to provide when opening an issue.

    Alternatively, you can set the global reference Lux.DUMP_REACTANT_HLO_OPT_MODE to a symbol corresponding to the optimize keyword argument to @code_hlo.
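    A sketch of both options (the script name is a placeholder, and treating the global as a writable Ref is an assumption):

    julia
    # From the shell, before the session starts:
    #   LUX_DUMP_REACTANT_HLO_OPTIMIZE=no_enzyme julia --project my_training_script.jl
    # Or, inside a running session (assuming the global is a writable Ref):
    Lux.DUMP_REACTANT_HLO_OPT_MODE[] = :no_enzyme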

    Now that we have seen the low-level API, let's see how to train the model without any of this boilerplate. Simply follow these steps:

    1. Create a device using reactant_device. Remember to load Reactant.jl before doing this.

    2. Similar to other device functions, move the model, parameters, states and data to the device. Note that you might want to use DeviceIterator to lazily move the data loader to the device.

    3. Construct a TrainState using Training.TrainState.

    4. Most importantly, use AutoEnzyme while calling Training.single_train_step! or Training.single_train_step.

    julia
    model = Chain(
    -    Dense(2 => 4, gelu),
    -    Dense(4 => 4, gelu),
    -    Dense(4 => 2)
    -)
    -ps, st = Lux.setup(Random.default_rng(), model)
    -
    -x_ra = [randn(Float32, 2, 32) for _ in 1:32]
    -y_ra = [xᵢ .^ 2 for xᵢ in x_ra]
    -ps_ra = ps |> xdev
    -st_ra = st |> xdev
    -
    -dataloader = DeviceIterator(xdev, zip(x_ra, y_ra))
    -
    -function train_model(model, ps, st, dataloader)
    -    train_state = Training.TrainState(model, ps, st, Adam(0.001f0))
    -
    -    for iteration in 1:1000
    -        for (i, (xᵢ, yᵢ)) in enumerate(dataloader)
    -            _, loss, _, train_state = Training.single_train_step!(
    -                AutoEnzyme(), MSELoss(), (xᵢ, yᵢ), train_state)
    -            if (iteration % 100 == 0 || iteration == 1) && i == 1
    -                @printf("Iter: [%4d/%4d]\\tLoss: %.8f\\n", iteration, 1000, loss)
    -            end
    -        end
    -    end
    -
    -    return train_state
    -end
    -
    -train_model(model, ps_ra, st_ra, dataloader)
    Iter: [   1/1000]	Loss: 13.21854877
    -Iter: [ 100/1000]	Loss: 2.58912802
    -Iter: [ 200/1000]	Loss: 1.13861585
    -Iter: [ 300/1000]	Loss: 0.37783703
    -Iter: [ 400/1000]	Loss: 0.12912875
    -Iter: [ 500/1000]	Loss: 0.05560146
    -Iter: [ 600/1000]	Loss: 0.02995433
    -Iter: [ 700/1000]	Loss: 0.01910517
    -Iter: [ 800/1000]	Loss: 0.01325738
    -Iter: [ 900/1000]	Loss: 0.01003141
    -Iter: [1000/1000]	Loss: 0.00775477
    `,42)]))}const c=i(e,[["render",l]]);export{g as __pageData,c as default}; diff --git a/dev/assets/manual_debugging.md.BbPueqSs.js b/dev/assets/manual_debugging.md.BGHrv_QR.js similarity index 95% rename from dev/assets/manual_debugging.md.BbPueqSs.js rename to dev/assets/manual_debugging.md.BGHrv_QR.js index 06a5b1a529..922a848048 100644 --- a/dev/assets/manual_debugging.md.BbPueqSs.js +++ b/dev/assets/manual_debugging.md.BGHrv_QR.js @@ -1,4 +1,4 @@ -import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Debugging Lux Models","description":"","frontmatter":{},"headers":[],"relativePath":"manual/debugging.md","filePath":"manual/debugging.md","lastUpdated":null}'),t={name:"manual/debugging.md"};function l(p,s,h,k,r,d){return e(),i("div",null,s[0]||(s[0]=[n(`

    Debugging Lux Models

    Debugging DNNs can be very painful. Especially with the gigantic stacktraces for Lux, it is even harder to pinpoint which particular layer errored out. This page describes some useful tools that ship with Lux that can help you debug your models.

    TL;DR

    Simply wrap your model with Lux.Experimental.@debug_mode!!

    Don't Forget

    Remember to use the non Debug mode model after you finish debugging. Debug mode models are way slower.

    Let us construct a model which has an obviously incorrect dimension. In this example, you will see how easy it is to pin-point the problematic layer.

    Incorrect Model Specification: Dimension Mismatch Problems

    julia
    using Lux, Random
    +import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.BetCMmtc.js";const o=JSON.parse('{"title":"Debugging Lux Models","description":"","frontmatter":{},"headers":[],"relativePath":"manual/debugging.md","filePath":"manual/debugging.md","lastUpdated":null}'),l={name:"manual/debugging.md"};function t(p,s,h,k,r,d){return e(),i("div",null,s[0]||(s[0]=[n(`

    Debugging Lux Models

    Debugging DNNs can be very painful. Especially with the gigantic stacktraces for Lux, it is even harder to pinpoint which particular layer errored out. This page describes some useful tools that ship with Lux that can help you debug your models.

    TL;DR

    Simply wrap your model with Lux.Experimental.@debug_mode!!

    Don't Forget

    Remember to use the non Debug mode model after you finish debugging. Debug mode models are way slower.

    Let us construct a model which has an obviously incorrect dimension. In this example, you will see how easy it is to pin-point the problematic layer.

    Incorrect Model Specification: Dimension Mismatch Problems

    julia
    using Lux, Random
     
     model = Chain(Dense(1 => 16, relu), Chain(Dense(16 => 3), Dense(1 => 1)), BatchNorm(1))
     
    @@ -40,7 +40,7 @@ import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const g
     [ Info: Input Type: Matrix{Float32} | Input Structure: (3, 2).
     [ Info: Running Layer: Dense(1 => 1) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2)!
     ┌ Error: Layer Dense(1 => 1) failed!! This layer is present at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2).
    -└ @ Lux.Experimental /var/lib/buildkite-agent/builds/gpuci-3/julialang/lux-dot-jl/src/contrib/debug.jl:103
    +└ @ Lux.Experimental /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/src/contrib/debug.jl:103
     DimensionMismatch("A has shape (1, 1) but B has shape (3, 2)")

    So now we know that model.layers.layer_2.layers.layer_2 is the problematic layer. Let us fix that layer and see what happens:

    julia
    model = Chain(Dense(1 => 16, relu),
         Chain(
             Dense(16 => 3),  
    @@ -95,8 +95,12 @@ import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const g
     model(x, ps, st)
    (Float32[0.9999881 -0.9999881], (layer_1 = NamedTuple(), layer_2 = (layer_1 = NamedTuple(), layer_2 = NamedTuple()), layer_3 = (running_mean = Float32[0.0026271285], running_var = Float32[0.98396176], training = Val{true}())))

    Let us define a custom backward pass to introduce some NaNs:

    julia
    function CRC.rrule(::typeof(offending_layer), x)
         y = offending_layer(x)
         function ∇offending_layer(Δ)
    -        Δ[1] = NaN
    -        return NoTangent(), Δ
    +        problematicΔ = CRC.@thunk begin
    +            Δ = CRC.unthunk(Δ)
    +            Δ[1] = NaN
    +            return Δ
    +        end
    +        return NoTangent(), problematicΔ
         end
         return y, ∇offending_layer
     end

    Let us compute the gradient of the layer now:

    julia
    Zygote.gradient(ps -> sum(first(model(x, ps, st))), ps)
    ((layer_1 = (weight = Float32[0.0; 0.0; … ; 0.0; 0.0;;], bias = Float32[0.0, 0.0, 0.0, 0.0, NaN, 0.0, NaN, NaN, 0.0, 0.0, 0.0, NaN, NaN, NaN, 0.0, 0.0]), layer_2 = (layer_1 = (weight = Float32[NaN NaN … NaN NaN], bias = Float32[NaN]), layer_2 = nothing), layer_3 = (scale = Float32[0.0], bias = Float32[2.0])),)

    Oh no!! A NaN is present in the gradient of ps. Let us run the debug model and see where the NaN occurred:

    julia
    model_debug = Lux.Experimental.@debug_mode model nan_check=:both
    @@ -117,4 +121,4 @@ import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const g
     [ Info: Input Type: Matrix{Float32} | Input Structure: (1, 2).
     [ Info: Running Layer: BatchNorm(1, affine=true, track_stats=true) at location KeyPath(:model, :layers, :layer_3)!
     [ Info: Output Type: Matrix{Float32} | Output Structure: (1, 2).
    -DomainError(Float32[NaN 0.0], "NaNs detected in pullback output (x)  of layer WrappedFunction(offending_layer) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2).")

    And there you go, our debug layer prints that the problem is in WrappedFunction(offending_layer) at location model.layers.layer_2.layers.layer_2! Once we fix the pullback of the layer, we will fix the NaNs.

    Conclusion

    In this manual section, we have discussed tracking down errors in Lux models. We have covered tracking incorrect model specifications and NaNs in forward and backward passes. However, remember that this is an Experimental feature, and there might be edge cases that don't work correctly. If you find any such cases, please open an issue on GitHub!

    `,49)]))}const c=a(t,[["render",l]]);export{g as __pageData,c as default}; +DomainError(Float32[NaN 0.0], "NaNs detected in pullback output (x) of layer WrappedFunction(offending_layer) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2).")

    And there you go, our debug layer prints that the problem is in WrappedFunction(offending_layer) at location model.layers.layer_2.layers.layer_2! Once we fix the pullback of the layer, we will fix the NaNs.
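    For reference, a sketch of a corrected pullback: since offending_layer is x -> 2 .* x, the correct cotangent is simply 2Δ, with no mutation:

    julia
    function CRC.rrule(::typeof(offending_layer), x)
        y = offending_layer(x)
        # Pullback of x -> 2x is Δ -> 2Δ; no writes into Δ, so no NaNs.
        ∇offending_layer(Δ) = (NoTangent(), 2 .* CRC.unthunk(Δ))
        return y, ∇offending_layer
    end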

    Conclusion

    In this manual section, we have discussed tracking down errors in Lux models. We have covered tracking incorrect model specifications and NaNs in forward and backward passes. However, remember that this is an Experimental feature, and there might be edge cases that don't work correctly. If you find any such cases, please open an issue on GitHub!

    `,49)]))}const c=a(l,[["render",t]]);export{o as __pageData,c as default}; diff --git a/dev/assets/manual_debugging.md.BGHrv_QR.lean.js b/dev/assets/manual_debugging.md.BGHrv_QR.lean.js new file mode 100644 index 0000000000..6e2a94d935 --- /dev/null +++ b/dev/assets/manual_debugging.md.BGHrv_QR.lean.js @@ -0,0 +1 @@ +import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.BetCMmtc.js";const o=JSON.parse('{"title":"Debugging Lux Models","description":"","frontmatter":{},"headers":[],"relativePath":"manual/debugging.md","filePath":"manual/debugging.md","lastUpdated":null}'),l={name:"manual/debugging.md"};function t(p,s,h,k,r,d){return e(),i("div",null,s[0]||(s[0]=[n("",49)]))}const c=a(l,[["render",t]]);export{o as __pageData,c as default}; diff --git a/dev/assets/manual_debugging.md.BbPueqSs.lean.js b/dev/assets/manual_debugging.md.BbPueqSs.lean.js deleted file mode 100644 index 06a5b1a529..0000000000 --- a/dev/assets/manual_debugging.md.BbPueqSs.lean.js +++ /dev/null @@ -1,120 +0,0 @@ -import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Debugging Lux Models","description":"","frontmatter":{},"headers":[],"relativePath":"manual/debugging.md","filePath":"manual/debugging.md","lastUpdated":null}'),t={name:"manual/debugging.md"};function l(p,s,h,k,r,d){return e(),i("div",null,s[0]||(s[0]=[n(`

    Debugging Lux Models

    Debugging DNNs can be very painful. Especially with the gigantic stacktraces for Lux, it is even harder to pinpoint which particular layer errored out. This page describes some useful tools that ship with Lux that can help you debug your models.

    TL;DR

    Simply wrap your model with Lux.Experimental.@debug_mode!!

    Don't Forget

    Remember to use the non Debug mode model after you finish debugging. Debug mode models are way slower.

    Let us construct a model which has an obviously incorrect dimension. In this example, you will see how easy it is to pin-point the problematic layer.

    Incorrect Model Specification: Dimension Mismatch Problems

    julia
    using Lux, Random
    -
    -model = Chain(Dense(1 => 16, relu), Chain(Dense(16 => 3), Dense(1 => 1)), BatchNorm(1))
    -
    -model_debug = Lux.Experimental.@debug_mode model
    Chain(
    -    layer_1 = DebugLayer(
    -        layer = Dense(1 => 16, relu),   # 32 parameters
    -    ),
    -    layer_2 = Chain(
    -        layer_1 = DebugLayer(
    -            layer = Dense(16 => 3),     # 51 parameters
    -        ),
    -        layer_2 = DebugLayer(
    -            layer = Dense(1 => 1),      # 2 parameters
    -        ),
    -    ),
    -    layer_3 = DebugLayer(
    -        layer = BatchNorm(1, affine=true, track_stats=true),  # 2 parameters, plus 3
    -    ),
    -)         # Total: 87 parameters,
    -          #        plus 3 states.

    Note that we can use the parameters and states for model itself in model_debug, no need to make any changes. If you ran the original model this is the kind of error you would see:

    julia
    rng = Xoshiro(0)
    -
    -ps, st = Lux.setup(rng, model)
    -x = randn(rng, Float32, 1, 2)
    -
    -try
    -    model(x, ps, st)
    -catch e
    -    println(e)
    -end
    DimensionMismatch("A has shape (1, 1) but B has shape (3, 2)")

    Of course, this error comes with a detailed stacktrace, but it is still not very useful. Now let's try using the debug mode model:

    julia
    try
    -    model_debug(x, ps, st)
    -catch e
    -    println(e)
    -end
    [ Info: Input Type: Matrix{Float32} | Input Structure: (1, 2).
    -[ Info: Running Layer: Dense(1 => 16, relu) at location KeyPath(:model, :layers, :layer_1)!
    -[ Info: Output Type: Matrix{Float32} | Output Structure: (16, 2).
    -[ Info: Input Type: Matrix{Float32} | Input Structure: (16, 2).
    -[ Info: Running Layer: Dense(16 => 3) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_1)!
    -[ Info: Output Type: Matrix{Float32} | Output Structure: (3, 2).
    -[ Info: Input Type: Matrix{Float32} | Input Structure: (3, 2).
    -[ Info: Running Layer: Dense(1 => 1) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2)!
    -┌ Error: Layer Dense(1 => 1) failed!! This layer is present at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2).
    -└ @ Lux.Experimental /var/lib/buildkite-agent/builds/gpuci-3/julialang/lux-dot-jl/src/contrib/debug.jl:103
    -DimensionMismatch("A has shape (1, 1) but B has shape (3, 2)")

    So now we know that model.layers.layer_2.layers.layer_2 is the problematic layer. Let us fix that layer and see what happens:

    julia
    model = Chain(Dense(1 => 16, relu),
    -    Chain(
    -        Dense(16 => 3),  
    -        Dense(16 => 1),  
    -        Dense(1 => 1)),
    -    BatchNorm(1))
    julia
    model_fixed = Chain(Dense(1 => 16, relu), Chain(Dense(16 => 1), Dense(1 => 1)),
    -    BatchNorm(1))
    -
    -ps, st = Lux.setup(rng, model_fixed)
    -
    -model_fixed(x, ps, st)
    (Float32[-0.99998605 0.999986], (layer_1 = NamedTuple(), layer_2 = (layer_1 = NamedTuple(), layer_2 = NamedTuple()), layer_3 = (running_mean = Float32[0.07133968], running_var = Float32[0.971899], training = Val{true}())))

    Voila!! We have tracked down and fixed the problem.

    Tracking down NaNs

    Have you encountered those pesky little NaNs in your training? They are very hard to track down. We will artificially introduce NaNs in our model and see how we can track down the offending layer.

    We can set nan_check to :forward, :backward or :both to check for NaNs in the debug model. (or even disable it by setting it to :none)

    julia
    model = Chain(Dense(1 => 16, relu), Chain(Dense(16 => 1), Dense(1 => 1)),
    -    BatchNorm(1))
    -
    -ps, st = Lux.setup(rng, model)
    -
    -model_debug = Lux.Experimental.@debug_mode model nan_check=:both
    Chain(
    -    layer_1 = DebugLayer(
    -        layer = Dense(1 => 16, relu),   # 32 parameters
    -    ),
    -    layer_2 = Chain(
    -        layer_1 = DebugLayer(
    -            layer = Dense(16 => 1),     # 17 parameters
    -        ),
    -        layer_2 = DebugLayer(
    -            layer = Dense(1 => 1),      # 2 parameters
    -        ),
    -    ),
    -    layer_3 = DebugLayer(
    -        layer = BatchNorm(1, affine=true, track_stats=true),  # 2 parameters, plus 3
    -    ),
    -)         # Total: 53 parameters,
    -          #        plus 3 states.

    Let us set a value in the parameter to NaN:

    julia
    ps.layer_2.layer_2.weight[1, 1] = NaN
    NaN

    Now let us run the model

    julia
    model(x, ps, st)
    (Float32[NaN NaN], (layer_1 = NamedTuple(), layer_2 = (layer_1 = NamedTuple(), layer_2 = NamedTuple()), layer_3 = (running_mean = Float32[NaN], running_var = Float32[NaN], training = Val{true}())))

    Ah, as expected, our output is NaN. But it is not very clear how to track down where the first NaN occurred. Let's run the debug model and check:

    julia
    try
    -    model_debug(x, ps, st)
    -catch e
    -    println(e)
    -end
    [ Info: Input Type: Matrix{Float32} | Input Structure: (1, 2).
[ Info: Running Layer: Dense(1 => 16, relu) at location KeyPath(:model, :layers, :layer_1)!
[ Info: Output Type: Matrix{Float32} | Output Structure: (16, 2).
[ Info: Input Type: Matrix{Float32} | Input Structure: (16, 2).
[ Info: Running Layer: Dense(16 => 1) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_1)!
[ Info: Output Type: Matrix{Float32} | Output Structure: (1, 2).
[ Info: Input Type: Matrix{Float32} | Input Structure: (1, 2).
[ Info: Running Layer: Dense(1 => 1) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2)!
DomainError(Float32[NaN;;], "NaNs detected in parameters (@ KeyPath(:weight,))  of layer Dense(1 => 1) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2).")

And we have figured it out! The first NaN occurred in the parameters of model.layers.layer_2.layers.layer_2! But what if the NaN occurs in the reverse pass? Let us define a custom layer and introduce a fake NaN in the backward pass.

    julia
    using ChainRulesCore, Zygote

const CRC = ChainRulesCore

offending_layer(x) = 2 .* x
    offending_layer (generic function with 1 method)
    julia
    model = Chain(Dense(1 => 16, relu), Chain(Dense(16 => 1), offending_layer), BatchNorm(1))

ps, st = Lux.setup(rng, model)

model(x, ps, st)
    (Float32[0.9999881 -0.9999881], (layer_1 = NamedTuple(), layer_2 = (layer_1 = NamedTuple(), layer_2 = NamedTuple()), layer_3 = (running_mean = Float32[0.0026271285], running_var = Float32[0.98396176], training = Val{true}())))

    Let us define a custom backward pass to introduce some NaNs:

    julia
    function CRC.rrule(::typeof(offending_layer), x)
    y = offending_layer(x)
    function ∇offending_layer(Δ)
        Δ[1] = NaN
        return NoTangent(), Δ
    end
    return y, ∇offending_layer
end

    Let us compute the gradient of the layer now:

    julia
    Zygote.gradient(ps -> sum(first(model(x, ps, st))), ps)
    ((layer_1 = (weight = Float32[0.0; 0.0; … ; 0.0; 0.0;;], bias = Float32[0.0, 0.0, 0.0, 0.0, NaN, 0.0, NaN, NaN, 0.0, 0.0, 0.0, NaN, NaN, NaN, 0.0, 0.0]), layer_2 = (layer_1 = (weight = Float32[NaN NaN … NaN NaN], bias = Float32[NaN]), layer_2 = nothing), layer_3 = (scale = Float32[0.0], bias = Float32[2.0])),)

    Oh no!! A NaN is present in the gradient of ps. Let us run the debug model and see where the NaN occurred:

    julia
    model_debug = Lux.Experimental.@debug_mode model nan_check=:both

try
    Zygote.gradient(ps -> sum(first(model_debug(x, ps, st))), ps)
catch e
    println(e)
end
    [ Info: Input Type: Matrix{Float32} | Input Structure: (1, 2).
[ Info: Running Layer: Dense(1 => 16, relu) at location KeyPath(:model, :layers, :layer_1)!
[ Info: Output Type: Matrix{Float32} | Output Structure: (16, 2).
[ Info: Input Type: Matrix{Float32} | Input Structure: (16, 2).
[ Info: Running Layer: Dense(16 => 1) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_1)!
[ Info: Output Type: Matrix{Float32} | Output Structure: (1, 2).
[ Info: Input Type: Matrix{Float32} | Input Structure: (1, 2).
[ Info: Running Layer: WrappedFunction(offending_layer) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2)!
[ Info: Output Type: Matrix{Float32} | Output Structure: (1, 2).
[ Info: Input Type: Matrix{Float32} | Input Structure: (1, 2).
[ Info: Running Layer: BatchNorm(1, affine=true, track_stats=true) at location KeyPath(:model, :layers, :layer_3)!
[ Info: Output Type: Matrix{Float32} | Output Structure: (1, 2).
DomainError(Float32[NaN 0.0], "NaNs detected in pullback output (x)  of layer WrappedFunction(offending_layer) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2).")

And there you go! Our debug layer reports that the problem is in WrappedFunction(offending_layer) at location model.layers.layer_2.layers.layer_2. Once we fix the pullback of the layer, the NaNs will be gone.
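For this toy example, the fix is straightforward: since y = 2 .* x, the cotangent of x is 2 .* Δ, so the pullback should return that instead of writing NaN into Δ. A hedged sketch of a corrected rrule (same identifiers as above, not the one shipped anywhere) would be:

```julia
using ChainRulesCore

offending_layer(x) = 2 .* x

# Corrected pullback: y = 2 .* x, so the cotangent of x is 2 .* Δ.
# No mutation of Δ, and no NaNs introduced.
function ChainRulesCore.rrule(::typeof(offending_layer), x)
    y = offending_layer(x)
    ∇offending_layer(Δ) = (NoTangent(), 2 .* Δ)
    return y, ∇offending_layer
end

y, pb = ChainRulesCore.rrule(offending_layer, [1.0, 2.0])
pb([1.0, 1.0])  # (NoTangent(), [2.0, 2.0])
```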

    Conclusion

    In this manual section, we have discussed tracking down errors in Lux models. We have covered tracking incorrect model specifications and NaNs in forward and backward passes. However, remember that this is an Experimental feature, and there might be edge cases that don't work correctly. If you find any such cases, please open an issue on GitHub!


    Dispatching on Custom Input Types

    Which function should participate in dispatch?

    • Defining a dispatch on (::Layer)(x::MyInputType, ps, st::NamedTuple) is inconvenient, since it requires the user to define a new method for every layer type.

    • (::AbstractLuxLayer)(x::MyInputType, ps, st::NamedTuple) doesn't work.

    • Instead, we need to define the dispatch on Lux.apply(::AbstractLuxLayer, x::MyInputType, ps, st::NamedTuple).
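The reasoning above is ordinary multiple dispatch: funnel every layer call through one entry-point function, and a custom input type then needs a single new method instead of one per layer. A self-contained plain-Julia sketch with hypothetical names (this is not Lux's API):

```julia
abstract type AbstractLayer end
struct Scale <: AbstractLayer end  # hypothetical toy layer

# Single entry point analogous to Lux.apply. Typed on the array
# argument so dispatch stays unambiguous below.
apply(::Scale, x::AbstractArray, ps, st) = (ps.w .* x, st)

# A custom input type carrying extra metadata:
struct Tagged{T}
    data::T
    tag::Symbol
end

# ONE extra method handles Tagged inputs for every layer type:
function apply(l::AbstractLayer, x::Tagged, ps, st)
    y, st = apply(l, x.data, ps, st)
    return Tagged(y, x.tag), st
end

out, _ = apply(Scale(), Tagged([1.0, 2.0], :t0), (w = 3.0,), (;))
out.data  # [3.0, 6.0]
```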

    Concrete Example

Consider Neural ODEs. In these models, we often want every iteration of the neural network to take the current time as input. Here, we won't go through implementing an entire Neural ODE model. Instead, we will define a time-dependent version of Chain.

    Time-Dependent Chain Implementation

    julia
    using Lux, Random

struct TDChain{L <: NamedTuple} <: Lux.AbstractLuxWrapperLayer{:layers}
    layers::L
end

function (l::TDChain)((x, t)::Tuple, ps, st::NamedTuple)
    # Concatenate along the 2nd last dimension
    sz = ntuple(i -> i == ndims(x) - 1 ? 1 : size(x, i), ndims(x))
    t_ = ones(eltype(x), sz) .* t  # Needs to be modified for GPU
    for name in keys(l.layers)
        x, st_ = Lux.apply(getfield(l.layers, name), cat(x, t_; dims=ndims(x) - 1),
                           getfield(ps, name), getfield(st, name))
        st = merge(st, NamedTuple{(name,)}((st_,)))
    end
    return x, st
end

model = Chain(Dense(3, 4), TDChain((; d1=Dense(5, 4), d2=Dense(5, 4))), Dense(4, 1))
    Chain(
    layer_1 = Dense(3 => 4),            # 16 parameters
    layer_2 = TDChain(
        d1 = Dense(5 => 4),             # 24 parameters
        d2 = Dense(5 => 4),             # 24 parameters
    ),
    layer_3 = Dense(4 => 1),            # 5 parameters
)         # Total: 69 parameters,
          #        plus 0 states.
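The shape juggling inside TDChain just appends a constant "time channel" along the second-to-last dimension. In isolation (plain Julia, no Lux needed):

```julia
x = randn(Float32, 3, 2)   # (features, batch)
t = 10.0f0

# Build a (1, batch) array filled with t and append it as an extra
# feature row, matching cat(...; dims=ndims(x) - 1) in TDChain.
sz = ntuple(i -> i == ndims(x) - 1 ? 1 : size(x, i), ndims(x))
t_ = ones(eltype(x), sz) .* t
xt = cat(x, t_; dims=ndims(x) - 1)

size(xt)  # (4, 2): 3 features + 1 time channel
```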

    Running the TDChain

    julia
    rng = MersenneTwister(0)
ps, st = Lux.setup(rng, model)
x = randn(rng, Float32, 3, 2)

try
    model(x, ps, st)
catch e
    Base.showerror(stdout, e)
end
    MethodError: no method matching apply(::@NamedTuple{d1::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}, d2::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, ::Matrix{Float32}, ::@NamedTuple{d1::@NamedTuple{weight::Matrix{Float32}, bias::Vector{Float32}}, d2::@NamedTuple{weight::Matrix{Float32}, bias::Vector{Float32}}}, ::@NamedTuple{d1::@NamedTuple{}, d2::@NamedTuple{}})
The function `apply` exists, but no method is defined for this combination of argument types.

Closest candidates are:
  apply(!Matched::AbstractLuxLayer, ::Any, ::Any, ::Any)
   @ LuxCore /var/lib/buildkite-agent/builds/gpuci-3/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:154
  apply(!Matched::AbstractLuxLayer, !Matched::Tracker.TrackedArray, ::Any, ::Any)
   @ LuxCoreArrayInterfaceTrackerExt /var/lib/buildkite-agent/builds/gpuci-3/julialang/lux-dot-jl/lib/LuxCore/ext/LuxCoreArrayInterfaceTrackerExt.jl:19
  apply(!Matched::AbstractLuxLayer, !Matched::ReverseDiff.TrackedArray, ::Any, ::Any)
   @ LuxCoreArrayInterfaceReverseDiffExt /var/lib/buildkite-agent/builds/gpuci-3/julialang/lux-dot-jl/lib/LuxCore/ext/LuxCoreArrayInterfaceReverseDiffExt.jl:20
  ...

    Writing the Correct Dispatch Rules

    • Create a Custom Layer storing the time.
    julia
    struct ArrayAndTime{A <: AbstractArray, T <: Real}
    array::A
    time::T
end
    • Define the dispatch on Lux.apply(::AbstractLuxLayer, x::ArrayAndTime, ps, st::NamedTuple).
    julia
    function Lux.apply(layer::Lux.AbstractLuxLayer, x::ArrayAndTime, ps, st::NamedTuple)
    y, st = layer(x.array, ps, st)
    return ArrayAndTime(y, x.time), st
end

function Lux.apply(layer::TDChain, x::ArrayAndTime, ps, st::NamedTuple)
    y, st = layer((x.array, x.time), ps, st)
    return ArrayAndTime(y, x.time), st
end
    • Run the model.
    julia
    xt = ArrayAndTime(x, 10.0f0)

model(xt, ps, st)[1]
    Main.ArrayAndTime{Matrix{Float32}, Float32}(Float32[4.8874373 5.5271416], 10.0f0)

    Using the Same Input for Non-TD Models

    Writing proper dispatch means we can simply replace the TDChain with a Chain (of course with dimension corrections) and the pipeline still works.

    julia
    model = Chain(Dense(3, 4), Chain((; d1=Dense(4, 4), d2=Dense(4, 4))), Dense(4, 1))

ps, st = Lux.setup(rng, model)

model(xt, ps, st)[1]
    Main.ArrayAndTime{Matrix{Float32}, Float32}(Float32[0.40721768 1.2363781], 10.0f0)

    Distributed Data Parallel Training

    Tip

    For a fully functional example, see the ImageNet Training Example.

    DDP Training using Lux.DistributedUtils is a spiritual successor to FluxMPI.jl, but has some key differences.

    Guide to Integrating DistributedUtils into your code

    • Initialize the respective backend with DistributedUtils.initialize, by passing in a backend type. It is important that you pass in the type, i.e. NCCLBackend and not the object NCCLBackend().
    julia
    DistributedUtils.initialize(NCCLBackend)
    julia
    backend = DistributedUtils.get_distributed_backend(NCCLBackend)

    It is important that you use this function instead of directly constructing the backend, since there are certain internal states that need to be synchronized.

    • Next synchronize the parameters and states of the model. This is done by calling DistributedUtils.synchronize!! with the backend and the respective input.
    julia
    ps = DistributedUtils.synchronize!!(backend, ps)
     st = DistributedUtils.synchronize!!(backend, st)
• Partition the data across the processes using DistributedUtils.DistributedDataContainer, so that each process only sees its own shard of the dataset.
julia
data = DistributedUtils.DistributedDataContainer(backend, data)
    • Wrap the optimizer in DistributedUtils.DistributedOptimizer to ensure that the optimizer is correctly synchronized across all processes before parameter updates. After initializing the state of the optimizer, synchronize the state across all processes.
    julia
    opt = DistributedUtils.DistributedOptimizer(backend, opt)
     opt_state = Optimisers.setup(opt, ps)
     opt_state = DistributedUtils.synchronize!!(backend, opt_state)
• Finally, change all logging and serialization code to trigger only when local_rank(backend) == 0. This ensures that only the master process logs and serializes the model.
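The rank-0 gating in the last step is the usual pattern from MPI-style programs; a generic sketch, with a hypothetical `local_rank` standing in for `DistributedUtils.local_rank(backend)`:

```julia
# Hypothetical stand-in for DistributedUtils.local_rank(backend):
# returns this process's rank in the communicator.
local_rank(backend) = backend.rank

should_log(backend) = local_rank(backend) == 0

backend = (; rank = 0)  # pretend this process is the root

# Only the root process logs and checkpoints, so output is not
# duplicated once per process.
if should_log(backend)
    @info "epoch finished; saving checkpoint"
end
```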

    Migration Guide from FluxMPI.jl

Let's compare the changes we need to make with respect to the FluxMPI.jl integration guide.

    1. FluxMPI.Init is now DistributedUtils.initialize.

    2. FluxMPI.synchronize!(x) needs to be changed to x_new = DistributedUtils.synchronize!!(backend, x).

    3. DistributedUtils.DistributedDataContainer, DistributedUtils.local_rank, and DistributedUtils.DistributedOptimizer need backend as the first input.

    And that's pretty much it!

    Removed Functionality

1. FluxMPI.allreduce_gradients no longer exists. Previously this was needed when CUDA communication was flaky; with NCCL.jl this is no longer the case.

    2. FluxMPIFluxModel has been removed. DistributedUtils no longer works with Flux.

    Key Differences

    1. FluxMPI.synchronize! is now DistributedUtils.synchronize!! to highlight the fact that some of the inputs are not updated in-place.

    2. All of the functions now require a communication backend as input.

    3. We don't automatically determine if the MPI Implementation is CUDA or ROCM aware. See GPU-aware MPI for more information.

4. Older (now non-existent) Lux.gpu implementations used to "just work" with FluxMPI.jl. We expect gpu_device to continue working as expected; however, we recommend calling gpu_device after DistributedUtils.initialize to avoid any mismatch between the device set via DistributedUtils and the device stored in CUDADevice or AMDGPUDevice.

    Known Shortcomings

1. Currently we don't run tests with CUDA- or ROCm-aware MPI; use those features at your own risk. We are working on adding tests for these features.

2. AMDGPU support is mostly experimental and causes deadlocks in certain situations; this is being investigated. If you have a minimal reproducer, please open an issue.


    Exporting Lux Models to Jax (via EnzymeJAX & Reactant)

    Experimental

    This feature is experimental and is subject to change without notice. Additionally, this feature currently requires some manual setup for interacting with Jax, which we are working on improving.

In this manual, we will go over how to export Lux models to StableHLO and use EnzymeJAX to integrate Lux models with JAX. We assume that users are familiar with Reactant compilation of Lux models.

    julia
    using Lux, Reactant, Random

const dev = reactant_device()
    (::ReactantDevice{Missing, Missing}) (generic function with 1 method)

We simply define a Lux model and generate the StableHLO code using Reactant.@code_hlo.

    julia
    model = Chain(
    Conv((5, 5), 1 => 6, relu),
    MaxPool((2, 2)),
    Conv((5, 5), 6 => 16, relu),
    MaxPool((2, 2)),
    FlattenLayer(3),
    Chain(
        Dense(256 => 128, relu),
        Dense(128 => 84, relu),
        Dense(84 => 10)
    )
)
ps, st = Lux.setup(Random.default_rng(), model) |> dev;
    ((layer_1 = (weight = Reactant.ConcreteRArray{Float32, 4}(Float32[-0.100514844 0.641402 … 0.15222053 -0.5301181; 0.45682606 0.5083343 … 0.06879873 0.10708179; … ; 0.2401375 0.18095753 … 0.148147 -0.68299204; 0.11708575 0.048821628 … 0.62626415 -0.6875213;;;; 0.45953766 0.6877121 … 0.48257408 -0.52564013; 0.22677277 -0.58328676 … -0.23234344 -0.45553648; … ; 0.15218197 -0.63391256 … 0.17440075 0.120217; -0.055573903 -0.3379242 … -0.59377456 0.41815332;;;; -0.5203194 0.26564768 … 0.2880513 -0.24490844; 0.2609411 0.2002995 … 0.26858228 -0.26229796; … ; 0.09441388 0.24978368 … -0.3149016 0.16969556; 0.6395181 0.059671473 … 0.3798853 0.3777897;;;; 0.32651654 -0.5890444 … -0.1751135 -0.41645882; -0.5597971 0.5320771 … -0.67852753 0.107406616; … ; -0.016386721 -0.6846315 … 0.049217567 -0.64721453; -0.39763013 -0.35652733 … 0.14593515 -0.28993338;;;; -0.2656273 -0.600764 … 0.12497909 -0.15026206; 0.65176255 0.058104068 … -0.16105938 0.3143208; … ; -0.14073242 -0.3695437 … 0.6207278 -0.47512773; -0.13825017 0.3013572 … -0.13684514 -0.4450739;;;; 0.31697956 0.5574339 … -0.2093402 0.6882201; 0.033461835 -0.39950222 … -0.48107117 0.5505513; … ; -0.17039865 0.6595012 … -0.46096706 -0.15864038; 0.06682308 0.285972 … 0.5155236 0.34457564]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[-0.09860833, -0.18450394, -0.172313, -0.09282472, 0.1309025, 0.061889123])), layer_2 = NamedTuple(), layer_3 = (weight = Reactant.ConcreteRArray{Float32, 4}(Float32[0.11987625 0.24046357 … 0.22752927 0.21899778; 0.2806326 0.2810528 … -0.274844 0.15035425; … ; -0.23564403 -0.0028807875 … -0.10842768 -0.18919921; -0.19969921 -0.268992 … -0.2370692 -0.039153982;;; -0.03374887 -0.093958974 … -0.280846 0.26553962; 0.122150525 0.20753737 … 0.14238782 -0.079042196; … ; 0.09650694 0.20296505 … -0.17326018 0.16363813; -0.118356064 0.16504566 … 0.18531604 -0.13996856;;; 0.2514698 -0.17938598 … -0.05262428 -0.06740383; -0.24503033 0.11728277 … -0.13142236 -0.011098074; … ; -0.23847201 -0.24982567 … 
-0.23192994 0.044427596; 0.18960184 -0.16340032 … -0.18996632 0.09250315;;; -0.12397134 0.12766095 … 0.2779469 0.052803323; 0.039103575 0.004629241 … -0.15262935 -0.111676365; … ; 0.19498321 0.11950846 … -0.06528456 0.008846691; 0.22409724 -0.018854173 … 0.13590635 -0.22521684;;; 0.18495357 0.10401063 … -0.2670698 0.17617925; 0.14366318 0.20561251 … -0.26477206 0.0015469915; … ; 0.27561936 0.0011872598 … -0.17211406 -0.19183022; 0.005086349 0.17840558 … -0.072645485 0.17083026;;; -0.27658862 -0.17361793 … 0.242468 -0.039650977; -0.24199852 0.27319488 … -0.04899112 -0.20071083; … ; 0.09189941 0.014689862 … 0.051825043 -0.12811285; -0.08589938 0.08455851 … 0.07319629 -0.1747854;;;; -0.22517107 0.020608762 … 0.08025096 -0.14336014; 0.20636114 -0.17598884 … 0.20585205 0.1303978; … ; 0.07489191 -0.11631798 … 0.058901027 -0.2794292; -0.0799499 0.19019358 … 0.20101166 -0.15967444;;; -0.17029636 -0.21178308 … 0.002483191 0.034200985; 0.04447686 -0.15771388 … -0.120917246 -0.054846566; … ; -0.06445969 -0.116768986 … -0.24997774 -0.06368081; 0.113006115 -0.08338781 … 0.10346943 0.13751146;;; -0.20694472 0.19138478 … -0.266974 0.085806124; 0.2147609 0.21121861 … 0.027592428 0.12180024; … ; -0.16645215 -0.26774603 … -0.09705801 0.16954337; 0.0055776797 0.08746583 … 0.12936348 0.2017526;;; -0.0018348376 -0.2567473 … -0.26520002 -0.031490605; -0.056034062 -0.176348 … -0.14361875 0.01490508; … ; -0.018303769 -0.0017325052 … 0.2108695 -0.14421012; -0.20487879 -0.19641913 … -0.017829532 0.09932359;;; 0.14163494 0.063480295 … 0.03308521 0.10206564; -0.19744 0.07638691 … -0.23707482 -0.0973789; … ; 0.14562015 0.038802307 … -0.03170667 -0.103913486; 0.09442957 -0.015896475 … -0.044987272 0.24539067;;; 0.08269734 0.17385432 … 0.19634563 0.0692472; -0.20779805 0.12078848 … 0.24063988 0.2714335; … ; -0.105389535 -0.20656037 … 0.15708946 0.18803856; 0.26072562 -0.003485207 … -0.1243891 0.07297467;;;; -0.08554761 0.21957569 … -0.2742818 0.18916532; 0.08927501 -0.1186073 … 0.17124604 
-0.19405301; … ; 0.19792819 0.10561423 … -0.19954802 0.1752539; -0.2632644 0.14365605 … 0.048471738 0.15499277;;; 0.059055757 -0.031942792 … 0.21004838 0.049328804; -0.010950223 -0.092265144 … 0.2666627 -0.014741955; … ; -0.2008716 -0.05379219 … 0.24238436 -0.26664025; 0.016865179 0.01717774 … -0.20316577 0.17713173;;; -0.19995327 -0.09096992 … 0.23395808 -0.012063608; -0.21295139 -0.08832364 … -0.21398924 0.047317084; … ; 0.114560924 -0.12348884 … 0.059224278 -0.25860527; -0.17703351 -0.22157605 … 0.17337337 -0.16027175;;; 0.104936846 -0.08765691 … 0.12241076 -0.14012684; 0.2597034 -0.017866217 … 0.12900914 -0.06272482; … ; -0.008840925 0.062121924 … 0.106482625 0.14555879; -0.028596466 -0.07552715 … -0.08260414 0.13732003;;; 0.12650935 0.09646284 … 0.24086508 0.24695547; 0.08096753 0.09591715 … 0.023150858 -0.26545027; … ; 0.19313364 -0.017933888 … -0.15105338 0.1678572; 0.2614398 -0.039614968 … 0.1461747 0.1272793;;; -0.03461915 0.12092318 … 0.012866791 0.1759687; -0.046394978 -0.18018521 … -0.20192719 0.16220272; … ; 0.06777759 -0.15605855 … -0.12559004 -0.061299384; -0.019838588 0.17196824 … -0.20025302 0.040938716;;;; … ;;;; 0.20436017 0.036468588 … 0.07778767 -0.21271591; 0.100167036 -0.1687434 … -0.2821546 0.031386353; … ; 0.23258851 0.27682805 … -0.09668045 0.16447622; -0.19094673 0.048154727 … -0.023283502 0.21796629;;; -0.019472213 0.21634498 … 0.21686329 0.07765452; 0.026193827 0.2553826 … -0.025493514 -0.14033335; … ; -0.23084445 0.03278077 … 0.20206891 0.10923161; 0.08846138 -0.1163871 … -0.10242631 0.23552088;;; -0.17115343 -0.09725678 … -0.14884971 -0.04715905; 0.10361175 0.22230405 … 0.19065982 -0.14736821; … ; -0.08358303 -0.17538628 … 0.08115203 0.027224839; -0.1990666 -0.20310251 … -0.26493692 -0.1941834;;; -0.09596483 -0.05095075 … -0.0883609 -0.10116895; -0.24626082 0.1807569 … -0.014606951 -0.020255674; … ; 0.26055062 0.062463008 … 0.24080847 0.22719024; 0.25654957 0.15332098 … -0.22900078 -0.0035986663;;; -0.06315138 -0.12076889 … 
-0.09900095 0.21833563; -0.0016859076 0.104042254 … -0.11325522 -0.24203484; … ; -0.13540733 -0.06715196 … -0.24817711 -0.036290962; -0.27834624 0.023097955 … -0.19361475 0.17604505;;; 0.1645548 -0.120147206 … 0.14359581 -0.043790642; 0.10464323 0.12229406 … 0.0069064857 -0.08437178; … ; -0.22202058 -0.21096227 … 0.07406641 0.06445622; 0.10097251 -0.060633026 … -0.18000072 -0.07600229;;;; -0.12116281 0.11673186 … 0.04368514 0.051994912; 0.0824661 -0.117853135 … -0.23987544 0.031034712; … ; -0.02109389 -0.13760304 … 0.057713665 0.037877575; 0.010567766 0.09230051 … 0.13399862 0.08694564;;; 0.25912565 -0.14499082 … 0.20033634 0.13110895; 0.21542016 0.09348221 … 0.087764904 -0.057571076; … ; -0.10137743 -0.10316813 … -0.09222229 0.18629253; 0.14673097 -0.12077212 … 0.00047666396 0.030407937;;; -0.23049244 0.18659353 … 0.19666132 0.25700033; 0.20265023 0.015039141 … -0.23735927 -0.11269632; … ; 0.24349861 0.23598626 … 0.017935842 -0.23224601; -0.039640423 -0.19660722 … 0.27343664 -0.07564111;;; -0.014139019 -0.10875653 … 0.12825768 -0.16428338; -0.005350559 0.093378566 … 0.24873069 0.16869935; … ; -0.1336206 -0.09430397 … 0.12715751 0.19059215; 0.10316533 -0.26615036 … 0.06680218 -0.04229615;;; 0.2470898 0.17973809 … -0.04823295 -0.14660794; 0.053759247 0.11740078 … -0.17696409 0.1323625; … ; 0.017608581 0.20266858 … -0.11454258 -0.05877; 0.11549814 0.10148246 … -0.24045505 -0.11028515;;; 0.19547431 0.060551327 … 0.15830041 -0.26124424; -0.09885789 0.09757828 … 0.25543177 -0.050780848; … ; -0.25723198 -0.05742457 … -0.19259712 0.24154694; -0.043952383 -0.069884226 … -0.026029184 0.08872778;;;; 0.05659819 0.2516714 … 0.21469435 -0.008269919; 0.17558427 0.177697 … -0.11645464 0.059937198; … ; 0.26867408 0.23669082 … -0.28209427 0.23791258; 0.19959326 -0.2304493 … 0.27611518 -0.24344929;;; 0.20862025 0.067008324 … 0.17829275 -0.0755849; 0.16576298 0.17078549 … -0.1537897 0.06592303; … ; 0.0015867107 -0.09658958 … 0.064331025 -0.17755158; -0.22094624 0.17085029 … 
0.0020273982 0.021726867;;; -0.014992779 0.09422591 … 0.099841796 -0.23372321; -0.04019666 -0.091481484 … -0.17310219 -0.27664083; … ; 0.1989196 0.100737736 … 0.047496427 0.06352646; 0.26543173 -0.0078206 … -0.11169241 0.1599543;;; -0.17242633 -0.17616239 … -0.07513033 -0.111452006; -0.1138737 0.19899946 … 0.1819797 -0.23389685; … ; -0.19601588 -0.0076573063 … -0.2764903 0.01534216; -0.2379414 0.10914792 … -0.21636114 0.18898767;;; -0.1730737 -0.13276449 … -0.055362783 0.18294385; 0.18816021 -0.007705185 … 0.17029831 0.2541723; … ; -0.21886098 0.17785463 … 0.19920883 -0.16817337; 0.1373128 -0.25020984 … -0.12138993 -0.037206527;;; -0.144009 -0.211378 … -0.007904152 0.2668537; 0.08098776 -0.27800062 … -0.23608004 0.222885; … ; -0.13767318 0.10420467 … -0.17600718 0.0792036; 0.120612435 0.06217661 … 0.14079519 0.12208768]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[0.059521634, -0.050334737, -0.009720063, 0.019675586, 0.05290228, 0.032847542, -0.030987449, 0.07615535, 0.053398557, -0.030336674, 0.0090858545, -0.055999022, -0.0568757, 0.0106334835, 0.0753409, -0.006780343])), layer_4 = NamedTuple(), layer_5 = NamedTuple(), layer_6 = (layer_1 = (weight = Reactant.ConcreteRArray{Float32, 2}(Float32[0.20567945 0.13950577 … 0.016287515 -0.13625823; -0.023857513 0.08178409 … 0.18574256 0.18949205; … ; -0.004908773 0.12437135 … 0.122653805 0.16701514; -0.11362071 0.07796077 … 0.089975506 0.0068602865]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[0.055422388, -0.055411585, -0.0072900057, -0.033446588, 0.04616411, 0.059655108, 0.030724227, -0.042990997, -0.037487797, -0.06080796  …  -0.03704503, 0.03475806, 0.023569956, -0.0073688403, 0.04583689, -0.0385186, 0.047006823, -0.046786353, -0.0062883645, 0.017459199])), layer_2 = (weight = Reactant.ConcreteRArray{Float32, 2}(Float32[0.0982 -0.30481228 … -0.15199737 -0.20617373; 0.120820366 -0.21799661 … -0.23162602 -0.11640526; … ; 0.18901739 -0.2483782 … 0.28952244 0.13812806; 0.27664563 0.0778448 … -0.23159832 
0.14517665]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[-0.02767261, 0.014004502, 0.046468746, 0.0024843565, -0.047050234, 0.011206773, 0.03194926, -0.06063513, -0.06973958, -0.063676804  …  -0.010482817, 0.055663344, -0.08152549, -0.033936515, 0.04119787, 0.059475474, 0.019876348, 0.012382892, -0.01117275, 0.074723326])), layer_3 = (weight = Reactant.ConcreteRArray{Float32, 2}(Float32[0.14243387 -0.1413405 … 0.099759124 0.11808148; 0.14102985 0.18599004 … -0.110330954 0.00057825993; … ; 0.15877534 -0.14523235 … -0.124123 0.11750876; -0.11532391 0.121751495 … -0.13485748 0.112063006]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[0.024228286, -0.08603747, 0.09359703, -0.028482005, -0.09540328, -0.07774367, 0.040403437, -0.076062605, -0.0952605, -0.0081296405])))), (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple(), layer_4 = NamedTuple(), layer_5 = NamedTuple(), layer_6 = (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple())))

    Generate an example input.

    julia
    x = randn(Random.default_rng(), Float32, 28, 28, 1, 4) |> dev;
    28×28×1×4 Reactant.ConcreteRArray{Float32, 4}:
    -[:, :, 1, 1] =
    -  0.291513   -1.56926    -0.0183474  …   0.621333    0.367604   -1.46127
    - -0.148189    1.00461     1.84009        1.8828      0.470282    0.135206
    -  1.71638     1.48005     0.604107      -0.250818   -0.446599   -0.227338
    -  0.739571    1.53753     0.171969       0.0811164   0.0492191   0.21301
    - -0.348267    0.383777   -0.0773392     -0.253224   -0.403784    0.208378
    -  0.67469     0.0619725   0.569012   …  -0.0772457  -0.274107    0.123659
    - -0.924262    0.171224    2.90908        0.933106    0.276725   -0.381141
    - -0.634469   -1.76159     0.123123       0.298099    1.19099    -1.53978
    - -0.895668   -0.58338    -0.754008       1.19339     0.0128983  -1.17349
    - -1.16553    -0.140299   -0.444516      -0.841543    0.602513   -0.511156
    -  ⋮                                  ⋱   ⋮                      
    -  1.39049     0.750511    0.630092       0.766737    0.288641    1.19623
    - -0.0961363   0.0907582   0.122563   …  -2.50502    -1.16454     1.26468
    - -0.355299    0.339815    1.76188        1.56542     1.49923     0.248654
    -  0.529701   -1.1187      0.336659       0.725559    0.316552    0.284991
    -  0.269968    0.258473   -0.539333      -0.165689   -0.829671   -1.1678
    - -0.693396   -1.47787     1.39068       -0.315031   -0.0898651  -0.506982
    -  0.0108781   1.6856      2.44338    …   0.397521    0.54465    -1.24321
    -  0.777076   -0.382574    0.453974      -0.677041   -0.188839    0.20346
    - -0.487275    1.64293    -0.0917963      0.289039   -0.93466    -1.24878
    -
    -[:, :, 1, 2] =
    - -0.2564      0.490193     -0.953683  …  -1.53926    -0.742347   -1.24991
    - -0.638731   -0.870364      0.891006      0.947371    2.04728     0.0453085
    -  0.598188    1.07915       3.02946      -0.592152    0.686369    0.171756
    - -1.6616      1.50378       0.176021      1.86299     0.0769223  -0.121998
    -  0.389206    0.932907      1.55841      -0.842642   -0.589177   -0.548012
    - -0.331483   -0.972335     -0.172708  …   0.504491   -0.480797    0.714467
    - -0.628994    1.41017      -0.3971       -2.03657    -0.250362    0.901397
    -  0.766022   -0.160986     -0.80128       1.37745     0.442927    0.127934
    -  2.02324    -0.151792      0.940139      1.96927    -0.663223   -0.0262358
    - -0.914827   -0.122567     -0.397588     -0.259247    0.63175    -0.813359
    -  ⋮                                   ⋱   ⋮                      
    -  0.415368    1.09517      -1.72281      -0.0346338  -0.818807    1.12648
    -  0.750158   -0.458762     -0.2913    …   2.39599     1.10885    -0.0234102
    - -0.0245608   0.885272      0.185913      1.34336     0.673229   -0.940073
    -  0.803491   -0.309233      0.532299      0.213618    2.0833      0.292533
    -  0.994433    0.980885     -1.0785        1.02998    -2.10494    -0.645233
    -  0.432701    1.55744      -1.48815       0.385454    1.04971     1.09799
    - -0.177779    1.03415       0.110962  …   0.266054   -1.22412    -1.8875
    -  0.921374    0.000732422  -0.509239      0.975088   -2.07542     1.5797
    -  0.267658    0.792885     -1.55864      -1.05998     0.0301979   0.226222
    -
    -[:, :, 1, 3] =
    - -0.787919    0.765517    0.683456  …   0.435143   -0.856428   -0.537721
    -  0.894248   -1.43599    -0.671201      0.454535    0.794371   -1.00744
    - -0.0684376   1.45013     1.5718       -0.468792   -0.599632   -0.37979
    - -0.588749    0.419611   -1.1295       -0.0764233  -0.958919   -1.56817
    -  0.477329   -0.71991    -0.596318      0.365734    1.88897     0.27841
    - -1.48525    -1.28766     0.550164  …   0.442849   -0.0904621   0.274203
    - -1.4905     -0.687716    0.342532     -0.315267   -0.238426    0.52272
    - -0.591793   -0.782822    0.971431      0.957152    0.28524    -0.488387
    -  0.730082    0.0321158   2.03569       1.79206     0.443328    0.878222
    - -0.473179    0.179542   -0.383948     -1.89874     0.301429    0.242777
    -  ⋮                                 ⋱   ⋮                      
    -  0.568298    1.43998     0.902204     -0.0341252   0.182365    1.12177
    - -1.06885    -0.227502    1.80644   …  -1.51385    -0.0980411  -0.468023
    -  1.4266     -0.790177   -1.11933      -1.04343    -2.74012    -0.603439
    -  0.941303   -1.2708      1.76251       0.329143   -0.157368    0.518659
    - -0.286642   -0.158975   -0.336721     -2.78971    -0.433858   -1.36348
    - -0.209906    2.24079     1.33335      -1.91862     0.756535    1.05
    - -0.703892   -0.119758    0.338672  …  -1.06781     0.333503    0.367246
    - -0.514301    1.74311     1.23352      -0.234211   -2.67728     0.50096
    - -0.0134848  -0.285777   -0.375862      0.487558   -0.788505   -0.622511
    -
    -[:, :, 1, 4] =
    -  0.139778  -0.84954   -0.191424    …   0.501695   -0.86957     0.15875
    -  0.382491   0.116574   1.4908         -0.0281603   0.0523839   0.171212
    -  1.93197    0.187688  -1.19373         0.112344   -0.747831   -0.41799
    -  1.47092   -1.32698    0.205559       -1.63161     0.578686    2.30573
    -  1.41261    0.394958  -0.361142       -0.594664    2.19236     0.0775962
    - -1.00258   -0.2293    -0.886353    …   0.836232    1.13513    -0.522541
    -  0.304635  -0.259867   0.442206       -0.811523   -0.946637   -1.63823
    -  1.07021   -1.19011   -1.35452         0.583191    0.389748   -0.554008
    - -1.30172   -0.441454   0.238232        0.640625   -1.0373      1.86624
    -  0.68639   -1.10676   -1.63936         1.16447     2.26076    -0.867646
    -  ⋮                                 ⋱   ⋮                      
    -  0.19226    0.193263  -0.324167       -1.5641     -0.0836404   0.353753
    - -0.92663    0.671869   0.00835054  …   1.01832    -0.373847    0.529258
    -  0.697421   2.05721   -1.50536         0.650203   -0.112076    1.7092
    -  0.834789  -1.45895   -0.174513       -1.09461     0.401719   -1.28114
    - -0.966289  -1.05522   -0.0551505      -0.471634    1.42752     0.921201
    -  1.76929    0.886018   0.808093       -1.74448    -0.279297   -0.0572769
    -  0.398299   0.434771   1.57555     …  -1.88135     0.581354   -0.887089
    -  0.138228  -1.04933   -1.20448         1.23037    -2.04173     0.118148
    - -0.469687   0.224029  -1.67689        -0.206943    0.528563   -1.07894

    Now instead of compiling the model, we will use Reactant.@code_hlo to generate the StableHLO code.

    julia
    hlo_code = @code_hlo model(x, ps, st)
    module {
    -  func.func @main(%arg0: tensor<4x1x28x28xf32>, %arg1: tensor<6x1x5x5xf32>, %arg2: tensor<6xf32>, %arg3: tensor<16x6x5x5xf32>, %arg4: tensor<16xf32>, %arg5: tensor<256x128xf32>, %arg6: tensor<128xf32>, %arg7: tensor<128x84xf32>, %arg8: tensor<84xf32>, %arg9: tensor<84x10xf32>, %arg10: tensor<10xf32>) -> tensor<4x10xf32> {
    -    %cst = stablehlo.constant dense<0.000000e+00> : tensor<84x4xf32>
    -    %cst_0 = stablehlo.constant dense<0.000000e+00> : tensor<128x4xf32>
    -    %cst_1 = stablehlo.constant dense<0.000000e+00> : tensor<8x8x16x4xf32>
    -    %cst_2 = stablehlo.constant dense<0.000000e+00> : tensor<24x24x6x4xf32>
    -    %cst_3 = stablehlo.constant dense<0xFF800000> : tensor<f32>
    -    %0 = stablehlo.transpose %arg1, dims = [3, 2, 1, 0] : (tensor<6x1x5x5xf32>) -> tensor<5x5x1x6xf32>
    -    %1 = stablehlo.transpose %arg3, dims = [3, 2, 1, 0] : (tensor<16x6x5x5xf32>) -> tensor<5x5x6x16xf32>
    -    %2 = stablehlo.reverse %0, dims = [0, 1] : tensor<5x5x1x6xf32>
    -    %3 = stablehlo.convolution(%arg0, %2) dim_numbers = [b, f, 1, 0]x[0, 1, i, o]->[0, 1, f, b], window = {stride = [1, 1], pad = [[0, 0], [0, 0]], rhs_dilate = [1, 1]} {batch_group_count = 1 : i64, feature_group_count = 1 : i64} : (tensor<4x1x28x28xf32>, tensor<5x5x1x6xf32>) -> tensor<24x24x6x4xf32>
    -    %4 = stablehlo.broadcast_in_dim %arg2, dims = [2] : (tensor<6xf32>) -> tensor<24x24x6x4xf32>
    -    %5 = stablehlo.add %3, %4 : tensor<24x24x6x4xf32>
    -    %6 = stablehlo.compare  LT, %5, %cst_2 : (tensor<24x24x6x4xf32>, tensor<24x24x6x4xf32>) -> tensor<24x24x6x4xi1>
    -    %7 = stablehlo.select %6, %cst_2, %5 : tensor<24x24x6x4xi1>, tensor<24x24x6x4xf32>
    -    %8 = "stablehlo.reduce_window"(%7, %cst_3) <{padding = dense<0> : tensor<4x2xi64>, window_dilations = array<i64: 1, 1, 1, 1>, window_dimensions = array<i64: 2, 2, 1, 1>, window_strides = array<i64: 2, 2, 1, 1>}> ({
    -    ^bb0(%arg11: tensor<f32>, %arg12: tensor<f32>):
    -      %32 = stablehlo.maximum %arg11, %arg12 : tensor<f32>
    -      stablehlo.return %32 : tensor<f32>
    -    }) : (tensor<24x24x6x4xf32>, tensor<f32>) -> tensor<12x12x6x4xf32>
    -    %9 = stablehlo.reverse %1, dims = [0, 1] : tensor<5x5x6x16xf32>
    -    %10 = stablehlo.convolution(%8, %9) dim_numbers = [0, 1, f, b]x[0, 1, i, o]->[0, 1, f, b], window = {stride = [1, 1], pad = [[0, 0], [0, 0]], rhs_dilate = [1, 1]} {batch_group_count = 1 : i64, feature_group_count = 1 : i64} : (tensor<12x12x6x4xf32>, tensor<5x5x6x16xf32>) -> tensor<8x8x16x4xf32>
    -    %11 = stablehlo.broadcast_in_dim %arg4, dims = [2] : (tensor<16xf32>) -> tensor<8x8x16x4xf32>
    -    %12 = stablehlo.add %10, %11 : tensor<8x8x16x4xf32>
    -    %13 = stablehlo.compare  LT, %12, %cst_1 : (tensor<8x8x16x4xf32>, tensor<8x8x16x4xf32>) -> tensor<8x8x16x4xi1>
    -    %14 = stablehlo.select %13, %cst_1, %12 : tensor<8x8x16x4xi1>, tensor<8x8x16x4xf32>
    -    %15 = "stablehlo.reduce_window"(%14, %cst_3) <{padding = dense<0> : tensor<4x2xi64>, window_dilations = array<i64: 1, 1, 1, 1>, window_dimensions = array<i64: 2, 2, 1, 1>, window_strides = array<i64: 2, 2, 1, 1>}> ({
    -    ^bb0(%arg11: tensor<f32>, %arg12: tensor<f32>):
    -      %32 = stablehlo.maximum %arg11, %arg12 : tensor<f32>
    -      stablehlo.return %32 : tensor<f32>
    -    }) : (tensor<8x8x16x4xf32>, tensor<f32>) -> tensor<4x4x16x4xf32>
    -    %16 = stablehlo.transpose %15, dims = [3, 2, 1, 0] : (tensor<4x4x16x4xf32>) -> tensor<4x16x4x4xf32>
    -    %17 = stablehlo.reshape %16 : (tensor<4x16x4x4xf32>) -> tensor<4x256xf32>
    -    %18 = stablehlo.dot_general %arg5, %17, contracting_dims = [0] x [1] : (tensor<256x128xf32>, tensor<4x256xf32>) -> tensor<128x4xf32>
    -    %19 = stablehlo.broadcast_in_dim %arg6, dims = [0] : (tensor<128xf32>) -> tensor<128x4xf32>
    -    %20 = stablehlo.add %18, %19 : tensor<128x4xf32>
    -    %21 = stablehlo.compare  LT, %20, %cst_0 : (tensor<128x4xf32>, tensor<128x4xf32>) -> tensor<128x4xi1>
    -    %22 = stablehlo.select %21, %cst_0, %20 : tensor<128x4xi1>, tensor<128x4xf32>
    -    %23 = stablehlo.dot_general %arg7, %22, contracting_dims = [0] x [0] : (tensor<128x84xf32>, tensor<128x4xf32>) -> tensor<84x4xf32>
    -    %24 = stablehlo.broadcast_in_dim %arg8, dims = [0] : (tensor<84xf32>) -> tensor<84x4xf32>
    -    %25 = stablehlo.add %23, %24 : tensor<84x4xf32>
    -    %26 = stablehlo.compare  LT, %25, %cst : (tensor<84x4xf32>, tensor<84x4xf32>) -> tensor<84x4xi1>
    -    %27 = stablehlo.select %26, %cst, %25 : tensor<84x4xi1>, tensor<84x4xf32>
    -    %28 = stablehlo.dot_general %arg9, %27, contracting_dims = [0] x [0] : (tensor<84x10xf32>, tensor<84x4xf32>) -> tensor<10x4xf32>
    -    %29 = stablehlo.broadcast_in_dim %arg10, dims = [0] : (tensor<10xf32>) -> tensor<10x4xf32>
    -    %30 = stablehlo.add %28, %29 : tensor<10x4xf32>
    -    %31 = stablehlo.transpose %30, dims = [1, 0] : (tensor<10x4xf32>) -> tensor<4x10xf32>
    -    return %31 : tensor<4x10xf32>
    -  }
    -}

    Now we just save this into an mlir file.

    julia
    open("exported_lux_model.mlir", "w") do io
    -    write(io, string(hlo_code))
    -end
    4754
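The `4754` shown above is just the return value of Julia's `write`: the number of bytes written to the file. A minimal Python analogue of this save-and-reload step (using a short stand-in string for `hlo_code`, since the real module text is produced by `Reactant.@code_hlo`):

```python
# Stand-in for the StableHLO text produced by Reactant.@code_hlo.
hlo_code = "module { func.func @main() { } }"

path = "exported_lux_model.mlir"
with open(path, "w") as io:
    n = io.write(hlo_code)  # like Julia's write, returns the count written

with open(path) as io:
    restored = io.read()

# Sanity check before handing the file to EnzymeJAX: the round-trip is
# exact and the module exposes a `main` entry point.
assert restored == hlo_code
assert "func.func @main" in restored
```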

    Now we define a Python script to run the model using EnzymeJAX.

    python
    from enzyme_ad.jax import hlo_call
    -
    -import jax
    -import jax.numpy as jnp
    -
    -with open("exported_lux_model.mlir", "r") as file:
    -    code = file.read()
    -
    -
    -def run_lux_model(
    -    x,
    -    weight1,
    -    bias1,
    -    weight3,
    -    bias3,
    -    weight6_1,
    -    bias6_1,
    -    weight6_2,
    -    bias6_2,
    -    weight6_3,
    -    bias6_3,
    -):
    -    return hlo_call(
    -        x,
    -        weight1,
    -        bias1,
    -        weight3,
    -        bias3,
    -        weight6_1,
    -        bias6_1,
    -        weight6_2,
    -        bias6_2,
    -        weight6_3,
    -        bias6_3,
    -        source=code,
    -    )
    -
    -
    -# Note that all the inputs must be transposed, i.e. if the julia function has an input of
    -# shape (28, 28, 1, 4), then the input to the exported function called from python must be
    -# of shape (4, 1, 28, 28). This is because multi-dimensional arrays in Julia are stored in
    -# column-major order, while in JAX/Python they are stored in row-major order.
    -
    -# Input as defined in our exported Lux model
    -x = jax.random.normal(jax.random.PRNGKey(0), (4, 1, 28, 28))
    -
    -# Weights and biases corresponding to \`ps\` and \`st\` in our exported Lux model
    -weight1 = jax.random.normal(jax.random.PRNGKey(0), (6, 1, 5, 5))
    -bias1 = jax.random.normal(jax.random.PRNGKey(0), (6,))
    -weight3 = jax.random.normal(jax.random.PRNGKey(0), (16, 6, 5, 5))
    -bias3 = jax.random.normal(jax.random.PRNGKey(0), (16,))
    -weight6_1 = jax.random.normal(jax.random.PRNGKey(0), (256, 128))
    -bias6_1 = jax.random.normal(jax.random.PRNGKey(0), (128,))
    -weight6_2 = jax.random.normal(jax.random.PRNGKey(0), (128, 84))
    -bias6_2 = jax.random.normal(jax.random.PRNGKey(0), (84,))
    -weight6_3 = jax.random.normal(jax.random.PRNGKey(0), (84, 10))
    -bias6_3 = jax.random.normal(jax.random.PRNGKey(0), (10,))
    -
    -# Run the exported Lux model
    -print(
    -    jax.jit(run_lux_model)(
    -        x,
    -        weight1,
    -        bias1,
    -        weight3,
    -        bias3,
    -        weight6_1,
    -        bias6_1,
    -        weight6_2,
    -        bias6_2,
    -        weight6_3,
    -        bias6_3,
    -    )
    -)
    `,19)]))}const o=a(l,[["render",t]]);export{g as __pageData,o as default}; diff --git a/dev/assets/manual_exporting_to_jax.md.BGqMRVNM.js b/dev/assets/manual_exporting_to_jax.md.BhFDko_-.js similarity index 97% rename from dev/assets/manual_exporting_to_jax.md.BGqMRVNM.js rename to dev/assets/manual_exporting_to_jax.md.BhFDko_-.js index 4e852a7e98..8fb17960bc 100644 --- a/dev/assets/manual_exporting_to_jax.md.BGqMRVNM.js +++ b/dev/assets/manual_exporting_to_jax.md.BhFDko_-.js @@ -1,4 +1,4 @@ -import{_ as a,c as n,a2 as i,o as p}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Exporting Lux Models to Jax (via EnzymeJAX & Reactant)","description":"","frontmatter":{},"headers":[],"relativePath":"manual/exporting_to_jax.md","filePath":"manual/exporting_to_jax.md","lastUpdated":null}'),l={name:"manual/exporting_to_jax.md"};function t(e,s,h,k,r,E){return p(),n("div",null,s[0]||(s[0]=[i(`

    Exporting Lux Models to Jax (via EnzymeJAX & Reactant)

    Experimental

    This feature is experimental and is subject to change without notice. Additionally, this feature currently requires some manual setup for interacting with Jax, which we are working on improving.

    In this manual, we will go over how to export Lux models to StableHLO and use EnzymeJAX to integrate Lux models with JAX. We assume that users are familiar with Reactant compilation of Lux models.

    julia
    using Lux, Reactant, Random
    +import{_ as a,c as n,a2 as i,o as p}from"./chunks/framework.BetCMmtc.js";const d=JSON.parse('{"title":"Exporting Lux Models to Jax (via EnzymeJAX & Reactant)","description":"","frontmatter":{},"headers":[],"relativePath":"manual/exporting_to_jax.md","filePath":"manual/exporting_to_jax.md","lastUpdated":null}'),l={name:"manual/exporting_to_jax.md"};function t(e,s,h,k,r,E){return p(),n("div",null,s[0]||(s[0]=[i(`

    Exporting Lux Models to Jax (via EnzymeJAX & Reactant)

    In this manual, we will go over how to export Lux models to StableHLO and use EnzymeJAX to integrate Lux models with JAX. We assume that users are familiar with Reactant compilation of Lux models.

    julia
    using Lux, Reactant, Random
     
     const dev = reactant_device()
    (::ReactantDevice{Missing, Missing}) (generic function with 1 method)

    We simply define a Lux model and generate the StableHLO code using Reactant.@code_hlo.

    julia
    model = Chain(
         Conv((5, 5), 1 => 6, relu),
    @@ -223,4 +223,4 @@ import{_ as a,c as n,a2 as i,o as p}from"./chunks/framework.I-x9Gl6h.js";const g
             weight6_3,
             bias6_3,
         )
    -)
    `,19)]))}const o=a(l,[["render",t]]);export{g as __pageData,o as default}; +)
    `,18)]))}const o=a(l,[["render",t]]);export{d as __pageData,o as default}; diff --git a/dev/assets/manual_exporting_to_jax.md.BhFDko_-.lean.js b/dev/assets/manual_exporting_to_jax.md.BhFDko_-.lean.js new file mode 100644 index 0000000000..9c4665c708 --- /dev/null +++ b/dev/assets/manual_exporting_to_jax.md.BhFDko_-.lean.js @@ -0,0 +1 @@ +import{_ as a,c as n,a2 as i,o as p}from"./chunks/framework.BetCMmtc.js";const d=JSON.parse('{"title":"Exporting Lux Models to Jax (via EnzymeJAX & Reactant)","description":"","frontmatter":{},"headers":[],"relativePath":"manual/exporting_to_jax.md","filePath":"manual/exporting_to_jax.md","lastUpdated":null}'),l={name:"manual/exporting_to_jax.md"};function t(e,s,h,k,r,E){return p(),n("div",null,s[0]||(s[0]=[i("",18)]))}const o=a(l,[["render",t]]);export{d as __pageData,o as default}; diff --git a/dev/assets/manual_freezing_model_parameters.md.Cni-iuqC.js b/dev/assets/manual_freezing_model_parameters.md.C5hCUAD-.js similarity index 97% rename from dev/assets/manual_freezing_model_parameters.md.Cni-iuqC.js rename to dev/assets/manual_freezing_model_parameters.md.C5hCUAD-.js index 1b1e2b2254..5a7c738c88 100644 --- a/dev/assets/manual_freezing_model_parameters.md.Cni-iuqC.js +++ b/dev/assets/manual_freezing_model_parameters.md.C5hCUAD-.js @@ -1,4 +1,4 @@ -import{_ as i,c as a,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Freezing Model Parameters","description":"","frontmatter":{},"headers":[],"relativePath":"manual/freezing_model_parameters.md","filePath":"manual/freezing_model_parameters.md","lastUpdated":null}'),h={name:"manual/freezing_model_parameters.md"};function l(t,s,p,k,r,d){return e(),a("div",null,s[0]||(s[0]=[n(`

    Freezing Model Parameters

    Warning

    API for freezing parameters should be considered experimental at this point.

    In this manual entry, we will go over how to freeze certain parameters in a model.

    Freezing Layers of a Particular Kind

    To freeze a particular kind of layer, say Dense in the following example, we can use Lux.Experimental.layer_map and freeze layers if they are of type Dense.

    julia
    using Lux, Random, Functors
    +import{_ as i,c as a,a2 as n,o as e}from"./chunks/framework.BetCMmtc.js";const g=JSON.parse('{"title":"Freezing Model Parameters","description":"","frontmatter":{},"headers":[],"relativePath":"manual/freezing_model_parameters.md","filePath":"manual/freezing_model_parameters.md","lastUpdated":null}'),h={name:"manual/freezing_model_parameters.md"};function l(t,s,p,k,r,d){return e(),a("div",null,s[0]||(s[0]=[n(`

    Freezing Model Parameters

    Warning

    API for freezing parameters should be considered experimental at this point.

    In this manual entry, we will go over how to freeze certain parameters in a model.

    Freezing Layers of a Particular Kind

    To freeze a particular kind of layer, say Dense in the following example, we can use Lux.Experimental.layer_map and freeze layers if they are of type Dense.

    julia
    using Lux, Random, Functors
     
     rng = Xoshiro(0)
     
    @@ -17,7 +17,7 @@ import{_ as i,c as a,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const g
     
     model_frozen, ps_frozen, st_frozen = Lux.Experimental.layer_map(freeze_dense, model, ps, st)
     
    -model_frozen(x, ps_frozen, st_frozen)
    (Float32[0.6886741 -1.2361472], (layer_1 = (frozen_params = (weight = Float32[-0.028461456 -0.5999714 -0.3850993; -0.18860114 0.72428167 0.32322538; -0.965117 -0.4585489 -0.32623518; -0.86290836 -0.82805836 -0.7673453], bias = Float32[0.4216236, -0.4510427, -0.097253, 0.23325463]), states = NamedTuple()), layer_2 = (layer_1 = (frozen_params = (weight = Float32[-0.680748 0.1764085 0.34383082 0.6469914; -0.13819042 -0.109261915 -0.6143286 -0.21672015; -0.20881107 0.70390546 0.48137343 0.25662464; 0.38187847 0.05779423 -0.35181466 -0.096988946], bias = Float32[0.41246277, 0.4318977, -0.4305781, 0.3367505]), states = NamedTuple()), layer_2 = (rng = Random.Xoshiro(0x4fa3403dd074e603, 0x12c522b8034ae186, 0x8e0c3a65079041bb, 0x21617f7747d97206, 0x22a21880af5dc689), training = Val{true}()), layer_3 = (running_mean = Float32[0.01965834, 0.0, 0.0, 0.015937408], running_var = Float32[0.90772897, 0.9, 0.9, 0.90508], training = Val{true}())), layer_3 = (frozen_params = (weight = Float32[0.7794657 0.8337032 0.6323408 -0.18308182], bias = Float32[-0.27373654]), states = NamedTuple())))

    Freezing by Layer Name

    When the function in layer_map is called, the 4th argument is the name of the layer. For example, to freeze the 1st layer inside the inner Chain, the name would be layer_2.layer_1.

    julia
    
    +model_frozen(x, ps_frozen, st_frozen)
    (Float32[0.6886741 -1.2361472], (layer_1 = (frozen_params = (weight = Float32[-0.028461456 -0.5999714 -0.3850993; -0.18860114 0.72428167 0.32322538; -0.965117 -0.4585489 -0.32623518; -0.86290836 -0.82805836 -0.7673453], bias = Float32[0.4216236, -0.4510427, -0.097253, 0.23325463]), states = NamedTuple()), layer_2 = (layer_1 = (frozen_params = (weight = Float32[-0.680748 0.1764085 0.34383082 0.6469914; -0.13819042 -0.109261915 -0.6143286 -0.21672015; -0.20881107 0.70390546 0.48137343 0.25662464; 0.38187847 0.05779423 -0.35181466 -0.096988946], bias = Float32[0.41246277, 0.4318977, -0.4305781, 0.3367505]), states = NamedTuple()), layer_2 = (rng = Random.Xoshiro(0x4fa3403dd074e603, 0x12c522b8034ae186, 0x8e0c3a65079041bb, 0x21617f7747d97206, 0x22a21880af5dc689), training = Val{true}()), layer_3 = (running_mean = Float32[0.01965834, 0.0, 0.0, 0.015937408], running_var = Float32[0.90772897, 0.9, 0.9, 0.90508], training = Val{true}())), layer_3 = (frozen_params = (weight = Float32[0.7794657 0.8337032 0.6323408 -0.18308182], bias = Float32[-0.27373654]), states = NamedTuple())))

    Freezing by Layer Name

    When the function in layer_map is called, the 4th argument is the name of the layer. For example, to freeze the 1st layer inside the inner Chain, the name would be layer_2.layer_1.

    julia
    
     function freeze_by_name(d, ps, st, name::KeyPath)
         name == KeyPath(:layer_2, :layer_1) &&
             return Lux.Experimental.freeze(d, ps, st, (:weight, :bias))
    @@ -26,7 +26,7 @@ import{_ as i,c as a,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const g
     function freeze_dense(d::Dense, ps, st, _)
         return Lux.Experimental.freeze(d, ps, st, (:weight, :bias))
     end
    -freeze_dense(l, ps, st, _) = (l, ps, st)

    Freezing Part of the Parameters

    Instead of freezing all the parameters, we can simply specify (:weight,) to freeze only the weight parameter while training the bias parameter.

    julia
    
    +freeze_dense(l, ps, st, _) = (l, ps, st)

    Freezing Part of the Parameters

    Instead of freezing all the parameters, we can simply specify (:weight,) to freeze only the weight parameter while training the bias parameter.

    julia
    
     function freeze_by_name(d, ps, st, name::KeyPath)
         name == KeyPath(:layer_2, :layer_1) &&
             return Lux.Experimental.freeze(d, ps, st, (:weight,))
    diff --git a/dev/assets/manual_freezing_model_parameters.md.C5hCUAD-.lean.js b/dev/assets/manual_freezing_model_parameters.md.C5hCUAD-.lean.js
    new file mode 100644
    index 0000000000..3735bf9240
    --- /dev/null
    +++ b/dev/assets/manual_freezing_model_parameters.md.C5hCUAD-.lean.js
    @@ -0,0 +1 @@
    +import{_ as i,c as a,a2 as n,o as e}from"./chunks/framework.BetCMmtc.js";const g=JSON.parse('{"title":"Freezing Model Parameters","description":"","frontmatter":{},"headers":[],"relativePath":"manual/freezing_model_parameters.md","filePath":"manual/freezing_model_parameters.md","lastUpdated":null}'),h={name:"manual/freezing_model_parameters.md"};function l(t,s,p,k,r,d){return e(),a("div",null,s[0]||(s[0]=[n("",16)]))}const y=i(h,[["render",l]]);export{g as __pageData,y as default};
    diff --git a/dev/assets/manual_freezing_model_parameters.md.Cni-iuqC.lean.js b/dev/assets/manual_freezing_model_parameters.md.Cni-iuqC.lean.js
    deleted file mode 100644
    index 1b1e2b2254..0000000000
    --- a/dev/assets/manual_freezing_model_parameters.md.Cni-iuqC.lean.js
    +++ /dev/null
    @@ -1,51 +0,0 @@
    -import{_ as i,c as a,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Freezing Model Parameters","description":"","frontmatter":{},"headers":[],"relativePath":"manual/freezing_model_parameters.md","filePath":"manual/freezing_model_parameters.md","lastUpdated":null}'),h={name:"manual/freezing_model_parameters.md"};function l(t,s,p,k,r,d){return e(),a("div",null,s[0]||(s[0]=[n(`

    Freezing Model Parameters

    Warning

    API for freezing parameters should be considered experimental at this point.

    In this manual entry, we will go over how to freeze certain parameters in a model.

    Freezing Layers of a Particular Kind

    To freeze a particular kind of layer, say Dense in the following example, we can use Lux.Experimental.layer_map and freeze layers if they are of type Dense.

    julia
    using Lux, Random, Functors
    -
    -rng = Xoshiro(0)
    -
    -model = Chain(Dense(3, 4), Chain(Dense(4, 4), Dropout(0.5f0), BatchNorm(4)), Dense(4, 1))
    -
    -ps, st = Lux.setup(rng, model)
    -
    -x = randn(rng, Float32, 3, 2)
    -
    -model(x, ps, st)
    -
    -function freeze_dense(d::Lux.Dense, ps, st, path)
    -    return Lux.Experimental.freeze(d, ps, st, (:weight, :bias))
    -end
    -freeze_dense(l, ps, st, path) = (l, ps, st)
    -
    -model_frozen, ps_frozen, st_frozen = Lux.Experimental.layer_map(freeze_dense, model, ps, st)
    -
    -model_frozen(x, ps_frozen, st_frozen)
    (Float32[0.6886741 -1.2361472], (layer_1 = (frozen_params = (weight = Float32[-0.028461456 -0.5999714 -0.3850993; -0.18860114 0.72428167 0.32322538; -0.965117 -0.4585489 -0.32623518; -0.86290836 -0.82805836 -0.7673453], bias = Float32[0.4216236, -0.4510427, -0.097253, 0.23325463]), states = NamedTuple()), layer_2 = (layer_1 = (frozen_params = (weight = Float32[-0.680748 0.1764085 0.34383082 0.6469914; -0.13819042 -0.109261915 -0.6143286 -0.21672015; -0.20881107 0.70390546 0.48137343 0.25662464; 0.38187847 0.05779423 -0.35181466 -0.096988946], bias = Float32[0.41246277, 0.4318977, -0.4305781, 0.3367505]), states = NamedTuple()), layer_2 = (rng = Random.Xoshiro(0x4fa3403dd074e603, 0x12c522b8034ae186, 0x8e0c3a65079041bb, 0x21617f7747d97206, 0x22a21880af5dc689), training = Val{true}()), layer_3 = (running_mean = Float32[0.01965834, 0.0, 0.0, 0.015937408], running_var = Float32[0.90772897, 0.9, 0.9, 0.90508], training = Val{true}())), layer_3 = (frozen_params = (weight = Float32[0.7794657 0.8337032 0.6323408 -0.18308182], bias = Float32[-0.27373654]), states = NamedTuple())))

    Freezing by Layer Name

    When the function in layer_map is called, the fourth argument is the KeyPath to the layer. For example, if you want to freeze the first layer inside the inner Chain, its KeyPath would be layer_2.layer_1.

    julia
    
    function freeze_by_name(d, ps, st, name::KeyPath)
        name == KeyPath(:layer_2, :layer_1) &&
            return Lux.Experimental.freeze(d, ps, st, (:weight, :bias))
        return d, ps, st
    end
    julia
    
    function freeze_dense(d::Dense, ps, st, _)
        return Lux.Experimental.freeze(d, ps, st, (:weight, :bias))
    end
    freeze_dense(l, ps, st, _) = (l, ps, st)

    Freezing Part of the Parameters

    Instead of freezing all the parameters, we can simply specify (:weight,) to freeze only the weight parameter while training the bias parameter.

    julia
    
    function freeze_by_name(d, ps, st, name::KeyPath)
        name == KeyPath(:layer_2, :layer_1) &&
            return Lux.Experimental.freeze(d, ps, st, (:weight,))
        return d, ps, st
    end
    julia
    
    function freeze_by_name(d, ps, st, name::KeyPath)
        name == KeyPath(:layer_2, :layer_1) &&
            return Lux.Experimental.freeze(d, ps, st, (:weight, :bias))
        return d, ps, st
    end

    Freezing Part of a Chain

    julia
    using Lux, Random

    rng = Random.default_rng()
    Random.seed!(rng, 0)

    model = Chain(Dense(3, 4), Dense(4, 4), Dropout(0.5f0), BatchNorm(4), Dense(4, 1))

    model_frozen = Chain(model[1:2], Lux.Experimental.freeze(model[3:4]), model[5])
    ps, st = Lux.setup(rng, model_frozen)

    x = randn(rng, Float32, 3, 2)

    model_frozen(x, ps, st)
    (Float32[0.7429947 -1.2904677], (layer_1 = (layer_1 = NamedTuple(), layer_2 = NamedTuple()), layer_2 = (frozen_params = (layer_3 = NamedTuple(), layer_4 = (scale = Float32[1.0, 1.0, 1.0, 1.0], bias = Float32[0.0, 0.0, 0.0, 0.0])), states = (layer_3 = (rng = Random.TaskLocalRNG(), training = Val{true}()), layer_4 = (running_mean = Float32[0.0, 0.048522998, 0.0, 0.015937408], running_var = Float32[0.9, 0.9470896, 0.9, 0.90508], training = Val{true}()))), layer_3 = NamedTuple()))

      GPU Management

      Lux.jl can handle multiple GPU backends. Currently, the following backends are supported:

      julia
      # Important to load trigger packages
using Lux, LuxCUDA #, AMDGPU, Metal, oneAPI

supported_gpu_backends()
      ("CUDA", "AMDGPU", "Metal", "oneAPI")

      GPU Support via Reactant

If you are using Reactant, the reactant_device function automatically selects the Reactant backend if it is available. Additionally, to force Reactant to use the GPU, you can run Reactant.set_default_backend("gpu") (this is done automatically when a GPU is available).
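A minimal sketch of this workflow, assuming the Reactant and Lux packages are installed and loaded (the variable names are ours):

```julia
using Reactant, Lux

Reactant.set_default_backend("gpu")  # optional: force the GPU backend

rdev = reactant_device()  # a ReactantDevice when Reactant is functional
x = rand(Float32, 3, 2) |> rdev  # the array is now managed by Reactant
```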

      Metal Support

      Support for Metal GPUs should be considered extremely experimental at this point.

      Automatic Backend Management (Recommended Approach)

      Automatic Backend Management is done by two simple functions: cpu_device and gpu_device.

• cpu_device: This is a simple function that just returns a CPUDevice object. For example, cdev = cpu_device() constructs the device and x_cpu = randn(Float32, 3, 2) creates an array on the CPU.

      • gpu_device: This function performs automatic GPU device selection and returns an object.

        1. If no GPU is available, it returns a CPUDevice object.

  2. If a LocalPreferences file is present, then the backend specified in the file is used. To set a backend, use Lux.gpu_backend!(<backend_name>). (a) If the trigger package corresponding to the device is not loaded, then a warning is displayed. (b) If no LocalPreferences file is present, then the first working GPU with a loaded trigger package is used.

julia
gdev = gpu_device()  # automatic selection, as described above
x_gpu = x_cpu |> gdev
(x_gpu |> cdev) ≈ x_cpu

      Manual Backend Management

      Automatic Device Selection can be circumvented by directly using CPUDevice and AbstractGPUDevice objects.

      julia
      cdev = cpu_device()

x_cpu = randn(Float32, 3, 2)

if MLDataDevices.functional(CUDADevice)
    gdev = CUDADevice()
    x_gpu = x_cpu |> gdev
elseif MLDataDevices.functional(AMDGPUDevice)
    gdev = AMDGPUDevice()
    x_gpu = x_cpu |> gdev
else
    @info "No GPU is available. Using CPU."
    x_gpu = x_cpu
end

(x_gpu |> cdev) ≈ x_cpu
      true
      Lux Interface

      Lux.jl vs LuxCore.jl

      If you just want to define compatibility with Lux without actually using any of the other functionality provided by Lux (like layers), it is recommended to depend on LuxCore.jl instead of Lux.jl. LuxCore.jl is a significantly lighter dependency.

      Following this interface provides the ability for frameworks built on top of Lux to be cross compatible. Additionally, any new functionality built into Lux will just work for your framework.

      @compact macro

      While writing out a custom struct and defining dispatches manually is a good way to understand the interface, it is not the most concise way. We recommend using the Lux.@compact macro to define layers which makes handling the states and parameters downright trivial.

      Layer Interface

      Singular Layer

      If the layer doesn't contain any other Lux layer, then it is a Singular Layer. This means it should optionally subtype Lux.AbstractLuxLayer but must define all the necessary functions mentioned in the docstrings. Consider a simplified version of Dense called Linear.

      First, set up the architectural details for this layer. Note that the architecture doesn't contain any mutable structure like arrays. When in doubt, remember: once constructed, a model architecture cannot change.

      Tip

      For people coming from a Flux.jl background, this might seem odd. We recommend checking out the Flux to Lux migration guide first before proceeding.

      julia
using LuxCore, Random, WeightInitializers # Importing `Lux` also gives you access to `LuxCore`

struct Linear{F1, F2} <: LuxCore.AbstractLuxLayer
    in_dims::Int
    out_dims::Int
    init_weight::F1
    init_bias::F2
end

function Linear(in_dims::Int, out_dims::Int; init_weight=glorot_uniform, init_bias=zeros32)
    return Linear{typeof(init_weight), typeof(init_bias)}(in_dims, out_dims, init_weight,
        init_bias)
end

l = Linear(2, 4)
      Main.Linear{typeof(glorot_uniform), typeof(zeros32)}(2, 4, WeightInitializers.glorot_uniform, WeightInitializers.zeros32)

      Next, we need to implement functions which return the parameters and states for the layer. In case of Linear, the parameters are weight and bias while the states are empty. States become important when defining layers like BatchNorm, WeightNorm, etc. The recommended data structure for returning parameters is a NamedTuple, though anything satisfying the Parameter Interface is valid.

      julia
      function LuxCore.initialparameters(rng::AbstractRNG, l::Linear)
    return (weight=l.init_weight(rng, l.out_dims, l.in_dims),
            bias=l.init_bias(rng, l.out_dims, 1))
end

LuxCore.initialstates(::AbstractRNG, ::Linear) = NamedTuple()

      You could also implement LuxCore.parameterlength and LuxCore.statelength to prevent wasteful reconstruction of the parameters and states.

      julia
      # This works
println("Parameter Length: ", LuxCore.parameterlength(l), "; State Length: ",
    LuxCore.statelength(l))

# But still recommended to define these
LuxCore.parameterlength(l::Linear) = l.out_dims * l.in_dims + l.out_dims

LuxCore.statelength(::Linear) = 0
      Parameter Length: 12; State Length: 0

      No RNG in initialparameters and initialstates

      You might notice that we don't pass in an RNG for these functions. If your parameter length and/or state length depend on a random number generator, you should think really hard about what you are trying to do and why.

      Now, we need to define how the layer works. For this you make your layer a function with exactly 3 arguments – x the input, ps the parameters, and st the states. This function must return two things – y the output, and st_new the updated state.

      julia
      function (l::Linear)(x::AbstractMatrix, ps, st::NamedTuple)
    y = ps.weight * x .+ ps.bias
    return y, st
end

      Finally, let's run this layer. If you have made it this far into the documentation, you probably don't need a refresher. But if you do, see the Quickstart.

      julia
      rng = Random.default_rng()
Random.seed!(rng, 0)

ps, st = LuxCore.setup(rng, l)

println("Parameter Length: ", LuxCore.parameterlength(l), "; State Length: ",
    LuxCore.statelength(l))

x = randn(rng, Float32, 2, 1)

LuxCore.apply(l, x, ps, st) # or `l(x, ps, st)`
      (Float32[-0.15276335; 0.45325348; 1.0207279; 0.78226817;;], NamedTuple())

      Container Layer

      If your layer comprises other Lux layers, then it is a Container Layer. Note that you could treat it as a Singular Layer, and it is still fine. FWIW, if you cannot subtype your layer with LuxCore.AbstractLuxContainerLayer then you should go down the Singular Layer route. But subtyping allows us to bypass some of these common definitions. Let us now define a layer which is basically a composition of two linear layers.

      Wrapper Layer

      If you are defining a layer that is a wrapper around another layer, then you should subtype LuxCore.AbstractLuxWrapperLayer instead of LuxCore.AbstractLuxContainerLayer. The only difference from a container layer is that it can wrap a single layer and the parameter/state structure is exactly the same as the wrapped layer.
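As a minimal sketch of that contract (ScaledLayer is a hypothetical name, not part of Lux): a wrapper that forwards to its wrapped layer and scales the output. Because it subtypes AbstractLuxWrapperLayer{:layer}, the ps and st it receives have exactly the structure of the wrapped layer, with no extra nesting.

```julia
using LuxCore, Random

# Hypothetical wrapper: scales the wrapped layer's output by a constant.
struct ScaledLayer{L} <: LuxCore.AbstractLuxWrapperLayer{:layer}
    layer::L
    scale::Float32
end

function (s::ScaledLayer)(x, ps, st)
    # `ps` and `st` are the wrapped layer's parameters/states directly --
    # that is the wrapper-layer contract.
    y, st_new = s.layer(x, ps, st)
    return s.scale .* y, st_new
end
```

With this definition, LuxCore.setup on a ScaledLayer would produce the same parameter and state structure as setting up the wrapped layer alone.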

      julia
      struct ComposedLinear{L1, L2} <: LuxCore.AbstractLuxContainerLayer{(:linear_1, :linear_2)}
    linear_1::L1
    linear_2::L2
end

function (cl::ComposedLinear)(x::AbstractMatrix, ps, st::NamedTuple)
    # To access the parameters and states for `linear_1` we do `ps.linear_1` and
    # `st.linear_1`. Similarly for `linear_2`
    y, st_l1 = cl.linear_1(x, ps.linear_1, st.linear_1)
    y, st_l2 = cl.linear_2(y, ps.linear_2, st.linear_2)
    # Finally, we need to return the new state which has the exact structure as `st`
    return y, (linear_1 = st_l1, linear_2 = st_l2)
end

      Here, you will notice we have passed (:linear_1, :linear_2) to the supertype. It essentially informs the type that <obj>.linear_1 and <obj>.linear_2 are Lux layers and we need to construct parameters and states for those. Let's construct these and see:

      julia
      model = ComposedLinear(Linear(2, 4), Linear(4, 2))
display(model)

ps, st = LuxCore.setup(rng, model)

println("Parameters: ", ps)
println("States: ", st)

println("Parameter Length: ", LuxCore.parameterlength(model), "; State Length: ",
    LuxCore.statelength(model))

x = randn(rng, Float32, 2, 1)

LuxCore.apply(model, x, ps, st) # or `model(x, ps, st)`
      (Float32[1.3410565; 0.78000563;;], (linear_1 = NamedTuple(), linear_2 = NamedTuple()))

      Parameter Interface

      We accept any parameter type as long as we can fetch the parameters using getproperty(obj, :parameter_name). This allows us to simultaneously support NamedTuples and ComponentArrays. Let us go through a concrete example of what it means. Consider Dense which expects two parameters named weight and bias.

      Automatic Differentiation

      If you are defining your own parameter type, it is your responsibility to make sure that it works with the AutoDiff System you are using.

      julia
      using Lux, Random

d = Dense(2, 3)
rng = Random.default_rng()
Random.seed!(rng, 0)

ps_default, st = LuxCore.setup(rng, d)

x = randn(rng, Float32, 2, 1)

println("Result with `NamedTuple` parameters: ", first(d(x, ps_default, st)))
Result with `NamedTuple` parameters: Float32[-0.08713347; -0.4851346; -0.8490221;;]

      Let us define a custom parameter type with fields myweight and mybias; if we try to access weight we get back myweight, and similarly for bias.

      Beware!

      This is for demonstrative purposes, don't try this at home!

      julia
      struct DenseLayerParameters{W, B}
    myweight::W
    mybias::B
end

function Base.getproperty(ps::DenseLayerParameters, x::Symbol)
    if x == :weight
        return getfield(ps, :myweight)
    elseif x == :bias
        return getfield(ps, :mybias)
    end
    return getfield(ps, x)
end

ps = DenseLayerParameters(ps_default.weight, ps_default.bias)

println("Result with `DenseLayerParameters` parameters: ", first(d(x, ps, st)))
Result with `DenseLayerParameters` parameters: Float32[0.23710957; 0.1003911; -0.57671577;;]

      The takeaway from this shouldn't be "let's define weird parameter types". Simply because you can do weird things like this doesn't mean you should, since it only leads to bugs.

      Instead this shows the flexibility you have for how your parameters can be structured.

      State Interface

      States are always type constrained to be NamedTuple. The structure of the input state must match that of the output state, i.e. keys(st_in) == keys(st_out). This doesn't imply that the types of the input and output state match. To generate efficient code, we often dispatch on the state; for example, Dropout, BatchNorm, etc.
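This invariant can be sketched in plain Julia without Lux (toy_counter is a made-up example): the returned state reuses the input state's keys, while the values, and even their types, may change between calls.

```julia
# A toy "layer call" that respects the Lux state contract: the output
# NamedTuple has the same keys as the input one, but values may differ.
function toy_counter(x::AbstractVector, st::NamedTuple)
    y = x .* st.calls  # output depends on how often we were called
    # Same keys (:calls, :training), updated values.
    return y, (; calls = st.calls + 1, training = st.training)
end

st_in = (; calls = 1, training = Val(true))
y, st_out = toy_counter([1.0, 2.0], st_in)
keys(st_in) == keys(st_out)  # the invariant Lux relies on
```

Dispatching on a state field wrapped in Val (as Dropout does with training) lets the compiler specialize each branch statically.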


      Migrating from Flux to Lux

      For the core library layers like Dense, Conv, etc. we have intentionally kept the API very similar to Flux. In most cases, replacing using Flux with using Lux should be enough to get you started. We cover the additional changes that you will have to make in the following example.

      julia
      using Lux, Random, NNlib, Zygote
       
       model = Chain(Dense(2 => 4), BatchNorm(4, relu), Dense(4 => 2))
       rng = Random.default_rng()
# ...

model(x)

gradient(model -> sum(model(x)), model)

      Implementing Custom Layers

      Flux and Lux operate under extremely different design philosophies regarding how layers should be implemented. A summary of the differences would be:

      • Flux stores everything in a single struct and relies on Functors.@functor and Flux.trainable to distinguish between trainable and non-trainable parameters.

      • Lux relies on the user to define Lux.initialparameters and Lux.initialstates to distinguish between trainable parameters (called "parameters") and non-trainable parameters (called "states"). Additionally, Lux layers define the model architecture; hence device transfer utilities like gpu_device, cpu_device, etc. cannot be applied on Lux layers. Instead, they need to be applied on the parameters and states.

      Let's work through a concrete example to demonstrate this. We will implement a very simple layer that computes A × B × x where A is not trainable and B is trainable.
      julia
      using Lux, Random, NNlib, Zygote
      +gradient(model -> sum(model(x)), model)

      Implementing Custom Layers

      Flux and Lux operate under extremely different design philosophies regarding how layers should be implemented. A summary of the differences would be:

      • Flux stores everything in a single struct and relies on Functors.@functor and Flux.trainable to distinguish between trainable and non-trainable parameters.

• Lux relies on the user to define Lux.initialparameters and Lux.initialstates to distinguish between trainable parameters (called "parameters") and non-trainable parameters (called "states"). Additionally, Lux layers define only the model architecture; hence device transfer utilities like gpu_device, cpu_device, etc. cannot be applied to Lux layers — instead, they must be applied to the parameters and states.

      `,6)),s("p",null,[i[6]||(i[6]=n("Let's work through a concrete example to demonstrate this. We will implement a very simple layer that computes ")),s("mjx-container",p,[(t(),a("svg",k,i[0]||(i[0]=[l('',1)]))),i[1]||(i[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"A"),s("mo",null,"×"),s("mi",null,"B"),s("mo",null,"×"),s("mi",null,"x")])],-1))]),i[7]||(i[7]=n(" where ")),s("mjx-container",r,[(t(),a("svg",d,i[2]||(i[2]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D434",d:"M208 74Q208 50 254 46Q272 46 272 35Q272 34 270 22Q267 8 264 4T251 0Q249 0 239 0T205 1T141 2Q70 2 50 0H42Q35 7 35 11Q37 38 48 46H62Q132 49 164 96Q170 102 345 401T523 704Q530 716 547 716H555H572Q578 707 578 706L606 383Q634 60 636 57Q641 46 701 46Q726 46 726 36Q726 34 723 22Q720 7 718 4T704 0Q701 0 690 0T651 1T578 2Q484 2 455 0H443Q437 6 437 9T439 27Q443 40 445 43L449 46H469Q523 49 533 63L521 213H283L249 155Q208 86 208 74ZM516 260Q516 271 504 416T490 562L463 519Q447 492 400 412L310 260L413 259Q516 259 516 260Z",style:{"stroke-width":"3"}})])])],-1)]))),i[3]||(i[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"A")])],-1))]),i[8]||(i[8]=n(" is not trainable and ")),s("mjx-container",E,[(t(),a("svg",g,i[4]||(i[4]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D435",d:"M231 637Q204 637 199 638T194 649Q194 676 205 682Q206 683 335 683Q594 683 608 681Q671 671 713 636T756 544Q756 480 698 429T565 360L555 357Q619 348 660 311T702 219Q702 146 630 78T453 1Q446 0 242 0Q42 0 39 2Q35 5 35 10Q35 17 37 24Q42 43 47 45Q51 46 62 46H68Q95 46 128 49Q142 52 147 61Q150 65 219 339T288 628Q288 635 231 637ZM649 544Q649 574 634 600T585 634Q578 636 493 637Q473 637 451 637T416 636H403Q388 635 384 626Q382 622 352 506Q352 503 351 500L320 374H401Q482 374 494 376Q554 386 601 434T649 544ZM595 229Q595 273 572 302T512 336Q506 337 429 337Q311 337 310 336Q310 334 293 263T258 122L240 52Q240 48 252 48T333 46Q422 46 429 47Q491 54 543 105T595 229Z",style:{"stroke-width":"3"}})])])],-1)]))),i[5]||(i[5]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"B")])],-1))]),i[9]||(i[9]=n(" is trainable."))]),i[11]||(i[11]=l(`
      julia
      using Lux, Random, NNlib, Zygote
       
       struct LuxLinear <: Lux.AbstractLuxLayer
           init_A
      @@ -56,7 +56,7 @@ import{_ as e,c as a,a2 as l,j as s,a as n,o as t}from"./chunks/framework.I-x9Gl
       # Needed so that both \`A\` and \`B\` can be transferred between devices
       Flux.@functor FluxLinear
       
      -(l::FluxLinear)(x) = l.A * l.B * x

      Now let us run the model.

      julia
      rng = Random.default_rng()
      +(l::FluxLinear)(x) = l.A * l.B * x

      Now let us run the model.

      julia
      rng = Random.default_rng()
       model = LuxLinear(randn(rng, 2, 4), randn(rng, 4, 2))
       x = randn(rng, 2, 1)
       
      diff --git a/dev/assets/manual_migrate_from_flux.md.CKssu-OV.lean.js b/dev/assets/manual_migrate_from_flux.md.CKssu-OV.lean.js
      new file mode 100644
      index 0000000000..72bd5ca01d
      --- /dev/null
      +++ b/dev/assets/manual_migrate_from_flux.md.CKssu-OV.lean.js
      @@ -0,0 +1 @@
      +import{_ as e,c as a,a2 as l,j as s,a as n,o as t}from"./chunks/framework.BetCMmtc.js";const x=JSON.parse('{"title":"Migrating from Flux to Lux","description":"","frontmatter":{},"headers":[],"relativePath":"manual/migrate_from_flux.md","filePath":"manual/migrate_from_flux.md","lastUpdated":null}'),h={name:"manual/migrate_from_flux.md"},p={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},k={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.24ex",height:"1.645ex",role:"img",focusable:"false",viewBox:"0 -716 4525.9 727","aria-hidden":"true"},r={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"1.697ex",height:"1.62ex",role:"img",focusable:"false",viewBox:"0 -716 750 716","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},g={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"1.717ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 759 683","aria-hidden":"true"};function o(y,i,c,u,F,m){return t(),a("div",null,[i[10]||(i[10]=l("",6)),s("p",null,[i[6]||(i[6]=n("Let's work through a concrete example to demonstrate this. 
We will implement a very simple layer that computes ")),s("mjx-container",p,[(t(),a("svg",k,i[0]||(i[0]=[l("",1)]))),i[1]||(i[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"A"),s("mo",null,"×"),s("mi",null,"B"),s("mo",null,"×"),s("mi",null,"x")])],-1))]),i[7]||(i[7]=n(" where ")),s("mjx-container",r,[(t(),a("svg",d,i[2]||(i[2]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D434",d:"M208 74Q208 50 254 46Q272 46 272 35Q272 34 270 22Q267 8 264 4T251 0Q249 0 239 0T205 1T141 2Q70 2 50 0H42Q35 7 35 11Q37 38 48 46H62Q132 49 164 96Q170 102 345 401T523 704Q530 716 547 716H555H572Q578 707 578 706L606 383Q634 60 636 57Q641 46 701 46Q726 46 726 36Q726 34 723 22Q720 7 718 4T704 0Q701 0 690 0T651 1T578 2Q484 2 455 0H443Q437 6 437 9T439 27Q443 40 445 43L449 46H469Q523 49 533 63L521 213H283L249 155Q208 86 208 74ZM516 260Q516 271 504 416T490 562L463 519Q447 492 400 412L310 260L413 259Q516 259 516 260Z",style:{"stroke-width":"3"}})])])],-1)]))),i[3]||(i[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"A")])],-1))]),i[8]||(i[8]=n(" is not trainable 
and ")),s("mjx-container",E,[(t(),a("svg",g,i[4]||(i[4]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D435",d:"M231 637Q204 637 199 638T194 649Q194 676 205 682Q206 683 335 683Q594 683 608 681Q671 671 713 636T756 544Q756 480 698 429T565 360L555 357Q619 348 660 311T702 219Q702 146 630 78T453 1Q446 0 242 0Q42 0 39 2Q35 5 35 10Q35 17 37 24Q42 43 47 45Q51 46 62 46H68Q95 46 128 49Q142 52 147 61Q150 65 219 339T288 628Q288 635 231 637ZM649 544Q649 574 634 600T585 634Q578 636 493 637Q473 637 451 637T416 636H403Q388 635 384 626Q382 622 352 506Q352 503 351 500L320 374H401Q482 374 494 376Q554 386 601 434T649 544ZM595 229Q595 273 572 302T512 336Q506 337 429 337Q311 337 310 336Q310 334 293 263T258 122L240 52Q240 48 252 48T333 46Q422 46 429 47Q491 54 543 105T595 229Z",style:{"stroke-width":"3"}})])])],-1)]))),i[5]||(i[5]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"B")])],-1))]),i[9]||(i[9]=n(" is trainable."))]),i[11]||(i[11]=l("",10))])}const Q=e(h,[["render",o]]);export{x as __pageData,Q as default};
      diff --git a/dev/assets/manual_migrate_from_flux.md.nhekMfcw.lean.js b/dev/assets/manual_migrate_from_flux.md.nhekMfcw.lean.js
      deleted file mode 100644
      index 301198a218..0000000000
      --- a/dev/assets/manual_migrate_from_flux.md.nhekMfcw.lean.js
      +++ /dev/null
      @@ -1,75 +0,0 @@
      -import{_ as e,c as a,a2 as l,j as s,a as n,o as t}from"./chunks/framework.I-x9Gl6h.js";const x=JSON.parse('{"title":"Migrating from Flux to Lux","description":"","frontmatter":{},"headers":[],"relativePath":"manual/migrate_from_flux.md","filePath":"manual/migrate_from_flux.md","lastUpdated":null}'),h={name:"manual/migrate_from_flux.md"},p={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},k={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.24ex",height:"1.645ex",role:"img",focusable:"false",viewBox:"0 -716 4525.9 727","aria-hidden":"true"},r={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"1.697ex",height:"1.62ex",role:"img",focusable:"false",viewBox:"0 -716 750 716","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},g={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"1.717ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 759 683","aria-hidden":"true"};function o(y,i,c,u,F,m){return t(),a("div",null,[i[10]||(i[10]=l(`

      Migrating from Flux to Lux

For the core library layers like Dense, Conv, etc., we have intentionally kept the API very similar to Flux. In most cases, replacing using Flux with using Lux should be enough to get you started. We cover the additional changes you will have to make in the following example.

      julia
      using Lux, Random, NNlib, Zygote
      -
      -model = Chain(Dense(2 => 4), BatchNorm(4, relu), Dense(4 => 2))
      -rng = Random.default_rng()
      -x = randn(rng, Float32, 2, 4)
      -
      -ps, st = Lux.setup(rng, model)
      -
      -model(x, ps, st)
      -
      -gradient(ps -> sum(first(model(x, ps, st))), ps)
      julia
      using Flux, Random, NNlib, Zygote
      -
      -model = Chain(Dense(2 => 4), BatchNorm(4, relu), Dense(4 => 2))
      -rng = Random.default_rng()
      -x = randn(rng, Float32, 2, 4)
      -
      -
      -
      -model(x)
      -
      -gradient(model -> sum(model(x)), model)

      Implementing Custom Layers

      Flux and Lux operate under extremely different design philosophies regarding how layers should be implemented. A summary of the differences would be:

      • Flux stores everything in a single struct and relies on Functors.@functor and Flux.trainable to distinguish between trainable and non-trainable parameters.

• Lux relies on the user to define Lux.initialparameters and Lux.initialstates to distinguish between trainable parameters (called "parameters") and non-trainable parameters (called "states"). Additionally, Lux layers define only the model architecture; hence device transfer utilities like gpu_device, cpu_device, etc. cannot be applied to Lux layers — instead, they must be applied to the parameters and states.

      `,6)),s("p",null,[i[6]||(i[6]=n("Let's work through a concrete example to demonstrate this. We will implement a very simple layer that computes ")),s("mjx-container",p,[(t(),a("svg",k,i[0]||(i[0]=[l('',1)]))),i[1]||(i[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"A"),s("mo",null,"×"),s("mi",null,"B"),s("mo",null,"×"),s("mi",null,"x")])],-1))]),i[7]||(i[7]=n(" where ")),s("mjx-container",r,[(t(),a("svg",d,i[2]||(i[2]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D434",d:"M208 74Q208 50 254 46Q272 46 272 35Q272 34 270 22Q267 8 264 4T251 0Q249 0 239 0T205 1T141 2Q70 2 50 0H42Q35 7 35 11Q37 38 48 46H62Q132 49 164 96Q170 102 345 401T523 704Q530 716 547 716H555H572Q578 707 578 706L606 383Q634 60 636 57Q641 46 701 46Q726 46 726 36Q726 34 723 22Q720 7 718 4T704 0Q701 0 690 0T651 1T578 2Q484 2 455 0H443Q437 6 437 9T439 27Q443 40 445 43L449 46H469Q523 49 533 63L521 213H283L249 155Q208 86 208 74ZM516 260Q516 271 504 416T490 562L463 519Q447 492 400 412L310 260L413 259Q516 259 516 260Z",style:{"stroke-width":"3"}})])])],-1)]))),i[3]||(i[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"A")])],-1))]),i[8]||(i[8]=n(" is not trainable and ")),s("mjx-container",E,[(t(),a("svg",g,i[4]||(i[4]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D435",d:"M231 637Q204 637 199 638T194 649Q194 676 205 682Q206 683 335 683Q594 683 608 681Q671 671 713 636T756 544Q756 480 698 429T565 360L555 357Q619 348 660 311T702 219Q702 146 630 78T453 1Q446 0 242 0Q42 0 39 2Q35 5 35 10Q35 17 37 24Q42 43 47 45Q51 46 62 46H68Q95 46 128 49Q142 52 147 61Q150 65 219 339T288 628Q288 635 231 637ZM649 544Q649 574 634 600T585 634Q578 636 493 637Q473 637 451 637T416 636H403Q388 635 384 626Q382 622 352 506Q352 503 351 500L320 374H401Q482 374 494 376Q554 386 601 434T649 544ZM595 229Q595 273 572 302T512 336Q506 337 429 337Q311 337 310 336Q310 334 293 263T258 122L240 52Q240 48 252 48T333 46Q422 46 429 47Q491 54 543 105T595 229Z",style:{"stroke-width":"3"}})])])],-1)]))),i[5]||(i[5]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"B")])],-1))]),i[9]||(i[9]=n(" is trainable."))]),i[11]||(i[11]=l(`
      julia
      using Lux, Random, NNlib, Zygote
      -
      -struct LuxLinear <: Lux.AbstractLuxLayer
      -    init_A
      -    init_B
      -end
      -
      -function LuxLinear(A::AbstractArray, B::AbstractArray)
      -    # Storing Arrays or any mutable structure inside a Lux Layer is not recommended
      -    # instead we will convert this to a function to perform lazy initialization
      -    return LuxLinear(() -> copy(A), () -> copy(B))
      -end
      -
      -# \`B\` is a parameter
      -Lux.initialparameters(::AbstractRNG, layer::LuxLinear) = (B=layer.init_B(),)
      -
      -# \`A\` is a state
      -Lux.initialstates(::AbstractRNG, layer::LuxLinear) = (A=layer.init_A(),)
      -
      -(l::LuxLinear)(x, ps, st) = st.A * ps.B * x, st
      julia
      using Flux, Random, NNlib, Zygote, Optimisers
      -
      -struct FluxLinear
      -    A
      -    B
      -end
      -
      -
      -
      -
      -
      -
      -
      -# \`A\` is not trainable
      -Optimisers.trainable(f::FluxLinear) = (B=f.B,)
      -
      -# Needed so that both \`A\` and \`B\` can be transferred between devices
      -Flux.@functor FluxLinear
      -
      -(l::FluxLinear)(x) = l.A * l.B * x

      Now let us run the model.

      julia
      rng = Random.default_rng()
      -model = LuxLinear(randn(rng, 2, 4), randn(rng, 4, 2))
      -x = randn(rng, 2, 1)
      -
      -ps, st = Lux.setup(rng, model)
      -
      -model(x, ps, st)
      -
      -gradient(ps -> sum(first(model(x, ps, st))), ps)
      julia
      rng = Random.default_rng()
      -model = FluxLinear(randn(rng, 2, 4), randn(rng, 4, 2))
      -x = randn(rng, 2, 1)
      -
      -
      -
      -model(x)
      -
      -gradient(model -> sum(model(x)), model)

      To reiterate some important points:

      • Don't store mutables like Arrays inside a Lux Layer.

• Parameters and states should be constructed inside the respective initial* functions.

      Certain Important Implementation Details

      Training/Inference Mode

Flux supports a mode called :auto, which automatically decides whether the user is training the model or running inference. This is the default mode for Flux.BatchNorm, Flux.GroupNorm, Flux.Dropout, etc. Lux doesn't support this mode (specifically to keep the code simple and do exactly what the user wants); hence, our default mode is training. This can be changed using Lux.testmode.
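As a brief sketch of what this looks like in practice (using the standard LuxCore API; the model architecture here is just illustrative):

```julia
using Lux, Random

model = Chain(Dense(2 => 4), BatchNorm(4, relu), Dense(4 => 2))
rng = Random.default_rng()
ps, st = Lux.setup(rng, model)
x = randn(rng, Float32, 2, 4)

# By default the returned states are in training mode
y_train, _ = model(x, ps, st)

# Switch the states to inference (test) mode, e.g. for evaluation
st_test = Lux.testmode(st)
y_test, _ = model(x, ps, st_test)
```

Note that testmode acts on the states, not on the model, consistent with Lux's separation of architecture from parameters and states.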

      Can we still use Flux Layers?

If you have Flux loaded in your code, you can use the function FromFluxAdaptor to automatically convert your model to Lux. Note that if a native Lux counterpart isn't available, we fall back to using Optimisers.destructure.
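A minimal sketch of such a conversion might look like the following (assuming both Flux and Lux are loaded; the exact adaptor options are documented in the Lux API reference):

```julia
using Flux, Lux, Random

flux_model = Flux.Chain(Flux.Dense(2 => 4, relu), Flux.Dense(4 => 2))

# Convert the Flux model into an equivalent Lux model
lux_model = FromFluxAdaptor()(flux_model)

# The converted model follows the usual Lux calling convention
rng = Random.default_rng()
ps, st = Lux.setup(rng, lux_model)
x = randn(rng, Float32, 2, 1)
y, st = lux_model(x, ps, st)
```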

      `,10))])}const Q=e(h,[["render",o]]);export{x as __pageData,Q as default}; diff --git a/dev/assets/manual_nested_autodiff.md.0e9MwF0A.lean.js b/dev/assets/manual_nested_autodiff.md.0e9MwF0A.lean.js deleted file mode 100644 index 699a10b484..0000000000 --- a/dev/assets/manual_nested_autodiff.md.0e9MwF0A.lean.js +++ /dev/null @@ -1,119 +0,0 @@ -import{_ as e,c as n,a2 as t,j as s,a,o as l}from"./chunks/framework.I-x9Gl6h.js";const j=JSON.parse('{"title":"Nested Automatic Differentiation","description":"","frontmatter":{},"headers":[],"relativePath":"manual/nested_autodiff.md","filePath":"manual/nested_autodiff.md","lastUpdated":null}'),h={name:"manual/nested_autodiff.md"},p={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},k={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.178ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 4498.7 886","aria-hidden":"true"},d={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"7.009ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 3098 886","aria-hidden":"true"},o={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},g={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"11.439ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 5056 1199","aria-hidden":"true"},Q={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"33.692ex",height:"6.74ex",role:"img",focusable:"false",viewBox:"0 -1733 14891.7 
2978.9","aria-hidden":"true"},T={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"9.913ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 4381.7 886","aria-hidden":"true"},y={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},m={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"21.167ex",height:"6.74ex",role:"img",focusable:"false",viewBox:"0 -1733 9355.6 2978.9","aria-hidden":"true"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.661ex"},xmlns:"http://www.w3.org/2000/svg",width:"3.843ex",height:"2.565ex",role:"img",focusable:"false",viewBox:"0 -841.7 1698.8 1133.9","aria-hidden":"true"},C={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"3.269ex",height:"1.902ex",role:"img",focusable:"false",viewBox:"0 -683 1445 840.8","aria-hidden":"true"},b={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},x={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.837ex",height:"1.359ex",role:"img",focusable:"false",viewBox:"0 -443 812 600.8","aria-hidden":"true"};function v(w,i,H,D,B,L){return l(),n("div",null,[i[32]||(i[32]=t(`

      Nested Automatic Differentiation

      Note

      This is a relatively new feature in Lux, so there might be some rough edges. If you encounter any issues, please let us know by opening an issue on the GitHub repository.

      In this manual, we will explore how to use automatic differentiation (AD) inside your layers or loss functions and have Lux automatically switch the AD backend with a faster one when needed.

      Tip

Don't want Lux to do this switching for you? You can disable it by setting the automatic_nested_ad_switching Preference to false.

      Remember that if you are using ForwardDiff inside a Zygote call, it will drop gradients (with a warning message), so it is not recommended to use this combination.

      Let's explore this using some questions that were posted on the Julia Discourse forum.

      julia
      using ADTypes, Lux, LinearAlgebra, Zygote, ForwardDiff, Random, StableRNGs
      -using ComponentArrays, FiniteDiff

      First let's set the stage using some minor changes that need to be made for this feature to work:

      • Switching only works if a StatefulLuxLayer is being used, with the following function calls:

        • For operations on the inputs:

          • (<some-function> ∘ <StatefulLuxLayer>)(x::AbstractArray)

          • (<StatefulLuxLayer> ∘ <some-function>)(x::AbstractArray)

          • (<StatefulLuxLayer>)(x::AbstractArray)

        • For operations on the parameters:

          • (<some-function> ∘ Base.Fix1(<StatefulLuxLayer>, x))(ps)

          • (Base.Fix1(<StatefulLuxLayer>, x) ∘ <some-function>)(ps)

          • (Base.Fix1(<StatefulLuxLayer>, x))(ps)

      • Currently we have custom routines implemented for:

      • Switching only happens for ChainRules compatible AD libraries.

We plan to capture DifferentiationInterface and Enzyme.autodiff calls in the future (PRs are welcome).

      Tip

      @compact uses StatefulLuxLayers internally, so you can directly use these features inside a layer generated by @compact.

      Loss Function containing Jacobian Computation

      This problem comes from @facusapienza on Discourse. In this case, we want to add a regularization term to the neural DE based on first-order derivatives. The neural DE part is not important here and we can demonstrate this easily with a standard neural network.

      julia
      function loss_function1(model, x, ps, st, y)
      -    # Make it a stateful layer
      -    smodel = StatefulLuxLayer{true}(model, ps, st)
-    ŷ = smodel(x)
      -    loss_emp = sum(abs2, ŷ .- y)
      -    # You can use \`Zygote.jacobian\` as well but ForwardDiff tends to be more efficient here
      -    J = ForwardDiff.jacobian(smodel, x)
      -    loss_reg = abs2(norm(J .* 0.01f0))
      -    return loss_emp + loss_reg
      -end
      -
      -# Using Batchnorm to show that it is possible
      -model = Chain(Dense(2 => 4, tanh), BatchNorm(4), Dense(4 => 2))
      -ps, st = Lux.setup(StableRNG(0), model)
      -x = randn(StableRNG(0), Float32, 2, 10)
      -y = randn(StableRNG(11), Float32, 2, 10)
      -
      -loss_function1(model, x, ps, st, y)
      14.883664f0

So our loss function works; let's take the gradient (forward diff doesn't nest nicely here):

      julia
      _, ∂x, ∂ps, _, _ = Zygote.gradient(loss_function1, model, x, ps, st, y)
      (nothing, Float32[-1.6702257 0.9043228 … 0.16094846 -4.992662; -8.010404 0.8541596 … 3.3928175 -7.1936812], (layer_1 = (weight = Float32[-4.3707023 -4.9076533; 22.199387 1.867202; 0.47872233 -0.9734574; -0.36428708 0.31861955], bias = Float32[-1.0168695, -0.16566901, 1.0829282, 1.4810884]), layer_2 = (scale = Float32[4.2774315, 3.1984668, 6.840588, 3.7018592], bias = Float32[-2.6477456, 4.9094505, -4.987689, -0.7292344]), layer_3 = (weight = Float32[11.395306 1.9206433 9.744489 -7.6726513; 2.5979974 7.106069 -7.869632 -1.787159], bias = Float32[0.041031003, 7.928609])), nothing, Float32[0.48193252 1.4007905 … -0.19124654 -1.7181164; 1.7811481 0.6913705 … -1.5627227 1.4397957])

      Now let's verify the gradients using finite differences:

      julia
      ∂x_fd = FiniteDiff.finite_difference_gradient(x -> loss_function1(model, x, ps, st, y), x)
      -∂ps_fd = FiniteDiff.finite_difference_gradient(ps -> loss_function1(model, x, ps, st, y),
      -    ComponentArray(ps))
      -
      -println("∞-norm(∂x - ∂x_fd): ", norm(∂x .- ∂x_fd, Inf))
      -println("∞-norm(∂ps - ∂ps_fd): ", norm(ComponentArray(∂ps) .- ∂ps_fd, Inf))
      ┌ Warning: \`training\` is set to \`Val{true}()\` but is not being used within an autodiff call (gradient, jacobian, etc...). This will be slow. If you are using a \`Lux.jl\` model, set it to inference (test) mode using \`LuxCore.testmode\`. Reliance on this behavior is discouraged, and is not guaranteed by Semantic Versioning, and might be removed without a deprecation cycle. It is recommended to fix this issue in your code.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-3/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:314
      -┌ Warning: \`training\` is set to \`Val{true}()\` but is not being used within an autodiff call (gradient, jacobian, etc...). This will be slow. If you are using a \`Lux.jl\` model, set it to inference (test) mode using \`LuxCore.testmode\`. Reliance on this behavior is discouraged, and is not guaranteed by Semantic Versioning, and might be removed without a deprecation cycle. It is recommended to fix this issue in your code.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-3/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:314
      -∞-norm(∂x - ∂x_fd): 0.00046014786
      -∞-norm(∂ps - ∂ps_fd): 0.00068473816

That's pretty good; of course, there will be some error from the finite-difference calculation.

      Using Batched Jacobian for Multiple Inputs

Notice that in this example the Jacobian J consists of the full matrix of derivatives of smodel with respect to all the different inputs in x. In many cases, we are interested in computing the Jacobian with respect to each input individually, avoiding the unnecessary computation of zero entries of the Jacobian. This can be achieved with batched_jacobian, which computes the Jacobian for each individual input separately. Using the same example from the previous section:

      julia
      model = Chain(Dense(2 => 4, tanh), Dense(4 => 2))
      -ps, st = Lux.setup(StableRNG(0), model)
      -x = randn(StableRNG(0), Float32, 2, 10)
      -y = randn(StableRNG(11), Float32, 2, 10)
      -
      -function loss_function_batched(model, x, ps, st, y)
      -    # Make it a stateful layer
      -    smodel = StatefulLuxLayer{true}(model, ps, st)
-    ŷ = smodel(x)
      -    loss_emp = sum(abs2, ŷ .- y)
      -    # You can use \`AutoZygote()\` as well but \`AutoForwardDiff()\` tends to be more efficient here
      -    J = batched_jacobian(smodel, AutoForwardDiff(), x)
      -    loss_reg = abs2(norm(J .* 0.01f0))
      -    return loss_emp + loss_reg
      -end
      -
      -loss_function_batched(model, x, ps, st, y)
      11.380777f0

Notice that in this last example we removed BatchNorm() from the neural network. This is done so that outputs corresponding to different inputs don't have an algebraic dependency introduced by the batch normalization in the network. We can now verify the value of the Jacobian again:

      julia
      ∂x_fd = FiniteDiff.finite_difference_gradient(x -> loss_function_batched(model, x, ps, st, y), x)
      -∂ps_fd = FiniteDiff.finite_difference_gradient(ps -> loss_function_batched(model, x, ps, st, y),
      -    ComponentArray(ps))
      -
      -_, ∂x_b, ∂ps_b, _, _ = Zygote.gradient(loss_function_batched, model, x, ps, st, y)
      -println("∞-norm(∂x_b - ∂x_fd): ", norm(∂x_b .- ∂x_fd, Inf))
      -println("∞-norm(∂ps_b - ∂ps_fd): ", norm(ComponentArray(∂ps_b) .- ∂ps_fd, Inf))
      ∞-norm(∂x_b - ∂x_fd): 0.00020849705
      -∞-norm(∂ps_b - ∂ps_fd): 0.00025326014

In this example, it is important to note that batched_jacobian now returns a 3D array containing the Jacobian computed for each independent input value in x.

      Loss Function contains Gradient Computation

Ok, here I am going to cheat a bit. This comes from a discussion on nested AD for PINNs on Discourse. Per the consensus there, we shouldn't use nested AD for third- or higher-order differentiation. Note that in the example there the user uses ForwardDiff.derivative, but we will use ForwardDiff.gradient instead, since we typically deal with array inputs and outputs.

      julia
      function loss_function2(model, t, ps, st)
      -    smodel = StatefulLuxLayer{true}(model, ps, st)
-    ŷ = only(Zygote.gradient(Base.Fix1(sum, abs2) ∘ smodel, t)) # Zygote returns a tuple
      -    return sum(abs2, ŷ .- cos.(t))
      -end
      -
-model = Chain(Dense(1 => 12, tanh), Dense(12 => 12, tanh), Dense(12 => 12, tanh),
-    Dense(12 => 1))
      -ps, st = Lux.setup(StableRNG(0), model)
      -t = rand(StableRNG(0), Float32, 1, 16)
      1×16 Matrix{Float32}:
      - 0.420698  0.488105  0.267644  0.784768  …  0.305844  0.131726  0.859405

      Now the moment of truth:

      julia
      _, ∂t, ∂ps, _ = Zygote.gradient(loss_function2, model, t, ps, st)
      (nothing, Float32[-0.55306894 0.15707001 … -8.553633 0.07513529], (layer_1 = (weight = Float32[-1.3108878; -2.4101036; … ; 0.43676856; 1.9626999;;], bias = Float32[-1.77037, 1.7834253, -7.107933, -3.4437156, 3.2615936, -1.9511774, 11.527171, -1.8003641, 6.7513776, -4.77004, -3.183308, 6.587816]), layer_2 = (weight = Float32[-0.23921251 -0.20668766 … -0.6383875 -2.23242; -1.6666821 1.042543 … -1.640935 -3.4007297; … ; -0.36023283 -0.086430185 … -0.7054552 -2.1921258; 3.1173706 -1.9727281 … 3.0402095 6.11373], bias = Float32[0.3729237, -2.9340098, 3.6637254, -0.72471243, -0.7925039, -1.1245009, -0.89859, -0.03284671, -2.729647, -8.446214, 0.06208062, 5.5367613]), layer_3 = (weight = Float32[-0.7262076 1.0381727 … -1.5016018 -1.679885; 2.2896144 0.43350345 … -1.6663245 -1.8067697; … ; -2.1851237 -0.64241976 … 1.9577401 2.148901; 0.3654292 -0.09699093 … 0.025357665 0.028738922], bias = Float32[1.1350523, -2.1769388, 4.1149755, 3.2842, 0.35638645, 3.7911117, -0.007984848, -2.0338566, -1.1642132, -2.9500434, 2.0285957, -0.41238895]), layer_4 = (weight = Float32[15.794909 -20.65178 … -7.798029 -9.910249], bias = Float32[11.461399])), nothing)

Boom, that worked! Let's verify the gradient using ForwardDiff:

      julia
      ∂t_fd = ForwardDiff.gradient(t -> loss_function2(model, t, ps, st), t)
      -∂ps_fd = ForwardDiff.gradient(ps -> loss_function2(model, t, ps, st), ComponentArray(ps))
      -
      -println("∞-norm(∂t - ∂t_fd): ", norm(∂t .- ∂t_fd, Inf))
      -println("∞-norm(∂ps - ∂ps_fd): ", norm(ComponentArray(∂ps) .- ∂ps_fd, Inf))
      ∞-norm(∂t - ∂t_fd): 5.722046e-6
      -∞-norm(∂ps - ∂ps_fd): 2.861023e-6

      Loss Function computing the Jacobian of the Parameters

The above example shows how to compute the gradient/Jacobian with respect to the inputs in the loss function. However, what if we want to compute the Jacobian with respect to the parameters? This problem has been taken from Issue 610.

We handle this by wrapping the stateful layer in Base.Fix1, which fixes the input to the stateful layer so that the Jacobian is taken with respect to the parameters.

      julia
      function loss_function3(model, x, ps, st)
      -    smodel = StatefulLuxLayer{true}(model, ps, st)
      -    J = only(Zygote.jacobian(Base.Fix1(smodel, x), ps)) # Zygote returns a tuple
      -    return sum(abs2, J)
      -end
      -
-model = Chain(Dense(1 => 12, tanh), Dense(12 => 12, tanh), Dense(12 => 12, tanh),
-    Dense(12 => 1))
      -ps, st = Lux.setup(StableRNG(0), model)
      -ps = ComponentArray(ps)  # needs to be an AbstractArray for most jacobian functions
      -x = rand(StableRNG(0), Float32, 1, 16)
      1×16 Matrix{Float32}:
      - 0.420698  0.488105  0.267644  0.784768  …  0.305844  0.131726  0.859405

We can, as usual, compute the gradient/Jacobian of the loss function:

      julia
      _, ∂x, ∂ps, _ = Zygote.gradient(loss_function3, model, x, ps, st)
      (nothing, Float32[6.8464594 6.2111297 … 1.9693907 -1.959184], (layer_1 = (weight = Float32[-3.6867144; -1.68539; … ; 2.9501405; -6.637219;;], bias = Float32[-6.488623, -7.0661273, 1.3344336, 2.6049256, 0.7290931, -15.730944, -5.431456, 7.4604826, -1.186449, 15.522138, 0.44571495, -15.376386]), layer_2 = (weight = Float32[0.3980046 -4.3071294 … -1.0914631 -4.759412; 0.8852217 -2.2523668 … 0.3977316 0.1306752; … ; -2.2192001 0.88214755 … -0.55989707 1.3939897; -3.154516 4.594261 … -1.7649312 -0.38241944], bias = Float32[7.5247808, 4.2529244, -17.252981, 3.260692, -7.4066525, 1.1126353, 2.8471048, 6.7544622, -9.815336, 0.18652153, -4.5365167, -10.04811]), layer_3 = (weight = Float32[1.0462949 4.899997 … 1.1557573 -2.284967; -2.3719285 8.687264 … -3.1904757 -8.841231; … ; -10.298787 -2.9139607 … -9.754746 -4.0381317; 1.2221471 -0.46878588 … 1.0469304 0.9091032], bias = Float32[2.8379912, 8.345026, 2.9214194, -2.2415926, -11.139433, -3.834073, -2.845412, -7.9164896, 4.222528, -1.2864517, 6.9338737, -1.4144737]), layer_4 = (weight = Float32[-59.44397 -12.688665 … 99.77208 -3.339081], bias = Float32[0.0])), nothing)

Now let's verify the gradient using ForwardDiff:

      julia
      ∂x_fd = ForwardDiff.gradient(x -> loss_function3(model, x, ps, st), x)
      -∂ps_fd = ForwardDiff.gradient(ps -> loss_function3(model, x, ps, st), ComponentArray(ps))
      -
      -println("∞-norm(∂x - ∂x_fd): ", norm(∂x .- ∂x_fd, Inf))
      -println("∞-norm(∂ps - ∂ps_fd): ", norm(ComponentArray(∂ps) .- ∂ps_fd, Inf))
      ∞-norm(∂x - ∂x_fd): 1.9073486e-6
      -∞-norm(∂ps - ∂ps_fd): 3.0517578e-5

      Hutchinson Trace Estimation

      `,51)),s("p",null,[i[6]||(i[6]=a("Hutchinson Trace Estimation often shows up in machine learning literature to provide a fast estimate of the trace of a Jacobian Matrix. This is based off of ")),i[7]||(i[7]=s("a",{href:"https://www.nowozin.net/sebastian/blog/thoughts-on-trace-estimation-in-deep-learning.html",target:"_blank",rel:"noreferrer"},"Hutchinson 1990",-1)),i[8]||(i[8]=a(" which computes the estimated trace of a matrix ")),s("mjx-container",p,[(l(),n("svg",k,i[0]||(i[0]=[t('',1)]))),i[1]||(i[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"A"),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"D"),s("mo",null,"×"),s("mi",null,"D")])])])],-1))]),i[9]||(i[9]=a(" using random vectors ")),s("mjx-container",d,[(l(),n("svg",r,i[2]||(i[2]=[t('',1)]))),i[3]||(i[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"v"),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"D")])])])],-1))]),i[10]||(i[10]=a(" s.t. 
")),s("mjx-container",o,[(l(),n("svg",g,i[4]||(i[4]=[t('',1)]))),i[5]||(i[5]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"E")]),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"["),s("mi",null,"v"),s("msup",null,[s("mi",null,"v"),s("mi",null,"T")]),s("mo",{"data-mjx-texclass":"CLOSE"},"]")]),s("mo",null,"="),s("mi",null,"I")])],-1))]),i[11]||(i[11]=a("."))]),s("mjx-container",Q,[(l(),n("svg",E,i[12]||(i[12]=[t('',1)]))),i[13]||(i[13]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"Tr"),s("mo",{stretchy:"false"},"("),s("mi",null,"A"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"E")]),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"["),s("msup",null,[s("mi",null,"v"),s("mi",null,"T")]),s("mi",null,"A"),s("mi",null,"v"),s("mo",{"data-mjx-texclass":"CLOSE"},"]")]),s("mo",null,"="),s("mfrac",null,[s("mn",null,"1"),s("mi",null,"V")]),s("munderover",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"i"),s("mo",null,"="),s("mn",null,"1")]),s("mi",null,"V")]),s("msubsup",null,[s("mi",null,"v"),s("mi",null,"i"),s("mi",null,"T")]),s("mi",null,"A"),s("msub",null,[s("mi",null,"v"),s("mi",null,"i")])])],-1))]),s("p",null,[i[16]||(i[16]=a("We can use this to compute the trace of a Jacobian Matrix ")),s("mjx-container",T,[(l(),n("svg",c,i[14]||(i[14]=[t('',1)]))),i[15]||(i[15]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"J"),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"D"),s("mo",null,"×"),s("mi",null,"D")])])])],-1))]),i[17]||(i[17]=a(" using the following 
algorithm:"))]),s("mjx-container",y,[(l(),n("svg",m,i[18]||(i[18]=[t('',1)]))),i[19]||(i[19]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"Tr"),s("mo",{stretchy:"false"},"("),s("mi",null,"J"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mfrac",null,[s("mn",null,"1"),s("mi",null,"V")]),s("munderover",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"i"),s("mo",null,"="),s("mn",null,"1")]),s("mi",null,"V")]),s("msubsup",null,[s("mi",null,"v"),s("mi",null,"i"),s("mi",null,"T")]),s("mi",null,"J"),s("msub",null,[s("mi",null,"v"),s("mi",null,"i")])])],-1))]),i[33]||(i[33]=s("p",null,"Note that we can compute this using two methods:",-1)),s("ol",null,[s("li",null,[s("p",null,[i[22]||(i[22]=a("Compute ")),s("mjx-container",u,[(l(),n("svg",F,i[20]||(i[20]=[t('',1)]))),i[21]||(i[21]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msubsup",null,[s("mi",null,"v"),s("mi",null,"i"),s("mi",null,"T")]),s("mi",null,"J")])],-1))]),i[23]||(i[23]=a(" using a Vector-Jacobian product and then do a matrix-vector product to get the trace."))])]),s("li",null,[s("p",null,[i[26]||(i[26]=a("Compute 
")),s("mjx-container",C,[(l(),n("svg",f,i[24]||(i[24]=[t('',1)]))),i[25]||(i[25]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"J"),s("msub",null,[s("mi",null,"v"),s("mi",null,"i")])])],-1))]),i[27]||(i[27]=a(" using a Jacobian-Vector product and then do a matrix-vector product to get the trace."))])])]),s("p",null,[i[30]||(i[30]=a("For simplicity, we will use a single sample of ")),s("mjx-container",b,[(l(),n("svg",x,i[28]||(i[28]=[t('',1)]))),i[29]||(i[29]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"v"),s("mi",null,"i")])])],-1))]),i[31]||(i[31]=a(" to compute the trace. Additionally, we will fix the sample to ensure that our tests against the finite difference implementation are not affected by the randomness in the sample."))]),i[34]||(i[34]=t(`

      Computing using the Vector-Jacobian Product

      julia
      function hutchinson_trace_vjp(model, x, ps, st, v)
      -    smodel = StatefulLuxLayer{true}(model, ps, st)
      -    vjp = vector_jacobian_product(smodel, AutoZygote(), x, v)
      -    return sum(batched_matmul(reshape(vjp, 1, :, size(vjp, ndims(vjp))),
      -               reshape(v, :, 1, size(v, ndims(v)))))
      -end
      hutchinson_trace_vjp (generic function with 1 method)

This VJP version is the fastest and most scalable, and hence is the recommended way to compute the Hutchinson trace estimate.
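The single-sample computations above extend to averaging over V probe vectors, as in the estimator formula. As a self-contained sketch of the estimator itself (plain Julia on an explicit matrix, no Lux involved; `hutchinson_trace` here is a hypothetical helper, not part of any package):

```julia
using LinearAlgebra, Random

# Hutchinson estimator: Tr(A) ≈ (1/V) Σᵢ vᵢᵀ A vᵢ with Rademacher probes vᵢ.
function hutchinson_trace(A::AbstractMatrix, V::Int; rng=Random.default_rng())
    D = size(A, 1)
    est = zero(float(eltype(A)))
    for _ in 1:V
        v = rand(rng, [-1.0, 1.0], D)  # Rademacher sample: entries ±1
        est += dot(v, A * v)           # vᵢᵀ A vᵢ
    end
    return est / V
end

A = [2.0 1.0; 1.0 3.0]                               # Tr(A) == 5.0
hutchinson_trace(A, 10_000; rng=MersenneTwister(0))  # ≈ 5.0 up to sampling noise
```

With more probes the estimate concentrates around the true trace, which is why averaging over several vᵢ is used in practice.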

      Computing using the Jacobian-Vector Product

      julia
      function hutchinson_trace_jvp(model, x, ps, st, v)
      -    smodel = StatefulLuxLayer{true}(model, ps, st)
      -    jvp = jacobian_vector_product(smodel, AutoForwardDiff(), x, v)
      -    return sum(batched_matmul(reshape(v, 1, :, size(v, ndims(v))),
      -               reshape(jvp, :, 1, size(jvp, ndims(jvp)))))
      -end
      hutchinson_trace_jvp (generic function with 1 method)

      Computing using the Full Jacobian

      This is definitely not recommended, but we are showing it for completeness.

      julia
      function hutchinson_trace_full_jacobian(model, x, ps, st, v)
      -    smodel = StatefulLuxLayer{true}(model, ps, st)
      -    J = ForwardDiff.jacobian(smodel, x)
      -    return vec(v)' * J * vec(v)
      -end
      hutchinson_trace_full_jacobian (generic function with 1 method)

      Now let's compute the trace and compare the results:

      julia
model = Chain(Dense(4 => 12, tanh), Dense(12 => 12, tanh), Dense(12 => 12, tanh),
-    Dense(12 => 4))
      -ps, st = Lux.setup(StableRNG(0), model)
      -x = rand(StableRNG(0), Float32, 4, 12)
-v = (rand(StableRNG(12), Float32, 4, 12) .> 0.5f0) * 2.0f0 .- 1.0f0  # Rademacher sample
      julia
      tr_vjp = hutchinson_trace_vjp(model, x, ps, st, v)
      -tr_jvp = hutchinson_trace_jvp(model, x, ps, st, v)
      -tr_full_jacobian = hutchinson_trace_full_jacobian(model, x, ps, st, v)
      -println("Tr(J) using vjp: ", tr_vjp)
      -println("Tr(J) using jvp: ", tr_jvp)
      -println("Tr(J) using full jacobian: ", tr_full_jacobian)
      Tr(J) using vjp: 4.9127817
      -Tr(J) using jvp: 4.9127817
      -Tr(J) using full jacobian: 4.912781

      Now that we have verified that the results are the same, let's try to differentiate the trace estimate. This often shows up as a regularization term in neural networks.

      julia
      _, ∂x_vjp, ∂ps_vjp, _, _ = Zygote.gradient(hutchinson_trace_vjp, model, x, ps, st, v)
      -_, ∂x_jvp, ∂ps_jvp, _, _ = Zygote.gradient(hutchinson_trace_jvp, model, x, ps, st, v)
      -_, ∂x_full_jacobian, ∂ps_full_jacobian, _, _ = Zygote.gradient(hutchinson_trace_full_jacobian,
      -    model, x, ps, st, v)

As a sanity check, let's verify that the gradients are the same:

      julia
      println("∞-norm(∂x using vjp): ", norm(∂x_vjp .- ∂x_jvp, Inf))
      -println("∞-norm(∂ps using vjp): ",
      -    norm(ComponentArray(∂ps_vjp) .- ComponentArray(∂ps_jvp), Inf))
      -println("∞-norm(∂x using full jacobian): ", norm(∂x_full_jacobian .- ∂x_vjp, Inf))
      -println("∞-norm(∂ps using full jacobian): ",
      -    norm(ComponentArray(∂ps_full_jacobian) .- ComponentArray(∂ps_vjp), Inf))
      ∞-norm(∂x using vjp): 0.0
      -∞-norm(∂ps using vjp): 0.0
      -∞-norm(∂x using full jacobian): 7.1525574e-7
      -∞-norm(∂ps using full jacobian): 1.4305115e-6
      `,20))])}const _=e(h,[["render",v]]);export{j as __pageData,_ as default}; diff --git a/dev/assets/manual_nested_autodiff.md.0e9MwF0A.js b/dev/assets/manual_nested_autodiff.md.yMpJh0UD.js similarity index 97% rename from dev/assets/manual_nested_autodiff.md.0e9MwF0A.js rename to dev/assets/manual_nested_autodiff.md.yMpJh0UD.js index 699a10b484..728b5f8e9e 100644 --- a/dev/assets/manual_nested_autodiff.md.0e9MwF0A.js +++ b/dev/assets/manual_nested_autodiff.md.yMpJh0UD.js @@ -1,4 +1,4 @@ -import{_ as e,c as n,a2 as t,j as s,a,o as l}from"./chunks/framework.I-x9Gl6h.js";const j=JSON.parse('{"title":"Nested Automatic Differentiation","description":"","frontmatter":{},"headers":[],"relativePath":"manual/nested_autodiff.md","filePath":"manual/nested_autodiff.md","lastUpdated":null}'),h={name:"manual/nested_autodiff.md"},p={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},k={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.178ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 4498.7 886","aria-hidden":"true"},d={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"7.009ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 3098 886","aria-hidden":"true"},o={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},g={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"11.439ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 5056 1199","aria-hidden":"true"},Q={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 
0",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"33.692ex",height:"6.74ex",role:"img",focusable:"false",viewBox:"0 -1733 14891.7 2978.9","aria-hidden":"true"},T={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"9.913ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 4381.7 886","aria-hidden":"true"},y={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},m={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"21.167ex",height:"6.74ex",role:"img",focusable:"false",viewBox:"0 -1733 9355.6 2978.9","aria-hidden":"true"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.661ex"},xmlns:"http://www.w3.org/2000/svg",width:"3.843ex",height:"2.565ex",role:"img",focusable:"false",viewBox:"0 -841.7 1698.8 1133.9","aria-hidden":"true"},C={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"3.269ex",height:"1.902ex",role:"img",focusable:"false",viewBox:"0 -683 1445 840.8","aria-hidden":"true"},b={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},x={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.837ex",height:"1.359ex",role:"img",focusable:"false",viewBox:"0 -443 812 600.8","aria-hidden":"true"};function v(w,i,H,D,B,L){return l(),n("div",null,[i[32]||(i[32]=t(`

      Nested Automatic Differentiation

      Note

      This is a relatively new feature in Lux, so there might be some rough edges. If you encounter any issues, please let us know by opening an issue on the GitHub repository.

In this manual, we will explore how to use automatic differentiation (AD) inside your layers or loss functions and have Lux automatically switch the AD backend to a faster one when needed.

      Tip

Don't want Lux to do this switching for you? You can disable it by setting the automatic_nested_ad_switching Preference to false.

      Remember that if you are using ForwardDiff inside a Zygote call, it will drop gradients (with a warning message), so it is not recommended to use this combination.

      Let's explore this using some questions that were posted on the Julia Discourse forum.

      julia
      using ADTypes, Lux, LinearAlgebra, Zygote, ForwardDiff, Random, StableRNGs
      +import{_ as e,c as n,a2 as t,j as s,a,o as l}from"./chunks/framework.BetCMmtc.js";const A=JSON.parse('{"title":"Nested Automatic Differentiation","description":"","frontmatter":{},"headers":[],"relativePath":"manual/nested_autodiff.md","filePath":"manual/nested_autodiff.md","lastUpdated":null}'),h={name:"manual/nested_autodiff.md"},p={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},k={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.178ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 4498.7 886","aria-hidden":"true"},d={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"7.009ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 3098 886","aria-hidden":"true"},o={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},g={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"11.439ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 5056 1199","aria-hidden":"true"},Q={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"33.692ex",height:"6.74ex",role:"img",focusable:"false",viewBox:"0 -1733 14891.7 2978.9","aria-hidden":"true"},T={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"9.913ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 4381.7 
886","aria-hidden":"true"},y={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},m={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"21.167ex",height:"6.74ex",role:"img",focusable:"false",viewBox:"0 -1733 9355.6 2978.9","aria-hidden":"true"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.661ex"},xmlns:"http://www.w3.org/2000/svg",width:"3.843ex",height:"2.565ex",role:"img",focusable:"false",viewBox:"0 -841.7 1698.8 1133.9","aria-hidden":"true"},C={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"3.269ex",height:"1.902ex",role:"img",focusable:"false",viewBox:"0 -683 1445 840.8","aria-hidden":"true"},b={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},x={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.837ex",height:"1.359ex",role:"img",focusable:"false",viewBox:"0 -443 812 600.8","aria-hidden":"true"};function v(w,i,H,D,_,B){return l(),n("div",null,[i[32]||(i[32]=t(`

      Nested Automatic Differentiation

      Note

      This is a relatively new feature in Lux, so there might be some rough edges. If you encounter any issues, please let us know by opening an issue on the GitHub repository.

In this manual, we will explore how to use automatic differentiation (AD) inside your layers or loss functions and have Lux automatically switch the AD backend to a faster one when needed.

      Tip

Don't want Lux to do this switching for you? You can disable it by setting the automatic_nested_ad_switching Preference to false.

      Remember that if you are using ForwardDiff inside a Zygote call, it will drop gradients (with a warning message), so it is not recommended to use this combination.
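A minimal sketch of that pitfall, using plain ForwardDiff and Zygote (not Lux-specific; the function `f` is purely illustrative):

```julia
using ForwardDiff, Zygote

# f computes d/dx of a * x^2 with ForwardDiff; note that `a` is only
# reachable through the closure passed to ForwardDiff.derivative.
f(a, x) = ForwardDiff.derivative(x -> a * x^2, x)

# Differentiating f w.r.t. `a` with Zygote: the dependence on `a` through
# the ForwardDiff call is dropped (Zygote prints a warning), so the result
# will not match the true derivative ∂/∂a (2ax) = 2x.
Zygote.gradient(a -> f(a, 2.0), 3.0)
```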

      Let's explore this using some questions that were posted on the Julia Discourse forum.

      julia
      using ADTypes, Lux, LinearAlgebra, Zygote, ForwardDiff, Random, StableRNGs
       using ComponentArrays, FiniteDiff

      First let's set the stage using some minor changes that need to be made for this feature to work:

      • Switching only works if a StatefulLuxLayer is being used, with the following function calls:

        • For operations on the inputs:

          • (<some-function> ∘ <StatefulLuxLayer>)(x::AbstractArray)

          • (<StatefulLuxLayer> ∘ <some-function>)(x::AbstractArray)

          • (<StatefulLuxLayer>)(x::AbstractArray)

        • For operations on the parameters:

          • (<some-function> ∘ Base.Fix1(<StatefulLuxLayer>, x))(ps)

          • (Base.Fix1(<StatefulLuxLayer>, x) ∘ <some-function>)(ps)

          • (Base.Fix1(<StatefulLuxLayer>, x))(ps)

      • Currently we have custom routines implemented for:

      • Switching only happens for ChainRules compatible AD libraries.

We plan to capture DifferentiationInterface and Enzyme.autodiff calls in the future (PRs are welcome).
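The recognized call patterns above can be sketched as follows (`sum` stands in for `<some-function>`; the model and data are illustrative):

```julia
using Lux, StableRNGs, Zygote

model = Chain(Dense(2 => 4, tanh), Dense(4 => 2))
ps, st = Lux.setup(StableRNG(0), model)
x = randn(StableRNG(0), Float32, 2, 10)
smodel = StatefulLuxLayer{true}(model, ps, st)

# Operations on the inputs: (<some-function> ∘ <StatefulLuxLayer>)(x),
# differentiated with respect to x.
∂x = only(Zygote.gradient(sum ∘ smodel, x))

# Operations on the parameters: fix the input with Base.Fix1 and
# differentiate (<some-function> ∘ Base.Fix1(<StatefulLuxLayer>, x))(ps)
# with respect to ps.
∂ps = only(Zygote.gradient(sum ∘ Base.Fix1(smodel, x), ps))
```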

      Tip

      @compact uses StatefulLuxLayers internally, so you can directly use these features inside a layer generated by @compact.

      Loss Function containing Jacobian Computation

      This problem comes from @facusapienza on Discourse. In this case, we want to add a regularization term to the neural DE based on first-order derivatives. The neural DE part is not important here and we can demonstrate this easily with a standard neural network.

      julia
      function loss_function1(model, x, ps, st, y)
           # Make it a stateful layer
           smodel = StatefulLuxLayer{true}(model, ps, st)
      @@ -16,15 +16,15 @@ import{_ as e,c as n,a2 as t,j as s,a,o as l}from"./chunks/framework.I-x9Gl6h.js
       x = randn(StableRNG(0), Float32, 2, 10)
       y = randn(StableRNG(11), Float32, 2, 10)
       
      -loss_function1(model, x, ps, st, y)
      14.883664f0

So our loss function works; let's take the gradient (ForwardDiff doesn't nest nicely here):

      julia
      _, ∂x, ∂ps, _, _ = Zygote.gradient(loss_function1, model, x, ps, st, y)
      (nothing, Float32[-1.6702257 0.9043228 … 0.16094846 -4.992662; -8.010404 0.8541596 … 3.3928175 -7.1936812], (layer_1 = (weight = Float32[-4.3707023 -4.9076533; 22.199387 1.867202; 0.47872233 -0.9734574; -0.36428708 0.31861955], bias = Float32[-1.0168695, -0.16566901, 1.0829282, 1.4810884]), layer_2 = (scale = Float32[4.2774315, 3.1984668, 6.840588, 3.7018592], bias = Float32[-2.6477456, 4.9094505, -4.987689, -0.7292344]), layer_3 = (weight = Float32[11.395306 1.9206433 9.744489 -7.6726513; 2.5979974 7.106069 -7.869632 -1.787159], bias = Float32[0.041031003, 7.928609])), nothing, Float32[0.48193252 1.4007905 … -0.19124654 -1.7181164; 1.7811481 0.6913705 … -1.5627227 1.4397957])

      Now let's verify the gradients using finite differences:

      julia
      ∂x_fd = FiniteDiff.finite_difference_gradient(x -> loss_function1(model, x, ps, st, y), x)
      +loss_function1(model, x, ps, Lux.testmode(st), y)
      11.380776f0

So our loss function works; let's take the gradient (ForwardDiff doesn't nest nicely here):

      julia
      _, ∂x, ∂ps, _, _ = Zygote.gradient(loss_function1, model, x, ps, st, y)
      (nothing, Float32[-1.6702257 0.9043228 … 0.16094846 -4.992662; -8.010404 0.8541596 … 3.3928175 -7.1936812], (layer_1 = (weight = Float32[-4.3707023 -4.9076533; 22.199387 1.867202; 0.47872233 -0.9734574; -0.36428708 0.31861955], bias = Float32[-1.0168695, -0.16566901, 1.0829282, 1.4810884]), layer_2 = (scale = Float32[4.2774315, 3.1984668, 6.840588, 3.7018592], bias = Float32[-2.6477456, 4.9094505, -4.987689, -0.7292344]), layer_3 = (weight = Float32[11.395306 1.9206433 9.744489 -7.6726513; 2.5979974 7.106069 -7.869632 -1.787159], bias = Float32[0.041031003, 7.928609])), nothing, Float32[0.48193252 1.4007905 … -0.19124654 -1.7181164; 1.7811481 0.6913705 … -1.5627227 1.4397957])

      Now let's verify the gradients using finite differences:

      julia
      ∂x_fd = FiniteDiff.finite_difference_gradient(x -> loss_function1(model, x, ps, st, y), x)
       ∂ps_fd = FiniteDiff.finite_difference_gradient(ps -> loss_function1(model, x, ps, st, y),
           ComponentArray(ps))
       
       println("∞-norm(∂x - ∂x_fd): ", norm(∂x .- ∂x_fd, Inf))
       println("∞-norm(∂ps - ∂ps_fd): ", norm(ComponentArray(∂ps) .- ∂ps_fd, Inf))
      ┌ Warning: \`training\` is set to \`Val{true}()\` but is not being used within an autodiff call (gradient, jacobian, etc...). This will be slow. If you are using a \`Lux.jl\` model, set it to inference (test) mode using \`LuxCore.testmode\`. Reliance on this behavior is discouraged, and is not guaranteed by Semantic Versioning, and might be removed without a deprecation cycle. It is recommended to fix this issue in your code.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-3/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:314
      +└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:314
       ┌ Warning: \`training\` is set to \`Val{true}()\` but is not being used within an autodiff call (gradient, jacobian, etc...). This will be slow. If you are using a \`Lux.jl\` model, set it to inference (test) mode using \`LuxCore.testmode\`. Reliance on this behavior is discouraged, and is not guaranteed by Semantic Versioning, and might be removed without a deprecation cycle. It is recommended to fix this issue in your code.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-3/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:314
      +└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:314
       ∞-norm(∂x - ∂x_fd): 0.00046014786
       ∞-norm(∂ps - ∂ps_fd): 0.00068473816

      That's pretty good; of course, some error is expected from the finite-differences calculation itself.
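
      The idea behind this check can be sketched without any packages: compare an analytic gradient against a central finite-difference approximation and look at the worst-case entry. This is a minimal, self-contained illustration (the function `f` and helper `fd_grad` below are hypothetical stand-ins, not part of the Lux API):

      julia
      # f(x) = Σ xᵢ², so the analytic gradient is ∇f(x) = 2x
      f(x) = sum(abs2, x)
      analytic_grad(x) = 2 .* x

      # Central finite-difference gradient, one coordinate at a time
      function fd_grad(f, x; h=1e-6)
          g = similar(x)
          for i in eachindex(x)
              xp = copy(x); xp[i] += h
              xm = copy(x); xm[i] -= h
              g[i] = (f(xp) - f(xm)) / (2h)
          end
          return g
      end

      x = [0.5, -1.2, 2.0]
      err = maximum(abs.(analytic_grad(x) .- fd_grad(f, x)))
      println("∞-norm of gradient error: ", err)  # small, but not exactly zero

      The residual is dominated by the O(h²) truncation error of the central difference plus floating-point cancellation, which is why the norms printed above are small but nonzero.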

      Using Batched Jacobian for Multiple Inputs

      Notice that in this example the Jacobian J consists of the full matrix of derivatives of smodel with respect to the different inputs in x. In many cases, we are only interested in the Jacobian with respect to each input individually, avoiding the unnecessary computation of the Jacobian's zero entries. This can be achieved with batched_jacobian, which computes the Jacobian separately for each single input. Using the same example from the previous section:

      julia
      model = Chain(Dense(2 => 4, tanh), Dense(4 => 2))
       ps, st = Lux.setup(StableRNG(0), model)
      @@ -42,7 +42,7 @@ import{_ as e,c as n,a2 as t,j as s,a,o as l}from"./chunks/framework.I-x9Gl6h.js
           return loss_emp + loss_reg
       end
       
      -loss_function_batched(model, x, ps, st, y)
      11.380777f0

      Notice that in this last example we removed BatchNorm() from the neural network. This is done so outputs corresponding to differern inputs don't have an algebraic dependency due to the batch normalization happening in the neural network. We can now verify again the value of the Jacobian:

      julia
      ∂x_fd = FiniteDiff.finite_difference_gradient(x -> loss_function_batched(model, x, ps, st, y), x)
      +loss_function_batched(model, x, ps, st, y)
      11.380777f0

      Notice that in this last example we removed BatchNorm() from the neural network. This is done so outputs corresponding to different inputs don't have an algebraic dependency due to the batch normalization happening in the neural network. We can now verify again the value of the Jacobian:

      julia
      ∂x_fd = FiniteDiff.finite_difference_gradient(x -> loss_function_batched(model, x, ps, st, y), x)
       ∂ps_fd = FiniteDiff.finite_difference_gradient(ps -> loss_function_batched(model, x, ps, st, y),
           ComponentArray(ps))
       
      @@ -116,4 +116,4 @@ import{_ as e,c as n,a2 as t,j as s,a,o as l}from"./chunks/framework.I-x9Gl6h.js
           norm(ComponentArray(∂ps_full_jacobian) .- ComponentArray(∂ps_vjp), Inf))
      ∞-norm(∂x using vjp): 0.0
       ∞-norm(∂ps using vjp): 0.0
       ∞-norm(∂x using full jacobian): 7.1525574e-7
      -∞-norm(∂ps using full jacobian): 1.4305115e-6
      `,20))])}const _=e(h,[["render",v]]);export{j as __pageData,_ as default}; +∞-norm(∂ps using full jacobian): 1.4305115e-6
      `,20))])}const j=e(h,[["render",v]]);export{A as __pageData,j as default}; diff --git a/dev/assets/manual_nested_autodiff.md.yMpJh0UD.lean.js b/dev/assets/manual_nested_autodiff.md.yMpJh0UD.lean.js new file mode 100644 index 0000000000..809b2439d8 --- /dev/null +++ b/dev/assets/manual_nested_autodiff.md.yMpJh0UD.lean.js @@ -0,0 +1 @@ +import{_ as e,c as n,a2 as t,j as s,a,o as l}from"./chunks/framework.BetCMmtc.js";const A=JSON.parse('{"title":"Nested Automatic Differentiation","description":"","frontmatter":{},"headers":[],"relativePath":"manual/nested_autodiff.md","filePath":"manual/nested_autodiff.md","lastUpdated":null}'),h={name:"manual/nested_autodiff.md"},p={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},k={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.178ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 4498.7 886","aria-hidden":"true"},d={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"7.009ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 3098 886","aria-hidden":"true"},o={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},g={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.791ex"},xmlns:"http://www.w3.org/2000/svg",width:"11.439ex",height:"2.713ex",role:"img",focusable:"false",viewBox:"0 -849.5 5056 1199","aria-hidden":"true"},Q={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"33.692ex",height:"6.74ex",role:"img",focusable:"false",viewBox:"0 -1733 14891.7 
2978.9","aria-hidden":"true"},T={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.09ex"},xmlns:"http://www.w3.org/2000/svg",width:"9.913ex",height:"2.004ex",role:"img",focusable:"false",viewBox:"0 -846 4381.7 886","aria-hidden":"true"},y={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},m={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"21.167ex",height:"6.74ex",role:"img",focusable:"false",viewBox:"0 -1733 9355.6 2978.9","aria-hidden":"true"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.661ex"},xmlns:"http://www.w3.org/2000/svg",width:"3.843ex",height:"2.565ex",role:"img",focusable:"false",viewBox:"0 -841.7 1698.8 1133.9","aria-hidden":"true"},C={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"3.269ex",height:"1.902ex",role:"img",focusable:"false",viewBox:"0 -683 1445 840.8","aria-hidden":"true"},b={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},x={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.357ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.837ex",height:"1.359ex",role:"img",focusable:"false",viewBox:"0 -443 812 600.8","aria-hidden":"true"};function v(w,i,H,D,_,B){return l(),n("div",null,[i[32]||(i[32]=t("",51)),s("p",null,[i[6]||(i[6]=a("Hutchinson Trace Estimation often shows up in machine learning literature to provide a fast estimate of the trace of a Jacobian Matrix. 
This is based off of ")),i[7]||(i[7]=s("a",{href:"https://www.nowozin.net/sebastian/blog/thoughts-on-trace-estimation-in-deep-learning.html",target:"_blank",rel:"noreferrer"},"Hutchinson 1990",-1)),i[8]||(i[8]=a(" which computes the estimated trace of a matrix ")),s("mjx-container",p,[(l(),n("svg",k,i[0]||(i[0]=[t("",1)]))),i[1]||(i[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"A"),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"D"),s("mo",null,"×"),s("mi",null,"D")])])])],-1))]),i[9]||(i[9]=a(" using random vectors ")),s("mjx-container",d,[(l(),n("svg",r,i[2]||(i[2]=[t("",1)]))),i[3]||(i[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"v"),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"D")])])])],-1))]),i[10]||(i[10]=a(" s.t. 
")),s("mjx-container",o,[(l(),n("svg",g,i[4]||(i[4]=[t("",1)]))),i[5]||(i[5]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"E")]),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"["),s("mi",null,"v"),s("msup",null,[s("mi",null,"v"),s("mi",null,"T")]),s("mo",{"data-mjx-texclass":"CLOSE"},"]")]),s("mo",null,"="),s("mi",null,"I")])],-1))]),i[11]||(i[11]=a("."))]),s("mjx-container",Q,[(l(),n("svg",E,i[12]||(i[12]=[t("",1)]))),i[13]||(i[13]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"Tr"),s("mo",{stretchy:"false"},"("),s("mi",null,"A"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"E")]),s("mrow",{"data-mjx-texclass":"INNER"},[s("mo",{"data-mjx-texclass":"OPEN"},"["),s("msup",null,[s("mi",null,"v"),s("mi",null,"T")]),s("mi",null,"A"),s("mi",null,"v"),s("mo",{"data-mjx-texclass":"CLOSE"},"]")]),s("mo",null,"="),s("mfrac",null,[s("mn",null,"1"),s("mi",null,"V")]),s("munderover",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"i"),s("mo",null,"="),s("mn",null,"1")]),s("mi",null,"V")]),s("msubsup",null,[s("mi",null,"v"),s("mi",null,"i"),s("mi",null,"T")]),s("mi",null,"A"),s("msub",null,[s("mi",null,"v"),s("mi",null,"i")])])],-1))]),s("p",null,[i[16]||(i[16]=a("We can use this to compute the trace of a Jacobian Matrix ")),s("mjx-container",T,[(l(),n("svg",c,i[14]||(i[14]=[t("",1)]))),i[15]||(i[15]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"J"),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"D"),s("mo",null,"×"),s("mi",null,"D")])])])],-1))]),i[17]||(i[17]=a(" using the following 
algorithm:"))]),s("mjx-container",y,[(l(),n("svg",m,i[18]||(i[18]=[t("",1)]))),i[19]||(i[19]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mtext",null,"Tr"),s("mo",{stretchy:"false"},"("),s("mi",null,"J"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mfrac",null,[s("mn",null,"1"),s("mi",null,"V")]),s("munderover",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"i"),s("mo",null,"="),s("mn",null,"1")]),s("mi",null,"V")]),s("msubsup",null,[s("mi",null,"v"),s("mi",null,"i"),s("mi",null,"T")]),s("mi",null,"J"),s("msub",null,[s("mi",null,"v"),s("mi",null,"i")])])],-1))]),i[33]||(i[33]=s("p",null,"Note that we can compute this using two methods:",-1)),s("ol",null,[s("li",null,[s("p",null,[i[22]||(i[22]=a("Compute ")),s("mjx-container",u,[(l(),n("svg",F,i[20]||(i[20]=[t("",1)]))),i[21]||(i[21]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msubsup",null,[s("mi",null,"v"),s("mi",null,"i"),s("mi",null,"T")]),s("mi",null,"J")])],-1))]),i[23]||(i[23]=a(" using a Vector-Jacobian product and then do a matrix-vector product to get the trace."))])]),s("li",null,[s("p",null,[i[26]||(i[26]=a("Compute 
")),s("mjx-container",C,[(l(),n("svg",f,i[24]||(i[24]=[t("",1)]))),i[25]||(i[25]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"J"),s("msub",null,[s("mi",null,"v"),s("mi",null,"i")])])],-1))]),i[27]||(i[27]=a(" using a Jacobian-Vector product and then do a matrix-vector product to get the trace."))])])]),s("p",null,[i[30]||(i[30]=a("For simplicity, we will use a single sample of ")),s("mjx-container",b,[(l(),n("svg",x,i[28]||(i[28]=[t("",1)]))),i[29]||(i[29]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"v"),s("mi",null,"i")])])],-1))]),i[31]||(i[31]=a(" to compute the trace. 
Additionally, we will fix the sample to ensure that our tests against the finite difference implementation are not affected by the randomness in the sample."))]),i[34]||(i[34]=t("",20))])}const j=e(h,[["render",v]]);export{A as __pageData,j as default}; diff --git a/dev/assets/manual_nn_inside_gpu_kernels.md.DI86XBbU.js b/dev/assets/manual_nn_inside_gpu_kernels.md.DI86XBbU.js deleted file mode 100644 index 9a60bb8ae5..0000000000 --- a/dev/assets/manual_nn_inside_gpu_kernels.md.DI86XBbU.js +++ /dev/null @@ -1,103 +0,0 @@ -import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E=JSON.parse('{"title":"Neural Networks Inside GPU Kernels","description":"","frontmatter":{},"headers":[],"relativePath":"manual/nn_inside_gpu_kernels.md","filePath":"manual/nn_inside_gpu_kernels.md","lastUpdated":null}'),l={name:"manual/nn_inside_gpu_kernels.md"};function e(t,s,h,k,r,d){return p(),i("div",null,s[0]||(s[0]=[n(`

      Neural Networks Inside GPU Kernels

      In this page, we will describe how to embed neural networks inside GPU kernels. We will use KernelAbstractions.jl to do this, making it compatible with multiple GPU backends.

      Experimental Feature

      This is a relatively new and experimental feature. Expect edge cases and open issues on GitHub if you find any.

      Inference Only

      Currently this works only for inference. We will eventually test automatic differentiation using Enzyme.jl

      Batching

      In most usecases, this form of batching via embedding the neural network inside a GPU kernel is not recommended and will lead to suboptimal performance. Instead, batch the input data and let Lux handle the batching internally.

      julia
      using Lux, LuxCUDA, Random, Functors
      -using KernelAbstractions, StaticArrays

      First thing to remember is that we can't use regular high-level operations inside the kernels, instead we will use Static Arrays. Leveraging Julia's multiple dispatch Lux will use specialized operations that are compatible with GPU kernels.

      julia
      @kernel function nn_eval_single_batch!(output, model, input, ps, st)
      -    i = @index(Global, Linear)
      -    y, st_ = Lux.apply(model, input[i], ps, st)
      -    output[i] = y
      -end
      nn_eval_single_batch! (generic function with 4 methods)

      We define and initialize the neural network as usual, but we need to additionally convert the Arrays into SArrays.

      julia
      nn = Chain(Dense(4, 4, relu), Dense(4, 4))
      -ps, st = Lux.setup(Xoshiro(123), nn)
      -
      -to_sarray(x) = SArray{Tuple{size(x)...}}(x)
      -ps_static = fmap(to_sarray, ps)
      -st_static = fmap(to_sarray, st)
      (layer_1 = NamedTuple(), layer_2 = NamedTuple())

      First we will run it on CPU.

      Warning

      Currently due to a minor bug, we cannot call the Lux models with vector input. As a workaround we make them into Matrix with batch size 1.

      julia
      input = [@SArray(rand(Float64, 4, 1)) for i in 1:1024]
      -output = [@SArray(zeros(Float64, 4, 1)) for i in 1:1024] # Allocate the output
      1024-element Vector{StaticArraysCore.SMatrix{4, 1, Float64, 4}}:
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      -
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]

      Now run the model using KernelAbstractions.jl

      julia
      backend = KernelAbstractions.get_backend(output)
      -cpu_kernel! = nn_eval_single_batch!(backend)
      -cpu_kernel!(output, nn, input, ps_static, st_static; ndrange=length(output))
      -KernelAbstractions.synchronize(backend)
      -output
      1024-element Vector{StaticArraysCore.SMatrix{4, 1, Float64, 4}}:
      - [2.0564903986057956; 1.1188200246206075; -1.2227837233928576; -0.8173783982243132;;]
      - [1.9721554734769875; 1.3940224213371761; -1.297959481822617; -0.7195462169922175;;]
      - [2.5680085614623662; 1.713567516238075; -1.7165512278088038; -1.009963844931984;;]
      - [1.800792614736468; 0.36222499022985155; -1.1204217935313214; -1.1836515766351254;;]
      - [1.486550215883336; 0.32839986131789933; -0.9019142280758281; -0.9452923791531558;;]
      - [2.716134755899883; 1.1617228180412864; -1.902982902377702; -1.5865265807660498;;]
      - [1.0228109822209213; 0.2525357728685884; -0.4376572711003852; -0.4500963619011972;;]
      - [2.2771862617010155; 0.5381101016248151; -1.4730743722547668; -1.488028235902512;;]
      - [3.2791573282471997; 1.3436353225087703; -2.4619778701480337; -2.1239749674027375;;]
      - [1.2290224145974982; 0.4158693023143286; -0.6370531107315014; -0.5779067839062536;;]
      -
      - [1.8674763752817416; 1.6423511984038721; -1.1477053709248992; -0.3834447782571344;;]
      - [2.091359335844565; 1.0621559246995447; -1.4763277207638008; -1.142470881033475;;]
      - [2.712979078066394; 0.42005835019799886; -1.717863343114228; -1.8601870861800127;;]
      - [0.7701346738750905; 0.2869913410456831; -0.1586047939092094; -0.10140238162746013;;]
      - [1.611584190904272; 1.2797048270773437; -0.923950547913545; -0.3558193508137715;;]
      - [2.0884834705765853; 0.862480861009647; -1.3942307655311696; -1.179584495291061;;]
      - [2.390200114697191; 0.5267549745189349; -1.657670184695808; -1.7089496198123055;;]
      - [2.1846486482317626; -0.031414255389526885; -1.3279041356366077; -1.6909446526419574;;]
      - [1.3650193059617517; 0.5210742834996898; -0.7689272356710357; -0.6642563709240284;;]

      Now we will run the same model on GPU.

      julia
      gdev = gpu_device()
      -
      -input_gpu = input |> gdev
      -output_gpu = [@SArray(zeros(Float64, 4, 1)) for i in 1:1024] |> gdev
      1024-element CUDA.CuArray{StaticArraysCore.SMatrix{4, 1, Float64, 4}, 1, CUDA.DeviceMemory}:
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      -
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      julia
      backend = KernelAbstractions.get_backend(output_gpu)
      -gpu_kernel! = nn_eval_single_batch!(backend)
      -gpu_kernel!(output_gpu, nn, input_gpu, ps_static, st_static; ndrange=length(output_gpu))
      -KernelAbstractions.synchronize(backend)
      -output_gpu
      1024-element CUDA.CuArray{StaticArraysCore.SMatrix{4, 1, Float64, 4}, 1, CUDA.DeviceMemory}:
      - [2.0564903986057956; 1.1188200246206075; -1.2227837233928576; -0.8173783982243132;;]
      - [1.9721554734769875; 1.3940224213371761; -1.297959481822617; -0.7195462169922173;;]
      - [2.5680085614623662; 1.713567516238075; -1.7165512278088038; -1.009963844931984;;]
      - [1.800792614736468; 0.36222499022985155; -1.1204217935313214; -1.1836515766351254;;]
      - [1.486550215883336; 0.32839986131789933; -0.9019142280758281; -0.9452923791531558;;]
      - [2.716134755899883; 1.1617228180412864; -1.902982902377702; -1.5865265807660498;;]
      - [1.0228109822209213; 0.2525357728685884; -0.4376572711003852; -0.4500963619011972;;]
      - [2.2771862617010155; 0.5381101016248151; -1.4730743722547668; -1.488028235902512;;]
      - [3.2791573282471997; 1.3436353225087703; -2.4619778701480337; -2.1239749674027375;;]
      - [1.2290224145974982; 0.4158693023143286; -0.6370531107315014; -0.5779067839062536;;]
      -
      - [1.8674763752817414; 1.6423511984038721; -1.147705370924899; -0.3834447782571341;;]
      - [2.0913593358445652; 1.062155924699545; -1.4763277207638013; -1.142470881033475;;]
      - [2.712979078066394; 0.420058350197999; -1.717863343114228; -1.8601870861800127;;]
      - [0.7701346738750905; 0.2869913410456831; -0.1586047939092094; -0.10140238162746013;;]
      - [1.611584190904272; 1.2797048270773437; -0.923950547913545; -0.3558193508137715;;]
      - [2.0884834705765853; 0.862480861009647; -1.3942307655311696; -1.179584495291061;;]
      - [2.390200114697191; 0.5267549745189349; -1.657670184695808; -1.7089496198123055;;]
      - [2.1846486482317626; -0.031414255389526885; -1.3279041356366077; -1.6909446526419574;;]
      - [1.3650193059617517; 0.5210742834996898; -0.7689272356710357; -0.6642563709240284;;]
      `,24)]))}const g=a(l,[["render",e]]);export{E as __pageData,g as default}; diff --git a/dev/assets/manual_nn_inside_gpu_kernels.md.DI86XBbU.lean.js b/dev/assets/manual_nn_inside_gpu_kernels.md.DI86XBbU.lean.js deleted file mode 100644 index 9a60bb8ae5..0000000000 --- a/dev/assets/manual_nn_inside_gpu_kernels.md.DI86XBbU.lean.js +++ /dev/null @@ -1,103 +0,0 @@ -import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E=JSON.parse('{"title":"Neural Networks Inside GPU Kernels","description":"","frontmatter":{},"headers":[],"relativePath":"manual/nn_inside_gpu_kernels.md","filePath":"manual/nn_inside_gpu_kernels.md","lastUpdated":null}'),l={name:"manual/nn_inside_gpu_kernels.md"};function e(t,s,h,k,r,d){return p(),i("div",null,s[0]||(s[0]=[n(`

      Neural Networks Inside GPU Kernels

      In this page, we will describe how to embed neural networks inside GPU kernels. We will use KernelAbstractions.jl to do this, making it compatible with multiple GPU backends.

      Experimental Feature

      This is a relatively new and experimental feature. Expect edge cases and open issues on GitHub if you find any.

      Inference Only

      Currently this works only for inference. We will eventually test automatic differentiation using Enzyme.jl

      Batching

      In most usecases, this form of batching via embedding the neural network inside a GPU kernel is not recommended and will lead to suboptimal performance. Instead, batch the input data and let Lux handle the batching internally.

      julia
      using Lux, LuxCUDA, Random, Functors
      -using KernelAbstractions, StaticArrays

      First thing to remember is that we can't use regular high-level operations inside the kernels, instead we will use Static Arrays. Leveraging Julia's multiple dispatch Lux will use specialized operations that are compatible with GPU kernels.

      julia
      @kernel function nn_eval_single_batch!(output, model, input, ps, st)
      -    i = @index(Global, Linear)
      -    y, st_ = Lux.apply(model, input[i], ps, st)
      -    output[i] = y
      -end
      nn_eval_single_batch! (generic function with 4 methods)

      We define and initialize the neural network as usual, but we need to additionally convert the Arrays into SArrays.

      julia
      nn = Chain(Dense(4, 4, relu), Dense(4, 4))
      -ps, st = Lux.setup(Xoshiro(123), nn)
      -
      -to_sarray(x) = SArray{Tuple{size(x)...}}(x)
      -ps_static = fmap(to_sarray, ps)
      -st_static = fmap(to_sarray, st)
      (layer_1 = NamedTuple(), layer_2 = NamedTuple())

      First we will run it on CPU.

      Warning

      Currently due to a minor bug, we cannot call the Lux models with vector input. As a workaround we make them into Matrix with batch size 1.

      julia
      input = [@SArray(rand(Float64, 4, 1)) for i in 1:1024]
      -output = [@SArray(zeros(Float64, 4, 1)) for i in 1:1024] # Allocate the output
      1024-element Vector{StaticArraysCore.SMatrix{4, 1, Float64, 4}}:
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      -
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]

      Now run the model using KernelAbstractions.jl

      julia
      backend = KernelAbstractions.get_backend(output)
      -cpu_kernel! = nn_eval_single_batch!(backend)
      -cpu_kernel!(output, nn, input, ps_static, st_static; ndrange=length(output))
      -KernelAbstractions.synchronize(backend)
      -output
      1024-element Vector{StaticArraysCore.SMatrix{4, 1, Float64, 4}}:
      - [2.0564903986057956; 1.1188200246206075; -1.2227837233928576; -0.8173783982243132;;]
      - [1.9721554734769875; 1.3940224213371761; -1.297959481822617; -0.7195462169922175;;]
      - [2.5680085614623662; 1.713567516238075; -1.7165512278088038; -1.009963844931984;;]
      - [1.800792614736468; 0.36222499022985155; -1.1204217935313214; -1.1836515766351254;;]
      - [1.486550215883336; 0.32839986131789933; -0.9019142280758281; -0.9452923791531558;;]
      - [2.716134755899883; 1.1617228180412864; -1.902982902377702; -1.5865265807660498;;]
      - [1.0228109822209213; 0.2525357728685884; -0.4376572711003852; -0.4500963619011972;;]
      - [2.2771862617010155; 0.5381101016248151; -1.4730743722547668; -1.488028235902512;;]
      - [3.2791573282471997; 1.3436353225087703; -2.4619778701480337; -2.1239749674027375;;]
      - [1.2290224145974982; 0.4158693023143286; -0.6370531107315014; -0.5779067839062536;;]
      -
      - [1.8674763752817416; 1.6423511984038721; -1.1477053709248992; -0.3834447782571344;;]
      - [2.091359335844565; 1.0621559246995447; -1.4763277207638008; -1.142470881033475;;]
      - [2.712979078066394; 0.42005835019799886; -1.717863343114228; -1.8601870861800127;;]
      - [0.7701346738750905; 0.2869913410456831; -0.1586047939092094; -0.10140238162746013;;]
      - [1.611584190904272; 1.2797048270773437; -0.923950547913545; -0.3558193508137715;;]
      - [2.0884834705765853; 0.862480861009647; -1.3942307655311696; -1.179584495291061;;]
      - [2.390200114697191; 0.5267549745189349; -1.657670184695808; -1.7089496198123055;;]
      - [2.1846486482317626; -0.031414255389526885; -1.3279041356366077; -1.6909446526419574;;]
      - [1.3650193059617517; 0.5210742834996898; -0.7689272356710357; -0.6642563709240284;;]

      Now we will run the same model on GPU.

      julia
      gdev = gpu_device()
      -
      -input_gpu = input |> gdev
      -output_gpu = [@SArray(zeros(Float64, 4, 1)) for i in 1:1024] |> gdev
      1024-element CUDA.CuArray{StaticArraysCore.SMatrix{4, 1, Float64, 4}, 1, CUDA.DeviceMemory}:
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      -
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      julia
      backend = KernelAbstractions.get_backend(output_gpu)
      -gpu_kernel! = nn_eval_single_batch!(backend)
      -gpu_kernel!(output_gpu, nn, input_gpu, ps_static, st_static; ndrange=length(output_gpu))
      -KernelAbstractions.synchronize(backend)
      -output_gpu
      1024-element CUDA.CuArray{StaticArraysCore.SMatrix{4, 1, Float64, 4}, 1, CUDA.DeviceMemory}:
      - [2.0564903986057956; 1.1188200246206075; -1.2227837233928576; -0.8173783982243132;;]
      - [1.9721554734769875; 1.3940224213371761; -1.297959481822617; -0.7195462169922173;;]
      - [2.5680085614623662; 1.713567516238075; -1.7165512278088038; -1.009963844931984;;]
      - [1.800792614736468; 0.36222499022985155; -1.1204217935313214; -1.1836515766351254;;]
      - [1.486550215883336; 0.32839986131789933; -0.9019142280758281; -0.9452923791531558;;]
      - [2.716134755899883; 1.1617228180412864; -1.902982902377702; -1.5865265807660498;;]
      - [1.0228109822209213; 0.2525357728685884; -0.4376572711003852; -0.4500963619011972;;]
      - [2.2771862617010155; 0.5381101016248151; -1.4730743722547668; -1.488028235902512;;]
      - [3.2791573282471997; 1.3436353225087703; -2.4619778701480337; -2.1239749674027375;;]
      - [1.2290224145974982; 0.4158693023143286; -0.6370531107315014; -0.5779067839062536;;]
      -
      - [1.8674763752817414; 1.6423511984038721; -1.147705370924899; -0.3834447782571341;;]
      - [2.0913593358445652; 1.062155924699545; -1.4763277207638013; -1.142470881033475;;]
      - [2.712979078066394; 0.420058350197999; -1.717863343114228; -1.8601870861800127;;]
      - [0.7701346738750905; 0.2869913410456831; -0.1586047939092094; -0.10140238162746013;;]
      - [1.611584190904272; 1.2797048270773437; -0.923950547913545; -0.3558193508137715;;]
      - [2.0884834705765853; 0.862480861009647; -1.3942307655311696; -1.179584495291061;;]
      - [2.390200114697191; 0.5267549745189349; -1.657670184695808; -1.7089496198123055;;]
      - [2.1846486482317626; -0.031414255389526885; -1.3279041356366077; -1.6909446526419574;;]
      - [1.3650193059617517; 0.5210742834996898; -0.7689272356710357; -0.6642563709240284;;]
      `,24)]))}const g=a(l,[["render",e]]);export{E as __pageData,g as default}; diff --git a/dev/assets/manual_performance_pitfalls.md.qHBdfLha.js b/dev/assets/manual_performance_pitfalls.md.CTvklr_I.js similarity index 99% rename from dev/assets/manual_performance_pitfalls.md.qHBdfLha.js rename to dev/assets/manual_performance_pitfalls.md.CTvklr_I.js index d10c18aab3..ecd0e06546 100644 --- a/dev/assets/manual_performance_pitfalls.md.qHBdfLha.js +++ b/dev/assets/manual_performance_pitfalls.md.CTvklr_I.js @@ -1,4 +1,4 @@ -import{_ as s,c as a,a2 as e,o as t}from"./chunks/framework.I-x9Gl6h.js";const c=JSON.parse('{"title":"Performance Pitfalls & How to Catch Them","description":"","frontmatter":{},"headers":[],"relativePath":"manual/performance_pitfalls.md","filePath":"manual/performance_pitfalls.md","lastUpdated":null}'),n={name:"manual/performance_pitfalls.md"};function l(p,i,h,r,o,d){return t(),a("div",null,i[0]||(i[0]=[e(`

      Performance Pitfalls & How to Catch Them

Go through the following documentation for general performance tips:

      1. Official Julia Performance Tips.

      2. Recommendations for selecting AD packages.

      Spurious Type-Promotion

Lux by default uses Julia semantics for type promotion. While this means that we do the "correct" numerical thing, it can often come as a surprise to users coming from a more deep-learning background. For example, consider the following code:

      julia
      using Lux, Random
      +import{_ as s,c as a,a2 as e,o as t}from"./chunks/framework.BetCMmtc.js";const c=JSON.parse('{"title":"Performance Pitfalls & How to Catch Them","description":"","frontmatter":{},"headers":[],"relativePath":"manual/performance_pitfalls.md","filePath":"manual/performance_pitfalls.md","lastUpdated":null}'),n={name:"manual/performance_pitfalls.md"};function l(p,i,h,r,o,d){return t(),a("div",null,i[0]||(i[0]=[e(`

      Performance Pitfalls & How to Catch Them

Go through the following documentation for general performance tips:

      1. Official Julia Performance Tips.

      2. Recommendations for selecting AD packages.

      Spurious Type-Promotion

Lux by default uses Julia semantics for type promotion. While this means that we do the "correct" numerical thing, it can often come as a surprise to users coming from a more deep-learning background. For example, consider the following code:

      julia
      using Lux, Random
       
       rng = Xoshiro(0)
       
      diff --git a/dev/assets/manual_performance_pitfalls.md.CTvklr_I.lean.js b/dev/assets/manual_performance_pitfalls.md.CTvklr_I.lean.js
      new file mode 100644
      index 0000000000..80e6f2b334
      --- /dev/null
      +++ b/dev/assets/manual_performance_pitfalls.md.CTvklr_I.lean.js
      @@ -0,0 +1 @@
      +import{_ as s,c as a,a2 as e,o as t}from"./chunks/framework.BetCMmtc.js";const c=JSON.parse('{"title":"Performance Pitfalls & How to Catch Them","description":"","frontmatter":{},"headers":[],"relativePath":"manual/performance_pitfalls.md","filePath":"manual/performance_pitfalls.md","lastUpdated":null}'),n={name:"manual/performance_pitfalls.md"};function l(p,i,h,r,o,d){return t(),a("div",null,i[0]||(i[0]=[e("",33)]))}const g=s(n,[["render",l]]);export{c as __pageData,g as default};
      diff --git a/dev/assets/manual_performance_pitfalls.md.qHBdfLha.lean.js b/dev/assets/manual_performance_pitfalls.md.qHBdfLha.lean.js
      deleted file mode 100644
      index d10c18aab3..0000000000
      --- a/dev/assets/manual_performance_pitfalls.md.qHBdfLha.lean.js
      +++ /dev/null
      @@ -1,28 +0,0 @@
      -import{_ as s,c as a,a2 as e,o as t}from"./chunks/framework.I-x9Gl6h.js";const c=JSON.parse('{"title":"Performance Pitfalls & How to Catch Them","description":"","frontmatter":{},"headers":[],"relativePath":"manual/performance_pitfalls.md","filePath":"manual/performance_pitfalls.md","lastUpdated":null}'),n={name:"manual/performance_pitfalls.md"};function l(p,i,h,r,o,d){return t(),a("div",null,i[0]||(i[0]=[e(`

      Performance Pitfalls & How to Catch Them

Go through the following documentation for general performance tips:

      1. Official Julia Performance Tips.

      2. Recommendations for selecting AD packages.

      Spurious Type-Promotion

Lux by default uses Julia semantics for type promotion. While this means that we do the "correct" numerical thing, it can often come as a surprise to users coming from a more deep-learning background. For example, consider the following code:

      julia
      using Lux, Random
      -
      -rng = Xoshiro(0)
      -
      -model = Dense(2 => 2, gelu)
      -ps, st = Lux.setup(rng, model)
      -Lux.recursive_eltype((ps, st))
      Float32

      As we can see that ps and st are structures with the highest precision being Float32. Now let's run the model using some random data:

      julia
      x = rand(rng, 2, 4)
      -
      -eltype(first(model(x, ps, st)))
      Float64

Oops, our output became Float64. This is bad on CPUs, but an absolute performance disaster on GPUs. The reason this happened is that our input x was Float64. Instead, we should have used a Float32 input:

      julia
      x = rand(rng, Float32, 2, 4)
      -
      -eltype(first(model(x, ps, st)))
      Float32

This was easy to fix for a small model. But certain layers might incorrectly promote objects to a higher precision, causing a regression in performance. There are two recommended ways to fix this or track it down:

      1. Use Lux.Experimental.@debug_mode to see which layer is causing the type-promotion.

2. Alternatively, to control the global behavior of eltypes in Lux and allow it to auto-correct the precision, use match_eltype and the eltype_mismatch_handling preference.
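The promotion above can also be avoided simply by converting the input before calling the model. A minimal sketch, mirroring the Dense model from the snippet above:

```julia
using Lux, Random

rng = Xoshiro(0)
model = Dense(2 => 2, gelu)
ps, st = Lux.setup(rng, model)   # parameters are Float32

x = rand(rng, 2, 4)              # Float64 input -- triggers promotion
y64, _ = model(x, ps, st)

x32 = Float32.(x)                # convert the input up front
y32, _ = model(x32, ps, st)

eltype(y64), eltype(y32)         # (Float64, Float32)
```

Converting once at the boundary keeps every intermediate computation in Float32.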

      Scalar Indexing on GPU Arrays

When running code on GPUs, it is recommended to disallow scalar indexing. Scalar indexing is disabled by default except in the REPL; you can disable it even in REPL mode using:

      julia
      using GPUArraysCore
      -GPUArraysCore.allowscalar(false)

      Type Instabilities

      Lux.jl is integrated with DispatchDoctor.jl to catch type instabilities. You can easily enable it by setting the instability_check preference. This will help you catch type instabilities in your code. For more information on how to set preferences, check out Lux.set_dispatch_doctor_preferences!.
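A minimal sketch of enabling the dispatch doctor globally, assuming the signature shown in the Lux.set_dispatch_doctor_preferences! docstring:

```julia
using Lux

# Make type instabilities throw errors instead of passing silently.
# A Julia session restart is required for the preference to take effect.
Lux.set_dispatch_doctor_preferences!("error")
```

Valid modes also include "disable" (the default) and "warn".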

      Faster Primitives

      Prefer to use deep learning primitives and their fused variants from LuxLib.jl instead of NNlib.jl. Some of the alternatives are:

      1. Replace NNlib.batched_mul with LuxLib.batched_matmul.

      2. Replace NNlib.conv with bias and activation with LuxLib.fused_conv_bias_activation.

      3. Replace σ.(w * x .+ b) with LuxLib.fused_dense_bias_activation.

      4. Replace uses of σ.(x) with LuxLib.fast_activation or LuxLib.fast_activation!! (the latter one is often faster).

      5. Replace uses of σ.(x .+ b) with LuxLib.bias_activation or LuxLib.bias_activation!! (the latter one is often faster).
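As a sketch of item 3 above, the fused call should agree numerically with the naive broadcast version (assuming LuxLib and NNlib are available):

```julia
using LuxLib, NNlib, Random

rng = Xoshiro(0)
w = randn(rng, Float32, 4, 8)
x = randn(rng, Float32, 8, 16)
b = randn(rng, Float32, 4)

y_naive = gelu.(w * x .+ b)                           # separate matmul, add, activation
y_fused = fused_dense_bias_activation(gelu, w, x, b)  # single fused call

y_naive ≈ y_fused  # the results match up to floating-point tolerance
```

The fused variant avoids materializing the intermediate `w * x .+ b` array.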

      Optional Dependencies for Performance

For faster performance on CPUs, load the following packages:

      1. LoopVectorization.jl

      2. Octavian.jl

If these are available, we automatically use optimized versions of the layers, though there are cases where this might be an issue (see #980 and disabling loop vectorization).

      Data Loading and Device Transfer

      A common pattern for loading data and transferring data to GPUs looks like this:

      julia
      dataloader = DataLoader(dataset; parallel=true, batchsize=12)  # from MLUtils.jl
      -gdev = gpu_device()
      -
      -for (X, y) in dataloader
      -    X = X |> gdev
      -    y = y |> gdev
      -    # ...
      -    # do some computation
      -    # ...
      -end

This is typically fast enough, but the data transfer to the device happens in the main process and does not exploit the parallelism of the dataloader. Instead, we can do this:

      julia
      dataloader = DataLoader(dataset; parallel=true, batchsize=12)  # from MLUtils.jl
      -gdev = gpu_device()
      -
      -for (X, y) in gdev(dataloader)
      -    # ...
      -    # do some computation
      -    # ...
      -end

Here, X and y are on the GPU device gdev, and the data transfer happens in the worker processes. Additionally, this behaves similarly to CuIterator from CUDA.jl and eagerly frees the data after every iteration (this is device agnostic and works on all supported GPU backends).

      `,33)]))}const g=s(n,[["render",l]]);export{c as __pageData,g as default}; diff --git a/dev/assets/manual_preferences.md.rR-7V6zU.js b/dev/assets/manual_preferences.md.Ddm3pW4n.js similarity index 98% rename from dev/assets/manual_preferences.md.rR-7V6zU.js rename to dev/assets/manual_preferences.md.Ddm3pW4n.js index 002fe44861..f407b7d8a8 100644 --- a/dev/assets/manual_preferences.md.rR-7V6zU.js +++ b/dev/assets/manual_preferences.md.Ddm3pW4n.js @@ -1,3 +1,3 @@ -import{_ as a,c as i,a2 as t,o as s}from"./chunks/framework.I-x9Gl6h.js";const u=JSON.parse('{"title":"Preferences for Lux.jl","description":"","frontmatter":{},"headers":[],"relativePath":"manual/preferences.md","filePath":"manual/preferences.md","lastUpdated":null}'),o={name:"manual/preferences.md"};function n(r,e,l,c,d,p){return s(),i("div",null,e[0]||(e[0]=[t(`

      Preferences for Lux.jl

      How to set Preferences

      PreferenceTools.jl provides an interactive way to set preferences. First run the following command:

      julia
      using PreferenceTools

      Then in the pkg mode (press ] in the REPL), run the following command:

      julia
      pkg> preference add Lux <preference-name>=<value>
      +import{_ as a,c as i,a2 as t,o as s}from"./chunks/framework.BetCMmtc.js";const u=JSON.parse('{"title":"Preferences for Lux.jl","description":"","frontmatter":{},"headers":[],"relativePath":"manual/preferences.md","filePath":"manual/preferences.md","lastUpdated":null}'),o={name:"manual/preferences.md"};function n(r,e,l,c,d,p){return s(),i("div",null,e[0]||(e[0]=[t(`

      Preferences for Lux.jl

      How to set Preferences

      PreferenceTools.jl provides an interactive way to set preferences. First run the following command:

      julia
      using PreferenceTools

      Then in the pkg mode (press ] in the REPL), run the following command:

      julia
      pkg> preference add Lux <preference-name>=<value>
       pkg> preference add LuxLib <preference-name>=<value>
       pkg> preference add LuxCore <preference-name>=<value>

Lux.jl relies on several preferences to decide how to run your code. Here is an exhaustive list of the preferences that Lux.jl uses.

      Nested Automatic Differentiation

      1. automatic_nested_ad_switching - Set this to false to disable automatic switching of backends for nested automatic differentiation. See the manual section on nested automatic differentiation for more details.

      GPU-Aware MPI Support

      If you are using a custom MPI build that supports CUDA or ROCM, you can use the following preferences with Preferences.jl:

      1. cuda_aware_mpi - Set this to true if your MPI build is CUDA aware.

      2. rocm_aware_mpi - Set this to true if your MPI build is ROCM aware.

      By default, both of these preferences are set to false.

      GPU Backend Selection

1. gpu_backend - Set this to bypass the automatic backend selection and use a specific GPU backend. Valid options are "cuda", "rocm", "metal", and "oneapi". This preference needs to be set for the MLDataDevices package. It is recommended to use MLDataDevices.gpu_backend! to set this preference.
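A minimal sketch of setting (and clearing) this preference with MLDataDevices.gpu_backend!, per its docstring:

```julia
using MLDataDevices

# Pin the backend to CUDA; this writes a LocalPreferences.toml entry.
# The Julia session must be restarted for the change to take effect.
MLDataDevices.gpu_backend!("cuda")

# To delete the preference and return to automatic selection:
MLDataDevices.gpu_backend!()
```

The backend string is validated against supported_gpu_backends() before being stored.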

      Automatic Eltype Conversion

      1. eltype_mismatch_handling - Preference controlling what happens when layers get different eltypes as input. See the documentation on match_eltype for more details.

      Dispatch Doctor

1. instability_check - Preference controlling the dispatch doctor. See the documentation on Lux.set_dispatch_doctor_preferences! for more details. The preference needs to be set for both the LuxCore and LuxLib packages; both default to "disable".
      • Setting the LuxCore preference sets the check at the level of LuxCore.apply. This essentially activates the dispatch doctor for all Lux layers.

      • Setting the LuxLib preference sets the check at the level of functional layer of Lux, for example, fused_dense_bias_activation. These functions are supposed to be type stable for common input types and can be used to guarantee type stability.

      Disabling Loop Vectorization / Octavian

LoopVectorization.jl and Octavian.jl are optional dependencies used to accelerate certain CPU operations. However, these packages are tightly coupled with Julia internals and might not work with all Julia versions and systems. If these packages are loaded in any form, LuxLib will use the optimized versions of the functions. It might nevertheless be desirable to disable them and use the default implementations instead; this can be done by setting the disable_loop_vectorization preference to true for LuxLib.

      `,18)]))}const k=a(o,[["render",n]]);export{u as __pageData,k as default}; diff --git a/dev/assets/manual_preferences.md.Ddm3pW4n.lean.js b/dev/assets/manual_preferences.md.Ddm3pW4n.lean.js new file mode 100644 index 0000000000..0365cd709f --- /dev/null +++ b/dev/assets/manual_preferences.md.Ddm3pW4n.lean.js @@ -0,0 +1 @@ +import{_ as a,c as i,a2 as t,o as s}from"./chunks/framework.BetCMmtc.js";const u=JSON.parse('{"title":"Preferences for Lux.jl","description":"","frontmatter":{},"headers":[],"relativePath":"manual/preferences.md","filePath":"manual/preferences.md","lastUpdated":null}'),o={name:"manual/preferences.md"};function n(r,e,l,c,d,p){return s(),i("div",null,e[0]||(e[0]=[t("",18)]))}const k=a(o,[["render",n]]);export{u as __pageData,k as default}; diff --git a/dev/assets/manual_preferences.md.rR-7V6zU.lean.js b/dev/assets/manual_preferences.md.rR-7V6zU.lean.js deleted file mode 100644 index 002fe44861..0000000000 --- a/dev/assets/manual_preferences.md.rR-7V6zU.lean.js +++ /dev/null @@ -1,3 +0,0 @@ -import{_ as a,c as i,a2 as t,o as s}from"./chunks/framework.I-x9Gl6h.js";const u=JSON.parse('{"title":"Preferences for Lux.jl","description":"","frontmatter":{},"headers":[],"relativePath":"manual/preferences.md","filePath":"manual/preferences.md","lastUpdated":null}'),o={name:"manual/preferences.md"};function n(r,e,l,c,d,p){return s(),i("div",null,e[0]||(e[0]=[t(`

      Preferences for Lux.jl

      How to set Preferences

      PreferenceTools.jl provides an interactive way to set preferences. First run the following command:

      julia
      using PreferenceTools

      Then in the pkg mode (press ] in the REPL), run the following command:

      julia
      pkg> preference add Lux <preference-name>=<value>
      -pkg> preference add LuxLib <preference-name>=<value>
      -pkg> preference add LuxCore <preference-name>=<value>

Lux.jl relies on several preferences to decide how to run your code. Here is an exhaustive list of the preferences that Lux.jl uses.

      Nested Automatic Differentiation

      1. automatic_nested_ad_switching - Set this to false to disable automatic switching of backends for nested automatic differentiation. See the manual section on nested automatic differentiation for more details.

      GPU-Aware MPI Support

      If you are using a custom MPI build that supports CUDA or ROCM, you can use the following preferences with Preferences.jl:

      1. cuda_aware_mpi - Set this to true if your MPI build is CUDA aware.

      2. rocm_aware_mpi - Set this to true if your MPI build is ROCM aware.

      By default, both of these preferences are set to false.

      GPU Backend Selection

1. gpu_backend - Set this to bypass the automatic backend selection and use a specific GPU backend. Valid options are "cuda", "rocm", "metal", and "oneapi". This preference needs to be set for the MLDataDevices package. It is recommended to use MLDataDevices.gpu_backend! to set this preference.

      Automatic Eltype Conversion

      1. eltype_mismatch_handling - Preference controlling what happens when layers get different eltypes as input. See the documentation on match_eltype for more details.

      Dispatch Doctor

1. instability_check - Preference controlling the dispatch doctor. See the documentation on Lux.set_dispatch_doctor_preferences! for more details. The preference needs to be set for both the LuxCore and LuxLib packages; both default to "disable".
      • Setting the LuxCore preference sets the check at the level of LuxCore.apply. This essentially activates the dispatch doctor for all Lux layers.

      • Setting the LuxLib preference sets the check at the level of functional layer of Lux, for example, fused_dense_bias_activation. These functions are supposed to be type stable for common input types and can be used to guarantee type stability.

      Disabling Loop Vectorization / Octavian

LoopVectorization.jl and Octavian.jl are optional dependencies used to accelerate certain CPU operations. However, these packages are tightly coupled with Julia internals and might not work with all Julia versions and systems. If these packages are loaded in any form, LuxLib will use the optimized versions of the functions. It might nevertheless be desirable to disable them and use the default implementations instead; this can be done by setting the disable_loop_vectorization preference to true for LuxLib.

      `,18)]))}const k=a(o,[["render",n]]);export{u as __pageData,k as default}; diff --git a/dev/assets/manual_weight_initializers.md.D3ynEEqW.js b/dev/assets/manual_weight_initializers.md.BZq8xM8l.js similarity index 72% rename from dev/assets/manual_weight_initializers.md.D3ynEEqW.js rename to dev/assets/manual_weight_initializers.md.BZq8xM8l.js index 092056d44e..f0fa984f56 100644 --- a/dev/assets/manual_weight_initializers.md.D3ynEEqW.js +++ b/dev/assets/manual_weight_initializers.md.BZq8xM8l.js @@ -1,4 +1,4 @@ -import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.I-x9Gl6h.js";const c=JSON.parse('{"title":"Initializing Weights","description":"","frontmatter":{},"headers":[],"relativePath":"manual/weight_initializers.md","filePath":"manual/weight_initializers.md","lastUpdated":null}'),e={name:"manual/weight_initializers.md"};function l(p,s,h,k,d,g){return t(),a("div",null,s[0]||(s[0]=[n(`

      Initializing Weights

      WeightInitializers.jl provides common weight initialization schemes for deep learning models.

      julia
      using WeightInitializers, Random
      +import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.BetCMmtc.js";const c=JSON.parse('{"title":"Initializing Weights","description":"","frontmatter":{},"headers":[],"relativePath":"manual/weight_initializers.md","filePath":"manual/weight_initializers.md","lastUpdated":null}'),e={name:"manual/weight_initializers.md"};function l(p,s,h,k,d,g){return t(),a("div",null,s[0]||(s[0]=[n(`

      Initializing Weights

      WeightInitializers.jl provides common weight initialization schemes for deep learning models.

      julia
      using WeightInitializers, Random
       
       # Fixing rng
       rng = Random.MersenneTwister(42)
      Random.MersenneTwister(42)
      julia
      # Explicit rng call
      @@ -18,10 +18,10 @@ import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.I-x9Gl6h.js";const c
         0.486214   0.321506  -0.306641  0.145296   0.206476

      To generate weights directly on GPU, pass in a CUDA.RNG. For a complete list of supported RNG types, see Supported RNG Types.

      julia
      using LuxCUDA
       
       weights = kaiming_normal(CUDA.default_rng(), 2, 5)
      2×5 CUDA.CuArray{Float32, 2, CUDA.DeviceMemory}:
      - -0.550935  -0.671487  0.595443   0.612179  0.157552
      - -0.706354  -0.58597   0.0450307  0.222485  0.434273

      You can also generate Complex Numbers:

      julia
      weights = kaiming_normal(CUDA.default_rng(), ComplexF32, 2, 5)
      2×5 CUDA.CuArray{ComplexF32, 2, CUDA.DeviceMemory}:
      - 0.0939071-0.432439im    0.51712-0.177116im  …   0.859452+0.0580112im
      - 0.0271588+0.529416im  -0.471206+1.02357im      -0.631482+0.317067im

      Quick examples

The package is designed to work with deep learning libraries such as (F)Lux. All the methods take as input the chosen rng type and the dimensions of the array.

      julia
      weights = init(rng, dims...)

The rng is optional; if not specified, a default one will be used.

      julia
      weights = init(dims...)

If you need to use keyword arguments, the methods can be called with just the rng (optionally) and the keywords, returning a function that behaves like the two examples above.

      julia
      weights_init = init(rng; kwargs...)
      + -1.43846   -0.0514504  -0.752638   0.632952  -0.711561
      + -0.451763   0.846389   -0.053432  -0.857412  -0.351028

      You can also generate Complex Numbers:

      julia
      weights = kaiming_normal(CUDA.default_rng(), ComplexF32, 2, 5)
      2×5 CUDA.CuArray{ComplexF32, 2, CUDA.DeviceMemory}:
      +   0.180649+0.13021im   0.00332033+0.608668im  …  -0.509792+0.507333im
      + -0.0334668-0.253364im   -0.706174-0.81377im      0.0949819-0.0634295im

      Quick examples

The package is designed to work with deep learning libraries such as (F)Lux. All the methods take as input the chosen rng type and the dimensions of the array.

      julia
      weights = init(rng, dims...)

The rng is optional; if not specified, a default one will be used.

      julia
      weights = init(dims...)

If you need to use keyword arguments, the methods can be called with just the rng (optionally) and the keywords, returning a function that behaves like the two examples above.

      julia
      weights_init = init(rng; kwargs...)
       weights = weights_init(rng, dims...)
       
       # Or
      diff --git a/dev/assets/manual_weight_initializers.md.BZq8xM8l.lean.js b/dev/assets/manual_weight_initializers.md.BZq8xM8l.lean.js
      new file mode 100644
      index 0000000000..6bdac3c1ee
      --- /dev/null
      +++ b/dev/assets/manual_weight_initializers.md.BZq8xM8l.lean.js
      @@ -0,0 +1 @@
      +import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.BetCMmtc.js";const c=JSON.parse('{"title":"Initializing Weights","description":"","frontmatter":{},"headers":[],"relativePath":"manual/weight_initializers.md","filePath":"manual/weight_initializers.md","lastUpdated":null}'),e={name:"manual/weight_initializers.md"};function l(p,s,h,k,d,g){return t(),a("div",null,s[0]||(s[0]=[n("",25)]))}const o=i(e,[["render",l]]);export{c as __pageData,o as default};
      diff --git a/dev/assets/manual_weight_initializers.md.D3ynEEqW.lean.js b/dev/assets/manual_weight_initializers.md.D3ynEEqW.lean.js
      deleted file mode 100644
      index 092056d44e..0000000000
      --- a/dev/assets/manual_weight_initializers.md.D3ynEEqW.lean.js
      +++ /dev/null
      @@ -1,30 +0,0 @@
      -import{_ as i,c as a,a2 as n,o as t}from"./chunks/framework.I-x9Gl6h.js";const c=JSON.parse('{"title":"Initializing Weights","description":"","frontmatter":{},"headers":[],"relativePath":"manual/weight_initializers.md","filePath":"manual/weight_initializers.md","lastUpdated":null}'),e={name:"manual/weight_initializers.md"};function l(p,s,h,k,d,g){return t(),a("div",null,s[0]||(s[0]=[n(`

      Initializing Weights

      WeightInitializers.jl provides common weight initialization schemes for deep learning models.

      julia
      using WeightInitializers, Random
      -
      -# Fixing rng
      -rng = Random.MersenneTwister(42)
      Random.MersenneTwister(42)
      julia
      # Explicit rng call
      -weights = kaiming_normal(rng, 2, 5)
      2×5 Matrix{Float32}:
      -  0.76545    0.255203  -0.0424012   0.643172   -0.360745
      - -0.0499631  0.183381   0.388315   -0.0340666  -0.54248
      julia
      # Default rng call
      -weights = kaiming_normal(2, 5)
      2×5 Matrix{Float32}:
      - -0.227513  -0.265372   0.265788  1.29955  -0.192836
      -  0.687611   0.454679  -0.433656  0.20548   0.292002
      julia
      # Passing kwargs (if needed) with explicit rng call
      -weights_cl = kaiming_normal(rng; gain=1.0)
      -weights = weights_cl(2, 5)
      2×5 Matrix{Float32}:
      - -0.094564   0.196581   0.0791126  -0.794864  0.631217
      - -0.381774  -0.588045  -0.113952   -0.567746  0.261636
      julia
      # Passing kwargs (if needed) with default rng call
      -weights_cl = kaiming_normal(; gain=1.0)
      -weights = weights_cl(2, 5)
      2×5 Matrix{Float32}:
      - -0.160876  -0.187646   0.18794   0.918918  -0.136356
      -  0.486214   0.321506  -0.306641  0.145296   0.206476

      To generate weights directly on GPU, pass in a CUDA.RNG. For a complete list of supported RNG types, see Supported RNG Types.

      julia
      using LuxCUDA
      -
      -weights = kaiming_normal(CUDA.default_rng(), 2, 5)
      2×5 CUDA.CuArray{Float32, 2, CUDA.DeviceMemory}:
      - -0.550935  -0.671487  0.595443   0.612179  0.157552
      - -0.706354  -0.58597   0.0450307  0.222485  0.434273

      You can also generate Complex Numbers:

      julia
      weights = kaiming_normal(CUDA.default_rng(), ComplexF32, 2, 5)
      2×5 CUDA.CuArray{ComplexF32, 2, CUDA.DeviceMemory}:
      - 0.0939071-0.432439im    0.51712-0.177116im  …   0.859452+0.0580112im
      - 0.0271588+0.529416im  -0.471206+1.02357im      -0.631482+0.317067im

      Quick examples

The package is designed to work with deep learning libraries such as (F)Lux. All the methods take as input the chosen rng type and the dimensions of the array.

      julia
      weights = init(rng, dims...)

The rng is optional; if not specified, a default one will be used.

      julia
      weights = init(dims...)

If you need to use keyword arguments, the methods can be called with just the rng (optionally) and the keywords, returning a function that behaves like the two examples above.

      julia
      weights_init = init(rng; kwargs...)
      -weights = weights_init(rng, dims...)
      -
      -# Or
      -
      -weights_init = init(; kwargs...)
      -weights = weights_init(dims...)
      `,25)]))}const o=i(e,[["render",l]]);export{c as __pageData,o as default}; diff --git a/dev/assets/results.CFQJNj5o.gif b/dev/assets/results.CFQJNj5o.gif deleted file mode 100644 index 98b07213a0..0000000000 Binary files a/dev/assets/results.CFQJNj5o.gif and /dev/null differ diff --git a/dev/assets/results.CkivesIs.gif b/dev/assets/results.CkivesIs.gif new file mode 100644 index 0000000000..54404b4cef Binary files /dev/null and b/dev/assets/results.CkivesIs.gif differ diff --git a/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.BuiqplW5.js b/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.BuiqplW5.js deleted file mode 100644 index d8db204a75..0000000000 --- a/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.BuiqplW5.js +++ /dev/null @@ -1,805 +0,0 @@ -import{_ as p,c as a,a2 as n,j as s,a as e,o as i}from"./chunks/framework.I-x9Gl6h.js";const G=JSON.parse('{"title":"Training a Neural ODE to Model Gravitational Waveforms","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/advanced/1_GravitationalWaveForm.md","filePath":"tutorials/advanced/1_GravitationalWaveForm.md","lastUpdated":null}'),t={name:"tutorials/advanced/1_GravitationalWaveForm.md"},l={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.819ex",height:"1.658ex",role:"img",focusable:"false",viewBox:"0 -583 4782.1 733","aria-hidden":"true"},r={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},k={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.008ex",height:"1.339ex",role:"img",focusable:"false",viewBox:"0 -442 887.6 
592","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.008ex",height:"1.339ex",role:"img",focusable:"false",viewBox:"0 -442 887.6 592","aria-hidden":"true"},o={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"24.527ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 10840.9 1000","aria-hidden":"true"},C={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.117ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3587.6 1000","aria-hidden":"true"},g={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.049ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3557.6 1000","aria-hidden":"true"},y={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},v={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.138ex",height:"1.439ex",role:"img",focusable:"false",viewBox:"0 -442 503 
636","aria-hidden":"true"},m={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},u={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.378ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 1051 683","aria-hidden":"true"},I={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.054ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 466 453","aria-hidden":"true"},B={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},T={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.117ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3587.6 1000","aria-hidden":"true"},q={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},D={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.049ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3557.6 1000","aria-hidden":"true"},V={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},b={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.138ex",height:"1.439ex",role:"img",focusable:"false",viewBox:"0 -442 503 636","aria-hidden":"true"},z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},R={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.378ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 
-683 1051 683","aria-hidden":"true"},w={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},x={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.054ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 466 453","aria-hidden":"true"};function U(K,A,P,O,X,j){return i(),a("div",null,[A[41]||(A[41]=n(`

      Training a Neural ODE to Model Gravitational Waveforms

      This code is adapted from Astroinformatics/ScientificMachineLearning

      The code has been minimally adapted from Keith et al. 2021, which originally used Flux.jl

      Package Imports

      julia
      using Lux, ComponentArrays, LineSearches, OrdinaryDiffEqLowOrderRK, Optimization,
      -      OptimizationOptimJL, Printf, Random, SciMLSensitivity
      -using CairoMakie
      Precompiling Lux...
      -    491.1 ms  ✓ JLLWrappers
      -    554.4 ms  ✓ Requires
      -    552.6 ms  ✓ Compat
      -    629.1 ms  ✓ DocStringExtensions
      -    634.4 ms  ✓ CpuId
      -    820.3 ms  ✓ Static
      -    401.2 ms  ✓ Compat → CompatLinearAlgebraExt
      -    632.2 ms  ✓ Hwloc_jll
      -    661.6 ms  ✓ OpenSpecFun_jll
      -    431.8 ms  ✓ BitTwiddlingConvenienceFunctions
      -    684.5 ms  ✓ LogExpFunctions
      -    658.8 ms  ✓ Functors
      -   1631.2 ms  ✓ DispatchDoctor
      -   1087.2 ms  ✓ CPUSummary
      -    476.3 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      -   1311.3 ms  ✓ ChainRulesCore
      -   1012.2 ms  ✓ MLDataDevices
      -   1696.2 ms  ✓ StaticArrayInterface
      -    868.5 ms  ✓ PolyesterWeave
      -    545.8 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    551.7 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      -   1443.6 ms  ✓ LuxCore
      -    888.0 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      -    669.0 ms  ✓ CloseOpenIntervals
      -    744.1 ms  ✓ LayoutPointers
      -    822.5 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      -   1334.1 ms  ✓ Optimisers
      -   2495.6 ms  ✓ Hwloc
      -    595.4 ms  ✓ LuxCore → LuxCoreSetfieldExt
      -    597.0 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      -    608.0 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    628.4 ms  ✓ LuxCore → LuxCoreFunctorsExt
      -    745.5 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      -   1681.4 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      -    441.4 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    441.6 ms  ✓ Optimisers → OptimisersAdaptExt
      -   2906.4 ms  ✓ SpecialFunctions
      -   1013.3 ms  ✓ StrideArraysCore
      -    750.8 ms  ✓ Polyester
      -   1703.7 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      -   6816.8 ms  ✓ StaticArrays
      -   2749.4 ms  ✓ WeightInitializers
      -    593.3 ms  ✓ Adapt → AdaptStaticArraysExt
      -    602.7 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      -    617.4 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      -    643.8 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      -    673.6 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -   3505.1 ms  ✓ ForwardDiff
      -    931.9 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      -    832.3 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      -   3181.1 ms  ✓ KernelAbstractions
      -    622.3 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    698.8 ms  ✓ KernelAbstractions → EnzymeExt
      -   5027.9 ms  ✓ NNlib
      -    805.4 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    892.2 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5422.4 ms  ✓ LuxLib
      -   8900.0 ms  ✓ Lux
      -  58 dependencies successfully precompiled in 32 seconds. 51 already precompiled.
      -Precompiling ComponentArrays...
      -    876.0 ms  ✓ ComponentArrays
      -  1 dependency successfully precompiled in 1 seconds. 45 already precompiled.
      -Precompiling MLDataDevicesComponentArraysExt...
      -    497.4 ms  ✓ MLDataDevices → MLDataDevicesComponentArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.
      -Precompiling LuxComponentArraysExt...
      -    526.1 ms  ✓ ComponentArrays → ComponentArraysOptimisersExt
      -   1503.1 ms  ✓ Lux → LuxComponentArraysExt
      -   1876.2 ms  ✓ ComponentArrays → ComponentArraysKernelAbstractionsExt
      -  3 dependencies successfully precompiled in 2 seconds. 111 already precompiled.
      -Precompiling LineSearches...
      -    980.1 ms  ✓ NLSolversBase
      -   1709.8 ms  ✓ LineSearches
      -  2 dependencies successfully precompiled in 3 seconds. 41 already precompiled.
      -Precompiling FiniteDiffStaticArraysExt...
      -    554.4 ms  ✓ FiniteDiff → FiniteDiffStaticArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 21 already precompiled.
      -Precompiling OrdinaryDiffEqLowOrderRK...
      -    417.5 ms  ✓ FastPower
      -    435.1 ms  ✓ MuladdMacro
      -    426.0 ms  ✓ InverseFunctions → InverseFunctionsDatesExt
      -    472.1 ms  ✓ LogExpFunctions → LogExpFunctionsInverseFunctionsExt
      -    527.1 ms  ✓ TruncatedStacktraces
      -    758.9 ms  ✓ PreallocationTools
      -    790.4 ms  ✓ FastBroadcast
      -    631.1 ms  ✓ FastPower → FastPowerForwardDiffExt
      -   1449.2 ms  ✓ RecipesBase
      -   1660.9 ms  ✓ DataStructures
      -   2067.6 ms  ✓ Accessors
      -    736.5 ms  ✓ Accessors → LinearAlgebraExt
      -   1315.0 ms  ✓ SymbolicIndexingInterface
      -   1715.7 ms  ✓ SciMLOperators
      -    495.2 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      -   1955.7 ms  ✓ RecursiveArrayTools
      -    717.5 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      -    797.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsFastBroadcastExt
      -  10992.2 ms  ✓ SciMLBase
      -   5769.1 ms  ✓ DiffEqBase
      -   4436.6 ms  ✓ OrdinaryDiffEqCore
      -   1470.8 ms  ✓ OrdinaryDiffEqCore → OrdinaryDiffEqCoreEnzymeCoreExt
      -   4097.4 ms  ✓ OrdinaryDiffEqLowOrderRK
      -  23 dependencies successfully precompiled in 33 seconds. 102 already precompiled.
      -Precompiling StaticArraysExt...
      -    631.9 ms  ✓ Accessors → StaticArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      -Precompiling MLDataDevicesRecursiveArrayToolsExt...
      -    587.3 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
      -  1 dependency successfully precompiled in 1 seconds. 47 already precompiled.
      -Precompiling ComponentArraysRecursiveArrayToolsExt...
      -    674.5 ms  ✓ ComponentArrays → ComponentArraysRecursiveArrayToolsExt
      -  1 dependency successfully precompiled in 1 seconds. 69 already precompiled.
      -Precompiling ComponentArraysSciMLBaseExt...
      -    949.2 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      -   1088.6 ms  ✓ ComponentArrays → ComponentArraysSciMLBaseExt
      -  2 dependencies successfully precompiled in 1 seconds. 97 already precompiled.
      -Precompiling DiffEqBaseChainRulesCoreExt...
      -   1519.9 ms  ✓ DiffEqBase → DiffEqBaseChainRulesCoreExt
      -  1 dependency successfully precompiled in 2 seconds. 125 already precompiled.
      -Precompiling MLDataDevicesFillArraysExt...
      -    424.0 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling Optimization...
      -    443.0 ms  ✓ ProgressLogging
      -    521.6 ms  ✓ LoggingExtras
      -    619.6 ms  ✓ L_BFGS_B_jll
      -    834.1 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      -    845.3 ms  ✓ ProgressMeter
      -    896.2 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      -    663.1 ms  ✓ TerminalLoggers
      -    509.3 ms  ✓ LBFGSB
      -   1243.6 ms  ✓ SparseMatrixColorings
      -    455.1 ms  ✓ ConsoleProgressMonitor
      -    813.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseMatrixColoringsExt
      -   3586.2 ms  ✓ SparseConnectivityTracer
      -   2116.4 ms  ✓ OptimizationBase
      -   1946.0 ms  ✓ Optimization
      -  14 dependencies successfully precompiled in 8 seconds. 90 already precompiled.
      -Precompiling ChainRulesCoreSparseArraysExt...
      -    609.2 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 11 already precompiled.
      -Precompiling SparseArraysExt...
      -    891.2 ms  ✓ KernelAbstractions → SparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 26 already precompiled.
      -Precompiling MLDataDevicesSparseArraysExt...
      -    640.6 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      -Precompiling DiffEqBaseSparseArraysExt...
      -   1626.5 ms  ✓ DiffEqBase → DiffEqBaseSparseArraysExt
      -  1 dependency successfully precompiled in 2 seconds. 125 already precompiled.
      -Precompiling DifferentiationInterfaceChainRulesCoreExt...
      -    384.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      -  1 dependency successfully precompiled in 0 seconds. 11 already precompiled.
      -Precompiling DifferentiationInterfaceStaticArraysExt...
      -    576.0 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
      -Precompiling DifferentiationInterfaceForwardDiffExt...
      -    757.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      -  1 dependency successfully precompiled in 1 seconds. 28 already precompiled.
      -Precompiling SparseConnectivityTracerSpecialFunctionsExt...
      -   1150.6 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerLogExpFunctionsExt
      -   1534.0 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerSpecialFunctionsExt
      -  2 dependencies successfully precompiled in 2 seconds. 26 already precompiled.
      -Precompiling SparseConnectivityTracerNNlibExt...
      -   1644.8 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNNlibExt
      -  1 dependency successfully precompiled in 2 seconds. 46 already precompiled.
      -Precompiling SparseConnectivityTracerNaNMathExt...
      -   1205.7 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNaNMathExt
      -  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      -Precompiling OptimizationForwardDiffExt...
      -    602.4 ms  ✓ OptimizationBase → OptimizationForwardDiffExt
      -  1 dependency successfully precompiled in 1 seconds. 110 already precompiled.
      -Precompiling OptimizationMLDataDevicesExt...
      -   1376.5 ms  ✓ OptimizationBase → OptimizationMLDataDevicesExt
      -  1 dependency successfully precompiled in 2 seconds. 97 already precompiled.
      -Precompiling HwlocTrees...
      -    496.8 ms  ✓ Hwloc → HwlocTrees
      -  1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
      -Precompiling OptimizationOptimJL...
      -    477.3 ms  ✓ SortingAlgorithms
      -   2165.1 ms  ✓ StatsBase
      -   2976.7 ms  ✓ Optim
      -  12287.9 ms  ✓ OptimizationOptimJL
      -  4 dependencies successfully precompiled in 18 seconds. 136 already precompiled.
      -Precompiling SciMLSensitivity...
      -    519.6 ms  ✓ StructIO
      -    536.7 ms  ✓ HashArrayMappedTries
      -    548.7 ms  ✓ PoissonRandom
      -    590.7 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      -    608.4 ms  ✓ Scratch
      -    721.9 ms  ✓ Accessors → StructArraysExt
      -    840.3 ms  ✓ Rmath_jll
      -    924.0 ms  ✓ oneTBB_jll
      -   1345.0 ms  ✓ Cassette
      -    866.8 ms  ✓ ResettableStacks
      -   1475.2 ms  ✓ KLU
      -    988.3 ms  ✓ StructArrays → StructArraysStaticArraysExt
      -    906.0 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      -   1794.3 ms  ✓ FastLapackInterface
      -    663.2 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      -   1578.8 ms  ✓ LazyArtifacts
      -   1610.7 ms  ✓ ZygoteRules
      -   1462.2 ms  ✓ QuadGK
      -   1684.4 ms  ✓ HypergeometricFunctions
      -   1297.9 ms  ✓ HostCPUFeatures
      -    561.4 ms  ✓ ScopedValues
      -   1155.1 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      -   2935.7 ms  ✓ IRTools
      -    777.8 ms  ✓ FunctionProperties
      -   3341.8 ms  ✓ TimerOutputs
      -   1411.2 ms  ✓ Rmath
      -   2171.9 ms  ✓ IntelOpenMP_jll
      -   2295.5 ms  ✓ LLVMExtra_jll
      -   2445.8 ms  ✓ Enzyme_jll
      -   5200.1 ms  ✓ Test
      -   3138.1 ms  ✓ ObjectFile
      -   4434.5 ms  ✓ SciMLJacobianOperators
      -    843.7 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      -    901.1 ms  ✓ Accessors → TestExt
      -   2624.9 ms  ✓ StatsFuns
      -   2008.0 ms  ✓ MKL_jll
      -   1606.2 ms  ✓ Sparspak
      -   1703.8 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      -    859.0 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      -   7505.6 ms  ✓ ChainRules
      -   1813.9 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -   6845.9 ms  ✓ Tracker
      -   8709.5 ms  ✓ Krylov
      -   6699.2 ms  ✓ DiffEqCallbacks
      -    919.4 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      -   1322.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      -   1538.6 ms  ✓ FastPower → FastPowerTrackerExt
      -   1616.7 ms  ✓ ArrayInterface → ArrayInterfaceTrackerExt
      -   1639.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      -   1811.3 ms  ✓ Tracker → TrackerPDMatsExt
      -   2806.7 ms  ✓ DiffEqBase → DiffEqBaseTrackerExt
      -   8958.9 ms  ✓ VectorizationBase
      -   5930.2 ms  ✓ Distributions
      -   7162.7 ms  ✓ LLVM
      -   1136.5 ms  ✓ SLEEFPirates
      -   1474.0 ms  ✓ Distributions → DistributionsTestExt
      -   1476.9 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   1903.3 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      -   2017.8 ms  ✓ DiffEqBase → DiffEqBaseDistributionsExt
      -  14531.2 ms  ✓ ArrayLayouts
      -    781.6 ms  ✓ ArrayLayouts → ArrayLayoutsSparseArraysExt
      -   2432.6 ms  ✓ LazyArrays
      -   3963.6 ms  ✓ DiffEqNoiseProcess
      -   4622.0 ms  ✓ GPUArrays
      -   1307.4 ms  ✓ LazyArrays → LazyArraysStaticArraysExt
      -  17512.8 ms  ✓ ReverseDiff
      -   3478.5 ms  ✓ FastPower → FastPowerReverseDiffExt
      -   3482.2 ms  ✓ ArrayInterface → ArrayInterfaceReverseDiffExt
      -   3654.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceReverseDiffExt
      -   4734.7 ms  ✓ PreallocationTools → PreallocationToolsReverseDiffExt
      -   4958.2 ms  ✓ DiffEqBase → DiffEqBaseReverseDiffExt
      -   5063.9 ms  ✓ DiffEqNoiseProcess → DiffEqNoiseProcessReverseDiffExt
      -  18538.1 ms  ✓ GPUCompiler
      -  21384.0 ms  ✓ LoopVectorization
      -   1165.2 ms  ✓ LoopVectorization → SpecialFunctionsExt
      -   1305.6 ms  ✓ LoopVectorization → ForwardDiffExt
      -   3945.1 ms  ✓ TriangularSolve
      -  29507.5 ms  ✓ Zygote
      -   1612.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceZygoteExt
      -   1946.3 ms  ✓ Zygote → ZygoteTrackerExt
      -   3108.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsZygoteExt
      -   3504.4 ms  ✓ SciMLBase → SciMLBaseZygoteExt
      -  16228.7 ms  ✓ RecursiveFactorization
      -   5413.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsReverseDiffExt
      -  30267.2 ms  ✓ LinearSolve
      -   2570.3 ms  ✓ LinearSolve → LinearSolveRecursiveArrayToolsExt
      -   2623.4 ms  ✓ LinearSolve → LinearSolveEnzymeExt
      -   4099.1 ms  ✓ LinearSolve → LinearSolveKernelAbstractionsExt
      - 199431.2 ms  ✓ Enzyme
      -   7082.8 ms  ✓ FastPower → FastPowerEnzymeExt
      -   7095.7 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
      -   7163.5 ms  ✓ QuadGK → QuadGKEnzymeExt
      -   7185.4 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
      -   7190.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceEnzymeExt
      -   7370.6 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
      -  17743.4 ms  ✓ Enzyme → EnzymeStaticArraysExt
      -  19363.4 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
      -  17569.3 ms  ✓ DiffEqBase → DiffEqBaseEnzymeExt
      -  29561.6 ms  ✓ SciMLSensitivity
      -  99 dependencies successfully precompiled in 279 seconds. 192 already precompiled.
      -Precompiling LuxLibSLEEFPiratesExt...
      -   2421.1 ms  ✓ LuxLib → LuxLibSLEEFPiratesExt
      -  1 dependency successfully precompiled in 3 seconds. 97 already precompiled.
      -Precompiling LuxLibLoopVectorizationExt...
      -   4597.0 ms  ✓ LuxLib → LuxLibLoopVectorizationExt
      -  1 dependency successfully precompiled in 5 seconds. 105 already precompiled.
      -Precompiling LuxLibEnzymeExt...
      -   1326.9 ms  ✓ LuxLib → LuxLibEnzymeExt
      -  1 dependency successfully precompiled in 2 seconds. 130 already precompiled.
      -Precompiling LuxEnzymeExt...
      -   7532.3 ms  ✓ Lux → LuxEnzymeExt
      -  1 dependency successfully precompiled in 8 seconds. 146 already precompiled.
      -Precompiling OptimizationEnzymeExt...
      -  20489.3 ms  ✓ OptimizationBase → OptimizationEnzymeExt
      -  1 dependency successfully precompiled in 21 seconds. 109 already precompiled.
      -Precompiling MLDataDevicesTrackerExt...
      -   1148.9 ms  ✓ MLDataDevices → MLDataDevicesTrackerExt
      -  1 dependency successfully precompiled in 1 seconds. 59 already precompiled.
      -Precompiling LuxLibTrackerExt...
      -   1074.6 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      -   3236.9 ms  ✓ LuxLib → LuxLibTrackerExt
      -  2 dependencies successfully precompiled in 3 seconds. 100 already precompiled.
      -Precompiling LuxTrackerExt...
      -   2038.3 ms  ✓ Lux → LuxTrackerExt
      -  1 dependency successfully precompiled in 2 seconds. 114 already precompiled.
      -Precompiling ComponentArraysTrackerExt...
      -   1155.5 ms  ✓ ComponentArrays → ComponentArraysTrackerExt
      -  1 dependency successfully precompiled in 1 seconds. 70 already precompiled.
      -Precompiling MLDataDevicesReverseDiffExt...
      -   3498.5 ms  ✓ MLDataDevices → MLDataDevicesReverseDiffExt
      -  1 dependency successfully precompiled in 4 seconds. 49 already precompiled.
      -Precompiling LuxLibReverseDiffExt...
      -   3387.4 ms  ✓ LuxCore → LuxCoreArrayInterfaceReverseDiffExt
      -   4268.7 ms  ✓ LuxLib → LuxLibReverseDiffExt
      -  2 dependencies successfully precompiled in 4 seconds. 98 already precompiled.
      -Precompiling ComponentArraysReverseDiffExt...
      -   3584.6 ms  ✓ ComponentArrays → ComponentArraysReverseDiffExt
      -  1 dependency successfully precompiled in 4 seconds. 57 already precompiled.
      -Precompiling OptimizationReverseDiffExt...
      -   3383.0 ms  ✓ OptimizationBase → OptimizationReverseDiffExt
      -  1 dependency successfully precompiled in 4 seconds. 130 already precompiled.
      -Precompiling LuxReverseDiffExt...
      -   4398.6 ms  ✓ Lux → LuxReverseDiffExt
      -  1 dependency successfully precompiled in 5 seconds. 115 already precompiled.
      -Precompiling MLDataDevicesChainRulesExt...
      -    793.9 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      -  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      -Precompiling MLDataDevicesZygoteExt...
      -   1552.7 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
      -   1579.9 ms  ✓ MLDataDevices → MLDataDevicesGPUArraysExt
      -  2 dependencies successfully precompiled in 2 seconds. 108 already precompiled.
      -Precompiling LuxZygoteExt...
      -   1645.8 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      -   2696.5 ms  ✓ Lux → LuxZygoteExt
      -  2 dependencies successfully precompiled in 3 seconds. 165 already precompiled.
      -Precompiling ComponentArraysZygoteExt...
      -   1549.5 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      -   1826.0 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
      -  2 dependencies successfully precompiled in 2 seconds. 116 already precompiled.
      -Precompiling OptimizationZygoteExt...
      -   2164.3 ms  ✓ OptimizationBase → OptimizationZygoteExt
      -  1 dependency successfully precompiled in 2 seconds. 160 already precompiled.
      -Precompiling CairoMakie...
      -    532.3 ms  ✓ RangeArrays
      -    515.1 ms  ✓ PolygonOps
      -    516.4 ms  ✓ IndirectArrays
      -    524.0 ms  ✓ LaTeXStrings
      -    565.6 ms  ✓ GeoFormatTypes
      -    571.2 ms  ✓ Contour
      -    591.2 ms  ✓ TensorCore
      -    628.0 ms  ✓ TriplotBase
      -    639.0 ms  ✓ StableRNGs
      -    691.0 ms  ✓ Extents
      -    703.5 ms  ✓ Observables
      -    742.1 ms  ✓ IntervalSets
      -    744.9 ms  ✓ RoundingEmulator
      -    834.5 ms  ✓ IterTools
      -    457.4 ms  ✓ CRC32c
      -    526.0 ms  ✓ Ratios
      -    550.6 ms  ✓ LazyModules
      -    601.2 ms  ✓ PCRE2_jll
      -   1160.8 ms  ✓ Grisu
      -    615.2 ms  ✓ Inflate
      -    599.2 ms  ✓ MappedArrays
      -    545.5 ms  ✓ RelocatableFolders
      -    780.8 ms  ✓ TranscodingStreams
      -   1577.5 ms  ✓ Format
      -    984.0 ms  ✓ SharedArrays
      -    876.0 ms  ✓ OpenSSL_jll
      -    805.3 ms  ✓ Graphite2_jll
      -    836.5 ms  ✓ LLVMOpenMP_jll
      -    815.4 ms  ✓ Bzip2_jll
      -    876.1 ms  ✓ Libmount_jll
      -    831.4 ms  ✓ libfdk_aac_jll
      -    924.3 ms  ✓ Xorg_libXau_jll
      -    839.4 ms  ✓ Imath_jll
      -    872.4 ms  ✓ libpng_jll
      -    817.1 ms  ✓ Giflib_jll
      -    989.0 ms  ✓ LAME_jll
      -   1592.0 ms  ✓ SimpleTraits
      -    855.9 ms  ✓ LERC_jll
      -    850.5 ms  ✓ EarCut_jll
      -    833.9 ms  ✓ CRlibm_jll
      -    929.8 ms  ✓ JpegTurbo_jll
      -    839.7 ms  ✓ Ogg_jll
      -    847.5 ms  ✓ x265_jll
      -    930.1 ms  ✓ XZ_jll
      -    838.0 ms  ✓ Xorg_libXdmcp_jll
      -    854.5 ms  ✓ x264_jll
      -    875.3 ms  ✓ libaom_jll
      -    885.1 ms  ✓ Zstd_jll
      -    861.8 ms  ✓ Expat_jll
      -   2356.2 ms  ✓ UnicodeFun
      -    838.8 ms  ✓ LZO_jll
      -    841.3 ms  ✓ Opus_jll
      -    717.8 ms  ✓ Xorg_xtrans_jll
      -    838.3 ms  ✓ Libffi_jll
      -    872.3 ms  ✓ Libiconv_jll
      -    818.6 ms  ✓ Libgpg_error_jll
      -    755.5 ms  ✓ Xorg_libpthread_stubs_jll
      -    856.0 ms  ✓ isoband_jll
      -    867.2 ms  ✓ FFTW_jll
      -    845.3 ms  ✓ FriBidi_jll
      -    829.6 ms  ✓ Libuuid_jll
      -    532.0 ms  ✓ IntervalSets → IntervalSetsRandomExt
      -    524.7 ms  ✓ IntervalSets → IntervalSetsStatisticsExt
      -    527.5 ms  ✓ ConstructionBase → ConstructionBaseIntervalSetsExt
      -    547.6 ms  ✓ Ratios → RatiosFixedPointNumbersExt
      -    572.0 ms  ✓ Showoff
      -    656.4 ms  ✓ MosaicViews
      -   1510.9 ms  ✓ FilePathsBase
      -    855.5 ms  ✓ Pixman_jll
      -   1453.4 ms  ✓ GeoInterface
      -    887.6 ms  ✓ FreeType2_jll
      -   1007.6 ms  ✓ OpenEXR_jll
      -   1835.2 ms  ✓ ColorBrewer
      -    912.5 ms  ✓ libsixel_jll
      -    943.6 ms  ✓ libvorbis_jll
      -    935.4 ms  ✓ Libtiff_jll
      -    645.9 ms  ✓ Isoband
      -    962.7 ms  ✓ XML2_jll
      -    879.5 ms  ✓ Libgcrypt_jll
      -    844.9 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      -   1120.0 ms  ✓ AxisArrays
      -   2999.8 ms  ✓ ColorVectorSpace
      -   1064.5 ms  ✓ FilePaths
      -   1131.8 ms  ✓ Fontconfig_jll
      -   1151.7 ms  ✓ Gettext_jll
      -   2947.0 ms  ✓ Interpolations
      -   1440.1 ms  ✓ FreeType
      -   1035.2 ms  ✓ XSLT_jll
      -   1822.2 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -   1024.1 ms  ✓ ColorVectorSpace → SpecialFunctionsExt
      -   3759.5 ms  ✓ IntervalArithmetic
      -   1256.5 ms  ✓ Glib_jll
      -   5306.1 ms  ✓ PkgVersion
      -   5392.6 ms  ✓ FileIO
      -    840.7 ms  ✓ IntervalArithmetic → IntervalArithmeticIntervalSetsExt
      -   1719.0 ms  ✓ Xorg_libxcb_jll
      -    658.4 ms  ✓ Xorg_libX11_jll
      -    647.1 ms  ✓ Xorg_libXext_jll
      -    817.7 ms  ✓ Xorg_libXrender_jll
      -   1689.9 ms  ✓ QOI
      -   4094.7 ms  ✓ ColorSchemes
      -    912.6 ms  ✓ Libglvnd_jll
      -   2345.9 ms  ✓ OpenEXR
      -   1076.1 ms  ✓ Cairo_jll
      -   1550.8 ms  ✓ libwebp_jll
      -   1399.6 ms  ✓ HarfBuzz_jll
      -   7925.6 ms  ✓ FFTW
      -   7438.4 ms  ✓ GeometryBasics
      -   6105.2 ms  ✓ ExactPredicates
      -  10882.4 ms  ✓ SIMD
      -   1349.5 ms  ✓ libass_jll
      -   1414.4 ms  ✓ Pango_jll
      -   1736.2 ms  ✓ Packing
      -   2325.6 ms  ✓ ShaderAbstractions
      -   1367.2 ms  ✓ FFMPEG_jll
      -   2916.6 ms  ✓ FreeTypeAbstraction
      -   1742.4 ms  ✓ Cairo
      -   3073.0 ms  ✓ KernelDensity
      -   5911.6 ms  ✓ MakieCore
      -   6151.3 ms  ✓ DelaunayTriangulation
      -   7962.2 ms  ✓ GridLayoutBase
      -  11568.1 ms  ✓ PlotUtils
      -  22893.7 ms  ✓ Unitful
      -    586.3 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      -    590.6 ms  ✓ Unitful → InverseFunctionsUnitfulExt
      -   1425.8 ms  ✓ Interpolations → InterpolationsUnitfulExt
      -  12104.4 ms  ✓ Automa
      -  21254.1 ms  ✓ ImageCore
      -   2100.7 ms  ✓ ImageBase
      -   2629.4 ms  ✓ WebP
      -   3426.7 ms  ✓ PNGFiles
      -   3607.1 ms  ✓ JpegTurbo
      -   2191.5 ms  ✓ ImageAxes
      -   4724.6 ms  ✓ Sixel
      -   1163.4 ms  ✓ ImageMetadata
      -   1999.8 ms  ✓ Netpbm
      -  12307.5 ms  ✓ MathTeXEngine
      -  49319.1 ms  ✓ TiffImages
      -   1176.4 ms  ✓ ImageIO
      - 112829.4 ms  ✓ Makie
      -  74538.7 ms  ✓ CairoMakie
      -  141 dependencies successfully precompiled in 252 seconds. 129 already precompiled.
      -Precompiling SparseMatrixColoringsColorsExt...
      -    869.7 ms  ✓ SparseMatrixColorings → SparseMatrixColoringsColorsExt
      -  1 dependency successfully precompiled in 1 seconds. 29 already precompiled.
      -Precompiling ZygoteColorsExt...
      -   1732.9 ms  ✓ Zygote → ZygoteColorsExt
      -  1 dependency successfully precompiled in 2 seconds. 105 already precompiled.
      -Precompiling IntervalSetsExt...
      -    784.1 ms  ✓ Accessors → IntervalSetsExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling IntervalSetsRecipesBaseExt...
      -    515.5 ms  ✓ IntervalSets → IntervalSetsRecipesBaseExt
      -  1 dependency successfully precompiled in 1 seconds. 9 already precompiled.
      -Precompiling UnitfulExt...
      -    585.0 ms  ✓ Accessors → UnitfulExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling DiffEqBaseUnitfulExt...
      -   1538.2 ms  ✓ DiffEqBase → DiffEqBaseUnitfulExt
      -  1 dependency successfully precompiled in 2 seconds. 123 already precompiled.
      -Precompiling NNlibFFTWExt...
      -    860.7 ms  ✓ NNlib → NNlibFFTWExt
      -  1 dependency successfully precompiled in 1 seconds. 54 already precompiled.
      -Precompiling IntervalArithmeticForwardDiffExt...
      -    452.2 ms  ✓ IntervalArithmetic → IntervalArithmeticDiffRulesExt
      -    642.3 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
      -  2 dependencies successfully precompiled in 1 seconds. 42 already precompiled.
      -Precompiling IntervalArithmeticRecipesBaseExt...
      -    756.5 ms  ✓ IntervalArithmetic → IntervalArithmeticRecipesBaseExt
      -  1 dependency successfully precompiled in 1 seconds. 31 already precompiled.
      -Precompiling SciMLBaseMakieExt...
      -   9269.6 ms  ✓ SciMLBase → SciMLBaseMakieExt
      -  1 dependency successfully precompiled in 10 seconds. 303 already precompiled.

      Define some Utility Functions

      Tip

      This section can be skipped. It defines functions to simulate the model; however, from a scientific machine learning perspective, it isn't particularly relevant.

      `,8)),s("p",null,[A[6]||(A[6]=e("We need a very crude 2-body path. Assume the 1-body motion is a newtonian 2-body position vector ")),s("mjx-container",l,[(i(),a("svg",h,A[0]||(A[0]=[n('',1)]))),A[1]||(A[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"r"),s("mo",null,"="),s("msub",null,[s("mi",null,"r"),s("mn",null,"1")]),s("mo",null,"−"),s("msub",null,[s("mi",null,"r"),s("mn",null,"2")])])],-1))]),A[7]||(A[7]=e(" and use Newtonian formulas to get ")),s("mjx-container",r,[(i(),a("svg",k,A[2]||(A[2]=[n('',1)]))),A[3]||(A[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"r"),s("mn",null,"1")])])],-1))]),A[8]||(A[8]=e(", ")),s("mjx-container",E,[(i(),a("svg",d,A[4]||(A[4]=[n('',1)]))),A[5]||(A[5]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"r"),s("mn",null,"2")])])],-1))]),A[9]||(A[9]=e(" (e.g. Theoretical Mechanics of Particles and Continua 4.3)"))]),A[42]||(A[42]=n(`
      julia
      function one2two(path, m₁, m₂)
      -    M = m₁ + m₂
      -    r₁ = m₂ / M .* path
      -    r₂ = -m₁ / M .* path
      -    return r₁, r₂
      -end
      one2two (generic function with 1 method)
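      As a quick sanity check (a standalone sketch with made-up masses, not part of the original tutorial; the function is repeated so the snippet runs on its own), the two positions always balance about the center of mass, i.e. m₁·r₁ + m₂·r₂ = 0:

```julia
# Repeated from above so this sketch runs standalone.
function one2two(path, m₁, m₂)
    M = m₁ + m₂
    r₁ = m₂ / M .* path
    r₂ = -m₁ / M .* path
    return r₁, r₂
end

path = [1.0 2.0 3.0; 0.0 1.0 0.5]  # relative orbit: x row, y row
r₁, r₂ = one2two(path, 2.0, 1.0)   # hypothetical masses m₁ = 2, m₂ = 1
# the weighted sum m₁*r₁ + m₂*r₂ vanishes: the center of mass stays at the origin
@assert all(abs.(2.0 .* r₁ .+ 1.0 .* r₂) .< 1e-12)
```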
      `,2)),s("p",null,[A[12]||(A[12]=e("Next we define a function to perform the change of variables: ")),s("mjx-container",o,[(i(),a("svg",Q,A[10]||(A[10]=[n('',1)]))),A[11]||(A[11]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{stretchy:"false"},"("),s("mi",null,"χ"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"ϕ"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},"↦"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"y"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},")")])],-1))])]),A[43]||(A[43]=n(`
      julia
      @views function soln2orbit(soln, model_params=nothing)
      -    @assert size(soln, 1) ∈ [2, 4] "size(soln,1) must be either 2 or 4"
      -
      -    if size(soln, 1) == 2
      -        χ = soln[1, :]
      -        ϕ = soln[2, :]
      -
      -        @assert length(model_params)==3 "model_params must have length 3 when size(soln,1) = 2"
      -        p, M, e = model_params
      -    else
      -        χ = soln[1, :]
      -        ϕ = soln[2, :]
      -        p = soln[3, :]
      -        e = soln[4, :]
      -    end
      -
      -    r = p ./ (1 .+ e .* cos.(χ))
      -    x = r .* cos.(ϕ)
      -    y = r .* sin.(ϕ)
      -
      -    orbit = vcat(x', y')
      -    return orbit
      -end
      soln2orbit (generic function with 2 methods)

      This function computes the first time derivative using second-order central differences in the interior and second-order one-sided difference stencils at the endpoints; see https://doi.org/10.1090/S0025-5718-1988-0935077-0

      julia
      function d_dt(v::AbstractVector, dt)
      -    a = -3 / 2 * v[1] + 2 * v[2] - 1 / 2 * v[3]
      -    b = (v[3:end] .- v[1:(end - 2)]) / 2
      -    c = 3 / 2 * v[end] - 2 * v[end - 1] + 1 / 2 * v[end - 2]
      -    return [a; b; c] / dt
      -end
      d_dt (generic function with 1 method)
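      A minimal accuracy check (hypothetical, assuming a smooth test signal): differentiating sin should reproduce cos with error on the order of dt²:

```julia
# Hypothetical check: second-order stencils should differentiate sin to ~dt² accuracy.
function d_dt(v::AbstractVector, dt)
    a = -3 / 2 * v[1] + 2 * v[2] - 1 / 2 * v[3]
    b = (v[3:end] .- v[1:(end - 2)]) / 2
    c = 3 / 2 * v[end] - 2 * v[end - 1] + 1 / 2 * v[end - 2]
    return [a; b; c] / dt
end

dt = 1e-3
t = 0:dt:1
err = maximum(abs.(d_dt(sin.(t), dt) .- cos.(t)))
@assert err < 1e-5
```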

      This function computes the second time derivative using second-order central differences in the interior and second-order one-sided difference stencils at the endpoints; see https://doi.org/10.1090/S0025-5718-1988-0935077-0

      julia
      function d2_dt2(v::AbstractVector, dt)
      -    a = 2 * v[1] - 5 * v[2] + 4 * v[3] - v[4]
      -    b = v[1:(end - 2)] .- 2 * v[2:(end - 1)] .+ v[3:end]
      -    c = 2 * v[end] - 5 * v[end - 1] + 4 * v[end - 2] - v[end - 3]
      -    return [a; b; c] / (dt^2)
      -end
      d2_dt2 (generic function with 1 method)
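      The same kind of check works for the second derivative (hypothetical snippet): applied to sin, the stencil should return approximately -sin:

```julia
# Hypothetical check: d2_dt2 applied to sin should recover -sin to ~dt² accuracy.
function d2_dt2(v::AbstractVector, dt)
    a = 2 * v[1] - 5 * v[2] + 4 * v[3] - v[4]
    b = v[1:(end - 2)] .- 2 * v[2:(end - 1)] .+ v[3:end]
    c = 2 * v[end] - 5 * v[end - 1] + 4 * v[end - 2] - v[end - 3]
    return [a; b; c] / (dt^2)
end

dt = 1e-3
t = 0:dt:1
err = maximum(abs.(d2_dt2(sin.(t), dt) .+ sin.(t)))
@assert err < 1e-4
```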

      Now we define a function to compute the trace-free moment tensor from the orbit

      julia
      function orbit2tensor(orbit, component, mass=1)
      -    x = orbit[1, :]
      -    y = orbit[2, :]
      -
      -    Ixx = x .^ 2
      -    Iyy = y .^ 2
      -    Ixy = x .* y
      -    trace = Ixx .+ Iyy
      -
      -    if component[1] == 1 && component[2] == 1
      -        tmp = Ixx .- trace ./ 3
      -    elseif component[1] == 2 && component[2] == 2
      -        tmp = Iyy .- trace ./ 3
      -    else
      -        tmp = Ixy
      -    end
      -
      -    return mass .* tmp
      -end
      -
      -function h_22_quadrupole_components(dt, orbit, component, mass=1)
      -    mtensor = orbit2tensor(orbit, component, mass)
      -    mtensor_ddot = d2_dt2(mtensor, dt)
      -    return 2 * mtensor_ddot
      -end
      -
      -function h_22_quadrupole(dt, orbit, mass=1)
      -    h11 = h_22_quadrupole_components(dt, orbit, (1, 1), mass)
      -    h22 = h_22_quadrupole_components(dt, orbit, (2, 2), mass)
      -    h12 = h_22_quadrupole_components(dt, orbit, (1, 2), mass)
      -    return h11, h12, h22
      -end
      -
      -function h_22_strain_one_body(dt::T, orbit) where {T}
      -    h11, h12, h22 = h_22_quadrupole(dt, orbit)
      -
      -    h₊ = h11 - h22
      -    hₓ = T(2) * h12
      -
      -    scaling_const = √(T(π) / 5)
      -    return scaling_const * h₊, -scaling_const * hₓ
      -end
      -
      -function h_22_quadrupole_two_body(dt, orbit1, mass1, orbit2, mass2)
      -    h11_1, h12_1, h22_1 = h_22_quadrupole(dt, orbit1, mass1)
      -    h11_2, h12_2, h22_2 = h_22_quadrupole(dt, orbit2, mass2)
      -    h11 = h11_1 + h11_2
      -    h12 = h12_1 + h12_2
      -    h22 = h22_1 + h22_2
      -    return h11, h12, h22
      -end
      -
      -function h_22_strain_two_body(dt::T, orbit1, mass1, orbit2, mass2) where {T}
      -    # compute the (2,2) mode strain from the orbits of BH 1 (mass1) and BH 2 (mass2)
      -
      -    @assert abs(mass1 + mass2 - 1.0)<1e-12 "Masses do not sum to unity"
      -
      -    h11, h12, h22 = h_22_quadrupole_two_body(dt, orbit1, mass1, orbit2, mass2)
      -
      -    h₊ = h11 - h22
      -    hₓ = T(2) * h12
      -
      -    scaling_const = √(T(π) / 5)
      -    return scaling_const * h₊, -scaling_const * hₓ
      -end
      -
      -function compute_waveform(dt::T, soln, mass_ratio, model_params=nothing) where {T}
      -    @assert mass_ratio ≤ 1 "mass_ratio must be <= 1"
      -    @assert mass_ratio ≥ 0 "mass_ratio must be non-negative"
      -
      -    orbit = soln2orbit(soln, model_params)
      -    if mass_ratio > 0
      -        m₂ = inv(T(1) + mass_ratio)
      -        m₁ = mass_ratio * m₂
      -
      -        orbit₁, orbit₂ = one2two(orbit, m₁, m₂)
      -        waveform = h_22_strain_two_body(dt, orbit₁, m₁, orbit₂, m₂)
      -    else
      -        waveform = h_22_strain_one_body(dt, orbit)
      -    end
      -    return waveform
      -end
      compute_waveform (generic function with 2 methods)
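      The mass-ratio bookkeeping inside compute_waveform can be checked in isolation (hypothetical snippet): for any mass ratio 0 < q ≤ 1, the component masses sum to unity and m₁ is the lighter body:

```julia
# Hypothetical check of the mass-ratio parametrization used in compute_waveform.
q = 0.37            # mass_ratio, assumed 0 ≤ q ≤ 1
m₂ = inv(1 + q)     # heavier body
m₁ = q * m₂         # lighter body
@assert isapprox(m₁ + m₂, 1.0; atol=1e-12)
@assert m₁ ≤ m₂
```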

      Simulating the True Model

      RelativisticOrbitModel defines the system of ODEs that describes the motion of a point-like particle in a Schwarzschild background, using

      `,13)),s("mjx-container",C,[(i(),a("svg",f,A[13]||(A[13]=[n('',1)]))),A[14]||(A[14]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"1"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"χ")])],-1))]),s("mjx-container",g,[(i(),a("svg",c,A[15]||(A[15]=[n('',1)]))),A[16]||(A[16]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"2"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"ϕ")])],-1))]),s("p",null,[A[23]||(A[23]=e("where, ")),s("mjx-container",y,[(i(),a("svg",v,A[17]||(A[17]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D45D",d:"M23 287Q24 290 25 295T30 317T40 348T55 381T75 411T101 433T134 442Q209 442 230 378L240 387Q302 442 358 442Q423 442 460 395T497 281Q497 173 421 82T249 -10Q227 -10 210 -4Q199 1 187 11T168 28L161 36Q160 35 139 -51T118 -138Q118 -144 126 -145T163 -148H188Q194 -155 194 -157T191 -175Q188 -187 185 -190T172 -194Q170 -194 161 -194T127 -193T65 -192Q-5 -192 -24 
-194H-32Q-39 -187 -39 -183Q-37 -156 -26 -148H-6Q28 -147 33 -136Q36 -130 94 103T155 350Q156 355 156 364Q156 405 131 405Q109 405 94 377T71 316T59 280Q57 278 43 278H29Q23 284 23 287ZM178 102Q200 26 252 26Q282 26 310 49T356 107Q374 141 392 215T411 325V331Q411 405 350 405Q339 405 328 402T306 393T286 380T269 365T254 350T243 336T235 326L232 322Q232 321 229 308T218 264T204 212Q178 106 178 102Z",style:{"stroke-width":"3"}})])])],-1)]))),A[18]||(A[18]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"p")])],-1))]),A[24]||(A[24]=e(", ")),s("mjx-container",m,[(i(),a("svg",u,A[19]||(A[19]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D440",d:"M289 629Q289 635 232 637Q208 637 201 638T194 648Q194 649 196 659Q197 662 198 666T199 671T201 676T203 679T207 681T212 683T220 683T232 684Q238 684 262 684T307 683Q386 683 398 683T414 678Q415 674 451 396L487 117L510 154Q534 190 574 254T662 394Q837 673 839 675Q840 676 842 678T846 681L852 683H948Q965 683 988 683T1017 684Q1051 684 1051 673Q1051 668 1048 656T1045 643Q1041 637 1008 637Q968 636 957 634T939 623Q936 618 867 340T797 59Q797 55 798 54T805 50T822 48T855 46H886Q892 37 892 35Q892 19 885 5Q880 0 869 0Q864 0 828 1T736 2Q675 2 644 2T609 1Q592 1 592 11Q592 13 594 25Q598 41 602 43T625 46Q652 46 685 49Q699 52 704 61Q706 65 742 207T813 490T848 631L654 322Q458 10 453 5Q451 4 449 3Q444 0 433 0Q418 0 415 7Q413 11 374 317L335 624L267 354Q200 88 200 79Q206 46 272 46H282Q288 41 289 37T286 19Q282 3 278 1Q274 0 267 0Q265 0 255 
0T221 1T157 2Q127 2 95 1T58 0Q43 0 39 2T35 11Q35 13 38 25T43 40Q45 46 65 46Q135 46 154 86Q158 92 223 354T289 629Z",style:{"stroke-width":"3"}})])])],-1)]))),A[20]||(A[20]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"M")])],-1))]),A[25]||(A[25]=e(", and ")),s("mjx-container",I,[(i(),a("svg",F,A[21]||(A[21]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D452",d:"M39 168Q39 225 58 272T107 350T174 402T244 433T307 442H310Q355 442 388 420T421 355Q421 265 310 237Q261 224 176 223Q139 223 138 221Q138 219 132 186T125 128Q125 81 146 54T209 26T302 45T394 111Q403 121 406 121Q410 121 419 112T429 98T420 82T390 55T344 24T281 -1T205 -11Q126 -11 83 42T39 168ZM373 353Q367 405 305 405Q272 405 244 391T199 357T170 316T154 280T149 261Q149 260 169 260Q282 260 327 284T373 353Z",style:{"stroke-width":"3"}})])])],-1)]))),A[22]||(A[22]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"e")])],-1))]),A[26]||(A[26]=e(" are constants"))]),A[44]||(A[44]=n(`
      julia
      function RelativisticOrbitModel(u, (p, M, e), t)
      -    χ, ϕ = u
      -
      -    numer = (p - 2 - 2 * e * cos(χ)) * (1 + e * cos(χ))^2
      -    denom = sqrt((p - 2)^2 - 4 * e^2)
      -
      -    χ̇ = numer * sqrt(p - 6 - 2 * e * cos(χ)) / (M * (p^2) * denom)
      -    ϕ̇ = numer / (M * (p^(3 / 2)) * denom)
      -
      -    return [χ̇, ϕ̇]
      -end
      -
      -mass_ratio = 0.0         # test particle
      -u0 = Float64[π, 0.0]     # initial conditions
      -datasize = 250
      -tspan = (0.0f0, 6.0f4)   # timespan for GW waveform
      -tsteps = range(tspan[1], tspan[2]; length=datasize)  # time at each timestep
      -dt_data = tsteps[2] - tsteps[1]
      -dt = 100.0
      -const ode_model_params = [100.0, 1.0, 0.5]; # p, M, e

      Let's simulate the true model and plot the results using OrdinaryDiffEq.jl

      julia
      prob = ODEProblem(RelativisticOrbitModel, u0, tspan, ode_model_params)
      -soln = Array(solve(prob, RK4(); saveat=tsteps, dt, adaptive=false))
      -waveform = first(compute_waveform(dt_data, soln, mass_ratio, ode_model_params))
      -
      -begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Time", ylabel="Waveform")
      -
      -    l = lines!(ax, tsteps, waveform; linewidth=2, alpha=0.75)
      -    s = scatter!(ax, tsteps, waveform; marker=:circle, markersize=12, alpha=0.5)
      -
      -    axislegend(ax, [[l, s]], ["Waveform Data"])
      -
      -    fig
      -end

      Defining a Neural Network Model

      Next, we define the neural network model that takes one input (time) and returns two outputs. We'll then write a function ODE_model that takes the initial conditions, the neural network parameters, and a time as inputs and returns the derivatives.

      Using globals is generally discouraged, but if you do use them, make sure to mark them as const.

      We will deviate from the standard neural network initialization and instead use WeightInitializers.jl.

      julia
      const nn = Chain(Base.Fix1(fast_activation, cos),
      -    Dense(1 => 32, cos; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32),
      -    Dense(32 => 32, cos; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32),
      -    Dense(32 => 2; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32))
      -ps, st = Lux.setup(Random.default_rng(), nn)
      ((layer_1 = NamedTuple(), layer_2 = (weight = Float32[0.00012904078; -0.000112544876; -3.194287f-5; 9.532207f-5; 5.7499907f-5; -0.000106882915; -0.00010909063; 3.8261045f-5; -1.2864484f-5; -8.822358f-5; 3.546014f-5; -1.8142538f-5; -0.000102116464; -4.5605157f-6; -0.00021668768; 0.00014581048; -0.000112148016; 6.22374f-5; 0.00024771586; -4.4607037f-5; 4.5975794f-5; -0.00013581885; -8.202444f-5; -2.2094564f-5; 2.0224994f-5; -0.00024750977; 8.9906325f-6; -9.196156f-5; 1.3154642f-5; 5.571319f-5; -2.2547f-5; -1.0075346f-5;;], bias = Float32[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]), layer_3 = (weight = Float32[0.0001618415 0.00010802869 9.6770396f-5 -0.00014462393 8.764277f-5 -4.6047666f-5 -6.289895f-5 1.4200443f-5 3.7395082f-5 -3.8730854f-5 -2.4571409f-5 -1.5922284f-5 -0.0001338827 0.00013049862 -1.7215027f-5 0.0001519071 3.6739f-5 5.3988137f-5 7.5490334f-6 4.1321127f-5 0.00012454136 -0.00012439543 0.00032224192 -9.0235466f-5 8.027509f-6 -4.230032f-5 -1.2626992f-5 3.817326f-5 0.00010345286 -3.2830536f-5 -0.00013042263 3.6778765f-5; -0.00010449851 6.9333175f-5 -0.00025979505 -4.1506977f-5 0.00015308514 0.00015436667 4.068362f-5 6.194856f-5 0.00016091425 4.190392f-5 8.6157415f-5 -0.00015018477 -1.8756064f-5 0.00013426754 3.4385368f-5 3.7520163f-5 -8.245856f-6 8.076973f-6 -7.841589f-5 9.4385825f-5 3.1037864f-5 1.632189f-5 -0.00011407531 -9.840075f-5 -9.079082f-6 -0.00012266912 -0.0001501576 -4.8833277f-5 6.306712f-5 -8.952644f-5 -6.778331f-5 0.00013526856; 3.5417983f-5 -6.405736f-5 5.0200804f-5 -2.187474f-5 9.65856f-5 1.6087113f-5 0.00017726328 5.0810268f-5 -1.2770168f-5 0.0002584681 -6.891108f-5 -1.8794399f-5 1.8506884f-5 3.4208548f-5 5.7545672f-5 -0.00019487216 -4.1474475f-5 -7.719686f-5 0.00022728901 7.322569f-5 4.1465402f-5 -0.00011304584 -0.000121477155 -7.1576396f-5 -0.00019784343 7.308304f-6 -8.1276325f-5 -5.8827296f-5 3.2044787f-5 
-7.681639f-5 -4.2084102f-5 -1.7746544f-5; -4.1023915f-5 6.294326f-5 4.1910976f-6 -8.0661346f-5 4.198329f-5 -6.740117f-5 2.4916579f-5 -0.00011881353 -0.00010547372 1.9229235f-5 8.645224f-6 0.000109944114 -1.4017144f-5 -7.512534f-6 7.736905f-5 -0.00010222966 0.00014278025 -6.63838f-5 -0.00014546272 -7.750724f-6 -0.00011267116 3.0435387f-5 9.601777f-5 -0.000119031705 -0.00015527422 0.0001169817 -0.00013191596 0.000102389204 5.188713f-5 2.1413676f-5 -0.00010448805 1.1473379f-5; 8.031061f-6 6.5873464f-5 2.5314368f-5 0.00017256496 -4.9571987f-5 -1.3794807f-5 2.4063143f-5 9.781523f-5 -4.280496f-5 -2.0803165f-5 -7.521194f-5 0.00018738274 0.00010471025 7.513476f-5 -4.893005f-5 0.00014860366 -8.2969374f-5 7.298375f-5 -8.980523f-5 0.00023513274 -7.5829416f-6 -5.8425405f-5 1.23955515f-5 6.96497f-5 2.4521105f-5 7.589782f-7 -5.2146323f-5 2.6066837f-5 -1.2284129f-5 -7.0048576f-5 -8.4076615f-5 -6.8420195f-6; 7.3120784f-5 -4.376328f-5 3.065425f-5 2.2613407f-5 5.7423866f-5 5.5075692f-5 -1.3883068f-5 -9.999049f-6 7.647168f-5 2.6283391f-5 -3.4442135f-5 3.925019f-5 -7.635232f-5 7.141157f-5 0.0001065878 -1.3486914f-5 -2.3242749f-6 -0.00015052347 0.00014461616 2.219101f-5 -0.00012743202 -7.240347f-5 7.515301f-5 -3.7679296f-5 0.000116404415 -8.719309f-5 -0.00011620738 -2.2765513f-5 3.5284862f-5 -5.4052427f-5 4.8226982f-5 -5.9169564f-5; -6.7190835f-5 -5.7709756f-5 -3.7505262f-5 9.183235f-5 -2.3899584f-5 0.00014002899 5.2985233f-5 2.9858915f-5 -0.00014034772 0.0002104392 2.2348457f-5 -9.916955f-5 -5.660702f-5 9.209628f-5 -0.00011463575 -5.018853f-6 1.7207522f-5 -9.028814f-5 5.9064158f-5 5.6500547f-5 5.1515904f-5 -6.183798f-5 -1.869243f-5 -0.00019577722 0.0001116403 2.623907f-5 2.5146139f-5 -0.00020383588 -6.0033046f-5 7.342174f-5 -0.00012018732 0.00019175147; -4.4822886f-5 -1.809907f-5 -0.0001516205 -9.591752f-5 -0.000307869 0.00017293467 -0.00014444215 0.00018572801 -4.6772497f-5 0.00020110856 0.00011630823 0.00012441541 -1.9470537f-5 0.0001705941 -0.00011531175 -4.7874884f-5 -3.391144f-5 
-0.00017337958 -0.00015705569 0.0001380679 9.7610886f-5 -4.8745784f-5 -5.210988f-5 -0.00017633072 -4.5728182f-5 -9.739671f-5 -1.0597239f-5 2.2026148f-5 -6.3983694f-5 6.102486f-5 0.00010338496 -6.827732f-5; 0.00018907207 -2.7584168f-5 6.10194f-5 -3.5499725f-5 -4.9563205f-5 -2.0559924f-5 3.0567677f-5 -6.0137634f-5 2.9698942f-6 -3.717725f-5 -5.4455915f-5 0.000120902325 -2.6554806f-5 -9.82872f-5 0.00012045895 0.00013023132 -4.3817818f-6 -7.90597f-6 -0.00017591458 -6.145454f-5 -9.7153286f-5 -0.000113216745 -0.00019022694 -1.5265173f-6 -3.7145597f-5 -1.418679f-6 4.0716244f-5 -7.1149034f-6 -5.2239297f-6 6.4689855f-5 0.00015458622 6.8282454f-5; 9.238393f-5 -0.00010029555 -0.00032395986 5.5797962f-5 0.00025480852 0.00010384605 6.894006f-5 9.168032f-5 -0.00014577566 5.929657f-5 0.0002960712 0.00013800748 2.5192774f-5 0.00011150848 -5.873785f-5 -7.113468f-5 -5.1084924f-5 3.14979f-5 8.180124f-5 -1.0872986f-5 6.779847f-5 6.3255684f-5 2.280975f-5 -2.5365513f-5 -7.609393f-5 6.9949754f-5 -0.00020458306 -0.00014076366 -5.1785355f-5 -6.7380766f-5 -9.743695f-5 -2.6103799f-5; 2.995047f-5 -6.736759f-6 -3.8725888f-5 -6.074171f-6 7.08185f-5 -2.7499069f-5 -7.966966f-5 -0.00014262945 0.00015519376 4.3257455f-6 0.00013839432 6.4476895f-5 -0.00012280572 0.00015023709 -0.00010829573 -0.00012139006 -0.00010294529 -6.770257f-6 -4.841615f-5 -1.7801849f-6 -3.3879445f-5 5.58065f-6 3.9426603f-5 5.5769644f-5 -3.4012104f-5 -8.564142f-5 3.2661476f-6 -1.0633827f-5 -9.2488284f-5 6.4112755f-5 -0.00017150475 3.3733468f-5; -5.781235f-5 0.0001663885 -0.00022763766 -2.2378454f-5 6.2147716f-5 0.00016687042 -0.00019428028 -2.5980862f-5 -1.556338f-5 8.4640174f-5 -9.9490266f-5 1.250148f-6 -0.00014455267 -2.947595f-5 6.0663788f-5 -0.00010613067 5.1250395f-6 7.563867f-5 5.97999f-5 4.5715213f-5 1.86434f-5 -7.3369134f-5 -4.44051f-6 -2.5847781f-5 -1.2085683f-5 -3.6958885f-5 0.00019028709 -0.000108798136 0.0001345308 2.3690662f-5 0.00019774816 -0.00010464023; -0.00014413495 -0.00013636223 3.6046575f-5 7.1486343f-6 
1.915911f-5 0.000108172964 -4.005471f-5 1.8743809f-5 0.00031020257 -3.856181f-5 4.4903583f-5 3.4804332f-5 -0.00016249428 -0.00010392467 5.827663f-5 9.077724f-5 5.3671643f-5 -4.8848806f-6 9.078535f-5 8.6138905f-5 -1.8303757f-5 -8.899495f-5 3.5389185f-5 7.94879f-6 5.58088f-5 -4.5910605f-5 6.699846f-5 -0.00014090221 -0.0002088031 -2.0524585f-5 5.01877f-5 0.00019194714; -5.0838746f-5 -7.539191f-5 0.00012213155 0.00015550757 -0.00012977826 -5.158861f-5 -9.445511f-5 0.00012671694 5.2321917f-5 0.00023097855 -8.698364f-5 0.00018369449 -1.2698216f-5 -2.2390311f-6 3.0692383f-5 -0.00011975833 0.00012478858 -1.4962289f-5 3.8748185f-5 0.00023630871 -0.00012000166 -1.5395174f-6 8.959036f-5 -3.4250785f-5 1.3674271f-5 -0.000111441506 -0.00010046797 6.0148104f-5 -2.4395333f-5 3.2727636f-5 8.028999f-5 -4.091216f-5; -2.0079646f-5 4.2053984f-5 0.0002203402 0.00020521648 -0.0001587786 -0.0001763035 -0.00016422565 0.00010166542 9.380203f-5 6.9437694f-5 7.57007f-5 -0.00020894472 -0.00016988067 -6.765953f-5 -0.00010387241 5.217241f-5 2.3392158f-5 -6.166616f-5 0.00017520235 -3.2230004f-5 0.00016375548 -3.2104384f-5 0.00013332158 6.2654086f-5 0.00017636029 9.2175396f-5 -3.051804f-5 4.447554f-5 -4.3382257f-5 3.6749552f-5 7.922374f-5 5.572315f-6; -2.6858583f-5 0.00017819837 4.695629f-5 -9.4709256f-5 -0.00016462848 -2.3182307f-5 -4.6315886f-6 8.540567f-5 -8.155574f-5 -0.00014111411 7.0400456f-5 -0.0003217462 0.00013005744 0.00011168272 8.3365914f-5 6.0671675f-5 -7.8074496f-5 -7.818376f-5 -6.0908715f-5 0.00014188448 0.00011401225 0.00024713847 -2.690341f-5 0.000102220314 8.4159314f-5 -0.00015950877 4.007158f-6 -8.7333174f-5 -8.763305f-5 5.593f-5 -7.80944f-5 -8.700921f-5; 4.7898816f-6 -6.2080675f-5 -3.9439437f-5 0.00014633437 0.0001298414 3.786074f-5 3.610755f-5 -5.0587707f-5 -0.000108645305 -7.380673f-5 -3.9151306f-5 2.2096217f-5 4.4382992f-5 -0.00020986874 -2.2879865f-5 -9.160353f-5 9.355732f-6 5.507163f-6 2.5489919f-5 2.7703583f-5 7.0461865f-5 -1.7779136f-6 -8.9434f-5 2.8051127f-5 
-5.7027803f-5 8.368734f-6 5.7428515f-6 3.801712f-5 -4.082395f-5 1.3992459f-5 -0.00016164676 -0.0003113387; 0.00010210765 -0.00014016182 -9.4709f-5 -0.00034049625 -0.00021730701 -0.00021877282 -1.7789896f-6 5.1318613f-5 -6.843532f-5 2.579598f-6 -7.676633f-5 3.0357873f-5 4.429472f-5 -0.00013344285 8.7713386f-5 -4.4082106f-5 4.933527f-5 -0.00012167484 0.00014828255 3.681658f-5 5.1665567f-5 3.5419303f-5 1.1032115f-5 -2.0139467f-5 -5.1311414f-5 4.592762f-6 -0.00010240131 0.0003480712 4.9550832f-5 9.322965f-5 0.00013964142 -1.44593205f-5; 4.6231064f-5 0.0001759908 -9.25005f-5 -0.00012786509 -9.314455f-5 -0.00010438618 0.0001246568 -6.0039115f-6 -4.256412f-5 3.3602988f-5 -7.607951f-5 -5.8172565f-5 9.8862365f-5 8.6666434f-5 -0.0001534175 9.098543f-7 -0.0001733446 -7.1654413f-6 7.078415f-6 7.887453f-5 -8.685129f-5 3.543198f-5 0.00011281002 -0.000119297736 2.4146246f-5 -8.2930885f-5 1.4110002f-5 -0.00012959536 -9.485212f-5 -0.00019885453 -1.7635973f-7 6.0904702f-5; 3.8074464f-5 -0.00014175575 -0.00010340612 -5.7893245f-5 -4.6557743f-5 8.02458f-5 -4.9841892f-5 0.000101947204 0.00012222587 -2.4416882f-5 9.291772f-5 7.913553f-6 -6.0172544f-5 0.00018882217 4.9115602f-5 5.3959157f-5 -0.00025722026 -5.7353386f-6 4.8929433f-5 -0.00013650712 -3.4423745f-5 3.0329427f-6 6.108192f-5 -7.605748f-5 0.00013981201 9.759291f-5 8.666391f-5 -2.3675924f-5 0.00012245314 -0.00010652234 -1.3366141f-5 0.00012327557; -3.3457505f-5 4.095177f-5 -0.000109747016 5.973901f-5 -4.265944f-5 -9.30009f-5 8.915604f-5 -0.00010395834 -1.20415725f-5 0.00015913686 -6.577562f-5 -1.2961416f-5 -1.3428478f-6 -2.4245908f-5 -4.767823f-5 8.947429f-6 0.00021950372 6.0229213f-5 4.8471844f-5 -1.8123917f-5 4.926346f-6 0.00016245825 -9.935177f-5 -6.088053f-5 -0.00013101641 0.00015148571 -9.9824676f-5 2.438139f-5 9.963044f-6 -9.3802286f-5 -8.4489046f-5 -9.9167686f-5; 9.8044664f-5 -0.0001373537 6.387073f-5 6.1176324f-5 3.706773f-5 1.7435063f-5 4.7456964f-5 -1.5212719f-5 3.6089532f-5 -8.671219f-5 -7.336197f-6 -2.0153177f-5 
-5.8473186f-5 8.0986705f-5 2.2233919f-5 -2.9421195f-5 -3.1216456f-5 -8.403217f-6 -0.00013870597 5.3565193f-5 3.9979343f-5 8.018874f-5 -0.00020783316 -3.9275405f-5 2.2974551f-5 -0.00013394952 0.00018567356 -3.7417303f-5 -5.7108025f-7 5.572647f-5 -0.00011531178 -7.445494f-5; 2.5635985f-5 -3.386408f-5 3.410427f-5 -8.304946f-5 0.00010648793 -7.12835f-5 -1.8213308f-5 0.00015260768 0.0001266959 -0.000107867665 -5.9321865f-5 0.00011133931 -0.000105488856 0.00018247867 9.0476264f-5 -5.4776487f-5 6.0621394f-5 0.00028524833 6.466627f-5 -5.295654f-6 -0.00010272296 0.00010126048 1.4139942f-5 6.678978f-5 -0.00018882136 -5.3258304f-5 1.6948114f-5 -7.912578f-6 -2.1099555f-5 -9.280504f-5 2.0245323f-5 2.3501534f-5; 0.00012267528 -2.3302788f-5 0.0001845575 -3.6030975f-5 -8.9690475f-5 5.39783f-6 -6.8735144f-6 -0.0001025949 2.5651641f-5 -0.00015345518 -0.00013484679 7.593546f-5 -1.713295f-5 -4.639202f-6 9.142571f-5 -2.3218283f-5 5.4158005f-5 -5.6290253f-5 -0.00015190005 3.1802838f-6 -3.7887046f-5 -2.6533817f-5 0.00012959103 8.168044f-5 0.00016691723 1.225193f-5 -0.00022118838 -5.3078587f-5 4.3580926f-6 0.00017463026 -1.0993981f-5 5.7551082f-5; 3.1593845f-5 -0.000110218614 -6.81751f-5 -4.0169387f-5 4.775593f-5 -6.654202f-5 8.2955985f-6 2.4026602f-5 2.1414777f-5 -0.00018230491 4.825706f-6 2.1396403f-5 8.545525f-5 -0.00010729712 -3.5680565f-5 7.198789f-5 -6.8265894f-5 -8.482774f-5 -4.113752f-5 9.73255f-5 0.00013572449 -2.7198057f-6 -4.7429945f-5 -0.0001458811 9.53614f-5 4.0818675f-5 3.8685084f-5 0.00010142652 -7.212244f-5 -7.761069f-5 -0.00018438329 -5.83635f-5; 1.9431573f-5 -3.5413144f-5 5.494157f-5 0.00013289817 0.00010804467 -9.7779135f-5 6.7757464f-5 6.780156f-5 6.958222f-5 -6.413478f-5 9.143599f-5 3.0031626f-5 -4.9843726f-5 9.828978f-5 4.441148f-5 3.23047f-5 0.00010129856 -8.872279f-6 6.877895f-5 0.000117699274 -9.963044f-5 -0.00012705963 -0.00023942554 6.482481f-5 0.00019372506 1.3753422f-5 -0.000111746165 5.2670017f-5 -0.00019889923 2.1095306f-5 0.00014670983 0.000106552376; 
0.0001532651 0.00017225282 -0.00019747327 0.000101641795 -1.9324214f-5 3.1270938f-5 -1.0656201f-5 0.00012335583 1.7614955f-5 0.00018734005 4.6449848f-5 -3.556733f-6 0.00014254355 -2.6307333f-5 4.0052928f-5 -4.950784f-5 3.2620028f-5 2.9044973f-5 0.0001169443 -0.0001477714 3.3198372f-5 7.44768f-5 -2.2325634f-5 2.1709455f-5 1.9323952f-5 0.00010349797 4.662355f-6 0.000120134464 -1.426732f-5 -3.614648f-5 -4.7784528f-5 -0.000108450906; 0.00012510877 1.6365531f-5 2.6071106f-5 -0.00022927686 2.5291409f-5 -0.00011106614 -4.2158313f-6 -0.00014278505 0.00012849839 -1.0538752f-5 0.00018321196 -5.0901217f-6 5.6905395f-5 -2.0126035f-5 -1.9563063f-6 0.00020969917 9.217068f-6 -5.2084233f-5 -1.9979205f-5 -0.00010448458 -8.920894f-6 -3.2072923f-5 -0.00010738238 6.593582f-5 0.00014089058 -6.93594f-5 0.0001137811 2.7396969f-5 -0.00014477345 -3.0052819f-5 5.1802308f-5 7.691047f-5; 5.539292f-5 1.2016109f-5 -0.00013654484 8.461434f-5 3.012997f-5 -0.00018663032 0.00010116812 -0.00020896742 7.81598f-5 -0.00022899467 5.9576298f-5 -2.3427787f-5 -2.1780272f-5 -0.0001299228 -2.6480795f-6 -4.194423f-5 0.00017164544 -0.00014682561 -6.270525f-5 -2.418831f-6 5.836604f-7 -0.00014961041 0.0001490025 6.167698f-5 6.434922f-5 2.15672f-5 -7.845563f-5 -9.3900235f-5 -0.00010745666 9.943281f-5 3.506144f-5 -0.00010716154; -0.00031913762 -5.373516f-6 -2.4668616f-5 0.00020465998 -0.00015221832 4.4142293f-5 3.4260665f-5 0.00018585508 -1.5315974f-5 -2.7753547f-5 -7.027571f-5 -0.00012219216 0.00014801395 5.5639845f-5 0.00010767018 -4.5898585f-5 9.620609f-6 9.126621f-5 -6.4281805f-5 4.424764f-5 -0.00010782308 8.663507f-5 -0.00018453374 9.6715936f-5 -0.00011947815 -1.0612806f-5 -7.8717985f-6 0.00016270993 -0.0001183861 8.132558f-5 2.7964297f-5 -2.1770951f-5; 0.00013770966 8.160392f-6 0.00014750332 -9.148222f-5 -2.3455923f-5 -0.00012808383 -3.1211828f-5 -9.638449f-5 2.1945178f-5 -0.00010252736 4.3882188f-5 -2.7055738f-5 4.1048006f-5 -0.00011791357 0.0001088796 -3.35307f-5 4.6414452f-6 7.904794f-5 -3.2272903f-5 
4.679918f-5 -2.8823291f-5 -0.0002403455 7.2696894f-5 2.1850115f-5 -0.00017884467 0.00013938849 0.00012041823 3.8262526f-5 -4.831809f-6 -0.00011074009 -0.00014165216 6.2243314f-5; 5.5768774f-5 0.00025591633 -8.7005836f-5 -6.97511f-5 -0.00010677257 0.00014461808 9.6828815f-5 -7.235264f-5 0.00013398477 -0.00022309602 5.1014304f-5 -0.00010017812 -0.00013455361 -7.145826f-5 9.886539f-5 4.9542003f-5 -2.2008837f-5 0.00016627453 1.939884f-5 -8.495026f-6 7.701872f-6 -0.00017832675 2.259097f-5 -6.1449035f-5 -0.00019204959 3.286431f-5 2.5918604f-5 2.890915f-5 2.7365557f-5 -0.00010480247 4.5039164f-5 3.899461f-5], bias = Float32[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]), layer_4 = (weight = Float32[-5.3218864f-5 -8.7234104f-5 0.00024896805 9.7996935f-6 0.00018562307 -0.00013682363 2.88639f-5 0.00013638294 -5.7898418f-5 1.2768941f-6 -7.310829f-5 3.163944f-5 5.68142f-6 -8.405032f-5 -5.7826495f-5 -5.6752317f-5 -0.00013975805 4.9450424f-5 -1.0195218f-5 4.1595777f-5 0.00018590117 2.2314363f-5 0.00022402793 0.000108809094 6.9467824f-5 7.255384f-5 -9.488476f-5 -3.1999603f-5 3.6781903f-6 -2.4678693f-5 2.8044307f-5 -1.9549065f-5; -4.9224338f-5 -0.00014301574 -3.065813f-5 0.00012645814 5.188297f-5 6.0191636f-5 -8.252057f-5 0.00019450326 2.1002388f-5 6.364012f-5 8.513248f-5 -0.00011287401 0.000117475014 0.00010225453 9.432689f-5 -0.00014117833 -0.00017059014 9.827579f-5 6.402922f-6 6.2420164f-5 -6.583349f-5 1.3587606f-5 0.0001036864 0.00012247443 1.927937f-5 -0.00015768438 7.495479f-5 -5.7442823f-5 1.1396786f-5 1.9020681f-5 -9.5680174f-5 -7.685743f-5], bias = Float32[0.0, 0.0])), (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple(), layer_4 = NamedTuple()))

      Like most DL frameworks, Lux defaults to Float32; however, in this case we need Float64.

      julia
      const params = ComponentArray(ps |> f64)
      -
      -const nn_model = StatefulLuxLayer{true}(nn, nothing, st)
      StatefulLuxLayer{true}(
      -    Chain(
      -        layer_1 = WrappedFunction(Base.Fix1{typeof(LuxLib.API.fast_activation), typeof(cos)}(LuxLib.API.fast_activation, cos)),
      -        layer_2 = Dense(1 => 32, cos),  # 64 parameters
      -        layer_3 = Dense(32 => 32, cos),  # 1_056 parameters
      -        layer_4 = Dense(32 => 2),       # 66 parameters
      -    ),
      -)         # Total: 1_186 parameters,
      -          #        plus 0 states.

      Now we define the system of ODEs that describes the motion of a point-like particle with Newtonian physics, using

      `,14)),s("mjx-container",B,[(i(),a("svg",T,A[27]||(A[27]=[n('',1)]))),A[28]||(A[28]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"1"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"χ")])],-1))]),s("mjx-container",q,[(i(),a("svg",D,A[29]||(A[29]=[n('',1)]))),A[30]||(A[30]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"2"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"ϕ")])],-1))]),s("p",null,[A[37]||(A[37]=e("where, ")),s("mjx-container",V,[(i(),a("svg",b,A[31]||(A[31]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D45D",d:"M23 287Q24 290 25 295T30 317T40 348T55 381T75 411T101 433T134 442Q209 442 230 378L240 387Q302 442 358 442Q423 442 460 395T497 281Q497 173 421 82T249 -10Q227 -10 210 -4Q199 1 187 11T168 28L161 36Q160 35 139 -51T118 -138Q118 -144 126 -145T163 -148H188Q194 -155 194 -157T191 -175Q188 -187 185 -190T172 -194Q170 -194 161 -194T127 -193T65 -192Q-5 -192 -24 
-194H-32Q-39 -187 -39 -183Q-37 -156 -26 -148H-6Q28 -147 33 -136Q36 -130 94 103T155 350Q156 355 156 364Q156 405 131 405Q109 405 94 377T71 316T59 280Q57 278 43 278H29Q23 284 23 287ZM178 102Q200 26 252 26Q282 26 310 49T356 107Q374 141 392 215T411 325V331Q411 405 350 405Q339 405 328 402T306 393T286 380T269 365T254 350T243 336T235 326L232 322Q232 321 229 308T218 264T204 212Q178 106 178 102Z",style:{"stroke-width":"3"}})])])],-1)]))),A[32]||(A[32]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"p")])],-1))]),A[38]||(A[38]=e(", ")),s("mjx-container",z,[(i(),a("svg",R,A[33]||(A[33]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D440",d:"M289 629Q289 635 232 637Q208 637 201 638T194 648Q194 649 196 659Q197 662 198 666T199 671T201 676T203 679T207 681T212 683T220 683T232 684Q238 684 262 684T307 683Q386 683 398 683T414 678Q415 674 451 396L487 117L510 154Q534 190 574 254T662 394Q837 673 839 675Q840 676 842 678T846 681L852 683H948Q965 683 988 683T1017 684Q1051 684 1051 673Q1051 668 1048 656T1045 643Q1041 637 1008 637Q968 636 957 634T939 623Q936 618 867 340T797 59Q797 55 798 54T805 50T822 48T855 46H886Q892 37 892 35Q892 19 885 5Q880 0 869 0Q864 0 828 1T736 2Q675 2 644 2T609 1Q592 1 592 11Q592 13 594 25Q598 41 602 43T625 46Q652 46 685 49Q699 52 704 61Q706 65 742 207T813 490T848 631L654 322Q458 10 453 5Q451 4 449 3Q444 0 433 0Q418 0 415 7Q413 11 374 317L335 624L267 354Q200 88 200 79Q206 46 272 46H282Q288 41 289 37T286 19Q282 3 278 1Q274 0 267 0Q265 0 255 
0T221 1T157 2Q127 2 95 1T58 0Q43 0 39 2T35 11Q35 13 38 25T43 40Q45 46 65 46Q135 46 154 86Q158 92 223 354T289 629Z",style:{"stroke-width":"3"}})])])],-1)]))),A[34]||(A[34]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"M")])],-1))]),A[39]||(A[39]=e(", and ")),s("mjx-container",w,[(i(),a("svg",x,A[35]||(A[35]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D452",d:"M39 168Q39 225 58 272T107 350T174 402T244 433T307 442H310Q355 442 388 420T421 355Q421 265 310 237Q261 224 176 223Q139 223 138 221Q138 219 132 186T125 128Q125 81 146 54T209 26T302 45T394 111Q403 121 406 121Q410 121 419 112T429 98T420 82T390 55T344 24T281 -1T205 -11Q126 -11 83 42T39 168ZM373 353Q367 405 305 405Q272 405 244 391T199 357T170 316T154 280T149 261Q149 260 169 260Q282 260 327 284T373 353Z",style:{"stroke-width":"3"}})])])],-1)]))),A[36]||(A[36]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"e")])],-1))]),A[40]||(A[40]=e(" are constants"))]),A[45]||(A[45]=n(`
      julia
      function ODE_model(u, nn_params, t)
      -    χ, ϕ = u
      -    p, M, e = ode_model_params
      -
-    # In this example we know that \`st\` is an empty NamedTuple, so we can safely ignore
-    # it. In general, however, \`st\` should be used to store the state of the neural network.
      -    y = 1 .+ nn_model([first(u)], nn_params)
      -
      -    numer = (1 + e * cos(χ))^2
      -    denom = M * (p^(3 / 2))
      -
      -    χ̇ = (numer / denom) * y[1]
      -    ϕ̇ = (numer / denom) * y[2]
      -
      -    return [χ̇, ϕ̇]
      -end
      ODE_model (generic function with 1 method)
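The right-hand side above is the Newtonian rate scaled per component by the network's output. A minimal Python sketch of that formula, with hypothetical placeholder values for the constants and a fixed correction factor standing in for the neural network:

```python
import math

# Hypothetical placeholder values standing in for the tutorial's p, M, e.
p, M, e = 100.0, 1.0, 0.5

def ode_rhs(chi, correction=(1.0, 1.0)):
    # Newtonian rate for chi and phi, each scaled by a correction term;
    # in the tutorial that correction is produced by the neural network.
    numer = (1 + e * math.cos(chi)) ** 2
    denom = M * p ** 1.5
    base = numer / denom
    return base * correction[0], base * correction[1]

chi_dot, phi_dot = ode_rhs(0.0)
```

With the untrained network the correction stays close to 1, so the dynamics start out essentially Newtonian.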

Let us now simulate the model with the untrained neural network parameters and plot the results.

      julia
      prob_nn = ODEProblem(ODE_model, u0, tspan, params)
      -soln_nn = Array(solve(prob_nn, RK4(); u0, p=params, saveat=tsteps, dt, adaptive=false))
      -waveform_nn = first(compute_waveform(dt_data, soln_nn, mass_ratio, ode_model_params))
      -
      -begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Time", ylabel="Waveform")
      -
      -    l1 = lines!(ax, tsteps, waveform; linewidth=2, alpha=0.75)
      -    s1 = scatter!(
      -        ax, tsteps, waveform; marker=:circle, markersize=12, alpha=0.5, strokewidth=2)
      -
      -    l2 = lines!(ax, tsteps, waveform_nn; linewidth=2, alpha=0.75)
      -    s2 = scatter!(
      -        ax, tsteps, waveform_nn; marker=:circle, markersize=12, alpha=0.5, strokewidth=2)
      -
      -    axislegend(ax, [[l1, s1], [l2, s2]],
      -        ["Waveform Data", "Waveform Neural Net (Untrained)"]; position=:lb)
      -
      -    fig
      -end

      Setting Up for Training the Neural Network

      Next, we define the objective (loss) function to be minimized when training the neural differential equations.

      julia
      const mseloss = MSELoss()
      -
      -function loss(θ)
      -    pred = Array(solve(prob_nn, RK4(); u0, p=θ, saveat=tsteps, dt, adaptive=false))
      -    pred_waveform = first(compute_waveform(dt_data, pred, mass_ratio, ode_model_params))
      -    return mseloss(pred_waveform, waveform)
      -end
      loss (generic function with 1 method)
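The objective reduces to a mean squared error between the predicted and reference waveforms. A minimal Python sketch of that core computation (the name `mse_loss` is ours, not part of the tutorial's API):

```python
def mse_loss(pred, target):
    # Mean squared error between predicted and reference waveforms,
    # mirroring what MSELoss() computes in the tutorial.
    assert len(pred) == len(target)
    return sum((a - b) ** 2 for a, b in zip(pred, target)) / len(pred)
```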

Warm up the loss function

      julia
      loss(params)
      0.00074655426662429

      Now let us define a callback function to store the loss over time

      julia
      const losses = Float64[]
      -
      -function callback(θ, l)
      -    push!(losses, l)
      -    @printf "Training \\t Iteration: %5d \\t Loss: %.10f\\n" θ.iter l
      -    return false
      -end
      callback (generic function with 1 method)
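The callback's job is pure bookkeeping: record the loss, print progress, and return `false` so the optimizer keeps going. The same pattern in a Python sketch (names are illustrative, not the Optimization.jl API):

```python
losses = []

def callback(iteration, loss_value):
    # Record the loss and print progress; returning False signals
    # the optimizer to continue, as in the Julia callback above.
    losses.append(loss_value)
    print(f"Training \t Iteration: {iteration:5d} \t Loss: {loss_value:.10f}")
    return False
```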

      Training the Neural Network

Training uses the BFGS optimizer. It gives good results here because the Newtonian model provides a very good initial guess for the parameters.

      julia
      adtype = Optimization.AutoZygote()
      -optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
      -optprob = Optimization.OptimizationProblem(optf, params)
      -res = Optimization.solve(
      -    optprob, BFGS(; initial_stepnorm=0.01, linesearch=LineSearches.BackTracking());
      -    callback, maxiters=1000)
      retcode: Success
      -u: ComponentVector{Float64}(layer_1 = Float64[], layer_2 = (weight = [0.00012904078175776253; -0.00011254487617403629; -3.194286909998489e-5; 9.532207332072905e-5; 5.749990668843518e-5; -0.00010688291513358794; -0.00010909062984874433; 3.826104511965575e-5; -1.2864484233419668e-5; -8.822358358875405e-5; 3.54601397702874e-5; -1.8142538465319903e-5; -0.00010211646440444609; -4.560515662882762e-6; -0.00021668768022198756; 0.0001458104816263927; -0.000112148016341931; 6.223739910629945e-5; 0.0002477158559482912; -4.460703712539674e-5; 4.5975793909755746e-5; -0.0001358188455924383; -8.202443859769488e-5; -2.2094563973928504e-5; 2.0224993932037384e-5; -0.00024750977172510844; 8.990632522912101e-6; -9.19615631573604e-5; 1.3154642147125585e-5; 5.571318979492865e-5; -2.2546999389286456e-5; -1.0075345926442328e-5;;], bias = [3.584434140166735e-17, -3.7790832398519353e-17, -4.035584463386469e-17, 1.4173640883628515e-16, 2.977832855146648e-17, -2.5063776117056426e-17, 1.0454082635012388e-16, 7.768131173337056e-17, -1.4275992282234933e-17, -1.0056975360155791e-16, 7.444415986842068e-18, -7.541368570010827e-18, 6.403532485188505e-17, -6.0094632419040616e-18, -1.1801867351985504e-16, 1.6551064034667553e-16, -3.54106732759271e-17, 8.7793657233099e-17, 3.732260928460576e-16, 9.636248802390752e-18, -9.422930643430968e-18, 5.92744761801707e-20, -3.654575479460169e-17, -2.4849142209393e-17, 2.650561506556804e-17, -6.031525195587328e-17, -3.049668143568285e-18, -2.677769293814691e-17, 3.2262887057476924e-18, -1.4834358480483478e-17, -2.198454328000425e-17, -1.1483009379054432e-17]), layer_3 = (weight = [0.00016184421098402112 0.00010803140309250678 9.677311014927782e-5 -0.00014462121920117096 8.76454868059012e-5 -4.604495200949848e-5 -6.289623719097284e-5 1.4203156629426456e-5 3.739779628192939e-5 -3.872813989330821e-5 -2.4568695275612873e-5 -1.5919569827766042e-5 -0.0001338799849098273 0.00013050133553877422 -1.7212312863981315e-5 0.00015190980763250898 3.674171226991112e-5 
5.399085045171508e-5 7.551747207908346e-6 4.132384116490144e-5 0.0001245440724825835 -0.00012439271825754832 0.00032224463199342885 -9.023275209512656e-5 8.030222366123042e-6 -4.2297604645573056e-5 -1.2624278619860427e-5 3.8175974500674233e-5 0.00010345557517683067 -3.282782211938267e-5 -0.00013041991781830882 3.6781479016262425e-5; -0.00010449867604768573 6.933300992238576e-5 -0.00025979521651754906 -4.150714212449048e-5 0.00015308497913644838 0.00015436650810295014 4.068345387929354e-5 6.19483931571915e-5 0.0001609140841522923 4.1903755662471326e-5 8.615725036940715e-5 -0.0001501849377760926 -1.8756228566078988e-5 0.0001342673736856988 3.438520306951789e-5 3.751999847588415e-5 -8.24602139039425e-6 8.076808261076978e-6 -7.841605602300225e-5 9.43856599391177e-5 3.103769868027601e-5 1.6321725159625427e-5 -0.00011407547335841485 -9.840091424996789e-5 -9.079246775971852e-6 -0.00012266928879546438 -0.00015015776935036124 -4.883344208940647e-5 6.306695522505033e-5 -8.952660337093097e-5 -6.778347360671323e-5 0.00013526839993425815; 3.5417522044284086e-5 -6.405782431583927e-5 5.020034342551356e-5 -2.1875199932126688e-5 9.658514074062714e-5 1.608665235353539e-5 0.00017726282141184352 5.080980673910808e-5 -1.2770628641464566e-5 0.0002584676306602902 -6.891154288068659e-5 -1.8794859611534308e-5 1.850642303425243e-5 3.420808690380042e-5 5.754521163426771e-5 -0.0001948726227831649 -4.1474936265601624e-5 -7.719732337618453e-5 0.00022728855157284713 7.3225227710919e-5 4.146494149126811e-5 -0.00011304630123531435 -0.00012147761543501332 -7.157685712199914e-5 -0.00019784389104215682 7.3078429656656455e-6 -8.12767854634326e-5 -5.8827756843400645e-5 3.20443262487065e-5 -7.681684900062376e-5 -4.208456328824419e-5 -1.7747005299770105e-5; -4.1025844575904385e-5 6.294133240722606e-5 4.189168384483676e-6 -8.066327532155177e-5 4.1981361254741375e-5 -6.740309722821748e-5 2.4914649520232076e-5 -0.00011881545756345831 -0.0001054756513899967 1.9227306301632845e-5 8.64329454038995e-6 
0.00010994218505352448 -1.40190733423352e-5 -7.514463186952786e-6 7.736711891037259e-5 -0.00010223158565216602 0.0001427783198318679 -6.638572917886143e-5 -0.00014546464913156064 -7.752653028135617e-6 -0.00011267308686960662 3.0433457913626107e-5 9.601584210891391e-5 -0.00011903363442847782 -0.0001552761470070882 0.00011697976774439031 -0.0001319178928957165 0.000102387274500376 5.188520194628911e-5 2.141174704725275e-5 -0.00010448997741200304 1.1471450170075303e-5; 8.033831614914848e-6 6.58762349086803e-5 2.5317138603176017e-5 0.00017256773414937995 -4.956921632019073e-5 -1.3792036701308052e-5 2.4065914000128018e-5 9.781800170113297e-5 -4.2802190113243786e-5 -2.0800394598880278e-5 -7.520916708362308e-5 0.00018738551286897687 0.00010471301927846019 7.513753214535398e-5 -4.8927280407803166e-5 0.00014860643411901837 -8.296660391697759e-5 7.298651896799662e-5 -8.980246068299549e-5 0.00023513551266315306 -7.58017106253865e-6 -5.8422634286951925e-5 1.2398322065749257e-5 6.965247146236758e-5 2.4523875505300814e-5 7.617487891816275e-7 -5.2143551987492544e-5 2.6069607225691798e-5 -1.228135872569555e-5 -7.004580559059229e-5 -8.407384459477803e-5 -6.839248930000781e-6; 7.312111023020203e-5 -4.37629542520897e-5 3.065457616139575e-5 2.261373350302043e-5 5.742419210400781e-5 5.5076018665202704e-5 -1.3882741828639766e-5 -9.998722876844519e-6 7.64720088536703e-5 2.628371742866767e-5 -3.444180862607065e-5 3.925051617807157e-5 -7.635199387186866e-5 7.141189319918315e-5 0.00010658812874134512 -1.3486587764421316e-5 -2.323948425329717e-6 -0.00015052314217064189 0.00014461648873656717 2.219133687867678e-5 -0.00012743169742498184 -7.240314249975422e-5 7.515333612354784e-5 -3.7678969480112144e-5 0.00011640474161504216 -8.719276326338101e-5 -0.00011620705578570802 -2.276518631879414e-5 3.5285188399938945e-5 -5.405210055505506e-5 4.82273089201041e-5 -5.916923788936427e-5; -6.719126521181299e-5 -5.771018676147833e-5 -3.750569283999701e-5 9.183191888462663e-5 -2.3900014080365e-5 
0.00014002856061290895 5.298480265503702e-5 2.9858484244530953e-5 -0.00014034815229512965 0.00021043877369686858 2.2348026154421175e-5 -9.916997931702271e-5 -5.660744990143062e-5 9.209584697112226e-5 -0.00011463618410322375 -5.019283330847153e-6 1.720709098174664e-5 -9.0288567103354e-5 5.906372699849947e-5 5.650011609270983e-5 5.1515473946554614e-5 -6.18384086786765e-5 -1.869286041744173e-5 -0.00019577764886468858 0.00011163986947815657 2.6238638942811507e-5 2.514570825462949e-5 -0.00020383631034325903 -6.0033476425243785e-5 7.3421308674941e-5 -0.00012018774703880979 0.00019175104408953125; -4.4825145084669266e-5 -1.8101329476251788e-5 -0.0001516227532532436 -9.591978254079201e-5 -0.00030787125091351374 0.00017293241013490452 -0.0001444444098864041 0.00018572575551174637 -4.677475611729583e-5 0.00020110630044896678 0.0001163059676061016 0.00012441315244600152 -1.9472796554823704e-5 0.00017059183643381996 -0.00011531400763243772 -4.787714373136293e-5 -3.391370067887998e-5 -0.0001733818391925073 -0.0001570579517986553 0.00013806564289793466 9.760862645875768e-5 -4.874804311597348e-5 -5.2112140326560686e-5 -0.00017633298215273782 -4.573044155891073e-5 -9.739897017180204e-5 -1.0599498852892281e-5 2.2023888743800232e-5 -6.398595381811298e-5 6.1022601369267256e-5 0.00010338270273009304 -6.827957644177645e-5; 0.00018907153329861083 -2.758470932962243e-5 6.101885932662914e-5 -3.550026545484949e-5 -4.956374564857722e-5 -2.0560465288091306e-5 3.0567135893997024e-5 -6.013817497363848e-5 2.9693534027106658e-6 -3.7177792052481816e-5 -5.445645607027763e-5 0.00012090178401491052 -2.655534686413504e-5 -9.828774150877818e-5 0.00012045840898578325 0.0001302307767612189 -4.382322601250955e-6 -7.90651088822658e-6 -0.0001759151224693883 -6.14550832840387e-5 -9.715382717035302e-5 -0.00011321728621327554 -0.0001902274840692754 -1.527058117738606e-6 -3.714613799888262e-5 -1.4192197856860511e-6 4.071570341989771e-5 -7.115444220054804e-6 -5.224470584601832e-6 6.468931437053212e-5 
0.00015458567699466283 6.828191302998365e-5; 9.238470677249613e-5 -0.00010029477259646522 -0.0003239590848124499 5.579873625268897e-5 0.0002548092919142067 0.00010384682111638278 6.894083646787928e-5 9.168109256737795e-5 -0.00014577488461778537 5.929734409168132e-5 0.00029607197514000284 0.00013800825702222952 2.519354832205625e-5 0.00011150925577116165 -5.873707475527475e-5 -7.113390590799612e-5 -5.1084149777091826e-5 3.1498674911780836e-5 8.180201559755303e-5 -1.0872212146714704e-5 6.779924599417398e-5 6.325645820200876e-5 2.281052486527042e-5 -2.5364738280550114e-5 -7.60931549620726e-5 6.995052838194472e-5 -0.00020458228816547466 -0.00014076288487100295 -5.1784580750754484e-5 -6.737999199227543e-5 -9.743617254112895e-5 -2.6103024242692998e-5; 2.994869224486138e-5 -6.738536867571874e-6 -3.87276655966161e-5 -6.075948604206879e-6 7.081672571066655e-5 -2.7500846627020414e-5 -7.967143996219695e-5 -0.00014263122524434918 0.0001551919825577893 4.323967786971796e-6 0.00013839253857431705 6.447511746946438e-5 -0.0001228075003747053 0.0001502353091927029 -0.00010829750921409024 -0.0001213918391295802 -0.00010294706829890184 -6.7720349216853204e-6 -4.841792782832408e-5 -1.781962637414239e-6 -3.3881222989384034e-5 5.578872205579631e-6 3.942482486300257e-5 5.576786621522814e-5 -3.4013881886585616e-5 -8.56431967259246e-5 3.2643698522401245e-6 -1.0635604339854782e-5 -9.24900620158046e-5 6.411097761874896e-5 -0.00017150653242433716 3.373168998214119e-5; -5.78123350160414e-5 0.00016638852086633133 -0.00022763764589309998 -2.2378437805872703e-5 6.214773149711706e-5 0.00016687043664294915 -0.00019428026151128424 -2.5980846275178036e-5 -1.5563364812823254e-5 8.464019030814682e-5 -9.949024986692077e-5 1.2501638329749527e-6 -0.00014455265643757148 -2.9475934369674194e-5 6.066380357968293e-5 -0.00010613065258350538 5.125055323315705e-6 7.563868841405652e-5 5.979991458023563e-5 4.571522860804267e-5 1.86434166798617e-5 -7.33691181985879e-5 -4.440494266436019e-6 -2.5847765372435807e-5 
-1.2085667166519694e-5 -3.695886933111921e-5 0.00019028710214204783 -0.00010879812050827156 0.00013453081321220542 2.3690677536234e-5 0.00019774817466985131 -0.00010464021632198549; -0.00014413404597055185 -0.00013636132962621653 3.6047473710807704e-5 7.149533300416954e-6 1.9160008830096583e-5 0.00010817386327924342 -4.005381152002951e-5 1.874470808341595e-5 0.00031020346477542545 -3.8560912346856114e-5 4.490448236263789e-5 3.480523127744434e-5 -0.0001624933825069277 -0.00010392377180598205 5.8277528748447354e-5 9.077814001737008e-5 5.367254243263146e-5 -4.883981549044766e-6 9.078624543414058e-5 8.613980434982519e-5 -1.8302858282384107e-5 -8.899405247850415e-5 3.539008366440695e-5 7.949689003892607e-6 5.5809698548712676e-5 -4.590970593475195e-5 6.699935738377742e-5 -0.0001409013088987373 -0.00020880220427263064 -2.0523686092035774e-5 5.018859928621433e-5 0.00019194803871594836; -5.0836640431256135e-5 -7.538980323372986e-5 0.00012213365219284077 0.00015550967757805854 -0.00012977615303609452 -5.158650426094909e-5 -9.445300135778785e-5 0.00012671904799279023 5.2324022116207556e-5 0.00023098065387673432 -8.698153148654638e-5 0.00018369659185625887 -1.2696110635469216e-5 -2.2369258851732236e-6 3.0694488240208985e-5 -0.00011975622807880166 0.0001247906863943803 -1.4960184008690696e-5 3.87502902674283e-5 0.00023631081669807597 -0.00011999955792927964 -1.5374121369268324e-6 8.959246290830727e-5 -3.424867943178601e-5 1.3676376264610214e-5 -0.00011143940107219109 -0.00010046586728205614 6.015020947304269e-5 -2.4393227496349207e-5 3.272974093195133e-5 8.029209377926103e-5 -4.091005509619314e-5; -2.0077208612465763e-5 4.205642078642352e-5 0.0002203426335448267 0.00020521891535947 -0.00015877615835295628 -0.0001763010589663154 -0.00016422321670933018 0.00010166785766825587 9.380446834013477e-5 6.944013118965557e-5 7.570313726075517e-5 -0.00020894228539207352 -0.00016987823648622463 -6.765709287963186e-5 -0.00010386997048291692 5.2174847657902704e-5 2.339459540718326e-5 
-6.166371924580691e-5 0.00017520478984558427 -3.222756670100149e-5 0.00016375791863294776 -3.2101947292805683e-5 0.00013332401402050718 6.26565230753145e-5 0.00017636272939609954 9.217783342798653e-5 -3.0515602995979942e-5 4.447797589574892e-5 -4.3379819837970135e-5 3.675198938919608e-5 7.922617776541621e-5 5.5747520285184625e-6; -2.685894639342157e-5 0.0001781980100810144 4.69559273648144e-5 -9.470961946988632e-5 -0.0001646288436580844 -2.3182670583428215e-5 -4.631952260187872e-6 8.540530700940394e-5 -8.155610691517851e-5 -0.00014111447660014205 7.040009234571584e-5 -0.00032174655631877743 0.00013005707308422635 0.00011168235517345108 8.33655504999339e-5 6.067131119436574e-5 -7.807485951105246e-5 -7.818412256654567e-5 -6.0909078337657313e-5 0.0001418841130822501 0.0001140118839708692 0.00024713810137711416 -2.690377356828372e-5 0.00010221995046833276 8.415895002200944e-5 -0.00015950913063477265 4.0067944888674725e-6 -8.733353740297418e-5 -8.763341599604373e-5 5.5929635100818784e-5 -7.809476653108627e-5 -8.700957539020366e-5; 4.786932413068825e-6 -6.208362415278059e-5 -3.9442386560467134e-5 0.00014633141595326291 0.00012983844985640914 3.785779094827512e-5 3.610460166741359e-5 -5.059065611064305e-5 -0.00010864825403629797 -7.380968113508274e-5 -3.915425500096644e-5 2.2093268261896655e-5 4.4380043027140755e-5 -0.00020987168923099534 -2.2882814528585292e-5 -9.160647700850692e-5 9.35278271712624e-6 5.504214049809485e-6 2.5486969525076378e-5 2.770063415439979e-5 7.04589159336426e-5 -1.7808627266741597e-6 -8.943694651903882e-5 2.804817754579744e-5 -5.7030751714110095e-5 8.365784514893617e-6 5.739902325812697e-6 3.801416946729513e-5 -4.0826899431040455e-5 1.3989509488540305e-5 -0.00016164971261830032 -0.0003113416411456186; 0.000102106123100063 -0.00014016335115625707 -9.471053125195037e-5 -0.00034049777821961217 -0.00021730853985387548 -0.00021877435427482487 -1.7805197188460757e-6 5.131708313969156e-5 -6.843685185627884e-5 2.57806788398924e-6 -7.676785970430642e-5 
3.0356342554694465e-5 4.429319016311954e-5 -0.00013344438264936093 8.771185604533769e-5 -4.408363600700849e-5 4.933374076758323e-5 -0.00012167637249545816 0.00014828101631398684 3.6815048238749006e-5 5.1664037178524215e-5 3.541777333852939e-5 1.1030585133668373e-5 -2.014099698895911e-5 -5.131294381895894e-5 4.591231677906073e-6 -0.00010240284098869066 0.0003480696592447589 4.954930192583369e-5 9.322812331050906e-5 0.00013963988532453915 -1.446085060193778e-5; 4.622790451149549e-5 0.00017598764546901608 -9.250365970902723e-5 -0.00012786824796594428 -9.314770564915953e-5 -0.00010438933664945397 0.00012465363992132786 -6.00707058939334e-6 -4.2567278307533954e-5 3.35998288738646e-5 -7.608266738489823e-5 -5.8175724427119784e-5 9.88592060258278e-5 8.666327497675332e-5 -0.00015342065440732578 9.066951980577018e-7 -0.00017334775608278201 -7.168600370192953e-6 7.075255815069164e-6 7.88713719321926e-5 -8.685444614630656e-5 3.5428822719140895e-5 0.00011280686025029057 -0.0001193008951867435 2.41430866827519e-5 -8.293404376840209e-5 1.4106842662012454e-5 -0.00012959851434234343 -9.485527831376266e-5 -0.00019885768548366046 -1.7951882913192419e-7 6.090154287275188e-5; 3.8075657294035454e-5 -0.00014175455732583236 -0.00010340492742215 -5.7892051763693166e-5 -4.655654999622882e-5 8.02469948485016e-5 -4.984069898454611e-5 0.00010194839699580599 0.00012222706566718716 -2.441568910398035e-5 9.291891176873161e-5 7.914746611289046e-6 -6.017135098772205e-5 0.0001888233672889447 4.911679564428883e-5 5.396034969635737e-5 -0.0002572190715636609 -5.734145424856716e-6 4.893062571680914e-5 -0.00013650592888551523 -3.4422551898376484e-5 3.034135879720712e-6 6.10831120229798e-5 -7.605628349664316e-5 0.00013981320073982073 9.759410016157477e-5 8.666510250806197e-5 -2.3674731046403816e-5 0.0001224543374792238 -0.00010652114730872946 -1.3364947399156407e-5 0.00012327676807218543; -3.345835663754017e-5 4.095091920202965e-5 -0.00010974786725387859 5.973815844367526e-5 -4.2660292974010864e-5 
-9.30017487637572e-5 8.91551897718699e-5 -0.00010395918813561029 -1.2042423958752527e-5 0.00015913601221722784 -6.577646869960186e-5 -1.2962267800736885e-5 -1.3436992223920287e-6 -2.4246759648355458e-5 -4.7679081189237525e-5 8.946577355117967e-6 0.0002195028717603664 6.022836153601422e-5 4.847099247013986e-5 -1.8124768911780975e-5 4.925494368529956e-6 0.0001624573995566141 -9.93526192990018e-5 -6.088138267774102e-5 -0.00013101725941783716 0.00015148486257272604 -9.982552744009511e-5 2.4380538735154734e-5 9.962192808182691e-6 -9.380313728730091e-5 -8.448989698923161e-5 -9.91685375713648e-5; 9.804364651804558e-5 -0.00013735471168170116 6.38697105170153e-5 6.1175306997164e-5 3.706671248086988e-5 1.743404516323532e-5 4.745594685677983e-5 -1.5213735990850587e-5 3.608851454932399e-5 -8.671320995186048e-5 -7.337214468729621e-6 -2.0154193974899372e-5 -5.847420345512655e-5 8.098568776053526e-5 2.223290118869231e-5 -2.9422212377400927e-5 -3.1217473064180395e-5 -8.40423456234346e-6 -0.00013870699206002676 5.3564175119523264e-5 3.997832516337966e-5 8.018772621303625e-5 -0.00020783417733023503 -3.927642194308173e-5 2.2973533647150512e-5 -0.00013395053849656757 0.00018567254426320238 -3.7418320629378204e-5 -5.72097632225374e-7 5.572545103509233e-5 -0.00011531279471806082 -7.445595713202687e-5; 2.5638073752387038e-5 -3.386199004623343e-5 3.410635709273246e-5 -8.304737256864081e-5 0.0001064900208844284 -7.128141423478875e-5 -1.8211219681205216e-5 0.00015260976961557342 0.0001266979890756742 -0.00010786557667912822 -5.931977629685243e-5 0.00011134139659092425 -0.00010548676782079127 0.00018248076249123237 9.047835210777135e-5 -5.4774398263794144e-5 6.062348275420957e-5 0.0002852504187144746 6.466835493892637e-5 -5.293565478005743e-6 -0.00010272087074141474 0.0001012625654807624 1.4142030708658745e-5 6.679186499653968e-5 -0.0001888192708024324 -5.32562151379396e-5 1.695020201126724e-5 -7.910490053714717e-6 -2.109746655647504e-5 -9.280295470986105e-5 2.0247411343498385e-5 
2.3503622473563574e-5; 0.00012267545209250727 -2.3302614846771015e-5 0.0001845576753279514 -3.6030802204195574e-5 -8.969030203080347e-5 5.398002957362856e-6 -6.873341573382579e-6 -0.00010259472945254828 2.565181422905006e-5 -0.00015345501195189177 -0.00013484661938638023 7.593563170156848e-5 -1.713277741292608e-5 -4.6390291405346965e-6 9.142588352768827e-5 -2.321811005605096e-5 5.415817760730499e-5 -5.62900804485342e-5 -0.00015189987787526965 3.1804566045959986e-6 -3.7886873525721716e-5 -2.6533643887533557e-5 0.00012959120614904347 8.168061141840587e-5 0.00016691740713066828 1.2252102368062537e-5 -0.00022118821105469543 -5.3078414549973e-5 4.358265431065218e-6 0.00017463043151873554 -1.0993808229258895e-5 5.75512549570616e-5; 3.159141636274673e-5 -0.00011022104297012485 -6.817752580105708e-5 -4.0171815609258854e-5 4.775350101112189e-5 -6.654445166646065e-5 8.293169802299342e-6 2.4024173061425155e-5 2.1412348022512357e-5 -0.00018230733837413447 4.823277439518023e-6 2.139397441054727e-5 8.545282037516485e-5 -0.00010729954953718456 -3.568299359469238e-5 7.19854593887767e-5 -6.826832247611537e-5 -8.483017006370395e-5 -4.113994725340338e-5 9.732307228545356e-5 0.00013572206228558613 -2.7222343953739493e-6 -4.743237365915371e-5 -0.0001458835307596573 9.53589711832492e-5 4.081624649988187e-5 3.868265560298705e-5 0.000101424093035117 -7.21248710499469e-5 -7.761311860236556e-5 -0.00018438571566663171 -5.836592970014069e-5; 1.9434475735285212e-5 -3.54102412068817e-5 5.494447400514461e-5 0.0001329010774958846 0.00010804757043905769 -9.77762319501544e-5 6.776036757462998e-5 6.780445987778172e-5 6.958511953661725e-5 -6.413187652407419e-5 9.143889478827302e-5 3.0034529164089138e-5 -4.9840822566072335e-5 9.829268678976743e-5 4.441438436574161e-5 3.23076019873696e-5 0.00010130146261024565 -8.869375914682902e-6 6.878185381595422e-5 0.00011770217684051357 -9.962753388145616e-5 -0.00012705672265776708 -0.00023942263937834753 6.482771282271026e-5 0.00019372796673514434 
1.375632477959769e-5 -0.0001117432620497369 5.267292003798961e-5 -0.00019889632671884686 2.109820893628288e-5 0.00014671273679659245 0.00010655527880834928; 0.00015326878107489213 0.00017225651296658113 -0.00019746958517596915 0.00010164548345247386 -1.9320526037594694e-5 3.127462602324607e-5 -1.0652513113287296e-5 0.00012335951666220181 1.761864311167285e-5 0.0001873437349485446 4.645353598509099e-5 -3.5530449923320618e-6 0.00014254723552485873 -2.6303644547697315e-5 4.005661591405254e-5 -4.95041504410511e-5 3.262371589417253e-5 2.9048661387528156e-5 0.000116947986468257 -0.00014776770788838128 3.320205993706372e-5 7.4480484697138e-5 -2.2321945864940837e-5 2.1713142784319098e-5 1.9327640044975404e-5 0.00010350166027532305 4.666042938806584e-6 0.00012013815191213221 -1.4263631813337436e-5 -3.614279288111362e-5 -4.778083988010678e-5 -0.00010844721785641021; 0.00012510945583593439 1.6366220221551666e-5 2.6071794706096284e-5 -0.0002292761733439514 2.5292097631191662e-5 -0.00011106545076805646 -4.215142383746798e-6 -0.00014278436225473204 0.00012849907710582722 -1.0538062782561645e-5 0.0001832126485499808 -5.0894328193482666e-6 5.690608435210316e-5 -2.0125346567199934e-5 -1.955617393532209e-6 0.00020969985547375172 9.217756484157493e-6 -5.208354429092567e-5 -1.997851592357001e-5 -0.00010448388872069576 -8.920205403827503e-6 -3.207223432006019e-5 -0.00010738169163565525 6.59365081777121e-5 0.00014089126983557595 -6.935870857312072e-5 0.00011378178663684763 2.73976579013286e-5 -0.00014477276505536627 -3.005213017209725e-5 5.180299689194884e-5 7.691116246527768e-5; 5.538993233130405e-5 1.2013121395275781e-5 -0.00013654782864252595 8.461134953903325e-5 3.0126981673657063e-5 -0.0001866333089674399 0.00010116513536558117 -0.00020897041153150129 7.815681115969442e-5 -0.00022899765948751778 5.957330964518243e-5 -2.343077479965612e-5 -2.1783259708183335e-5 -0.00012992579044296065 -2.6510675433865744e-6 -4.1947217450797755e-5 0.00017164245460661758 -0.00014682859668029686 
-6.270823845621428e-5 -2.4218189446592156e-6 5.806723811284637e-7 -0.0001496133966975353 0.00014899951444020555 6.167399502371825e-5 6.43462287007217e-5 2.156421274780055e-5 -7.845862031901837e-5 -9.390322278706127e-5 -0.0001074596487706212 9.94298194745899e-5 3.5058451121565134e-5 -0.00010716452865383237; -0.0003191377717522412 -5.37367159641918e-6 -2.4668771315294324e-5 0.0002046598284286004 -0.0001522184710703305 4.4142137172354915e-5 3.4260509979401333e-5 0.00018585492679984496 -1.5316129887754148e-5 -2.7753702765142216e-5 -7.027586516788974e-5 -0.0001221923111225926 0.00014801379174594998 5.563968906779954e-5 0.00010767002558703952 -4.5898740521908645e-5 9.620453143622839e-6 9.126605172776816e-5 -6.428196038915285e-5 4.424748576265019e-5 -0.00010782323343311245 8.663491198229877e-5 -0.00018453389466604935 9.671578036057946e-5 -0.00011947830617292601 -1.0612961799879316e-5 -7.871953952520206e-6 0.00016270977093507758 -0.00011838625769461324 8.132542743249991e-5 2.7964141823655893e-5 -2.177110664353059e-5; 0.0001377084021509263 8.159131000316127e-6 0.00014750205937833923 -9.148348416838996e-5 -2.3457184027037318e-5 -0.00012808509435908975 -3.121308927130042e-5 -9.638574792244055e-5 2.1943916551562877e-5 -0.00010252862065736459 4.3880926939336946e-5 -2.7056998617454448e-5 4.104674499776066e-5 -0.00011791483356556496 0.00010887833687539623 -3.353196242878853e-5 4.6401841313083126e-6 7.904667782658295e-5 -3.227416399802949e-5 4.67979201925031e-5 -2.8824552457635503e-5 -0.00024034676115898234 7.269563266835809e-5 2.1848854346344296e-5 -0.00017884592906976674 0.0001393872275070107 0.00012041697246549082 3.826126467847036e-5 -4.833069936112578e-6 -0.00011074135053841844 -0.0001416534165333453 6.224205336128725e-5; 5.576822008828056e-5 0.00025591577145627056 -8.700639005161747e-5 -6.975165302043561e-5 -0.00010677312234526383 0.00014461752875598408 9.682826061856272e-5 -7.23531934932611e-5 0.00013398421146797074 -0.00022309657793136115 5.101374922326844e-5 
-0.00010017867457697105 -0.00013455416915582251 -7.145881095545308e-5 9.8864837534567e-5 4.95414482861056e-5 -2.2009391207179958e-5 0.00016627398003714284 1.939828548297046e-5 -8.495580507376246e-6 7.701317779203884e-6 -0.00017832730514106884 2.259041543532805e-5 -6.144958893179827e-5 -0.00019205013955121797 3.2863756539376833e-5 2.5918049184321998e-5 2.8908595093011135e-5 2.7365002504150038e-5 -0.00010480302585398664 4.503860913922144e-5 3.899405641953519e-5], bias = [2.7138095716372747e-9, -1.6498382031809333e-10, -4.608275953969919e-10, -1.9291751862516236e-9, 2.7705720728361014e-9, 3.264485710362863e-10, -4.305414876830443e-10, -2.259400710954987e-9, -5.408371575175171e-10, 7.742756136062776e-10, -1.7777348153795833e-9, 1.5844289897328725e-11, 8.990327688345358e-10, 2.1052335604828207e-9, 2.4371523286227764e-9, -3.6367908381110006e-10, -2.9491734226378475e-9, -1.5301196637787933e-9, -3.159095204734098e-9, 1.193193396044492e-9, -8.514409789703838e-10, -1.0173824693803676e-9, 2.0883858638065282e-9, 1.7282071930744482e-10, -2.428688513980065e-9, 2.9031531937412355e-9, 3.6879709296454276e-9, 6.888843398620999e-10, -2.987999344558813e-9, -1.554363854785273e-10, -1.261098632735822e-9, -5.543848307821684e-10]), layer_4 = (weight = [-0.0007628348698585942 -0.0007968502568538448 -0.0004606481010041707 -0.0006998163852451662 -0.0005239929431246982 -0.000846439777128357 -0.0006807522492192497 -0.0005732331186485315 -0.0007675145653677455 -0.0007083392471198734 -0.0007827243794498699 -0.0006779767132701048 -0.0007039347168316969 -0.0007936663818308618 -0.0007674425274839696 -0.0007663684670847433 -0.0008493740251560428 -0.0006601656832833226 -0.0007198111732472897 -0.0006680203486089178 -0.0005237149715754186 -0.0006873017693977915 -0.0004855881439717459 -0.0006008070585930647 -0.0006401482151674277 -0.0006370621519656112 -0.000804500632374427 -0.0007416157470687397 -0.000705937786320906 -0.0007342948457122147 -0.0006815718149522211 -0.0007291652117940308; 
0.000175943447597476 8.215208935135869e-5 0.0001945096995201535 0.00035162594564089306 0.00027705075610947075 0.00028535946776719076 0.0001426472594428061 0.00041967105742638807 0.0002461702184744789 0.0002888079448299872 0.00031030029206790394 0.00011229382078070643 0.0003426428411950763 0.00032742233636123233 0.00031949468363295594 8.398950082268685e-5 5.457763855494437e-5 0.0003234436096119211 0.00023157069141912902 0.0002875879872335591 0.0001593343377194908 0.00023875543193984836 0.0003288542043758386 0.0003476422657305056 0.00024444716656784975 6.748340284060588e-5 0.0003001225345201752 0.00016772500603453095 0.00023656456266660993 0.00024418851308823634 0.00012948764830723452 0.00014831039840870855], bias = [-0.0007096161531146074, 0.00022516783217494898]))
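The `LineSearches.BackTracking()` option used above implements the Armijo backtracking rule: start from a full step along the search direction and shrink it until the objective decreases sufficiently. A standalone Python sketch of that rule (the function and constants are ours, demonstrated on a 1-D quadratic):

```python
def backtracking_line_search(f, grad, x, direction, alpha=1.0, rho=0.5, c=1e-4):
    # Shrink the step size alpha until the Armijo sufficient-decrease
    # condition holds: f(x + alpha*d) <= f(x) + c*alpha*grad(x).d
    fx = f(x)
    slope = sum(g * d for g, d in zip(grad, direction))
    while f([xi + alpha * di for xi, di in zip(x, direction)]) > fx + c * alpha * slope:
        alpha *= rho
    return alpha

# f(x) = x^2 at x = 1, with descent direction -grad = -2.
f = lambda v: v[0] ** 2
alpha = backtracking_line_search(f, [2.0], [1.0], [-2.0])
```

The full step (alpha = 1) overshoots the minimum without decreasing the objective, so one halving is needed before the condition is satisfied.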

      Visualizing the Results

      Let us now plot the loss over time

      julia
      begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Iteration", ylabel="Loss")
      -
      -    lines!(ax, losses; linewidth=4, alpha=0.75)
      -    scatter!(ax, 1:length(losses), losses; marker=:circle, markersize=12, strokewidth=2)
      -
      -    fig
      -end

Finally, let us visualize the results:

      julia
      prob_nn = ODEProblem(ODE_model, u0, tspan, res.u)
      -soln_nn = Array(solve(prob_nn, RK4(); u0, p=res.u, saveat=tsteps, dt, adaptive=false))
      -waveform_nn_trained = first(compute_waveform(
      -    dt_data, soln_nn, mass_ratio, ode_model_params))
      -
      -begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Time", ylabel="Waveform")
      -
      -    l1 = lines!(ax, tsteps, waveform; linewidth=2, alpha=0.75)
      -    s1 = scatter!(
      -        ax, tsteps, waveform; marker=:circle, alpha=0.5, strokewidth=2, markersize=12)
      -
      -    l2 = lines!(ax, tsteps, waveform_nn; linewidth=2, alpha=0.75)
      -    s2 = scatter!(
      -        ax, tsteps, waveform_nn; marker=:circle, alpha=0.5, strokewidth=2, markersize=12)
      -
      -    l3 = lines!(ax, tsteps, waveform_nn_trained; linewidth=2, alpha=0.75)
      -    s3 = scatter!(ax, tsteps, waveform_nn_trained; marker=:circle,
      -        alpha=0.5, strokewidth=2, markersize=12)
      -
      -    axislegend(ax, [[l1, s1], [l2, s2], [l3, s3]],
      -        ["Waveform Data", "Waveform Neural Net (Untrained)", "Waveform Neural Net"];
      -        position=:lb)
      -
      -    fig
      -end
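Beyond visual comparison, the fit can be quantified with a simple error metric. A hedged sketch, assuming `waveform`, `waveform_nn`, and `waveform_nn_trained` from above are equal-length vectors (the `rmse_*` names are illustrative, not from the original tutorial):

```julia
using Printf, Statistics

# Root-mean-square error between the reference waveform and the
# neural-ODE waveform, before and after training.
rmse_untrained = sqrt(mean(abs2, waveform .- waveform_nn))
rmse_trained = sqrt(mean(abs2, waveform .- waveform_nn_trained))

@printf "RMSE (untrained): %.6e\nRMSE (trained):   %.6e\n" rmse_untrained rmse_trained
```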

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 128 × AMD EPYC 7502 32-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 16 default, 0 interactive, 8 GC (on 16 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 16
      -  JULIA_DEPOT_PATH = /cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 16
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,31))])}const L=p(t,[["render",U]]);export{G as __pageData,L as default}; diff --git a/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.BuiqplW5.lean.js b/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.BuiqplW5.lean.js deleted file mode 100644 index d8db204a75..0000000000 --- a/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.BuiqplW5.lean.js +++ /dev/null @@ -1,805 +0,0 @@ -import{_ as p,c as a,a2 as n,j as s,a as e,o as i}from"./chunks/framework.I-x9Gl6h.js";const G=JSON.parse('{"title":"Training a Neural ODE to Model Gravitational Waveforms","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/advanced/1_GravitationalWaveForm.md","filePath":"tutorials/advanced/1_GravitationalWaveForm.md","lastUpdated":null}'),t={name:"tutorials/advanced/1_GravitationalWaveForm.md"},l={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.819ex",height:"1.658ex",role:"img",focusable:"false",viewBox:"0 -583 4782.1 733","aria-hidden":"true"},r={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},k={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.008ex",height:"1.339ex",role:"img",focusable:"false",viewBox:"0 -442 887.6 592","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.008ex",height:"1.339ex",role:"img",focusable:"false",viewBox:"0 -442 887.6 
592","aria-hidden":"true"},o={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"24.527ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 10840.9 1000","aria-hidden":"true"},C={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},f={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.117ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3587.6 1000","aria-hidden":"true"},g={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.049ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3557.6 1000","aria-hidden":"true"},y={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},v={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.138ex",height:"1.439ex",role:"img",focusable:"false",viewBox:"0 -442 503 636","aria-hidden":"true"},m={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},u={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.378ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 1051 
683","aria-hidden":"true"},I={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.054ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 466 453","aria-hidden":"true"},B={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},T={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.117ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3587.6 1000","aria-hidden":"true"},q={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},D={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.049ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3557.6 1000","aria-hidden":"true"},V={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},b={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.138ex",height:"1.439ex",role:"img",focusable:"false",viewBox:"0 -442 503 636","aria-hidden":"true"},z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},R={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.378ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 1051 
683","aria-hidden":"true"},w={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},x={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.054ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 466 453","aria-hidden":"true"};function U(K,A,P,O,X,j){return i(),a("div",null,[A[41]||(A[41]=n(`

      Training a Neural ODE to Model Gravitational Waveforms

      This code is adapted from Astroinformatics/ScientificMachineLearning

The code has been minimally adapted from Keith et al. (2021), which originally used Flux.jl

      Package Imports

      julia
      using Lux, ComponentArrays, LineSearches, OrdinaryDiffEqLowOrderRK, Optimization,
      -      OptimizationOptimJL, Printf, Random, SciMLSensitivity
      -using CairoMakie
      Precompiling Lux...
      -    491.1 ms  ✓ JLLWrappers
      -    554.4 ms  ✓ Requires
      -    552.6 ms  ✓ Compat
      -    629.1 ms  ✓ DocStringExtensions
      -    634.4 ms  ✓ CpuId
      -    820.3 ms  ✓ Static
      -    401.2 ms  ✓ Compat → CompatLinearAlgebraExt
      -    632.2 ms  ✓ Hwloc_jll
      -    661.6 ms  ✓ OpenSpecFun_jll
      -    431.8 ms  ✓ BitTwiddlingConvenienceFunctions
      -    684.5 ms  ✓ LogExpFunctions
      -    658.8 ms  ✓ Functors
      -   1631.2 ms  ✓ DispatchDoctor
      -   1087.2 ms  ✓ CPUSummary
      -    476.3 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      -   1311.3 ms  ✓ ChainRulesCore
      -   1012.2 ms  ✓ MLDataDevices
      -   1696.2 ms  ✓ StaticArrayInterface
      -    868.5 ms  ✓ PolyesterWeave
      -    545.8 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    551.7 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      -   1443.6 ms  ✓ LuxCore
      -    888.0 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      -    669.0 ms  ✓ CloseOpenIntervals
      -    744.1 ms  ✓ LayoutPointers
      -    822.5 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      -   1334.1 ms  ✓ Optimisers
      -   2495.6 ms  ✓ Hwloc
      -    595.4 ms  ✓ LuxCore → LuxCoreSetfieldExt
      -    597.0 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      -    608.0 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    628.4 ms  ✓ LuxCore → LuxCoreFunctorsExt
      -    745.5 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      -   1681.4 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      -    441.4 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    441.6 ms  ✓ Optimisers → OptimisersAdaptExt
      -   2906.4 ms  ✓ SpecialFunctions
      -   1013.3 ms  ✓ StrideArraysCore
      -    750.8 ms  ✓ Polyester
      -   1703.7 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      -   6816.8 ms  ✓ StaticArrays
      -   2749.4 ms  ✓ WeightInitializers
      -    593.3 ms  ✓ Adapt → AdaptStaticArraysExt
      -    602.7 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      -    617.4 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      -    643.8 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      -    673.6 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -   3505.1 ms  ✓ ForwardDiff
      -    931.9 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      -    832.3 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      -   3181.1 ms  ✓ KernelAbstractions
      -    622.3 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    698.8 ms  ✓ KernelAbstractions → EnzymeExt
      -   5027.9 ms  ✓ NNlib
      -    805.4 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    892.2 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5422.4 ms  ✓ LuxLib
      -   8900.0 ms  ✓ Lux
      -  58 dependencies successfully precompiled in 32 seconds. 51 already precompiled.
      -Precompiling ComponentArrays...
      -    876.0 ms  ✓ ComponentArrays
      -  1 dependency successfully precompiled in 1 seconds. 45 already precompiled.
      -Precompiling MLDataDevicesComponentArraysExt...
      -    497.4 ms  ✓ MLDataDevices → MLDataDevicesComponentArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.
      -Precompiling LuxComponentArraysExt...
      -    526.1 ms  ✓ ComponentArrays → ComponentArraysOptimisersExt
      -   1503.1 ms  ✓ Lux → LuxComponentArraysExt
      -   1876.2 ms  ✓ ComponentArrays → ComponentArraysKernelAbstractionsExt
      -  3 dependencies successfully precompiled in 2 seconds. 111 already precompiled.
      -Precompiling LineSearches...
      -    980.1 ms  ✓ NLSolversBase
      -   1709.8 ms  ✓ LineSearches
      -  2 dependencies successfully precompiled in 3 seconds. 41 already precompiled.
      -Precompiling FiniteDiffStaticArraysExt...
      -    554.4 ms  ✓ FiniteDiff → FiniteDiffStaticArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 21 already precompiled.
      -Precompiling OrdinaryDiffEqLowOrderRK...
      -    417.5 ms  ✓ FastPower
      -    435.1 ms  ✓ MuladdMacro
      -    426.0 ms  ✓ InverseFunctions → InverseFunctionsDatesExt
      -    472.1 ms  ✓ LogExpFunctions → LogExpFunctionsInverseFunctionsExt
      -    527.1 ms  ✓ TruncatedStacktraces
      -    758.9 ms  ✓ PreallocationTools
      -    790.4 ms  ✓ FastBroadcast
      -    631.1 ms  ✓ FastPower → FastPowerForwardDiffExt
      -   1449.2 ms  ✓ RecipesBase
      -   1660.9 ms  ✓ DataStructures
      -   2067.6 ms  ✓ Accessors
      -    736.5 ms  ✓ Accessors → LinearAlgebraExt
      -   1315.0 ms  ✓ SymbolicIndexingInterface
      -   1715.7 ms  ✓ SciMLOperators
      -    495.2 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      -   1955.7 ms  ✓ RecursiveArrayTools
      -    717.5 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      -    797.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsFastBroadcastExt
      -  10992.2 ms  ✓ SciMLBase
      -   5769.1 ms  ✓ DiffEqBase
      -   4436.6 ms  ✓ OrdinaryDiffEqCore
      -   1470.8 ms  ✓ OrdinaryDiffEqCore → OrdinaryDiffEqCoreEnzymeCoreExt
      -   4097.4 ms  ✓ OrdinaryDiffEqLowOrderRK
      -  23 dependencies successfully precompiled in 33 seconds. 102 already precompiled.
      -Precompiling StaticArraysExt...
      -    631.9 ms  ✓ Accessors → StaticArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      -Precompiling MLDataDevicesRecursiveArrayToolsExt...
      -    587.3 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
      -  1 dependency successfully precompiled in 1 seconds. 47 already precompiled.
      -Precompiling ComponentArraysRecursiveArrayToolsExt...
      -    674.5 ms  ✓ ComponentArrays → ComponentArraysRecursiveArrayToolsExt
      -  1 dependency successfully precompiled in 1 seconds. 69 already precompiled.
      -Precompiling ComponentArraysSciMLBaseExt...
      -    949.2 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      -   1088.6 ms  ✓ ComponentArrays → ComponentArraysSciMLBaseExt
      -  2 dependencies successfully precompiled in 1 seconds. 97 already precompiled.
      -Precompiling DiffEqBaseChainRulesCoreExt...
      -   1519.9 ms  ✓ DiffEqBase → DiffEqBaseChainRulesCoreExt
      -  1 dependency successfully precompiled in 2 seconds. 125 already precompiled.
      -Precompiling MLDataDevicesFillArraysExt...
      -    424.0 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling Optimization...
      -    443.0 ms  ✓ ProgressLogging
      -    521.6 ms  ✓ LoggingExtras
      -    619.6 ms  ✓ L_BFGS_B_jll
      -    834.1 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      -    845.3 ms  ✓ ProgressMeter
      -    896.2 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      -    663.1 ms  ✓ TerminalLoggers
      -    509.3 ms  ✓ LBFGSB
      -   1243.6 ms  ✓ SparseMatrixColorings
      -    455.1 ms  ✓ ConsoleProgressMonitor
      -    813.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseMatrixColoringsExt
      -   3586.2 ms  ✓ SparseConnectivityTracer
      -   2116.4 ms  ✓ OptimizationBase
      -   1946.0 ms  ✓ Optimization
      -  14 dependencies successfully precompiled in 8 seconds. 90 already precompiled.
      -Precompiling ChainRulesCoreSparseArraysExt...
      -    609.2 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 11 already precompiled.
      -Precompiling SparseArraysExt...
      -    891.2 ms  ✓ KernelAbstractions → SparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 26 already precompiled.
      -Precompiling MLDataDevicesSparseArraysExt...
      -    640.6 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      -Precompiling DiffEqBaseSparseArraysExt...
      -   1626.5 ms  ✓ DiffEqBase → DiffEqBaseSparseArraysExt
      -  1 dependency successfully precompiled in 2 seconds. 125 already precompiled.
      -Precompiling DifferentiationInterfaceChainRulesCoreExt...
      -    384.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      -  1 dependency successfully precompiled in 0 seconds. 11 already precompiled.
      -Precompiling DifferentiationInterfaceStaticArraysExt...
      -    576.0 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
      -Precompiling DifferentiationInterfaceForwardDiffExt...
      -    757.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      -  1 dependency successfully precompiled in 1 seconds. 28 already precompiled.
      -Precompiling SparseConnectivityTracerSpecialFunctionsExt...
      -   1150.6 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerLogExpFunctionsExt
      -   1534.0 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerSpecialFunctionsExt
      -  2 dependencies successfully precompiled in 2 seconds. 26 already precompiled.
      -Precompiling SparseConnectivityTracerNNlibExt...
      -   1644.8 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNNlibExt
      -  1 dependency successfully precompiled in 2 seconds. 46 already precompiled.
      -Precompiling SparseConnectivityTracerNaNMathExt...
      -   1205.7 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNaNMathExt
      -  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      -Precompiling OptimizationForwardDiffExt...
      -    602.4 ms  ✓ OptimizationBase → OptimizationForwardDiffExt
      -  1 dependency successfully precompiled in 1 seconds. 110 already precompiled.
      -Precompiling OptimizationMLDataDevicesExt...
      -   1376.5 ms  ✓ OptimizationBase → OptimizationMLDataDevicesExt
      -  1 dependency successfully precompiled in 2 seconds. 97 already precompiled.
      -Precompiling HwlocTrees...
      -    496.8 ms  ✓ Hwloc → HwlocTrees
      -  1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
      -Precompiling OptimizationOptimJL...
      -    477.3 ms  ✓ SortingAlgorithms
      -   2165.1 ms  ✓ StatsBase
      -   2976.7 ms  ✓ Optim
      -  12287.9 ms  ✓ OptimizationOptimJL
      -  4 dependencies successfully precompiled in 18 seconds. 136 already precompiled.
      -Precompiling SciMLSensitivity...
      -    519.6 ms  ✓ StructIO
      -    536.7 ms  ✓ HashArrayMappedTries
      -    548.7 ms  ✓ PoissonRandom
      -    590.7 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      -    608.4 ms  ✓ Scratch
      -    721.9 ms  ✓ Accessors → StructArraysExt
      -    840.3 ms  ✓ Rmath_jll
      -    924.0 ms  ✓ oneTBB_jll
      -   1345.0 ms  ✓ Cassette
      -    866.8 ms  ✓ ResettableStacks
      -   1475.2 ms  ✓ KLU
      -    988.3 ms  ✓ StructArrays → StructArraysStaticArraysExt
      -    906.0 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      -   1794.3 ms  ✓ FastLapackInterface
      -    663.2 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      -   1578.8 ms  ✓ LazyArtifacts
      -   1610.7 ms  ✓ ZygoteRules
      -   1462.2 ms  ✓ QuadGK
      -   1684.4 ms  ✓ HypergeometricFunctions
      -   1297.9 ms  ✓ HostCPUFeatures
      -    561.4 ms  ✓ ScopedValues
      -   1155.1 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      -   2935.7 ms  ✓ IRTools
      -    777.8 ms  ✓ FunctionProperties
      -   3341.8 ms  ✓ TimerOutputs
      -   1411.2 ms  ✓ Rmath
      -   2171.9 ms  ✓ IntelOpenMP_jll
      -   2295.5 ms  ✓ LLVMExtra_jll
      -   2445.8 ms  ✓ Enzyme_jll
      -   5200.1 ms  ✓ Test
      -   3138.1 ms  ✓ ObjectFile
      -   4434.5 ms  ✓ SciMLJacobianOperators
      -    843.7 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      -    901.1 ms  ✓ Accessors → TestExt
      -   2624.9 ms  ✓ StatsFuns
      -   2008.0 ms  ✓ MKL_jll
      -   1606.2 ms  ✓ Sparspak
      -   1703.8 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      -    859.0 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      -   7505.6 ms  ✓ ChainRules
      -   1813.9 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -   6845.9 ms  ✓ Tracker
      -   8709.5 ms  ✓ Krylov
      -   6699.2 ms  ✓ DiffEqCallbacks
      -    919.4 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      -   1322.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      -   1538.6 ms  ✓ FastPower → FastPowerTrackerExt
      -   1616.7 ms  ✓ ArrayInterface → ArrayInterfaceTrackerExt
      -   1639.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      -   1811.3 ms  ✓ Tracker → TrackerPDMatsExt
      -   2806.7 ms  ✓ DiffEqBase → DiffEqBaseTrackerExt
      -   8958.9 ms  ✓ VectorizationBase
      -   5930.2 ms  ✓ Distributions
      -   7162.7 ms  ✓ LLVM
      -   1136.5 ms  ✓ SLEEFPirates
      -   1474.0 ms  ✓ Distributions → DistributionsTestExt
      -   1476.9 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   1903.3 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      -   2017.8 ms  ✓ DiffEqBase → DiffEqBaseDistributionsExt
      -  14531.2 ms  ✓ ArrayLayouts
      -    781.6 ms  ✓ ArrayLayouts → ArrayLayoutsSparseArraysExt
      -   2432.6 ms  ✓ LazyArrays
      -   3963.6 ms  ✓ DiffEqNoiseProcess
      -   4622.0 ms  ✓ GPUArrays
      -   1307.4 ms  ✓ LazyArrays → LazyArraysStaticArraysExt
      -  17512.8 ms  ✓ ReverseDiff
      -   3478.5 ms  ✓ FastPower → FastPowerReverseDiffExt
      -   3482.2 ms  ✓ ArrayInterface → ArrayInterfaceReverseDiffExt
      -   3654.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceReverseDiffExt
      -   4734.7 ms  ✓ PreallocationTools → PreallocationToolsReverseDiffExt
      -   4958.2 ms  ✓ DiffEqBase → DiffEqBaseReverseDiffExt
      -   5063.9 ms  ✓ DiffEqNoiseProcess → DiffEqNoiseProcessReverseDiffExt
      -  18538.1 ms  ✓ GPUCompiler
      -  21384.0 ms  ✓ LoopVectorization
      -   1165.2 ms  ✓ LoopVectorization → SpecialFunctionsExt
      -   1305.6 ms  ✓ LoopVectorization → ForwardDiffExt
      -   3945.1 ms  ✓ TriangularSolve
      -  29507.5 ms  ✓ Zygote
      -   1612.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceZygoteExt
      -   1946.3 ms  ✓ Zygote → ZygoteTrackerExt
      -   3108.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsZygoteExt
      -   3504.4 ms  ✓ SciMLBase → SciMLBaseZygoteExt
      -  16228.7 ms  ✓ RecursiveFactorization
      -   5413.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsReverseDiffExt
      -  30267.2 ms  ✓ LinearSolve
      -   2570.3 ms  ✓ LinearSolve → LinearSolveRecursiveArrayToolsExt
      -   2623.4 ms  ✓ LinearSolve → LinearSolveEnzymeExt
      -   4099.1 ms  ✓ LinearSolve → LinearSolveKernelAbstractionsExt
      - 199431.2 ms  ✓ Enzyme
      -   7082.8 ms  ✓ FastPower → FastPowerEnzymeExt
      -   7095.7 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
      -   7163.5 ms  ✓ QuadGK → QuadGKEnzymeExt
      -   7185.4 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
      -   7190.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceEnzymeExt
      -   7370.6 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
      -  17743.4 ms  ✓ Enzyme → EnzymeStaticArraysExt
      -  19363.4 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
      -  17569.3 ms  ✓ DiffEqBase → DiffEqBaseEnzymeExt
      -  29561.6 ms  ✓ SciMLSensitivity
      -  99 dependencies successfully precompiled in 279 seconds. 192 already precompiled.
      -Precompiling LuxLibSLEEFPiratesExt...
      -   2421.1 ms  ✓ LuxLib → LuxLibSLEEFPiratesExt
      -  1 dependency successfully precompiled in 3 seconds. 97 already precompiled.
      -Precompiling LuxLibLoopVectorizationExt...
      -   4597.0 ms  ✓ LuxLib → LuxLibLoopVectorizationExt
      -  1 dependency successfully precompiled in 5 seconds. 105 already precompiled.
      -Precompiling LuxLibEnzymeExt...
      -   1326.9 ms  ✓ LuxLib → LuxLibEnzymeExt
      -  1 dependency successfully precompiled in 2 seconds. 130 already precompiled.
      -Precompiling LuxEnzymeExt...
      -   7532.3 ms  ✓ Lux → LuxEnzymeExt
      -  1 dependency successfully precompiled in 8 seconds. 146 already precompiled.
      -Precompiling OptimizationEnzymeExt...
      -  20489.3 ms  ✓ OptimizationBase → OptimizationEnzymeExt
      -  1 dependency successfully precompiled in 21 seconds. 109 already precompiled.
      -Precompiling MLDataDevicesTrackerExt...
      -   1148.9 ms  ✓ MLDataDevices → MLDataDevicesTrackerExt
      -  1 dependency successfully precompiled in 1 seconds. 59 already precompiled.
      -Precompiling LuxLibTrackerExt...
      -   1074.6 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      -   3236.9 ms  ✓ LuxLib → LuxLibTrackerExt
      -  2 dependencies successfully precompiled in 3 seconds. 100 already precompiled.
      -Precompiling LuxTrackerExt...
      -   2038.3 ms  ✓ Lux → LuxTrackerExt
      -  1 dependency successfully precompiled in 2 seconds. 114 already precompiled.
      -Precompiling ComponentArraysTrackerExt...
      -   1155.5 ms  ✓ ComponentArrays → ComponentArraysTrackerExt
      -  1 dependency successfully precompiled in 1 seconds. 70 already precompiled.
      -Precompiling MLDataDevicesReverseDiffExt...
      -   3498.5 ms  ✓ MLDataDevices → MLDataDevicesReverseDiffExt
      -  1 dependency successfully precompiled in 4 seconds. 49 already precompiled.
      -Precompiling LuxLibReverseDiffExt...
      -   3387.4 ms  ✓ LuxCore → LuxCoreArrayInterfaceReverseDiffExt
      -   4268.7 ms  ✓ LuxLib → LuxLibReverseDiffExt
      -  2 dependencies successfully precompiled in 4 seconds. 98 already precompiled.
      -Precompiling ComponentArraysReverseDiffExt...
      -   3584.6 ms  ✓ ComponentArrays → ComponentArraysReverseDiffExt
      -  1 dependency successfully precompiled in 4 seconds. 57 already precompiled.
      -Precompiling OptimizationReverseDiffExt...
      -   3383.0 ms  ✓ OptimizationBase → OptimizationReverseDiffExt
      -  1 dependency successfully precompiled in 4 seconds. 130 already precompiled.
      -Precompiling LuxReverseDiffExt...
      -   4398.6 ms  ✓ Lux → LuxReverseDiffExt
      -  1 dependency successfully precompiled in 5 seconds. 115 already precompiled.
      -Precompiling MLDataDevicesChainRulesExt...
      -    793.9 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      -  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      -Precompiling MLDataDevicesZygoteExt...
      -   1552.7 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
      -   1579.9 ms  ✓ MLDataDevices → MLDataDevicesGPUArraysExt
      -  2 dependencies successfully precompiled in 2 seconds. 108 already precompiled.
      -Precompiling LuxZygoteExt...
      -   1645.8 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      -   2696.5 ms  ✓ Lux → LuxZygoteExt
      -  2 dependencies successfully precompiled in 3 seconds. 165 already precompiled.
      -Precompiling ComponentArraysZygoteExt...
      -   1549.5 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      -   1826.0 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
      -  2 dependencies successfully precompiled in 2 seconds. 116 already precompiled.
      -Precompiling OptimizationZygoteExt...
      -   2164.3 ms  ✓ OptimizationBase → OptimizationZygoteExt
      -  1 dependency successfully precompiled in 2 seconds. 160 already precompiled.
      -Precompiling CairoMakie...
      -    532.3 ms  ✓ RangeArrays
      -    515.1 ms  ✓ PolygonOps
      -    516.4 ms  ✓ IndirectArrays
      -    524.0 ms  ✓ LaTeXStrings
      -    565.6 ms  ✓ GeoFormatTypes
      -    571.2 ms  ✓ Contour
      -    591.2 ms  ✓ TensorCore
      -    628.0 ms  ✓ TriplotBase
      -    639.0 ms  ✓ StableRNGs
      -    691.0 ms  ✓ Extents
      -    703.5 ms  ✓ Observables
      -    742.1 ms  ✓ IntervalSets
      -    744.9 ms  ✓ RoundingEmulator
      -    834.5 ms  ✓ IterTools
      -    457.4 ms  ✓ CRC32c
      -    526.0 ms  ✓ Ratios
      -    550.6 ms  ✓ LazyModules
      -    601.2 ms  ✓ PCRE2_jll
      -   1160.8 ms  ✓ Grisu
      -    615.2 ms  ✓ Inflate
      -    599.2 ms  ✓ MappedArrays
      -    545.5 ms  ✓ RelocatableFolders
      -    780.8 ms  ✓ TranscodingStreams
      -   1577.5 ms  ✓ Format
      -    984.0 ms  ✓ SharedArrays
      -    876.0 ms  ✓ OpenSSL_jll
      -    805.3 ms  ✓ Graphite2_jll
      -    836.5 ms  ✓ LLVMOpenMP_jll
      -    815.4 ms  ✓ Bzip2_jll
      -    876.1 ms  ✓ Libmount_jll
      -    831.4 ms  ✓ libfdk_aac_jll
      -    924.3 ms  ✓ Xorg_libXau_jll
      -    839.4 ms  ✓ Imath_jll
      -    872.4 ms  ✓ libpng_jll
      -    817.1 ms  ✓ Giflib_jll
      -    989.0 ms  ✓ LAME_jll
      -   1592.0 ms  ✓ SimpleTraits
      -    855.9 ms  ✓ LERC_jll
      -    850.5 ms  ✓ EarCut_jll
      -    833.9 ms  ✓ CRlibm_jll
      -    929.8 ms  ✓ JpegTurbo_jll
      -    839.7 ms  ✓ Ogg_jll
      -    847.5 ms  ✓ x265_jll
      -    930.1 ms  ✓ XZ_jll
      -    838.0 ms  ✓ Xorg_libXdmcp_jll
      -    854.5 ms  ✓ x264_jll
      -    875.3 ms  ✓ libaom_jll
      -    885.1 ms  ✓ Zstd_jll
      -    861.8 ms  ✓ Expat_jll
      -   2356.2 ms  ✓ UnicodeFun
      -    838.8 ms  ✓ LZO_jll
      -    841.3 ms  ✓ Opus_jll
      -    717.8 ms  ✓ Xorg_xtrans_jll
      -    838.3 ms  ✓ Libffi_jll
      -    872.3 ms  ✓ Libiconv_jll
      -    818.6 ms  ✓ Libgpg_error_jll
      -    755.5 ms  ✓ Xorg_libpthread_stubs_jll
      -    856.0 ms  ✓ isoband_jll
      -    867.2 ms  ✓ FFTW_jll
      -    845.3 ms  ✓ FriBidi_jll
      -    829.6 ms  ✓ Libuuid_jll
      -    532.0 ms  ✓ IntervalSets → IntervalSetsRandomExt
      -    524.7 ms  ✓ IntervalSets → IntervalSetsStatisticsExt
      -    527.5 ms  ✓ ConstructionBase → ConstructionBaseIntervalSetsExt
      -    547.6 ms  ✓ Ratios → RatiosFixedPointNumbersExt
      -    572.0 ms  ✓ Showoff
      -    656.4 ms  ✓ MosaicViews
      -   1510.9 ms  ✓ FilePathsBase
      -    855.5 ms  ✓ Pixman_jll
      -   1453.4 ms  ✓ GeoInterface
      -    887.6 ms  ✓ FreeType2_jll
      -   1007.6 ms  ✓ OpenEXR_jll
      -   1835.2 ms  ✓ ColorBrewer
      -    912.5 ms  ✓ libsixel_jll
      -    943.6 ms  ✓ libvorbis_jll
      -    935.4 ms  ✓ Libtiff_jll
      -    645.9 ms  ✓ Isoband
      -    962.7 ms  ✓ XML2_jll
      -    879.5 ms  ✓ Libgcrypt_jll
      -    844.9 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      -   1120.0 ms  ✓ AxisArrays
      -   2999.8 ms  ✓ ColorVectorSpace
      -   1064.5 ms  ✓ FilePaths
      -   1131.8 ms  ✓ Fontconfig_jll
      -   1151.7 ms  ✓ Gettext_jll
      -   2947.0 ms  ✓ Interpolations
      -   1440.1 ms  ✓ FreeType
      -   1035.2 ms  ✓ XSLT_jll
      -   1822.2 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -   1024.1 ms  ✓ ColorVectorSpace → SpecialFunctionsExt
      -   3759.5 ms  ✓ IntervalArithmetic
      -   1256.5 ms  ✓ Glib_jll
      -   5306.1 ms  ✓ PkgVersion
      -   5392.6 ms  ✓ FileIO
      -    840.7 ms  ✓ IntervalArithmetic → IntervalArithmeticIntervalSetsExt
      -   1719.0 ms  ✓ Xorg_libxcb_jll
      -    658.4 ms  ✓ Xorg_libX11_jll
      -    647.1 ms  ✓ Xorg_libXext_jll
      -    817.7 ms  ✓ Xorg_libXrender_jll
      -   1689.9 ms  ✓ QOI
      -   4094.7 ms  ✓ ColorSchemes
      -    912.6 ms  ✓ Libglvnd_jll
      -   2345.9 ms  ✓ OpenEXR
      -   1076.1 ms  ✓ Cairo_jll
      -   1550.8 ms  ✓ libwebp_jll
      -   1399.6 ms  ✓ HarfBuzz_jll
      -   7925.6 ms  ✓ FFTW
      -   7438.4 ms  ✓ GeometryBasics
      -   6105.2 ms  ✓ ExactPredicates
      -  10882.4 ms  ✓ SIMD
      -   1349.5 ms  ✓ libass_jll
      -   1414.4 ms  ✓ Pango_jll
      -   1736.2 ms  ✓ Packing
      -   2325.6 ms  ✓ ShaderAbstractions
      -   1367.2 ms  ✓ FFMPEG_jll
      -   2916.6 ms  ✓ FreeTypeAbstraction
      -   1742.4 ms  ✓ Cairo
      -   3073.0 ms  ✓ KernelDensity
      -   5911.6 ms  ✓ MakieCore
      -   6151.3 ms  ✓ DelaunayTriangulation
      -   7962.2 ms  ✓ GridLayoutBase
      -  11568.1 ms  ✓ PlotUtils
      -  22893.7 ms  ✓ Unitful
      -    586.3 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      -    590.6 ms  ✓ Unitful → InverseFunctionsUnitfulExt
      -   1425.8 ms  ✓ Interpolations → InterpolationsUnitfulExt
      -  12104.4 ms  ✓ Automa
      -  21254.1 ms  ✓ ImageCore
      -   2100.7 ms  ✓ ImageBase
      -   2629.4 ms  ✓ WebP
      -   3426.7 ms  ✓ PNGFiles
      -   3607.1 ms  ✓ JpegTurbo
      -   2191.5 ms  ✓ ImageAxes
      -   4724.6 ms  ✓ Sixel
      -   1163.4 ms  ✓ ImageMetadata
      -   1999.8 ms  ✓ Netpbm
      -  12307.5 ms  ✓ MathTeXEngine
      -  49319.1 ms  ✓ TiffImages
      -   1176.4 ms  ✓ ImageIO
      - 112829.4 ms  ✓ Makie
      -  74538.7 ms  ✓ CairoMakie
      -  141 dependencies successfully precompiled in 252 seconds. 129 already precompiled.
      -Precompiling SparseMatrixColoringsColorsExt...
      -    869.7 ms  ✓ SparseMatrixColorings → SparseMatrixColoringsColorsExt
      -  1 dependency successfully precompiled in 1 seconds. 29 already precompiled.
      -Precompiling ZygoteColorsExt...
      -   1732.9 ms  ✓ Zygote → ZygoteColorsExt
      -  1 dependency successfully precompiled in 2 seconds. 105 already precompiled.
      -Precompiling IntervalSetsExt...
      -    784.1 ms  ✓ Accessors → IntervalSetsExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling IntervalSetsRecipesBaseExt...
      -    515.5 ms  ✓ IntervalSets → IntervalSetsRecipesBaseExt
      -  1 dependency successfully precompiled in 1 seconds. 9 already precompiled.
      -Precompiling UnitfulExt...
      -    585.0 ms  ✓ Accessors → UnitfulExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling DiffEqBaseUnitfulExt...
      -   1538.2 ms  ✓ DiffEqBase → DiffEqBaseUnitfulExt
      -  1 dependency successfully precompiled in 2 seconds. 123 already precompiled.
      -Precompiling NNlibFFTWExt...
      -    860.7 ms  ✓ NNlib → NNlibFFTWExt
      -  1 dependency successfully precompiled in 1 seconds. 54 already precompiled.
      -Precompiling IntervalArithmeticForwardDiffExt...
      -    452.2 ms  ✓ IntervalArithmetic → IntervalArithmeticDiffRulesExt
      -    642.3 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
      -  2 dependencies successfully precompiled in 1 seconds. 42 already precompiled.
      -Precompiling IntervalArithmeticRecipesBaseExt...
      -    756.5 ms  ✓ IntervalArithmetic → IntervalArithmeticRecipesBaseExt
      -  1 dependency successfully precompiled in 1 seconds. 31 already precompiled.
      -Precompiling SciMLBaseMakieExt...
      -   9269.6 ms  ✓ SciMLBase → SciMLBaseMakieExt
      -  1 dependency successfully precompiled in 10 seconds. 303 already precompiled.

      Define some Utility Functions

      Tip

      This section can be skipped. It defines the functions needed to simulate the model; from a scientific machine learning perspective it isn't especially relevant.

      `,8)),s("p",null,[A[6]||(A[6]=e("We need a very crude 2-body path. Assume the 1-body motion is a newtonian 2-body position vector ")),s("mjx-container",l,[(i(),a("svg",h,A[0]||(A[0]=[n('',1)]))),A[1]||(A[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"r"),s("mo",null,"="),s("msub",null,[s("mi",null,"r"),s("mn",null,"1")]),s("mo",null,"−"),s("msub",null,[s("mi",null,"r"),s("mn",null,"2")])])],-1))]),A[7]||(A[7]=e(" and use Newtonian formulas to get ")),s("mjx-container",r,[(i(),a("svg",k,A[2]||(A[2]=[n('',1)]))),A[3]||(A[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"r"),s("mn",null,"1")])])],-1))]),A[8]||(A[8]=e(", ")),s("mjx-container",E,[(i(),a("svg",d,A[4]||(A[4]=[n('',1)]))),A[5]||(A[5]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"r"),s("mn",null,"2")])])],-1))]),A[9]||(A[9]=e(" (e.g. Theoretical Mechanics of Particles and Continua 4.3)"))]),A[42]||(A[42]=n(`
      julia
      function one2two(path, m₁, m₂)
      -    M = m₁ + m₂
      -    r₁ = m₂ / M .* path
      -    r₂ = -m₁ / M .* path
      -    return r₁, r₂
      -end
      one2two (generic function with 1 method)
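      A quick illustrative check (hypothetical values): with equal unit masses the two bodies sit symmetrically about the center of mass, and their difference recovers the relative path.

      ```julia
      # Hypothetical 2×3 relative trajectory (rows are x and y components)
      path = [1.0 0.0 -1.0; 0.0 1.0 0.0]
      r₁, r₂ = one2two(path, 1.0, 1.0)
      # With m₁ = m₂ = 1: r₁ == path ./ 2, r₂ == -path ./ 2, so r₁ .- r₂ == path
      ```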
      `,2)),s("p",null,[A[12]||(A[12]=e("Next we define a function to perform the change of variables: ")),s("mjx-container",o,[(i(),a("svg",Q,A[10]||(A[10]=[n('',1)]))),A[11]||(A[11]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{stretchy:"false"},"("),s("mi",null,"χ"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"ϕ"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},"↦"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"y"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},")")])],-1))])]),A[43]||(A[43]=n(`
      julia
      @views function soln2orbit(soln, model_params=nothing)
      -    @assert size(soln, 1) ∈ [2, 4] "size(soln,1) must be either 2 or 4"
      -
      -    if size(soln, 1) == 2
      -        χ = soln[1, :]
      -        ϕ = soln[2, :]
      -
      -        @assert length(model_params)==3 "model_params must have length 3 when size(soln,1) == 2"
      -        p, M, e = model_params
      -    else
      -        χ = soln[1, :]
      -        ϕ = soln[2, :]
      -        p = soln[3, :]
      -        e = soln[4, :]
      -    end
      -
      -    r = p ./ (1 .+ e .* cos.(χ))
      -    x = r .* cos.(ϕ)
      -    y = r .* sin.(ϕ)
      -
      -    orbit = vcat(x', y')
      -    return orbit
      -end
      soln2orbit (generic function with 2 methods)
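      As a small illustrative sanity check (hypothetical values): with χ = ϕ = 0 and (p, M, e) = (100, 1, 0.5), the radius is r = p / (1 + e) = 100 / 1.5, so the orbit collapses to the single point (r, 0).

      ```julia
      soln = reshape([0.0, 0.0], 2, 1)             # one time step of (χ, ϕ)
      orbit = soln2orbit(soln, (100.0, 1.0, 0.5))  # 2×1 matrix [x; y] ≈ [66.67; 0.0]
      ```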

      Next, d_dt computes the first time derivative, using central differences in the interior and second-order one-sided difference stencils at the endpoints; see https://doi.org/10.1090/S0025-5718-1988-0935077-0

      julia
      function d_dt(v::AbstractVector, dt)
      -    a = -3 / 2 * v[1] + 2 * v[2] - 1 / 2 * v[3]
      -    b = (v[3:end] .- v[1:(end - 2)]) / 2
      -    c = 3 / 2 * v[end] - 2 * v[end - 1] + 1 / 2 * v[end - 2]
      -    return [a; b; c] / dt
      -end
      d_dt (generic function with 1 method)
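      An illustrative check of the stencils: second-order differences are exact for quadratics, so for v(t) = t² the computed derivative matches 2t to floating-point precision.

      ```julia
      dt = 0.1
      t = collect(0.0:dt:1.0)
      v = t .^ 2
      err = maximum(abs.(d_dt(v, dt) .- 2 .* t))  # ≈ 0 up to round-off
      ```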

      Likewise, d2_dt2 computes the second time derivative, again falling back to second-order one-sided difference stencils at the endpoints; see https://doi.org/10.1090/S0025-5718-1988-0935077-0

      julia
      function d2_dt2(v::AbstractVector, dt)
      -    a = 2 * v[1] - 5 * v[2] + 4 * v[3] - v[4]
      -    b = v[1:(end - 2)] .- 2 * v[2:(end - 1)] .+ v[3:end]
      -    c = 2 * v[end] - 5 * v[end - 1] + 4 * v[end - 2] - v[end - 3]
      -    return [a; b; c] / (dt^2)
      -end
      d2_dt2 (generic function with 1 method)
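      The same illustrative check applies here: for v(t) = t² the second derivative is identically 2, and both the interior and boundary stencils reproduce it exactly.

      ```julia
      dt = 0.1
      t = collect(0.0:dt:1.0)
      v = t .^ 2
      d2 = d2_dt2(v, dt)  # every entry ≈ 2.0 up to round-off
      ```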

      Now we define a function to compute the trace-free moment tensor from the orbit

      julia
      function orbit2tensor(orbit, component, mass=1)
      -    x = orbit[1, :]
      -    y = orbit[2, :]
      -
      -    Ixx = x .^ 2
      -    Iyy = y .^ 2
      -    Ixy = x .* y
      -    trace = Ixx .+ Iyy
      -
      -    if component[1] == 1 && component[2] == 1
      -        tmp = Ixx .- trace ./ 3
      -    elseif component[1] == 2 && component[2] == 2
      -        tmp = Iyy .- trace ./ 3
      -    else
      -        tmp = Ixy
      -    end
      -
      -    return mass .* tmp
      -end
      -
      -function h_22_quadrupole_components(dt, orbit, component, mass=1)
      -    mtensor = orbit2tensor(orbit, component, mass)
      -    mtensor_ddot = d2_dt2(mtensor, dt)
      -    return 2 * mtensor_ddot
      -end
      -
      -function h_22_quadrupole(dt, orbit, mass=1)
      -    h11 = h_22_quadrupole_components(dt, orbit, (1, 1), mass)
      -    h22 = h_22_quadrupole_components(dt, orbit, (2, 2), mass)
      -    h12 = h_22_quadrupole_components(dt, orbit, (1, 2), mass)
      -    return h11, h12, h22
      -end
      -
      -function h_22_strain_one_body(dt::T, orbit) where {T}
      -    h11, h12, h22 = h_22_quadrupole(dt, orbit)
      -
      -    h₊ = h11 - h22
      -    hₓ = T(2) * h12
      -
      -    scaling_const = √(T(π) / 5)
      -    return scaling_const * h₊, -scaling_const * hₓ
      -end
      -
      -function h_22_quadrupole_two_body(dt, orbit1, mass1, orbit2, mass2)
      -    h11_1, h12_1, h22_1 = h_22_quadrupole(dt, orbit1, mass1)
      -    h11_2, h12_2, h22_2 = h_22_quadrupole(dt, orbit2, mass2)
      -    h11 = h11_1 + h11_2
      -    h12 = h12_1 + h12_2
      -    h22 = h22_1 + h22_2
      -    return h11, h12, h22
      -end
      -
      -function h_22_strain_two_body(dt::T, orbit1, mass1, orbit2, mass2) where {T}
      -    # compute (2,2) mode strain from orbits of BH 1 of mass1 and BH2 of mass 2
      -
      -    @assert abs(mass1 + mass2 - 1.0) < 1e-12 "Masses do not sum to unity"
      -
      -    h11, h12, h22 = h_22_quadrupole_two_body(dt, orbit1, mass1, orbit2, mass2)
      -
      -    h₊ = h11 - h22
      -    hₓ = T(2) * h12
      -
      -    scaling_const = √(T(π) / 5)
      -    return scaling_const * h₊, -scaling_const * hₓ
      -end
      -
      -function compute_waveform(dt::T, soln, mass_ratio, model_params=nothing) where {T}
      -    @assert mass_ratio ≤ 1 "mass_ratio must be <= 1"
      -    @assert mass_ratio ≥ 0 "mass_ratio must be non-negative"
      -
      -    orbit = soln2orbit(soln, model_params)
      -    if mass_ratio > 0
      -        m₂ = inv(T(1) + mass_ratio)
      -        m₁ = mass_ratio * m₂
      -
      -        orbit₁, orbit₂ = one2two(orbit, m₁, m₂)
      -        waveform = h_22_strain_two_body(dt, orbit₁, m₁, orbit₂, m₂)
      -    else
      -        waveform = h_22_strain_one_body(dt, orbit)
      -    end
      -    return waveform
      -end
      compute_waveform (generic function with 2 methods)

      Simulating the True Model

      RelativisticOrbitModel defines the system of ODEs that describes the motion of a point-like particle in a Schwarzschild background, using

      `,13)),s("mjx-container",C,[(i(),a("svg",f,A[13]||(A[13]=[n('',1)]))),A[14]||(A[14]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"1"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"χ")])],-1))]),s("mjx-container",g,[(i(),a("svg",c,A[15]||(A[15]=[n('',1)]))),A[16]||(A[16]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"2"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"ϕ")])],-1))]),s("p",null,[A[23]||(A[23]=e("where, ")),s("mjx-container",y,[(i(),a("svg",v,A[17]||(A[17]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D45D",d:"M23 287Q24 290 25 295T30 317T40 348T55 381T75 411T101 433T134 442Q209 442 230 378L240 387Q302 442 358 442Q423 442 460 395T497 281Q497 173 421 82T249 -10Q227 -10 210 -4Q199 1 187 11T168 28L161 36Q160 35 139 -51T118 -138Q118 -144 126 -145T163 -148H188Q194 -155 194 -157T191 -175Q188 -187 185 -190T172 -194Q170 -194 161 -194T127 -193T65 -192Q-5 -192 -24 
-194H-32Q-39 -187 -39 -183Q-37 -156 -26 -148H-6Q28 -147 33 -136Q36 -130 94 103T155 350Q156 355 156 364Q156 405 131 405Q109 405 94 377T71 316T59 280Q57 278 43 278H29Q23 284 23 287ZM178 102Q200 26 252 26Q282 26 310 49T356 107Q374 141 392 215T411 325V331Q411 405 350 405Q339 405 328 402T306 393T286 380T269 365T254 350T243 336T235 326L232 322Q232 321 229 308T218 264T204 212Q178 106 178 102Z",style:{"stroke-width":"3"}})])])],-1)]))),A[18]||(A[18]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"p")])],-1))]),A[24]||(A[24]=e(", ")),s("mjx-container",m,[(i(),a("svg",u,A[19]||(A[19]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D440",d:"M289 629Q289 635 232 637Q208 637 201 638T194 648Q194 649 196 659Q197 662 198 666T199 671T201 676T203 679T207 681T212 683T220 683T232 684Q238 684 262 684T307 683Q386 683 398 683T414 678Q415 674 451 396L487 117L510 154Q534 190 574 254T662 394Q837 673 839 675Q840 676 842 678T846 681L852 683H948Q965 683 988 683T1017 684Q1051 684 1051 673Q1051 668 1048 656T1045 643Q1041 637 1008 637Q968 636 957 634T939 623Q936 618 867 340T797 59Q797 55 798 54T805 50T822 48T855 46H886Q892 37 892 35Q892 19 885 5Q880 0 869 0Q864 0 828 1T736 2Q675 2 644 2T609 1Q592 1 592 11Q592 13 594 25Q598 41 602 43T625 46Q652 46 685 49Q699 52 704 61Q706 65 742 207T813 490T848 631L654 322Q458 10 453 5Q451 4 449 3Q444 0 433 0Q418 0 415 7Q413 11 374 317L335 624L267 354Q200 88 200 79Q206 46 272 46H282Q288 41 289 37T286 19Q282 3 278 1Q274 0 267 0Q265 0 255 
0T221 1T157 2Q127 2 95 1T58 0Q43 0 39 2T35 11Q35 13 38 25T43 40Q45 46 65 46Q135 46 154 86Q158 92 223 354T289 629Z",style:{"stroke-width":"3"}})])])],-1)]))),A[20]||(A[20]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"M")])],-1))]),A[25]||(A[25]=e(", and ")),s("mjx-container",I,[(i(),a("svg",F,A[21]||(A[21]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D452",d:"M39 168Q39 225 58 272T107 350T174 402T244 433T307 442H310Q355 442 388 420T421 355Q421 265 310 237Q261 224 176 223Q139 223 138 221Q138 219 132 186T125 128Q125 81 146 54T209 26T302 45T394 111Q403 121 406 121Q410 121 419 112T429 98T420 82T390 55T344 24T281 -1T205 -11Q126 -11 83 42T39 168ZM373 353Q367 405 305 405Q272 405 244 391T199 357T170 316T154 280T149 261Q149 260 169 260Q282 260 327 284T373 353Z",style:{"stroke-width":"3"}})])])],-1)]))),A[22]||(A[22]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"e")])],-1))]),A[26]||(A[26]=e(" are constants"))]),A[44]||(A[44]=n(`
      julia
      function RelativisticOrbitModel(u, (p, M, e), t)
      -    χ, ϕ = u
      -
      -    numer = (p - 2 - 2 * e * cos(χ)) * (1 + e * cos(χ))^2
      -    denom = sqrt((p - 2)^2 - 4 * e^2)
      -
      -    χ̇ = numer * sqrt(p - 6 - 2 * e * cos(χ)) / (M * (p^2) * denom)
      -    ϕ̇ = numer / (M * (p^(3 / 2)) * denom)
      -
      -    return [χ̇, ϕ̇]
      -end
      -
      -mass_ratio = 0.0         # test particle
      -u0 = Float64[π, 0.0]     # initial conditions
      -datasize = 250
      -tspan = (0.0f0, 6.0f4)   # timespan for GW waveform
      -tsteps = range(tspan[1], tspan[2]; length=datasize)  # time at each timestep
      -dt_data = tsteps[2] - tsteps[1]
      -dt = 100.0
      -const ode_model_params = [100.0, 1.0, 0.5]; # p, M, e
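      For reference, the right-hand side implemented in RelativisticOrbitModel corresponds to (transcribed directly from the code above):

      ```latex
      \dot{\chi} = \frac{(p - 2 - 2e\cos\chi)\,(1 + e\cos\chi)^2\,\sqrt{p - 6 - 2e\cos\chi}}{M p^2 \sqrt{(p-2)^2 - 4e^2}},
      \qquad
      \dot{\phi} = \frac{(p - 2 - 2e\cos\chi)\,(1 + e\cos\chi)^2}{M p^{3/2} \sqrt{(p-2)^2 - 4e^2}}
      ```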

      Let's simulate the true model and plot the results using OrdinaryDiffEq.jl

      julia
      prob = ODEProblem(RelativisticOrbitModel, u0, tspan, ode_model_params)
      -soln = Array(solve(prob, RK4(); saveat=tsteps, dt, adaptive=false))
      -waveform = first(compute_waveform(dt_data, soln, mass_ratio, ode_model_params))
      -
      -begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Time", ylabel="Waveform")
      -
      -    l = lines!(ax, tsteps, waveform; linewidth=2, alpha=0.75)
      -    s = scatter!(ax, tsteps, waveform; marker=:circle, markersize=12, alpha=0.5)
      -
      -    axislegend(ax, [[l, s]], ["Waveform Data"])
      -
      -    fig
      -end

      Defining a Neural Network Model

      Next, we define the neural network model that takes one input (time) and has two outputs. We'll make a function ODE_model that takes the initial conditions, the neural network parameters, and a time as inputs and returns the derivatives.

      It is typically not recommended to use globals, but in case you do use them, make sure to mark them as const.
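      A hypothetical example of why this matters: reading a non-const global forces dynamic dispatch, while a const global lets the compiler specialize.

      ```julia
      g = 9.81
      pull(x) = g * x              # type-unstable: `g` may be rebound to any type

      const g_const = 9.81
      pull_fast(x) = g_const * x   # type-stable: the compiler knows g_const isa Float64
      ```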

      We will deviate from the standard neural network initialization and instead use WeightInitializers.jl.

      julia
      const nn = Chain(Base.Fix1(fast_activation, cos),
      -    Dense(1 => 32, cos; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32),
      -    Dense(32 => 32, cos; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32),
      -    Dense(32 => 2; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32))
      -ps, st = Lux.setup(Random.default_rng(), nn)
      ((layer_1 = NamedTuple(), layer_2 = (weight = Float32[0.00012904078; -0.000112544876; -3.194287f-5; 9.532207f-5; 5.7499907f-5; -0.000106882915; -0.00010909063; 3.8261045f-5; -1.2864484f-5; -8.822358f-5; 3.546014f-5; -1.8142538f-5; -0.000102116464; -4.5605157f-6; -0.00021668768; 0.00014581048; -0.000112148016; 6.22374f-5; 0.00024771586; -4.4607037f-5; 4.5975794f-5; -0.00013581885; -8.202444f-5; -2.2094564f-5; 2.0224994f-5; -0.00024750977; 8.9906325f-6; -9.196156f-5; 1.3154642f-5; 5.571319f-5; -2.2547f-5; -1.0075346f-5;;], bias = Float32[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]), layer_3 = (weight = Float32[0.0001618415 0.00010802869 9.6770396f-5 -0.00014462393 8.764277f-5 -4.6047666f-5 -6.289895f-5 1.4200443f-5 3.7395082f-5 -3.8730854f-5 -2.4571409f-5 -1.5922284f-5 -0.0001338827 0.00013049862 -1.7215027f-5 0.0001519071 3.6739f-5 5.3988137f-5 7.5490334f-6 4.1321127f-5 0.00012454136 -0.00012439543 0.00032224192 -9.0235466f-5 8.027509f-6 -4.230032f-5 -1.2626992f-5 3.817326f-5 0.00010345286 -3.2830536f-5 -0.00013042263 3.6778765f-5; -0.00010449851 6.9333175f-5 -0.00025979505 -4.1506977f-5 0.00015308514 0.00015436667 4.068362f-5 6.194856f-5 0.00016091425 4.190392f-5 8.6157415f-5 -0.00015018477 -1.8756064f-5 0.00013426754 3.4385368f-5 3.7520163f-5 -8.245856f-6 8.076973f-6 -7.841589f-5 9.4385825f-5 3.1037864f-5 1.632189f-5 -0.00011407531 -9.840075f-5 -9.079082f-6 -0.00012266912 -0.0001501576 -4.8833277f-5 6.306712f-5 -8.952644f-5 -6.778331f-5 0.00013526856; 3.5417983f-5 -6.405736f-5 5.0200804f-5 -2.187474f-5 9.65856f-5 1.6087113f-5 0.00017726328 5.0810268f-5 -1.2770168f-5 0.0002584681 -6.891108f-5 -1.8794399f-5 1.8506884f-5 3.4208548f-5 5.7545672f-5 -0.00019487216 -4.1474475f-5 -7.719686f-5 0.00022728901 7.322569f-5 4.1465402f-5 -0.00011304584 -0.000121477155 -7.1576396f-5 -0.00019784343 7.308304f-6 -8.1276325f-5 -5.8827296f-5 3.2044787f-5 
-7.681639f-5 -4.2084102f-5 -1.7746544f-5; -4.1023915f-5 6.294326f-5 4.1910976f-6 -8.0661346f-5 4.198329f-5 -6.740117f-5 2.4916579f-5 -0.00011881353 -0.00010547372 1.9229235f-5 8.645224f-6 0.000109944114 -1.4017144f-5 -7.512534f-6 7.736905f-5 -0.00010222966 0.00014278025 -6.63838f-5 -0.00014546272 -7.750724f-6 -0.00011267116 3.0435387f-5 9.601777f-5 -0.000119031705 -0.00015527422 0.0001169817 -0.00013191596 0.000102389204 5.188713f-5 2.1413676f-5 -0.00010448805 1.1473379f-5; 8.031061f-6 6.5873464f-5 2.5314368f-5 0.00017256496 -4.9571987f-5 -1.3794807f-5 2.4063143f-5 9.781523f-5 -4.280496f-5 -2.0803165f-5 -7.521194f-5 0.00018738274 0.00010471025 7.513476f-5 -4.893005f-5 0.00014860366 -8.2969374f-5 7.298375f-5 -8.980523f-5 0.00023513274 -7.5829416f-6 -5.8425405f-5 1.23955515f-5 6.96497f-5 2.4521105f-5 7.589782f-7 -5.2146323f-5 2.6066837f-5 -1.2284129f-5 -7.0048576f-5 -8.4076615f-5 -6.8420195f-6; 7.3120784f-5 -4.376328f-5 3.065425f-5 2.2613407f-5 5.7423866f-5 5.5075692f-5 -1.3883068f-5 -9.999049f-6 7.647168f-5 2.6283391f-5 -3.4442135f-5 3.925019f-5 -7.635232f-5 7.141157f-5 0.0001065878 -1.3486914f-5 -2.3242749f-6 -0.00015052347 0.00014461616 2.219101f-5 -0.00012743202 -7.240347f-5 7.515301f-5 -3.7679296f-5 0.000116404415 -8.719309f-5 -0.00011620738 -2.2765513f-5 3.5284862f-5 -5.4052427f-5 4.8226982f-5 -5.9169564f-5; -6.7190835f-5 -5.7709756f-5 -3.7505262f-5 9.183235f-5 -2.3899584f-5 0.00014002899 5.2985233f-5 2.9858915f-5 -0.00014034772 0.0002104392 2.2348457f-5 -9.916955f-5 -5.660702f-5 9.209628f-5 -0.00011463575 -5.018853f-6 1.7207522f-5 -9.028814f-5 5.9064158f-5 5.6500547f-5 5.1515904f-5 -6.183798f-5 -1.869243f-5 -0.00019577722 0.0001116403 2.623907f-5 2.5146139f-5 -0.00020383588 -6.0033046f-5 7.342174f-5 -0.00012018732 0.00019175147; -4.4822886f-5 -1.809907f-5 -0.0001516205 -9.591752f-5 -0.000307869 0.00017293467 -0.00014444215 0.00018572801 -4.6772497f-5 0.00020110856 0.00011630823 0.00012441541 -1.9470537f-5 0.0001705941 -0.00011531175 -4.7874884f-5 -3.391144f-5 
-0.00017337958 -0.00015705569 0.0001380679 9.7610886f-5 -4.8745784f-5 -5.210988f-5 -0.00017633072 -4.5728182f-5 -9.739671f-5 -1.0597239f-5 2.2026148f-5 -6.3983694f-5 6.102486f-5 0.00010338496 -6.827732f-5; 0.00018907207 -2.7584168f-5 6.10194f-5 -3.5499725f-5 -4.9563205f-5 -2.0559924f-5 3.0567677f-5 -6.0137634f-5 2.9698942f-6 -3.717725f-5 -5.4455915f-5 0.000120902325 -2.6554806f-5 -9.82872f-5 0.00012045895 0.00013023132 -4.3817818f-6 -7.90597f-6 -0.00017591458 -6.145454f-5 -9.7153286f-5 -0.000113216745 -0.00019022694 -1.5265173f-6 -3.7145597f-5 -1.418679f-6 4.0716244f-5 -7.1149034f-6 -5.2239297f-6 6.4689855f-5 0.00015458622 6.8282454f-5; 9.238393f-5 -0.00010029555 -0.00032395986 5.5797962f-5 0.00025480852 0.00010384605 6.894006f-5 9.168032f-5 -0.00014577566 5.929657f-5 0.0002960712 0.00013800748 2.5192774f-5 0.00011150848 -5.873785f-5 -7.113468f-5 -5.1084924f-5 3.14979f-5 8.180124f-5 -1.0872986f-5 6.779847f-5 6.3255684f-5 2.280975f-5 -2.5365513f-5 -7.609393f-5 6.9949754f-5 -0.00020458306 -0.00014076366 -5.1785355f-5 -6.7380766f-5 -9.743695f-5 -2.6103799f-5; 2.995047f-5 -6.736759f-6 -3.8725888f-5 -6.074171f-6 7.08185f-5 -2.7499069f-5 -7.966966f-5 -0.00014262945 0.00015519376 4.3257455f-6 0.00013839432 6.4476895f-5 -0.00012280572 0.00015023709 -0.00010829573 -0.00012139006 -0.00010294529 -6.770257f-6 -4.841615f-5 -1.7801849f-6 -3.3879445f-5 5.58065f-6 3.9426603f-5 5.5769644f-5 -3.4012104f-5 -8.564142f-5 3.2661476f-6 -1.0633827f-5 -9.2488284f-5 6.4112755f-5 -0.00017150475 3.3733468f-5; -5.781235f-5 0.0001663885 -0.00022763766 -2.2378454f-5 6.2147716f-5 0.00016687042 -0.00019428028 -2.5980862f-5 -1.556338f-5 8.4640174f-5 -9.9490266f-5 1.250148f-6 -0.00014455267 -2.947595f-5 6.0663788f-5 -0.00010613067 5.1250395f-6 7.563867f-5 5.97999f-5 4.5715213f-5 1.86434f-5 -7.3369134f-5 -4.44051f-6 -2.5847781f-5 -1.2085683f-5 -3.6958885f-5 0.00019028709 -0.000108798136 0.0001345308 2.3690662f-5 0.00019774816 -0.00010464023; -0.00014413495 -0.00013636223 3.6046575f-5 7.1486343f-6 
1.915911f-5 0.000108172964 -4.005471f-5 1.8743809f-5 0.00031020257 -3.856181f-5 4.4903583f-5 3.4804332f-5 -0.00016249428 -0.00010392467 5.827663f-5 9.077724f-5 5.3671643f-5 -4.8848806f-6 9.078535f-5 8.6138905f-5 -1.8303757f-5 -8.899495f-5 3.5389185f-5 7.94879f-6 5.58088f-5 -4.5910605f-5 6.699846f-5 -0.00014090221 -0.0002088031 -2.0524585f-5 5.01877f-5 0.00019194714; -5.0838746f-5 -7.539191f-5 0.00012213155 0.00015550757 -0.00012977826 -5.158861f-5 -9.445511f-5 0.00012671694 5.2321917f-5 0.00023097855 -8.698364f-5 0.00018369449 -1.2698216f-5 -2.2390311f-6 3.0692383f-5 -0.00011975833 0.00012478858 -1.4962289f-5 3.8748185f-5 0.00023630871 -0.00012000166 -1.5395174f-6 8.959036f-5 -3.4250785f-5 1.3674271f-5 -0.000111441506 -0.00010046797 6.0148104f-5 -2.4395333f-5 3.2727636f-5 8.028999f-5 -4.091216f-5; -2.0079646f-5 4.2053984f-5 0.0002203402 0.00020521648 -0.0001587786 -0.0001763035 -0.00016422565 0.00010166542 9.380203f-5 6.9437694f-5 7.57007f-5 -0.00020894472 -0.00016988067 -6.765953f-5 -0.00010387241 5.217241f-5 2.3392158f-5 -6.166616f-5 0.00017520235 -3.2230004f-5 0.00016375548 -3.2104384f-5 0.00013332158 6.2654086f-5 0.00017636029 9.2175396f-5 -3.051804f-5 4.447554f-5 -4.3382257f-5 3.6749552f-5 7.922374f-5 5.572315f-6; -2.6858583f-5 0.00017819837 4.695629f-5 -9.4709256f-5 -0.00016462848 -2.3182307f-5 -4.6315886f-6 8.540567f-5 -8.155574f-5 -0.00014111411 7.0400456f-5 -0.0003217462 0.00013005744 0.00011168272 8.3365914f-5 6.0671675f-5 -7.8074496f-5 -7.818376f-5 -6.0908715f-5 0.00014188448 0.00011401225 0.00024713847 -2.690341f-5 0.000102220314 8.4159314f-5 -0.00015950877 4.007158f-6 -8.7333174f-5 -8.763305f-5 5.593f-5 -7.80944f-5 -8.700921f-5; 4.7898816f-6 -6.2080675f-5 -3.9439437f-5 0.00014633437 0.0001298414 3.786074f-5 3.610755f-5 -5.0587707f-5 -0.000108645305 -7.380673f-5 -3.9151306f-5 2.2096217f-5 4.4382992f-5 -0.00020986874 -2.2879865f-5 -9.160353f-5 9.355732f-6 5.507163f-6 2.5489919f-5 2.7703583f-5 7.0461865f-5 -1.7779136f-6 -8.9434f-5 2.8051127f-5 
-5.7027803f-5 8.368734f-6 5.7428515f-6 3.801712f-5 -4.082395f-5 1.3992459f-5 -0.00016164676 -0.0003113387; 0.00010210765 -0.00014016182 -9.4709f-5 -0.00034049625 -0.00021730701 -0.00021877282 -1.7789896f-6 5.1318613f-5 -6.843532f-5 2.579598f-6 -7.676633f-5 3.0357873f-5 4.429472f-5 -0.00013344285 8.7713386f-5 -4.4082106f-5 4.933527f-5 -0.00012167484 0.00014828255 3.681658f-5 5.1665567f-5 3.5419303f-5 1.1032115f-5 -2.0139467f-5 -5.1311414f-5 4.592762f-6 -0.00010240131 0.0003480712 4.9550832f-5 9.322965f-5 0.00013964142 -1.44593205f-5; 4.6231064f-5 0.0001759908 -9.25005f-5 -0.00012786509 -9.314455f-5 -0.00010438618 0.0001246568 -6.0039115f-6 -4.256412f-5 3.3602988f-5 -7.607951f-5 -5.8172565f-5 9.8862365f-5 8.6666434f-5 -0.0001534175 9.098543f-7 -0.0001733446 -7.1654413f-6 7.078415f-6 7.887453f-5 -8.685129f-5 3.543198f-5 0.00011281002 -0.000119297736 2.4146246f-5 -8.2930885f-5 1.4110002f-5 -0.00012959536 -9.485212f-5 -0.00019885453 -1.7635973f-7 6.0904702f-5; 3.8074464f-5 -0.00014175575 -0.00010340612 -5.7893245f-5 -4.6557743f-5 8.02458f-5 -4.9841892f-5 0.000101947204 0.00012222587 -2.4416882f-5 9.291772f-5 7.913553f-6 -6.0172544f-5 0.00018882217 4.9115602f-5 5.3959157f-5 -0.00025722026 -5.7353386f-6 4.8929433f-5 -0.00013650712 -3.4423745f-5 3.0329427f-6 6.108192f-5 -7.605748f-5 0.00013981201 9.759291f-5 8.666391f-5 -2.3675924f-5 0.00012245314 -0.00010652234 -1.3366141f-5 0.00012327557; -3.3457505f-5 4.095177f-5 -0.000109747016 5.973901f-5 -4.265944f-5 -9.30009f-5 8.915604f-5 -0.00010395834 -1.20415725f-5 0.00015913686 -6.577562f-5 -1.2961416f-5 -1.3428478f-6 -2.4245908f-5 -4.767823f-5 8.947429f-6 0.00021950372 6.0229213f-5 4.8471844f-5 -1.8123917f-5 4.926346f-6 0.00016245825 -9.935177f-5 -6.088053f-5 -0.00013101641 0.00015148571 -9.9824676f-5 2.438139f-5 9.963044f-6 -9.3802286f-5 -8.4489046f-5 -9.9167686f-5; 9.8044664f-5 -0.0001373537 6.387073f-5 6.1176324f-5 3.706773f-5 1.7435063f-5 4.7456964f-5 -1.5212719f-5 3.6089532f-5 -8.671219f-5 -7.336197f-6 -2.0153177f-5 
-5.8473186f-5 8.0986705f-5 2.2233919f-5 -2.9421195f-5 -3.1216456f-5 -8.403217f-6 -0.00013870597 5.3565193f-5 3.9979343f-5 8.018874f-5 -0.00020783316 -3.9275405f-5 2.2974551f-5 -0.00013394952 0.00018567356 -3.7417303f-5 -5.7108025f-7 5.572647f-5 -0.00011531178 -7.445494f-5; 2.5635985f-5 -3.386408f-5 3.410427f-5 -8.304946f-5 0.00010648793 -7.12835f-5 -1.8213308f-5 0.00015260768 0.0001266959 -0.000107867665 -5.9321865f-5 0.00011133931 -0.000105488856 0.00018247867 9.0476264f-5 -5.4776487f-5 6.0621394f-5 0.00028524833 6.466627f-5 -5.295654f-6 -0.00010272296 0.00010126048 1.4139942f-5 6.678978f-5 -0.00018882136 -5.3258304f-5 1.6948114f-5 -7.912578f-6 -2.1099555f-5 -9.280504f-5 2.0245323f-5 2.3501534f-5; 0.00012267528 -2.3302788f-5 0.0001845575 -3.6030975f-5 -8.9690475f-5 5.39783f-6 -6.8735144f-6 -0.0001025949 2.5651641f-5 -0.00015345518 -0.00013484679 7.593546f-5 -1.713295f-5 -4.639202f-6 9.142571f-5 -2.3218283f-5 5.4158005f-5 -5.6290253f-5 -0.00015190005 3.1802838f-6 -3.7887046f-5 -2.6533817f-5 0.00012959103 8.168044f-5 0.00016691723 1.225193f-5 -0.00022118838 -5.3078587f-5 4.3580926f-6 0.00017463026 -1.0993981f-5 5.7551082f-5; 3.1593845f-5 -0.000110218614 -6.81751f-5 -4.0169387f-5 4.775593f-5 -6.654202f-5 8.2955985f-6 2.4026602f-5 2.1414777f-5 -0.00018230491 4.825706f-6 2.1396403f-5 8.545525f-5 -0.00010729712 -3.5680565f-5 7.198789f-5 -6.8265894f-5 -8.482774f-5 -4.113752f-5 9.73255f-5 0.00013572449 -2.7198057f-6 -4.7429945f-5 -0.0001458811 9.53614f-5 4.0818675f-5 3.8685084f-5 0.00010142652 -7.212244f-5 -7.761069f-5 -0.00018438329 -5.83635f-5; 1.9431573f-5 -3.5413144f-5 5.494157f-5 0.00013289817 0.00010804467 -9.7779135f-5 6.7757464f-5 6.780156f-5 6.958222f-5 -6.413478f-5 9.143599f-5 3.0031626f-5 -4.9843726f-5 9.828978f-5 4.441148f-5 3.23047f-5 0.00010129856 -8.872279f-6 6.877895f-5 0.000117699274 -9.963044f-5 -0.00012705963 -0.00023942554 6.482481f-5 0.00019372506 1.3753422f-5 -0.000111746165 5.2670017f-5 -0.00019889923 2.1095306f-5 0.00014670983 0.000106552376; 
0.0001532651 0.00017225282 -0.00019747327 0.000101641795 -1.9324214f-5 3.1270938f-5 -1.0656201f-5 0.00012335583 1.7614955f-5 0.00018734005 4.6449848f-5 -3.556733f-6 0.00014254355 -2.6307333f-5 4.0052928f-5 -4.950784f-5 3.2620028f-5 2.9044973f-5 0.0001169443 -0.0001477714 3.3198372f-5 7.44768f-5 -2.2325634f-5 2.1709455f-5 1.9323952f-5 0.00010349797 4.662355f-6 0.000120134464 -1.426732f-5 -3.614648f-5 -4.7784528f-5 -0.000108450906; 0.00012510877 1.6365531f-5 2.6071106f-5 -0.00022927686 2.5291409f-5 -0.00011106614 -4.2158313f-6 -0.00014278505 0.00012849839 -1.0538752f-5 0.00018321196 -5.0901217f-6 5.6905395f-5 -2.0126035f-5 -1.9563063f-6 0.00020969917 9.217068f-6 -5.2084233f-5 -1.9979205f-5 -0.00010448458 -8.920894f-6 -3.2072923f-5 -0.00010738238 6.593582f-5 0.00014089058 -6.93594f-5 0.0001137811 2.7396969f-5 -0.00014477345 -3.0052819f-5 5.1802308f-5 7.691047f-5; 5.539292f-5 1.2016109f-5 -0.00013654484 8.461434f-5 3.012997f-5 -0.00018663032 0.00010116812 -0.00020896742 7.81598f-5 -0.00022899467 5.9576298f-5 -2.3427787f-5 -2.1780272f-5 -0.0001299228 -2.6480795f-6 -4.194423f-5 0.00017164544 -0.00014682561 -6.270525f-5 -2.418831f-6 5.836604f-7 -0.00014961041 0.0001490025 6.167698f-5 6.434922f-5 2.15672f-5 -7.845563f-5 -9.3900235f-5 -0.00010745666 9.943281f-5 3.506144f-5 -0.00010716154; -0.00031913762 -5.373516f-6 -2.4668616f-5 0.00020465998 -0.00015221832 4.4142293f-5 3.4260665f-5 0.00018585508 -1.5315974f-5 -2.7753547f-5 -7.027571f-5 -0.00012219216 0.00014801395 5.5639845f-5 0.00010767018 -4.5898585f-5 9.620609f-6 9.126621f-5 -6.4281805f-5 4.424764f-5 -0.00010782308 8.663507f-5 -0.00018453374 9.6715936f-5 -0.00011947815 -1.0612806f-5 -7.8717985f-6 0.00016270993 -0.0001183861 8.132558f-5 2.7964297f-5 -2.1770951f-5; 0.00013770966 8.160392f-6 0.00014750332 -9.148222f-5 -2.3455923f-5 -0.00012808383 -3.1211828f-5 -9.638449f-5 2.1945178f-5 -0.00010252736 4.3882188f-5 -2.7055738f-5 4.1048006f-5 -0.00011791357 0.0001088796 -3.35307f-5 4.6414452f-6 7.904794f-5 -3.2272903f-5 
4.679918f-5 -2.8823291f-5 -0.0002403455 7.2696894f-5 2.1850115f-5 -0.00017884467 0.00013938849 0.00012041823 3.8262526f-5 -4.831809f-6 -0.00011074009 -0.00014165216 6.2243314f-5; 5.5768774f-5 0.00025591633 -8.7005836f-5 -6.97511f-5 -0.00010677257 0.00014461808 9.6828815f-5 -7.235264f-5 0.00013398477 -0.00022309602 5.1014304f-5 -0.00010017812 -0.00013455361 -7.145826f-5 9.886539f-5 4.9542003f-5 -2.2008837f-5 0.00016627453 1.939884f-5 -8.495026f-6 7.701872f-6 -0.00017832675 2.259097f-5 -6.1449035f-5 -0.00019204959 3.286431f-5 2.5918604f-5 2.890915f-5 2.7365557f-5 -0.00010480247 4.5039164f-5 3.899461f-5], bias = Float32[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]), layer_4 = (weight = Float32[-5.3218864f-5 -8.7234104f-5 0.00024896805 9.7996935f-6 0.00018562307 -0.00013682363 2.88639f-5 0.00013638294 -5.7898418f-5 1.2768941f-6 -7.310829f-5 3.163944f-5 5.68142f-6 -8.405032f-5 -5.7826495f-5 -5.6752317f-5 -0.00013975805 4.9450424f-5 -1.0195218f-5 4.1595777f-5 0.00018590117 2.2314363f-5 0.00022402793 0.000108809094 6.9467824f-5 7.255384f-5 -9.488476f-5 -3.1999603f-5 3.6781903f-6 -2.4678693f-5 2.8044307f-5 -1.9549065f-5; -4.9224338f-5 -0.00014301574 -3.065813f-5 0.00012645814 5.188297f-5 6.0191636f-5 -8.252057f-5 0.00019450326 2.1002388f-5 6.364012f-5 8.513248f-5 -0.00011287401 0.000117475014 0.00010225453 9.432689f-5 -0.00014117833 -0.00017059014 9.827579f-5 6.402922f-6 6.2420164f-5 -6.583349f-5 1.3587606f-5 0.0001036864 0.00012247443 1.927937f-5 -0.00015768438 7.495479f-5 -5.7442823f-5 1.1396786f-5 1.9020681f-5 -9.5680174f-5 -7.685743f-5], bias = Float32[0.0, 0.0])), (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple(), layer_4 = NamedTuple()))

      Like most DL frameworks, Lux defaults to Float32; however, in this case we need Float64.

      julia
      const params = ComponentArray(ps |> f64)
      -
      -const nn_model = StatefulLuxLayer{true}(nn, nothing, st)
      StatefulLuxLayer{true}(
      -    Chain(
      -        layer_1 = WrappedFunction(Base.Fix1{typeof(LuxLib.API.fast_activation), typeof(cos)}(LuxLib.API.fast_activation, cos)),
      -        layer_2 = Dense(1 => 32, cos),  # 64 parameters
      -        layer_3 = Dense(32 => 32, cos),  # 1_056 parameters
      -        layer_4 = Dense(32 => 2),       # 66 parameters
      -    ),
      -)         # Total: 1_186 parameters,
      -          #        plus 0 states.
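
      The wrapped model can then be called like a plain function of (input, parameters); a minimal sketch, assuming `nn_model` and `params` from above:

      julia
      # `StatefulLuxLayer` manages the (here empty) state internally, so the
      # call reduces to `model(x, ps)`; the output is a 2-element vector.
      y = nn_model([1.0], params)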

      Now we define a system of ODEs that describes the motion of a point-like particle with Newtonian physics. The state variables are

      `,14)),s("mjx-container",B,[(i(),a("svg",T,A[27]||(A[27]=[n('',1)]))),A[28]||(A[28]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"1"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"χ")])],-1))]),s("mjx-container",q,[(i(),a("svg",D,A[29]||(A[29]=[n('',1)]))),A[30]||(A[30]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"2"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"ϕ")])],-1))]),s("p",null,[A[37]||(A[37]=e("where, ")),s("mjx-container",V,[(i(),a("svg",b,A[31]||(A[31]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D45D",d:"M23 287Q24 290 25 295T30 317T40 348T55 381T75 411T101 433T134 442Q209 442 230 378L240 387Q302 442 358 442Q423 442 460 395T497 281Q497 173 421 82T249 -10Q227 -10 210 -4Q199 1 187 11T168 28L161 36Q160 35 139 -51T118 -138Q118 -144 126 -145T163 -148H188Q194 -155 194 -157T191 -175Q188 -187 185 -190T172 -194Q170 -194 161 -194T127 -193T65 -192Q-5 -192 -24 
-194H-32Q-39 -187 -39 -183Q-37 -156 -26 -148H-6Q28 -147 33 -136Q36 -130 94 103T155 350Q156 355 156 364Q156 405 131 405Q109 405 94 377T71 316T59 280Q57 278 43 278H29Q23 284 23 287ZM178 102Q200 26 252 26Q282 26 310 49T356 107Q374 141 392 215T411 325V331Q411 405 350 405Q339 405 328 402T306 393T286 380T269 365T254 350T243 336T235 326L232 322Q232 321 229 308T218 264T204 212Q178 106 178 102Z",style:{"stroke-width":"3"}})])])],-1)]))),A[32]||(A[32]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"p")])],-1))]),A[38]||(A[38]=e(", ")),s("mjx-container",z,[(i(),a("svg",R,A[33]||(A[33]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D440",d:"M289 629Q289 635 232 637Q208 637 201 638T194 648Q194 649 196 659Q197 662 198 666T199 671T201 676T203 679T207 681T212 683T220 683T232 684Q238 684 262 684T307 683Q386 683 398 683T414 678Q415 674 451 396L487 117L510 154Q534 190 574 254T662 394Q837 673 839 675Q840 676 842 678T846 681L852 683H948Q965 683 988 683T1017 684Q1051 684 1051 673Q1051 668 1048 656T1045 643Q1041 637 1008 637Q968 636 957 634T939 623Q936 618 867 340T797 59Q797 55 798 54T805 50T822 48T855 46H886Q892 37 892 35Q892 19 885 5Q880 0 869 0Q864 0 828 1T736 2Q675 2 644 2T609 1Q592 1 592 11Q592 13 594 25Q598 41 602 43T625 46Q652 46 685 49Q699 52 704 61Q706 65 742 207T813 490T848 631L654 322Q458 10 453 5Q451 4 449 3Q444 0 433 0Q418 0 415 7Q413 11 374 317L335 624L267 354Q200 88 200 79Q206 46 272 46H282Q288 41 289 37T286 19Q282 3 278 1Q274 0 267 0Q265 0 255 
0T221 1T157 2Q127 2 95 1T58 0Q43 0 39 2T35 11Q35 13 38 25T43 40Q45 46 65 46Q135 46 154 86Q158 92 223 354T289 629Z",style:{"stroke-width":"3"}})])])],-1)]))),A[34]||(A[34]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"M")])],-1))]),A[39]||(A[39]=e(", and ")),s("mjx-container",w,[(i(),a("svg",x,A[35]||(A[35]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D452",d:"M39 168Q39 225 58 272T107 350T174 402T244 433T307 442H310Q355 442 388 420T421 355Q421 265 310 237Q261 224 176 223Q139 223 138 221Q138 219 132 186T125 128Q125 81 146 54T209 26T302 45T394 111Q403 121 406 121Q410 121 419 112T429 98T420 82T390 55T344 24T281 -1T205 -11Q126 -11 83 42T39 168ZM373 353Q367 405 305 405Q272 405 244 391T199 357T170 316T154 280T149 261Q149 260 169 260Q282 260 327 284T373 353Z",style:{"stroke-width":"3"}})])])],-1)]))),A[36]||(A[36]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"e")])],-1))]),A[40]||(A[40]=e(" are constants"))]),A[45]||(A[45]=n(`
      julia
      function ODE_model(u, nn_params, t)
      -    χ, ϕ = u
      -    p, M, e = ode_model_params
      -
      -    # In this example we know that \`st\` is an empty NamedTuple, hence we can safely ignore
      -    # it; in general, however, we should use \`st\` to carry the state of the neural network.
      -    y = 1 .+ nn_model([first(u)], nn_params)
      -
      -    numer = (1 + e * cos(χ))^2
      -    denom = M * (p^(3 / 2))
      -
      -    χ̇ = (numer / denom) * y[1]
      -    ϕ̇ = (numer / denom) * y[2]
      -
      -    return [χ̇, ϕ̇]
      -end
      ODE_model (generic function with 1 method)

      Let us now simulate the model with the untrained neural network parameters and plot the results.

      julia
      prob_nn = ODEProblem(ODE_model, u0, tspan, params)
      -soln_nn = Array(solve(prob_nn, RK4(); u0, p=params, saveat=tsteps, dt, adaptive=false))
      -waveform_nn = first(compute_waveform(dt_data, soln_nn, mass_ratio, ode_model_params))
      -
      -begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Time", ylabel="Waveform")
      -
      -    l1 = lines!(ax, tsteps, waveform; linewidth=2, alpha=0.75)
      -    s1 = scatter!(
      -        ax, tsteps, waveform; marker=:circle, markersize=12, alpha=0.5, strokewidth=2)
      -
      -    l2 = lines!(ax, tsteps, waveform_nn; linewidth=2, alpha=0.75)
      -    s2 = scatter!(
      -        ax, tsteps, waveform_nn; marker=:circle, markersize=12, alpha=0.5, strokewidth=2)
      -
      -    axislegend(ax, [[l1, s1], [l2, s2]],
      -        ["Waveform Data", "Waveform Neural Net (Untrained)"]; position=:lb)
      -
      -    fig
      -end

      Setting Up for Training the Neural Network

      Next, we define the objective (loss) function to be minimized when training the neural differential equations.

      julia
      const mseloss = MSELoss()
      -
      -function loss(θ)
      -    pred = Array(solve(prob_nn, RK4(); u0, p=θ, saveat=tsteps, dt, adaptive=false))
      -    pred_waveform = first(compute_waveform(dt_data, pred, mass_ratio, ode_model_params))
      -    return mseloss(pred_waveform, waveform)
      -end
      loss (generic function with 1 method)
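
      For reference, the `MSELoss()` used above computes the mean squared error between the predicted and target waveforms; a minimal conceptual sketch (not the Lux implementation):

      julia
      # Mean squared error between a prediction ŷ and a target y:
      mse(ŷ, y) = sum(abs2, ŷ .- y) / length(y)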

      Warm up the loss function

      julia
      loss(params)
      0.00074655426662429

      Now let us define a callback function to store the loss over time.

      julia
      const losses = Float64[]
      -
      -function callback(θ, l)
      -    push!(losses, l)
      -    @printf "Training \\t Iteration: %5d \\t Loss: %.10f\\n" θ.iter l
      -    return false
      -end
      callback (generic function with 1 method)

      Training the Neural Network

      Training uses the BFGS optimizer. It performs well here, likely because the Newtonian model already provides a very good initial guess.

      julia
      adtype = Optimization.AutoZygote()
      -optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
      -optprob = Optimization.OptimizationProblem(optf, params)
      -res = Optimization.solve(
      -    optprob, BFGS(; initial_stepnorm=0.01, linesearch=LineSearches.BackTracking());
      -    callback, maxiters=1000)
      retcode: Success
      -u: ComponentVector{Float64}(layer_1 = Float64[], layer_2 = (weight = [0.00012904078175776253; -0.00011254487617403629; -3.194286909998489e-5; 9.532207332072905e-5; 5.749990668843518e-5; -0.00010688291513358794; -0.00010909062984874433; 3.826104511965575e-5; -1.2864484233419668e-5; -8.822358358875405e-5; 3.54601397702874e-5; -1.8142538465319903e-5; -0.00010211646440444609; -4.560515662882762e-6; -0.00021668768022198756; 0.0001458104816263927; -0.000112148016341931; 6.223739910629945e-5; 0.0002477158559482912; -4.460703712539674e-5; 4.5975793909755746e-5; -0.0001358188455924383; -8.202443859769488e-5; -2.2094563973928504e-5; 2.0224993932037384e-5; -0.00024750977172510844; 8.990632522912101e-6; -9.19615631573604e-5; 1.3154642147125585e-5; 5.571318979492865e-5; -2.2546999389286456e-5; -1.0075345926442328e-5;;], bias = [3.584434140166735e-17, -3.7790832398519353e-17, -4.035584463386469e-17, 1.4173640883628515e-16, 2.977832855146648e-17, -2.5063776117056426e-17, 1.0454082635012388e-16, 7.768131173337056e-17, -1.4275992282234933e-17, -1.0056975360155791e-16, 7.444415986842068e-18, -7.541368570010827e-18, 6.403532485188505e-17, -6.0094632419040616e-18, -1.1801867351985504e-16, 1.6551064034667553e-16, -3.54106732759271e-17, 8.7793657233099e-17, 3.732260928460576e-16, 9.636248802390752e-18, -9.422930643430968e-18, 5.92744761801707e-20, -3.654575479460169e-17, -2.4849142209393e-17, 2.650561506556804e-17, -6.031525195587328e-17, -3.049668143568285e-18, -2.677769293814691e-17, 3.2262887057476924e-18, -1.4834358480483478e-17, -2.198454328000425e-17, -1.1483009379054432e-17]), layer_3 = (weight = [0.00016184421098402112 0.00010803140309250678 9.677311014927782e-5 -0.00014462121920117096 8.76454868059012e-5 -4.604495200949848e-5 -6.289623719097284e-5 1.4203156629426456e-5 3.739779628192939e-5 -3.872813989330821e-5 -2.4568695275612873e-5 -1.5919569827766042e-5 -0.0001338799849098273 0.00013050133553877422 -1.7212312863981315e-5 0.00015190980763250898 3.674171226991112e-5 
5.399085045171508e-5 7.551747207908346e-6 4.132384116490144e-5 0.0001245440724825835 -0.00012439271825754832 0.00032224463199342885 -9.023275209512656e-5 8.030222366123042e-6 -4.2297604645573056e-5 -1.2624278619860427e-5 3.8175974500674233e-5 0.00010345557517683067 -3.282782211938267e-5 -0.00013041991781830882 3.6781479016262425e-5; -0.00010449867604768573 6.933300992238576e-5 -0.00025979521651754906 -4.150714212449048e-5 0.00015308497913644838 0.00015436650810295014 4.068345387929354e-5 6.19483931571915e-5 0.0001609140841522923 4.1903755662471326e-5 8.615725036940715e-5 -0.0001501849377760926 -1.8756228566078988e-5 0.0001342673736856988 3.438520306951789e-5 3.751999847588415e-5 -8.24602139039425e-6 8.076808261076978e-6 -7.841605602300225e-5 9.43856599391177e-5 3.103769868027601e-5 1.6321725159625427e-5 -0.00011407547335841485 -9.840091424996789e-5 -9.079246775971852e-6 -0.00012266928879546438 -0.00015015776935036124 -4.883344208940647e-5 6.306695522505033e-5 -8.952660337093097e-5 -6.778347360671323e-5 0.00013526839993425815; 3.5417522044284086e-5 -6.405782431583927e-5 5.020034342551356e-5 -2.1875199932126688e-5 9.658514074062714e-5 1.608665235353539e-5 0.00017726282141184352 5.080980673910808e-5 -1.2770628641464566e-5 0.0002584676306602902 -6.891154288068659e-5 -1.8794859611534308e-5 1.850642303425243e-5 3.420808690380042e-5 5.754521163426771e-5 -0.0001948726227831649 -4.1474936265601624e-5 -7.719732337618453e-5 0.00022728855157284713 7.3225227710919e-5 4.146494149126811e-5 -0.00011304630123531435 -0.00012147761543501332 -7.157685712199914e-5 -0.00019784389104215682 7.3078429656656455e-6 -8.12767854634326e-5 -5.8827756843400645e-5 3.20443262487065e-5 -7.681684900062376e-5 -4.208456328824419e-5 -1.7747005299770105e-5; -4.1025844575904385e-5 6.294133240722606e-5 4.189168384483676e-6 -8.066327532155177e-5 4.1981361254741375e-5 -6.740309722821748e-5 2.4914649520232076e-5 -0.00011881545756345831 -0.0001054756513899967 1.9227306301632845e-5 8.64329454038995e-6 
0.00010994218505352448 -1.40190733423352e-5 -7.514463186952786e-6 7.736711891037259e-5 -0.00010223158565216602 0.0001427783198318679 -6.638572917886143e-5 -0.00014546464913156064 -7.752653028135617e-6 -0.00011267308686960662 3.0433457913626107e-5 9.601584210891391e-5 -0.00011903363442847782 -0.0001552761470070882 0.00011697976774439031 -0.0001319178928957165 0.000102387274500376 5.188520194628911e-5 2.141174704725275e-5 -0.00010448997741200304 1.1471450170075303e-5; 8.033831614914848e-6 6.58762349086803e-5 2.5317138603176017e-5 0.00017256773414937995 -4.956921632019073e-5 -1.3792036701308052e-5 2.4065914000128018e-5 9.781800170113297e-5 -4.2802190113243786e-5 -2.0800394598880278e-5 -7.520916708362308e-5 0.00018738551286897687 0.00010471301927846019 7.513753214535398e-5 -4.8927280407803166e-5 0.00014860643411901837 -8.296660391697759e-5 7.298651896799662e-5 -8.980246068299549e-5 0.00023513551266315306 -7.58017106253865e-6 -5.8422634286951925e-5 1.2398322065749257e-5 6.965247146236758e-5 2.4523875505300814e-5 7.617487891816275e-7 -5.2143551987492544e-5 2.6069607225691798e-5 -1.228135872569555e-5 -7.004580559059229e-5 -8.407384459477803e-5 -6.839248930000781e-6; 7.312111023020203e-5 -4.37629542520897e-5 3.065457616139575e-5 2.261373350302043e-5 5.742419210400781e-5 5.5076018665202704e-5 -1.3882741828639766e-5 -9.998722876844519e-6 7.64720088536703e-5 2.628371742866767e-5 -3.444180862607065e-5 3.925051617807157e-5 -7.635199387186866e-5 7.141189319918315e-5 0.00010658812874134512 -1.3486587764421316e-5 -2.323948425329717e-6 -0.00015052314217064189 0.00014461648873656717 2.219133687867678e-5 -0.00012743169742498184 -7.240314249975422e-5 7.515333612354784e-5 -3.7678969480112144e-5 0.00011640474161504216 -8.719276326338101e-5 -0.00011620705578570802 -2.276518631879414e-5 3.5285188399938945e-5 -5.405210055505506e-5 4.82273089201041e-5 -5.916923788936427e-5; -6.719126521181299e-5 -5.771018676147833e-5 -3.750569283999701e-5 9.183191888462663e-5 -2.3900014080365e-5 
0.00014002856061290895 5.298480265503702e-5 2.9858484244530953e-5 -0.00014034815229512965 0.00021043877369686858 2.2348026154421175e-5 -9.916997931702271e-5 -5.660744990143062e-5 9.209584697112226e-5 -0.00011463618410322375 -5.019283330847153e-6 1.720709098174664e-5 -9.0288567103354e-5 5.906372699849947e-5 5.650011609270983e-5 5.1515473946554614e-5 -6.18384086786765e-5 -1.869286041744173e-5 -0.00019577764886468858 0.00011163986947815657 2.6238638942811507e-5 2.514570825462949e-5 -0.00020383631034325903 -6.0033476425243785e-5 7.3421308674941e-5 -0.00012018774703880979 0.00019175104408953125; -4.4825145084669266e-5 -1.8101329476251788e-5 -0.0001516227532532436 -9.591978254079201e-5 -0.00030787125091351374 0.00017293241013490452 -0.0001444444098864041 0.00018572575551174637 -4.677475611729583e-5 0.00020110630044896678 0.0001163059676061016 0.00012441315244600152 -1.9472796554823704e-5 0.00017059183643381996 -0.00011531400763243772 -4.787714373136293e-5 -3.391370067887998e-5 -0.0001733818391925073 -0.0001570579517986553 0.00013806564289793466 9.760862645875768e-5 -4.874804311597348e-5 -5.2112140326560686e-5 -0.00017633298215273782 -4.573044155891073e-5 -9.739897017180204e-5 -1.0599498852892281e-5 2.2023888743800232e-5 -6.398595381811298e-5 6.1022601369267256e-5 0.00010338270273009304 -6.827957644177645e-5; 0.00018907153329861083 -2.758470932962243e-5 6.101885932662914e-5 -3.550026545484949e-5 -4.956374564857722e-5 -2.0560465288091306e-5 3.0567135893997024e-5 -6.013817497363848e-5 2.9693534027106658e-6 -3.7177792052481816e-5 -5.445645607027763e-5 0.00012090178401491052 -2.655534686413504e-5 -9.828774150877818e-5 0.00012045840898578325 0.0001302307767612189 -4.382322601250955e-6 -7.90651088822658e-6 -0.0001759151224693883 -6.14550832840387e-5 -9.715382717035302e-5 -0.00011321728621327554 -0.0001902274840692754 -1.527058117738606e-6 -3.714613799888262e-5 -1.4192197856860511e-6 4.071570341989771e-5 -7.115444220054804e-6 -5.224470584601832e-6 6.468931437053212e-5 
0.00015458567699466283 6.828191302998365e-5; 9.238470677249613e-5 -0.00010029477259646522 -0.0003239590848124499 5.579873625268897e-5 0.0002548092919142067 0.00010384682111638278 6.894083646787928e-5 9.168109256737795e-5 -0.00014577488461778537 5.929734409168132e-5 0.00029607197514000284 0.00013800825702222952 2.519354832205625e-5 0.00011150925577116165 -5.873707475527475e-5 -7.113390590799612e-5 -5.1084149777091826e-5 3.1498674911780836e-5 8.180201559755303e-5 -1.0872212146714704e-5 6.779924599417398e-5 6.325645820200876e-5 2.281052486527042e-5 -2.5364738280550114e-5 -7.60931549620726e-5 6.995052838194472e-5 -0.00020458228816547466 -0.00014076288487100295 -5.1784580750754484e-5 -6.737999199227543e-5 -9.743617254112895e-5 -2.6103024242692998e-5; 2.994869224486138e-5 -6.738536867571874e-6 -3.87276655966161e-5 -6.075948604206879e-6 7.081672571066655e-5 -2.7500846627020414e-5 -7.967143996219695e-5 -0.00014263122524434918 0.0001551919825577893 4.323967786971796e-6 0.00013839253857431705 6.447511746946438e-5 -0.0001228075003747053 0.0001502353091927029 -0.00010829750921409024 -0.0001213918391295802 -0.00010294706829890184 -6.7720349216853204e-6 -4.841792782832408e-5 -1.781962637414239e-6 -3.3881222989384034e-5 5.578872205579631e-6 3.942482486300257e-5 5.576786621522814e-5 -3.4013881886585616e-5 -8.56431967259246e-5 3.2643698522401245e-6 -1.0635604339854782e-5 -9.24900620158046e-5 6.411097761874896e-5 -0.00017150653242433716 3.373168998214119e-5; -5.78123350160414e-5 0.00016638852086633133 -0.00022763764589309998 -2.2378437805872703e-5 6.214773149711706e-5 0.00016687043664294915 -0.00019428026151128424 -2.5980846275178036e-5 -1.5563364812823254e-5 8.464019030814682e-5 -9.949024986692077e-5 1.2501638329749527e-6 -0.00014455265643757148 -2.9475934369674194e-5 6.066380357968293e-5 -0.00010613065258350538 5.125055323315705e-6 7.563868841405652e-5 5.979991458023563e-5 4.571522860804267e-5 1.86434166798617e-5 -7.33691181985879e-5 -4.440494266436019e-6 -2.5847765372435807e-5 
-1.2085667166519694e-5 -3.695886933111921e-5 0.00019028710214204783 -0.00010879812050827156 0.00013453081321220542 2.3690677536234e-5 0.00019774817466985131 -0.00010464021632198549; -0.00014413404597055185 -0.00013636132962621653 3.6047473710807704e-5 7.149533300416954e-6 1.9160008830096583e-5 0.00010817386327924342 -4.005381152002951e-5 1.874470808341595e-5 0.00031020346477542545 -3.8560912346856114e-5 4.490448236263789e-5 3.480523127744434e-5 -0.0001624933825069277 -0.00010392377180598205 5.8277528748447354e-5 9.077814001737008e-5 5.367254243263146e-5 -4.883981549044766e-6 9.078624543414058e-5 8.613980434982519e-5 -1.8302858282384107e-5 -8.899405247850415e-5 3.539008366440695e-5 7.949689003892607e-6 5.5809698548712676e-5 -4.590970593475195e-5 6.699935738377742e-5 -0.0001409013088987373 -0.00020880220427263064 -2.0523686092035774e-5 5.018859928621433e-5 0.00019194803871594836; -5.0836640431256135e-5 -7.538980323372986e-5 0.00012213365219284077 0.00015550967757805854 -0.00012977615303609452 -5.158650426094909e-5 -9.445300135778785e-5 0.00012671904799279023 5.2324022116207556e-5 0.00023098065387673432 -8.698153148654638e-5 0.00018369659185625887 -1.2696110635469216e-5 -2.2369258851732236e-6 3.0694488240208985e-5 -0.00011975622807880166 0.0001247906863943803 -1.4960184008690696e-5 3.87502902674283e-5 0.00023631081669807597 -0.00011999955792927964 -1.5374121369268324e-6 8.959246290830727e-5 -3.424867943178601e-5 1.3676376264610214e-5 -0.00011143940107219109 -0.00010046586728205614 6.015020947304269e-5 -2.4393227496349207e-5 3.272974093195133e-5 8.029209377926103e-5 -4.091005509619314e-5; -2.0077208612465763e-5 4.205642078642352e-5 0.0002203426335448267 0.00020521891535947 -0.00015877615835295628 -0.0001763010589663154 -0.00016422321670933018 0.00010166785766825587 9.380446834013477e-5 6.944013118965557e-5 7.570313726075517e-5 -0.00020894228539207352 -0.00016987823648622463 -6.765709287963186e-5 -0.00010386997048291692 5.2174847657902704e-5 2.339459540718326e-5 
-6.166371924580691e-5 0.00017520478984558427 -3.222756670100149e-5 0.00016375791863294776 -3.2101947292805683e-5 0.00013332401402050718 6.26565230753145e-5 0.00017636272939609954 9.217783342798653e-5 -3.0515602995979942e-5 4.447797589574892e-5 -4.3379819837970135e-5 3.675198938919608e-5 7.922617776541621e-5 5.5747520285184625e-6; -2.685894639342157e-5 0.0001781980100810144 4.69559273648144e-5 -9.470961946988632e-5 -0.0001646288436580844 -2.3182670583428215e-5 -4.631952260187872e-6 8.540530700940394e-5 -8.155610691517851e-5 -0.00014111447660014205 7.040009234571584e-5 -0.00032174655631877743 0.00013005707308422635 0.00011168235517345108 8.33655504999339e-5 6.067131119436574e-5 -7.807485951105246e-5 -7.818412256654567e-5 -6.0909078337657313e-5 0.0001418841130822501 0.0001140118839708692 0.00024713810137711416 -2.690377356828372e-5 0.00010221995046833276 8.415895002200944e-5 -0.00015950913063477265 4.0067944888674725e-6 -8.733353740297418e-5 -8.763341599604373e-5 5.5929635100818784e-5 -7.809476653108627e-5 -8.700957539020366e-5; 4.786932413068825e-6 -6.208362415278059e-5 -3.9442386560467134e-5 0.00014633141595326291 0.00012983844985640914 3.785779094827512e-5 3.610460166741359e-5 -5.059065611064305e-5 -0.00010864825403629797 -7.380968113508274e-5 -3.915425500096644e-5 2.2093268261896655e-5 4.4380043027140755e-5 -0.00020987168923099534 -2.2882814528585292e-5 -9.160647700850692e-5 9.35278271712624e-6 5.504214049809485e-6 2.5486969525076378e-5 2.770063415439979e-5 7.04589159336426e-5 -1.7808627266741597e-6 -8.943694651903882e-5 2.804817754579744e-5 -5.7030751714110095e-5 8.365784514893617e-6 5.739902325812697e-6 3.801416946729513e-5 -4.0826899431040455e-5 1.3989509488540305e-5 -0.00016164971261830032 -0.0003113416411456186; 0.000102106123100063 -0.00014016335115625707 -9.471053125195037e-5 -0.00034049777821961217 -0.00021730853985387548 -0.00021877435427482487 -1.7805197188460757e-6 5.131708313969156e-5 -6.843685185627884e-5 2.57806788398924e-6 -7.676785970430642e-5 
3.0356342554694465e-5 4.429319016311954e-5 -0.00013344438264936093 8.771185604533769e-5 -4.408363600700849e-5 4.933374076758323e-5 -0.00012167637249545816 0.00014828101631398684 3.6815048238749006e-5 5.1664037178524215e-5 3.541777333852939e-5 1.1030585133668373e-5 -2.014099698895911e-5 -5.131294381895894e-5 4.591231677906073e-6 -0.00010240284098869066 0.0003480696592447589 4.954930192583369e-5 9.322812331050906e-5 0.00013963988532453915 -1.446085060193778e-5; 4.622790451149549e-5 0.00017598764546901608 -9.250365970902723e-5 -0.00012786824796594428 -9.314770564915953e-5 -0.00010438933664945397 0.00012465363992132786 -6.00707058939334e-6 -4.2567278307533954e-5 3.35998288738646e-5 -7.608266738489823e-5 -5.8175724427119784e-5 9.88592060258278e-5 8.666327497675332e-5 -0.00015342065440732578 9.066951980577018e-7 -0.00017334775608278201 -7.168600370192953e-6 7.075255815069164e-6 7.88713719321926e-5 -8.685444614630656e-5 3.5428822719140895e-5 0.00011280686025029057 -0.0001193008951867435 2.41430866827519e-5 -8.293404376840209e-5 1.4106842662012454e-5 -0.00012959851434234343 -9.485527831376266e-5 -0.00019885768548366046 -1.7951882913192419e-7 6.090154287275188e-5; 3.8075657294035454e-5 -0.00014175455732583236 -0.00010340492742215 -5.7892051763693166e-5 -4.655654999622882e-5 8.02469948485016e-5 -4.984069898454611e-5 0.00010194839699580599 0.00012222706566718716 -2.441568910398035e-5 9.291891176873161e-5 7.914746611289046e-6 -6.017135098772205e-5 0.0001888233672889447 4.911679564428883e-5 5.396034969635737e-5 -0.0002572190715636609 -5.734145424856716e-6 4.893062571680914e-5 -0.00013650592888551523 -3.4422551898376484e-5 3.034135879720712e-6 6.10831120229798e-5 -7.605628349664316e-5 0.00013981320073982073 9.759410016157477e-5 8.666510250806197e-5 -2.3674731046403816e-5 0.0001224543374792238 -0.00010652114730872946 -1.3364947399156407e-5 0.00012327676807218543; -3.345835663754017e-5 4.095091920202965e-5 -0.00010974786725387859 5.973815844367526e-5 -4.2660292974010864e-5 
-9.30017487637572e-5 8.91551897718699e-5 -0.00010395918813561029 -1.2042423958752527e-5 0.00015913601221722784 -6.577646869960186e-5 -1.2962267800736885e-5 -1.3436992223920287e-6 -2.4246759648355458e-5 -4.7679081189237525e-5 8.946577355117967e-6 0.0002195028717603664 6.022836153601422e-5 4.847099247013986e-5 -1.8124768911780975e-5 4.925494368529956e-6 0.0001624573995566141 -9.93526192990018e-5 -6.088138267774102e-5 -0.00013101725941783716 0.00015148486257272604 -9.982552744009511e-5 2.4380538735154734e-5 9.962192808182691e-6 -9.380313728730091e-5 -8.448989698923161e-5 -9.91685375713648e-5; 9.804364651804558e-5 -0.00013735471168170116 6.38697105170153e-5 6.1175306997164e-5 3.706671248086988e-5 1.743404516323532e-5 4.745594685677983e-5 -1.5213735990850587e-5 3.608851454932399e-5 -8.671320995186048e-5 -7.337214468729621e-6 -2.0154193974899372e-5 -5.847420345512655e-5 8.098568776053526e-5 2.223290118869231e-5 -2.9422212377400927e-5 -3.1217473064180395e-5 -8.40423456234346e-6 -0.00013870699206002676 5.3564175119523264e-5 3.997832516337966e-5 8.018772621303625e-5 -0.00020783417733023503 -3.927642194308173e-5 2.2973533647150512e-5 -0.00013395053849656757 0.00018567254426320238 -3.7418320629378204e-5 -5.72097632225374e-7 5.572545103509233e-5 -0.00011531279471806082 -7.445595713202687e-5; 2.5638073752387038e-5 -3.386199004623343e-5 3.410635709273246e-5 -8.304737256864081e-5 0.0001064900208844284 -7.128141423478875e-5 -1.8211219681205216e-5 0.00015260976961557342 0.0001266979890756742 -0.00010786557667912822 -5.931977629685243e-5 0.00011134139659092425 -0.00010548676782079127 0.00018248076249123237 9.047835210777135e-5 -5.4774398263794144e-5 6.062348275420957e-5 0.0002852504187144746 6.466835493892637e-5 -5.293565478005743e-6 -0.00010272087074141474 0.0001012625654807624 1.4142030708658745e-5 6.679186499653968e-5 -0.0001888192708024324 -5.32562151379396e-5 1.695020201126724e-5 -7.910490053714717e-6 -2.109746655647504e-5 -9.280295470986105e-5 2.0247411343498385e-5 
2.3503622473563574e-5; 0.00012267545209250727 -2.3302614846771015e-5 0.0001845576753279514 -3.6030802204195574e-5 -8.969030203080347e-5 5.398002957362856e-6 -6.873341573382579e-6 -0.00010259472945254828 2.565181422905006e-5 -0.00015345501195189177 -0.00013484661938638023 7.593563170156848e-5 -1.713277741292608e-5 -4.6390291405346965e-6 9.142588352768827e-5 -2.321811005605096e-5 5.415817760730499e-5 -5.62900804485342e-5 -0.00015189987787526965 3.1804566045959986e-6 -3.7886873525721716e-5 -2.6533643887533557e-5 0.00012959120614904347 8.168061141840587e-5 0.00016691740713066828 1.2252102368062537e-5 -0.00022118821105469543 -5.3078414549973e-5 4.358265431065218e-6 0.00017463043151873554 -1.0993808229258895e-5 5.75512549570616e-5; 3.159141636274673e-5 -0.00011022104297012485 -6.817752580105708e-5 -4.0171815609258854e-5 4.775350101112189e-5 -6.654445166646065e-5 8.293169802299342e-6 2.4024173061425155e-5 2.1412348022512357e-5 -0.00018230733837413447 4.823277439518023e-6 2.139397441054727e-5 8.545282037516485e-5 -0.00010729954953718456 -3.568299359469238e-5 7.19854593887767e-5 -6.826832247611537e-5 -8.483017006370395e-5 -4.113994725340338e-5 9.732307228545356e-5 0.00013572206228558613 -2.7222343953739493e-6 -4.743237365915371e-5 -0.0001458835307596573 9.53589711832492e-5 4.081624649988187e-5 3.868265560298705e-5 0.000101424093035117 -7.21248710499469e-5 -7.761311860236556e-5 -0.00018438571566663171 -5.836592970014069e-5; 1.9434475735285212e-5 -3.54102412068817e-5 5.494447400514461e-5 0.0001329010774958846 0.00010804757043905769 -9.77762319501544e-5 6.776036757462998e-5 6.780445987778172e-5 6.958511953661725e-5 -6.413187652407419e-5 9.143889478827302e-5 3.0034529164089138e-5 -4.9840822566072335e-5 9.829268678976743e-5 4.441438436574161e-5 3.23076019873696e-5 0.00010130146261024565 -8.869375914682902e-6 6.878185381595422e-5 0.00011770217684051357 -9.962753388145616e-5 -0.00012705672265776708 -0.00023942263937834753 6.482771282271026e-5 0.00019372796673514434 
1.375632477959769e-5 -0.0001117432620497369 5.267292003798961e-5 -0.00019889632671884686 2.109820893628288e-5 0.00014671273679659245 0.00010655527880834928; 0.00015326878107489213 0.00017225651296658113 -0.00019746958517596915 0.00010164548345247386 -1.9320526037594694e-5 3.127462602324607e-5 -1.0652513113287296e-5 0.00012335951666220181 1.761864311167285e-5 0.0001873437349485446 4.645353598509099e-5 -3.5530449923320618e-6 0.00014254723552485873 -2.6303644547697315e-5 4.005661591405254e-5 -4.95041504410511e-5 3.262371589417253e-5 2.9048661387528156e-5 0.000116947986468257 -0.00014776770788838128 3.320205993706372e-5 7.4480484697138e-5 -2.2321945864940837e-5 2.1713142784319098e-5 1.9327640044975404e-5 0.00010350166027532305 4.666042938806584e-6 0.00012013815191213221 -1.4263631813337436e-5 -3.614279288111362e-5 -4.778083988010678e-5 -0.00010844721785641021; 0.00012510945583593439 1.6366220221551666e-5 2.6071794706096284e-5 -0.0002292761733439514 2.5292097631191662e-5 -0.00011106545076805646 -4.215142383746798e-6 -0.00014278436225473204 0.00012849907710582722 -1.0538062782561645e-5 0.0001832126485499808 -5.0894328193482666e-6 5.690608435210316e-5 -2.0125346567199934e-5 -1.955617393532209e-6 0.00020969985547375172 9.217756484157493e-6 -5.208354429092567e-5 -1.997851592357001e-5 -0.00010448388872069576 -8.920205403827503e-6 -3.207223432006019e-5 -0.00010738169163565525 6.59365081777121e-5 0.00014089126983557595 -6.935870857312072e-5 0.00011378178663684763 2.73976579013286e-5 -0.00014477276505536627 -3.005213017209725e-5 5.180299689194884e-5 7.691116246527768e-5; 5.538993233130405e-5 1.2013121395275781e-5 -0.00013654782864252595 8.461134953903325e-5 3.0126981673657063e-5 -0.0001866333089674399 0.00010116513536558117 -0.00020897041153150129 7.815681115969442e-5 -0.00022899765948751778 5.957330964518243e-5 -2.343077479965612e-5 -2.1783259708183335e-5 -0.00012992579044296065 -2.6510675433865744e-6 -4.1947217450797755e-5 0.00017164245460661758 -0.00014682859668029686 
-6.270823845621428e-5 -2.4218189446592156e-6 5.806723811284637e-7 -0.0001496133966975353 0.00014899951444020555 6.167399502371825e-5 6.43462287007217e-5 2.156421274780055e-5 -7.845862031901837e-5 -9.390322278706127e-5 -0.0001074596487706212 9.94298194745899e-5 3.5058451121565134e-5 -0.00010716452865383237; -0.0003191377717522412 -5.37367159641918e-6 -2.4668771315294324e-5 0.0002046598284286004 -0.0001522184710703305 4.4142137172354915e-5 3.4260509979401333e-5 0.00018585492679984496 -1.5316129887754148e-5 -2.7753702765142216e-5 -7.027586516788974e-5 -0.0001221923111225926 0.00014801379174594998 5.563968906779954e-5 0.00010767002558703952 -4.5898740521908645e-5 9.620453143622839e-6 9.126605172776816e-5 -6.428196038915285e-5 4.424748576265019e-5 -0.00010782323343311245 8.663491198229877e-5 -0.00018453389466604935 9.671578036057946e-5 -0.00011947830617292601 -1.0612961799879316e-5 -7.871953952520206e-6 0.00016270977093507758 -0.00011838625769461324 8.132542743249991e-5 2.7964141823655893e-5 -2.177110664353059e-5; 0.0001377084021509263 8.159131000316127e-6 0.00014750205937833923 -9.148348416838996e-5 -2.3457184027037318e-5 -0.00012808509435908975 -3.121308927130042e-5 -9.638574792244055e-5 2.1943916551562877e-5 -0.00010252862065736459 4.3880926939336946e-5 -2.7056998617454448e-5 4.104674499776066e-5 -0.00011791483356556496 0.00010887833687539623 -3.353196242878853e-5 4.6401841313083126e-6 7.904667782658295e-5 -3.227416399802949e-5 4.67979201925031e-5 -2.8824552457635503e-5 -0.00024034676115898234 7.269563266835809e-5 2.1848854346344296e-5 -0.00017884592906976674 0.0001393872275070107 0.00012041697246549082 3.826126467847036e-5 -4.833069936112578e-6 -0.00011074135053841844 -0.0001416534165333453 6.224205336128725e-5; 5.576822008828056e-5 0.00025591577145627056 -8.700639005161747e-5 -6.975165302043561e-5 -0.00010677312234526383 0.00014461752875598408 9.682826061856272e-5 -7.23531934932611e-5 0.00013398421146797074 -0.00022309657793136115 5.101374922326844e-5 
-0.00010017867457697105 -0.00013455416915582251 -7.145881095545308e-5 9.8864837534567e-5 4.95414482861056e-5 -2.2009391207179958e-5 0.00016627398003714284 1.939828548297046e-5 -8.495580507376246e-6 7.701317779203884e-6 -0.00017832730514106884 2.259041543532805e-5 -6.144958893179827e-5 -0.00019205013955121797 3.2863756539376833e-5 2.5918049184321998e-5 2.8908595093011135e-5 2.7365002504150038e-5 -0.00010480302585398664 4.503860913922144e-5 3.899405641953519e-5], bias = [2.7138095716372747e-9, -1.6498382031809333e-10, -4.608275953969919e-10, -1.9291751862516236e-9, 2.7705720728361014e-9, 3.264485710362863e-10, -4.305414876830443e-10, -2.259400710954987e-9, -5.408371575175171e-10, 7.742756136062776e-10, -1.7777348153795833e-9, 1.5844289897328725e-11, 8.990327688345358e-10, 2.1052335604828207e-9, 2.4371523286227764e-9, -3.6367908381110006e-10, -2.9491734226378475e-9, -1.5301196637787933e-9, -3.159095204734098e-9, 1.193193396044492e-9, -8.514409789703838e-10, -1.0173824693803676e-9, 2.0883858638065282e-9, 1.7282071930744482e-10, -2.428688513980065e-9, 2.9031531937412355e-9, 3.6879709296454276e-9, 6.888843398620999e-10, -2.987999344558813e-9, -1.554363854785273e-10, -1.261098632735822e-9, -5.543848307821684e-10]), layer_4 = (weight = [-0.0007628348698585942 -0.0007968502568538448 -0.0004606481010041707 -0.0006998163852451662 -0.0005239929431246982 -0.000846439777128357 -0.0006807522492192497 -0.0005732331186485315 -0.0007675145653677455 -0.0007083392471198734 -0.0007827243794498699 -0.0006779767132701048 -0.0007039347168316969 -0.0007936663818308618 -0.0007674425274839696 -0.0007663684670847433 -0.0008493740251560428 -0.0006601656832833226 -0.0007198111732472897 -0.0006680203486089178 -0.0005237149715754186 -0.0006873017693977915 -0.0004855881439717459 -0.0006008070585930647 -0.0006401482151674277 -0.0006370621519656112 -0.000804500632374427 -0.0007416157470687397 -0.000705937786320906 -0.0007342948457122147 -0.0006815718149522211 -0.0007291652117940308; 
0.000175943447597476 8.215208935135869e-5 0.0001945096995201535 0.00035162594564089306 0.00027705075610947075 0.00028535946776719076 0.0001426472594428061 0.00041967105742638807 0.0002461702184744789 0.0002888079448299872 0.00031030029206790394 0.00011229382078070643 0.0003426428411950763 0.00032742233636123233 0.00031949468363295594 8.398950082268685e-5 5.457763855494437e-5 0.0003234436096119211 0.00023157069141912902 0.0002875879872335591 0.0001593343377194908 0.00023875543193984836 0.0003288542043758386 0.0003476422657305056 0.00024444716656784975 6.748340284060588e-5 0.0003001225345201752 0.00016772500603453095 0.00023656456266660993 0.00024418851308823634 0.00012948764830723452 0.00014831039840870855], bias = [-0.0007096161531146074, 0.00022516783217494898]))

      Visualizing the Results

      Let us now plot the loss over time

      julia
      begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Iteration", ylabel="Loss")
      -
      -    lines!(ax, losses; linewidth=4, alpha=0.75)
      -    scatter!(ax, 1:length(losses), losses; marker=:circle, markersize=12, strokewidth=2)
      -
      -    fig
      -end

      Finally, let us visualize the results

      julia
      prob_nn = ODEProblem(ODE_model, u0, tspan, res.u)
      -soln_nn = Array(solve(prob_nn, RK4(); u0, p=res.u, saveat=tsteps, dt, adaptive=false))
      -waveform_nn_trained = first(compute_waveform(
      -    dt_data, soln_nn, mass_ratio, ode_model_params))
      -
      -begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Time", ylabel="Waveform")
      -
      -    l1 = lines!(ax, tsteps, waveform; linewidth=2, alpha=0.75)
      -    s1 = scatter!(
      -        ax, tsteps, waveform; marker=:circle, alpha=0.5, strokewidth=2, markersize=12)
      -
      -    l2 = lines!(ax, tsteps, waveform_nn; linewidth=2, alpha=0.75)
      -    s2 = scatter!(
      -        ax, tsteps, waveform_nn; marker=:circle, alpha=0.5, strokewidth=2, markersize=12)
      -
      -    l3 = lines!(ax, tsteps, waveform_nn_trained; linewidth=2, alpha=0.75)
      -    s3 = scatter!(ax, tsteps, waveform_nn_trained; marker=:circle,
      -        alpha=0.5, strokewidth=2, markersize=12)
      -
      -    axislegend(ax, [[l1, s1], [l2, s2], [l3, s3]],
      -        ["Waveform Data", "Waveform Neural Net (Untrained)", "Waveform Neural Net"];
      -        position=:lb)
      -
      -    fig
      -end

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 128 × AMD EPYC 7502 32-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 16 default, 0 interactive, 8 GC (on 16 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 16
      -  JULIA_DEPOT_PATH = /cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 16
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,31))])}const L=p(t,[["render",U]]);export{G as __pageData,L as default}; diff --git a/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.iiGvMAnB.js b/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.iiGvMAnB.js new file mode 100644 index 0000000000..c3bb0fe6e7 --- /dev/null +++ b/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.iiGvMAnB.js @@ -0,0 +1,920 @@ +import{_ as p,c as a,a2 as i,j as s,a as e,o as n}from"./chunks/framework.BetCMmtc.js";const x=JSON.parse('{"title":"Training a Neural ODE to Model Gravitational Waveforms","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/advanced/1_GravitationalWaveForm.md","filePath":"tutorials/advanced/1_GravitationalWaveForm.md","lastUpdated":null}'),l={name:"tutorials/advanced/1_GravitationalWaveForm.md"},t={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.819ex",height:"1.658ex",role:"img",focusable:"false",viewBox:"0 -583 4782.1 733","aria-hidden":"true"},r={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},k={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.008ex",height:"1.339ex",role:"img",focusable:"false",viewBox:"0 -442 887.6 592","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.008ex",height:"1.339ex",role:"img",focusable:"false",viewBox:"0 -442 887.6 
592","aria-hidden":"true"},o={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"24.527ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 10840.9 1000","aria-hidden":"true"},g={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},C={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.117ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3587.6 1000","aria-hidden":"true"},f={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.049ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3557.6 1000","aria-hidden":"true"},y={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},v={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.138ex",height:"1.439ex",role:"img",focusable:"false",viewBox:"0 -442 503 636","aria-hidden":"true"},I={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},m={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.378ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 1051 
683","aria-hidden":"true"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.054ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 466 453","aria-hidden":"true"},V={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},B={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.117ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3587.6 1000","aria-hidden":"true"},T={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.049ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3557.6 1000","aria-hidden":"true"},b={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},D={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.138ex",height:"1.439ex",role:"img",focusable:"false",viewBox:"0 -442 503 636","aria-hidden":"true"},R={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},K={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.378ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 1051 
683","aria-hidden":"true"},z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},P={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.054ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 466 453","aria-hidden":"true"};function L(U,A,Z,X,S,j){return n(),a("div",null,[A[41]||(A[41]=i(`

      Training a Neural ODE to Model Gravitational Waveforms

      This code is adapted from Astroinformatics/ScientificMachineLearning

      The code has been minimally adapted from Keith et al. (2021), which originally used Flux.jl

      Package Imports

      julia
      using Lux, ComponentArrays, LineSearches, OrdinaryDiffEqLowOrderRK, Optimization,
      +      OptimizationOptimJL, Printf, Random, SciMLSensitivity
      +using CairoMakie
      Precompiling Lux...
      +    394.7 ms  ✓ Future
      +    453.9 ms  ✓ ConcreteStructs
      +    387.9 ms  ✓ Reexport
      +    386.9 ms  ✓ SIMDTypes
      +    395.5 ms  ✓ OpenLibm_jll
      +    404.1 ms  ✓ CEnum
      +    397.8 ms  ✓ ManualMemory
      +    404.8 ms  ✓ ArgCheck
      +    495.3 ms  ✓ Requires
      +    505.2 ms  ✓ CompilerSupportLibraries_jll
      +    542.8 ms  ✓ Statistics
      +    600.6 ms  ✓ EnzymeCore
      +    626.1 ms  ✓ ADTypes
      +    337.8 ms  ✓ IfElse
      +    363.1 ms  ✓ CommonWorldInvalidations
      +    361.7 ms  ✓ FastClosures
      +    403.1 ms  ✓ StaticArraysCore
      +    472.3 ms  ✓ ConstructionBase
      +    950.9 ms  ✓ IrrationalConstants
      +    473.2 ms  ✓ NaNMath
      +    576.9 ms  ✓ Compat
      +    496.7 ms  ✓ JLLWrappers
      +    413.8 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      +    638.3 ms  ✓ DocStringExtensions
      +    644.4 ms  ✓ CpuId
      +    471.6 ms  ✓ Adapt
      +    419.7 ms  ✓ DiffResults
      +    419.8 ms  ✓ ConstructionBase → ConstructionBaseLinearAlgebraExt
      +    447.7 ms  ✓ ADTypes → ADTypesConstructionBaseExt
      +    824.8 ms  ✓ ThreadingUtilities
      +    404.9 ms  ✓ Compat → CompatLinearAlgebraExt
      +    386.2 ms  ✓ EnzymeCore → AdaptExt
      +    475.3 ms  ✓ GPUArraysCore
      +    815.3 ms  ✓ Static
      +    538.3 ms  ✓ ArrayInterface
      +    627.0 ms  ✓ Hwloc_jll
      +    609.0 ms  ✓ LogExpFunctions
      +    656.6 ms  ✓ OpenSpecFun_jll
      +   1735.7 ms  ✓ UnsafeAtomics
      +    396.1 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      +    443.8 ms  ✓ BitTwiddlingConvenienceFunctions
      +    633.2 ms  ✓ Functors
      +    398.8 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      +   2036.3 ms  ✓ MacroTools
      +    506.0 ms  ✓ Atomix
      +   1188.3 ms  ✓ ChainRulesCore
      +   1084.7 ms  ✓ CPUSummary
      +    674.2 ms  ✓ CommonSubexpressions
      +    839.1 ms  ✓ MLDataDevices
      +    412.9 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      +    441.1 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      +   1457.0 ms  ✓ StaticArrayInterface
      +    610.7 ms  ✓ PolyesterWeave
      +   1417.7 ms  ✓ Setfield
      +    697.6 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      +   1519.1 ms  ✓ DispatchDoctor
      +    488.7 ms  ✓ CloseOpenIntervals
      +    619.6 ms  ✓ LayoutPointers
      +   1127.9 ms  ✓ Optimisers
      +   2081.4 ms  ✓ Hwloc
      +   1388.1 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      +    453.7 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      +    445.2 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      +    448.7 ms  ✓ Optimisers → OptimisersAdaptExt
      +   2505.0 ms  ✓ SpecialFunctions
      +    655.4 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      +    973.5 ms  ✓ StrideArraysCore
      +    623.3 ms  ✓ DiffRules
      +   1236.1 ms  ✓ LuxCore
      +    468.5 ms  ✓ LuxCore → LuxCoreFunctorsExt
      +    467.0 ms  ✓ LuxCore → LuxCoreSetfieldExt
      +    473.9 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      +    477.7 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      +    730.6 ms  ✓ Polyester
      +    639.9 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      +   1673.2 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      +   2627.8 ms  ✓ WeightInitializers
      +   6046.2 ms  ✓ StaticArrays
      +    608.8 ms  ✓ Adapt → AdaptStaticArraysExt
      +    617.9 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      +    635.3 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      +    641.0 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      +    666.9 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      +    945.0 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      +   3331.0 ms  ✓ ForwardDiff
      +    886.6 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      +   3194.9 ms  ✓ KernelAbstractions
      +    686.5 ms  ✓ KernelAbstractions → LinearAlgebraExt
      +    732.4 ms  ✓ KernelAbstractions → EnzymeExt
      +   5372.0 ms  ✓ NNlib
      +    844.4 ms  ✓ NNlib → NNlibEnzymeCoreExt
      +    947.4 ms  ✓ NNlib → NNlibForwardDiffExt
      +   5897.7 ms  ✓ LuxLib
      +   9226.4 ms  ✓ Lux
      +  94 dependencies successfully precompiled in 33 seconds. 15 already precompiled.
      +Precompiling ComponentArrays...
      +    899.7 ms  ✓ ComponentArrays
      +  1 dependency successfully precompiled in 1 seconds. 45 already precompiled.
      +Precompiling MLDataDevicesComponentArraysExt...
      +    527.0 ms  ✓ MLDataDevices → MLDataDevicesComponentArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.
      +Precompiling LuxComponentArraysExt...
      +    529.6 ms  ✓ ComponentArrays → ComponentArraysOptimisersExt
      +   1416.5 ms  ✓ Lux → LuxComponentArraysExt
      +   2037.1 ms  ✓ ComponentArrays → ComponentArraysKernelAbstractionsExt
      +  3 dependencies successfully precompiled in 2 seconds. 111 already precompiled.
      +Precompiling LineSearches...
      +    351.6 ms  ✓ UnPack
      +    515.1 ms  ✓ OrderedCollections
      +    588.6 ms  ✓ Serialization
      +    626.9 ms  ✓ FiniteDiff
      +    451.8 ms  ✓ Parameters
      +   1670.2 ms  ✓ Distributed
      +   1058.7 ms  ✓ NLSolversBase
      +   1793.9 ms  ✓ LineSearches
      +  8 dependencies successfully precompiled in 5 seconds. 35 already precompiled.
      +Precompiling FiniteDiffStaticArraysExt...
      +    620.8 ms  ✓ FiniteDiff → FiniteDiffStaticArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 21 already precompiled.
      +Precompiling OrdinaryDiffEqLowOrderRK...
      +    344.5 ms  ✓ IteratorInterfaceExtensions
      +    355.2 ms  ✓ CommonSolve
      +    357.5 ms  ✓ DataValueInterfaces
      +    371.1 ms  ✓ FastPower
      +    398.1 ms  ✓ SimpleUnPack
      +    396.4 ms  ✓ MuladdMacro
      +    391.6 ms  ✓ CompositionsBase
      +    424.7 ms  ✓ ExprTools
      +    417.3 ms  ✓ EnumX
      +    418.2 ms  ✓ DataAPI
      +    437.1 ms  ✓ SciMLStructures
      +    475.9 ms  ✓ InverseFunctions
      +    594.3 ms  ✓ TruncatedStacktraces
      +    735.8 ms  ✓ FunctionWrappers
      +    401.6 ms  ✓ TableTraits
      +    456.5 ms  ✓ RuntimeGeneratedFunctions
      +    427.2 ms  ✓ CompositionsBase → CompositionsBaseInverseFunctionsExt
      +    446.8 ms  ✓ InverseFunctions → InverseFunctionsDatesExt
      +    971.0 ms  ✓ FillArrays
      +    501.3 ms  ✓ LogExpFunctions → LogExpFunctionsInverseFunctionsExt
      +    691.1 ms  ✓ FastPower → FastPowerForwardDiffExt
      +    397.1 ms  ✓ FunctionWrappersWrappers
      +    778.9 ms  ✓ PreallocationTools
      +    763.8 ms  ✓ FastBroadcast
      +    441.1 ms  ✓ FillArrays → FillArraysStatisticsExt
      +    819.4 ms  ✓ Tables
      +   1461.8 ms  ✓ RecipesBase
      +   1709.5 ms  ✓ DataStructures
      +   2162.1 ms  ✓ Accessors
      +    884.3 ms  ✓ Accessors → LinearAlgebraExt
      +   1519.4 ms  ✓ SymbolicIndexingInterface
      +   1847.2 ms  ✓ SciMLOperators
      +    578.2 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      +   2058.8 ms  ✓ RecursiveArrayTools
      +    773.8 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      +    789.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsFastBroadcastExt
      +  11188.9 ms  ✓ MLStyle
      +   7812.3 ms  ✓ Expronicon
      +  11395.6 ms  ✓ SciMLBase
      +   5993.6 ms  ✓ DiffEqBase
      +   4619.2 ms  ✓ OrdinaryDiffEqCore
      +   1499.4 ms  ✓ OrdinaryDiffEqCore → OrdinaryDiffEqCoreEnzymeCoreExt
      +   4141.6 ms  ✓ OrdinaryDiffEqLowOrderRK
      +  43 dependencies successfully precompiled in 47 seconds. 82 already precompiled.
      +Precompiling StaticArraysExt...
      +    681.6 ms  ✓ Accessors → StaticArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      +Precompiling MLDataDevicesRecursiveArrayToolsExt...
      +    623.5 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
      +  1 dependency successfully precompiled in 1 seconds. 47 already precompiled.
      +Precompiling ComponentArraysRecursiveArrayToolsExt...
      +    705.4 ms  ✓ ComponentArrays → ComponentArraysRecursiveArrayToolsExt
      +  1 dependency successfully precompiled in 1 seconds. 69 already precompiled.
      +Precompiling ComponentArraysSciMLBaseExt...
      +    987.6 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      +   1128.0 ms  ✓ ComponentArrays → ComponentArraysSciMLBaseExt
      +  2 dependencies successfully precompiled in 1 seconds. 97 already precompiled.
      +Precompiling DiffEqBaseChainRulesCoreExt...
      +   1519.9 ms  ✓ DiffEqBase → DiffEqBaseChainRulesCoreExt
      +  1 dependency successfully precompiled in 2 seconds. 125 already precompiled.
      +Precompiling MLDataDevicesFillArraysExt...
      +    465.7 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      +Precompiling Optimization...
      +    470.7 ms  ✓ SuiteSparse_jll
      +    482.2 ms  ✓ ProgressLogging
      +    533.9 ms  ✓ AbstractTrees
      +    540.6 ms  ✓ LoggingExtras
      +    647.6 ms  ✓ L_BFGS_B_jll
      +    870.8 ms  ✓ DifferentiationInterface
      +    875.8 ms  ✓ ProgressMeter
      +    432.2 ms  ✓ LeftChildRightSiblingTrees
      +    541.9 ms  ✓ LBFGSB
      +    519.7 ms  ✓ ConsoleProgressMonitor
      +    684.8 ms  ✓ TerminalLoggers
      +   3690.2 ms  ✓ SparseArrays
      +    639.5 ms  ✓ SuiteSparse
      +    646.0 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      +    675.6 ms  ✓ Statistics → SparseArraysExt
      +    691.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      +    715.7 ms  ✓ FillArrays → FillArraysSparseArraysExt
      +    818.4 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      +    876.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      +   1233.8 ms  ✓ SparseMatrixColorings
      +    884.3 ms  ✓ PDMats
      +    882.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseMatrixColoringsExt
      +    683.6 ms  ✓ FillArrays → FillArraysPDMatsExt
      +   3588.6 ms  ✓ SparseConnectivityTracer
      +   2199.6 ms  ✓ OptimizationBase
      +   2034.9 ms  ✓ Optimization
      +  26 dependencies successfully precompiled in 13 seconds. 78 already precompiled.
      +Precompiling ChainRulesCoreSparseArraysExt...
      +    686.8 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 11 already precompiled.
      +Precompiling SparseArraysExt...
      +    958.0 ms  ✓ KernelAbstractions → SparseArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 26 already precompiled.
      +Precompiling MLDataDevicesSparseArraysExt...
      +    693.5 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      +Precompiling FiniteDiffSparseArraysExt...
      +    655.8 ms  ✓ FiniteDiff → FiniteDiffSparseArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 16 already precompiled.
      +Precompiling DiffEqBaseSparseArraysExt...
      +   1582.4 ms  ✓ DiffEqBase → DiffEqBaseSparseArraysExt
      +  1 dependency successfully precompiled in 2 seconds. 125 already precompiled.
      +Precompiling DifferentiationInterfaceFiniteDiffExt...
      +    472.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      +  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      +Precompiling DifferentiationInterfaceChainRulesCoreExt...
      +    457.8 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      +  1 dependency successfully precompiled in 1 seconds. 11 already precompiled.
      +Precompiling DifferentiationInterfaceStaticArraysExt...
      +    636.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
      +Precompiling DifferentiationInterfaceForwardDiffExt...
      +    841.0 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      +  1 dependency successfully precompiled in 1 seconds. 28 already precompiled.
      +Precompiling SparseConnectivityTracerSpecialFunctionsExt...
      +   1218.4 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerLogExpFunctionsExt
      +   1609.2 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerSpecialFunctionsExt
      +  2 dependencies successfully precompiled in 2 seconds. 26 already precompiled.
      +Precompiling SparseConnectivityTracerNNlibExt...
      +   1676.6 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNNlibExt
      +  1 dependency successfully precompiled in 2 seconds. 46 already precompiled.
      +Precompiling SparseConnectivityTracerNaNMathExt...
      +   1315.5 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNaNMathExt
      +  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      +Precompiling OptimizationFiniteDiffExt...
      +    420.6 ms  ✓ OptimizationBase → OptimizationFiniteDiffExt
      +  1 dependency successfully precompiled in 1 seconds. 97 already precompiled.
      +Precompiling OptimizationForwardDiffExt...
      +    678.7 ms  ✓ OptimizationBase → OptimizationForwardDiffExt
      +  1 dependency successfully precompiled in 1 seconds. 110 already precompiled.
      +Precompiling OptimizationMLDataDevicesExt...
      +   1418.5 ms  ✓ OptimizationBase → OptimizationMLDataDevicesExt
      +  1 dependency successfully precompiled in 2 seconds. 97 already precompiled.
      +Precompiling HwlocTrees...
      +    570.4 ms  ✓ Hwloc → HwlocTrees
      +  1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
      +Precompiling OptimizationOptimJL...
      +    372.0 ms  ✓ PtrArrays
      +    419.8 ms  ✓ StatsAPI
      +    482.7 ms  ✓ PositiveFactorizations
      +    475.3 ms  ✓ Missings
      +    542.8 ms  ✓ SortingAlgorithms
      +    496.8 ms  ✓ AliasTables
      +   2242.0 ms  ✓ StatsBase
      +   3129.7 ms  ✓ Optim
      +  11882.1 ms  ✓ OptimizationOptimJL
      +  9 dependencies successfully precompiled in 18 seconds. 131 already precompiled.
      +Precompiling SciMLSensitivity...
      +    394.0 ms  ✓ RealDot
      +    408.0 ms  ✓ StructIO
      +    423.7 ms  ✓ HashArrayMappedTries
      +    434.1 ms  ✓ PoissonRandom
      +    452.7 ms  ✓ Scratch
      +    589.2 ms  ✓ AbstractFFTs
      +    667.6 ms  ✓ SparseInverseSubset
      +    794.5 ms  ✓ StructArrays
      +    956.2 ms  ✓ RandomNumbers
      +   1006.9 ms  ✓ Cassette
      +    988.7 ms  ✓ OffsetArrays
      +    615.9 ms  ✓ Rmath_jll
      +   1022.1 ms  ✓ KLU
      +    642.6 ms  ✓ oneTBB_jll
      +   1281.1 ms  ✓ FastLapackInterface
      +    647.2 ms  ✓ ResettableStacks
      +   1094.5 ms  ✓ ZygoteRules
      +   1066.5 ms  ✓ LazyArtifacts
      +    400.6 ms  ✓ ScopedValues
      +    825.1 ms  ✓ HostCPUFeatures
      +   1041.8 ms  ✓ QuadGK
      +   1879.3 ms  ✓ TimerOutputs
      +    436.8 ms  ✓ StructArrays → StructArraysAdaptExt
      +    495.7 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      +   1210.3 ms  ✓ HypergeometricFunctions
      +    454.3 ms  ✓ StructArrays → StructArraysLinearAlgebraExt
      +   1996.8 ms  ✓ IRTools
      +    705.1 ms  ✓ StructArrays → StructArraysSparseArraysExt
      +    706.4 ms  ✓ StructArrays → StructArraysStaticArraysExt
      +    509.7 ms  ✓ Accessors → StructArraysExt
      +    748.5 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      +    646.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      +    450.1 ms  ✓ OffsetArrays → OffsetArraysAdaptExt
      +    564.8 ms  ✓ FunctionProperties
      +    516.1 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      +    892.9 ms  ✓ Random123
      +   2065.5 ms  ✓ ObjectFile
      +    900.9 ms  ✓ Rmath
      +   3028.3 ms  ✓ Test
      +   1291.7 ms  ✓ IntelOpenMP_jll
      +   2931.8 ms  ✓ SciMLJacobianOperators
      +    627.0 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      +    673.4 ms  ✓ Accessors → TestExt
      +   1490.4 ms  ✓ Enzyme_jll
      +   1534.9 ms  ✓ LLVMExtra_jll
      +   1207.4 ms  ✓ Sparspak
      +   1373.1 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      +   1822.9 ms  ✓ StatsFuns
      +   4571.8 ms  ✓ DiffEqCallbacks
      +    720.6 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      +   6213.2 ms  ✓ Krylov
      +   5217.8 ms  ✓ Tracker
      +   1569.2 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      +   1011.1 ms  ✓ ArrayInterface → ArrayInterfaceTrackerExt
      +   1124.0 ms  ✓ FastPower → FastPowerTrackerExt
      +   1150.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      +   1261.7 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      +   1431.0 ms  ✓ Tracker → TrackerPDMatsExt
      +   5419.1 ms  ✓ ChainRules
      +   2335.4 ms  ✓ DiffEqBase → DiffEqBaseTrackerExt
      +    830.3 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      +   6854.1 ms  ✓ VectorizationBase
      +   6018.7 ms  ✓ LLVM
      +   5051.9 ms  ✓ Distributions
      +   1027.1 ms  ✓ SLEEFPirates
      +   7721.2 ms  ✓ MKL_jll
      +   1457.8 ms  ✓ Distributions → DistributionsTestExt
      +   1477.1 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      +  11969.9 ms  ✓ ArrayLayouts
      +   1780.2 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      +   1891.1 ms  ✓ DiffEqBase → DiffEqBaseDistributionsExt
      +    837.5 ms  ✓ ArrayLayouts → ArrayLayoutsSparseArraysExt
      +   2433.5 ms  ✓ LazyArrays
      +  14546.2 ms  ✓ ReverseDiff
      +   3749.3 ms  ✓ DiffEqNoiseProcess
      +   1328.2 ms  ✓ LazyArrays → LazyArraysStaticArraysExt
      +   4547.8 ms  ✓ GPUArrays
      +   3313.2 ms  ✓ ArrayInterface → ArrayInterfaceReverseDiffExt
      +   3335.4 ms  ✓ FastPower → FastPowerReverseDiffExt
      +   3490.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceReverseDiffExt
      +   3471.4 ms  ✓ PreallocationTools → PreallocationToolsReverseDiffExt
      +   4714.6 ms  ✓ DiffEqBase → DiffEqBaseReverseDiffExt
      +   4762.3 ms  ✓ DiffEqNoiseProcess → DiffEqNoiseProcessReverseDiffExt
      +  15955.5 ms  ✓ GPUCompiler
      +  17383.9 ms  ✓ LoopVectorization
      +   1133.7 ms  ✓ LoopVectorization → SpecialFunctionsExt
      +   1267.9 ms  ✓ LoopVectorization → ForwardDiffExt
      +   3215.7 ms  ✓ TriangularSolve
      +  24597.6 ms  ✓ Zygote
      +   1551.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceZygoteExt
      +   1863.5 ms  ✓ Zygote → ZygoteTrackerExt
      +   3198.0 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsZygoteExt
      +   3505.5 ms  ✓ SciMLBase → SciMLBaseZygoteExt
      +  15101.7 ms  ✓ RecursiveFactorization
      +   5352.2 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsReverseDiffExt
      +  28296.0 ms  ✓ LinearSolve
      +   2527.9 ms  ✓ LinearSolve → LinearSolveRecursiveArrayToolsExt
      +   2583.6 ms  ✓ LinearSolve → LinearSolveEnzymeExt
      +   4248.9 ms  ✓ LinearSolve → LinearSolveKernelAbstractionsExt
      + 188388.4 ms  ✓ Enzyme
      +   5528.8 ms  ✓ FastPower → FastPowerEnzymeExt
      +   5571.1 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
      +   5609.8 ms  ✓ QuadGK → QuadGKEnzymeExt
      +   5643.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceEnzymeExt
      +   5681.8 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
      +   5846.7 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
      +   7804.7 ms  ✓ Enzyme → EnzymeStaticArraysExt
      +  10642.1 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
      +   7971.6 ms  ✓ DiffEqBase → DiffEqBaseEnzymeExt
      +  21510.4 ms  ✓ SciMLSensitivity
      +  110 dependencies successfully precompiled in 247 seconds. 181 already precompiled.
      +  1 dependency had output during precompilation:
      +┌ MKL_jll
      +│   Downloading artifact: IntelOpenMP
      +
      +Precompiling LuxLibSLEEFPiratesExt...
      +   2441.5 ms  ✓ LuxLib → LuxLibSLEEFPiratesExt
      +  1 dependency successfully precompiled in 3 seconds. 97 already precompiled.
      +Precompiling LuxLibLoopVectorizationExt...
      +   4016.1 ms  ✓ LuxLib → LuxLibLoopVectorizationExt
      +  1 dependency successfully precompiled in 4 seconds. 105 already precompiled.
      +Precompiling LuxLibEnzymeExt...
      +   1328.1 ms  ✓ LuxLib → LuxLibEnzymeExt
      +  1 dependency successfully precompiled in 2 seconds. 130 already precompiled.
      +Precompiling LuxEnzymeExt...
      +   6675.8 ms  ✓ Lux → LuxEnzymeExt
      +  1 dependency successfully precompiled in 7 seconds. 146 already precompiled.
      +Precompiling OptimizationEnzymeExt...
      +  13182.2 ms  ✓ OptimizationBase → OptimizationEnzymeExt
      +  1 dependency successfully precompiled in 14 seconds. 109 already precompiled.
      +Precompiling MLDataDevicesTrackerExt...
      +   1206.5 ms  ✓ MLDataDevices → MLDataDevicesTrackerExt
      +  1 dependency successfully precompiled in 1 seconds. 59 already precompiled.
      +Precompiling LuxLibTrackerExt...
      +   1108.5 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      +   3350.4 ms  ✓ LuxLib → LuxLibTrackerExt
      +  2 dependencies successfully precompiled in 4 seconds. 100 already precompiled.
      +Precompiling LuxTrackerExt...
      +   2125.0 ms  ✓ Lux → LuxTrackerExt
      +  1 dependency successfully precompiled in 2 seconds. 114 already precompiled.
      +Precompiling ComponentArraysTrackerExt...
      +   1189.0 ms  ✓ ComponentArrays → ComponentArraysTrackerExt
      +  1 dependency successfully precompiled in 1 seconds. 70 already precompiled.
      +Precompiling MLDataDevicesReverseDiffExt...
      +   3431.4 ms  ✓ MLDataDevices → MLDataDevicesReverseDiffExt
      +  1 dependency successfully precompiled in 4 seconds. 49 already precompiled.
      +Precompiling LuxLibReverseDiffExt...
      +   3363.2 ms  ✓ LuxCore → LuxCoreArrayInterfaceReverseDiffExt
      +   4162.8 ms  ✓ LuxLib → LuxLibReverseDiffExt
      +  2 dependencies successfully precompiled in 4 seconds. 98 already precompiled.
      +Precompiling ComponentArraysReverseDiffExt...
      +   3465.8 ms  ✓ ComponentArrays → ComponentArraysReverseDiffExt
      +  1 dependency successfully precompiled in 4 seconds. 57 already precompiled.
      +Precompiling OptimizationReverseDiffExt...
      +   3342.4 ms  ✓ OptimizationBase → OptimizationReverseDiffExt
      +  1 dependency successfully precompiled in 4 seconds. 130 already precompiled.
      +Precompiling LuxReverseDiffExt...
      +   4344.1 ms  ✓ Lux → LuxReverseDiffExt
      +  1 dependency successfully precompiled in 5 seconds. 115 already precompiled.
      +Precompiling MLDataDevicesChainRulesExt...
      +    872.8 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      +  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      +Precompiling MLDataDevicesZygoteExt...
      +   1590.9 ms  ✓ MLDataDevices → MLDataDevicesGPUArraysExt
      +   1614.8 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
      +  2 dependencies successfully precompiled in 2 seconds. 108 already precompiled.
      +Precompiling LuxZygoteExt...
      +   1677.6 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      +   2840.9 ms  ✓ Lux → LuxZygoteExt
      +  2 dependencies successfully precompiled in 3 seconds. 165 already precompiled.
      +Precompiling ComponentArraysZygoteExt...
      +   1599.8 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      +   1823.6 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
      +  2 dependencies successfully precompiled in 2 seconds. 116 already precompiled.
      +Precompiling OptimizationZygoteExt...
      +   2207.7 ms  ✓ OptimizationBase → OptimizationZygoteExt
      +  1 dependency successfully precompiled in 3 seconds. 160 already precompiled.
      +Precompiling CairoMakie...
      +    422.2 ms  ✓ RangeArrays
      +    407.9 ms  ✓ PolygonOps
      +    431.3 ms  ✓ IndirectArrays
      +    443.4 ms  ✓ LaTeXStrings
      +    460.9 ms  ✓ GeoFormatTypes
      +    470.0 ms  ✓ Contour
      +    488.3 ms  ✓ TensorCore
      +    484.1 ms  ✓ TriplotBase
      +    487.2 ms  ✓ StableRNGs
      +    528.4 ms  ✓ PaddedViews
      +    530.4 ms  ✓ Observables
      +    534.1 ms  ✓ IntervalSets
      +    543.6 ms  ✓ RoundingEmulator
      +    570.3 ms  ✓ IterTools
      +    442.2 ms  ✓ PCRE2_jll
      +    842.7 ms  ✓ Grisu
      +    378.6 ms  ✓ CRC32c
      +    479.5 ms  ✓ Extents
      +    424.6 ms  ✓ Ratios
      +    462.0 ms  ✓ LazyModules
      +    450.4 ms  ✓ MappedArrays
      +    496.9 ms  ✓ Inflate
      +    477.9 ms  ✓ StackViews
      +    593.6 ms  ✓ TranscodingStreams
      +   1121.1 ms  ✓ Format
      +    714.7 ms  ✓ SharedArrays
      +    732.3 ms  ✓ WoodburyMatrices
      +    423.0 ms  ✓ SignedDistanceFields
      +    452.4 ms  ✓ RelocatableFolders
      +    657.8 ms  ✓ Graphite2_jll
      +    692.3 ms  ✓ OpenSSL_jll
      +    645.5 ms  ✓ Libmount_jll
      +    625.1 ms  ✓ LLVMOpenMP_jll
      +    640.5 ms  ✓ Bzip2_jll
      +    650.6 ms  ✓ Xorg_libXau_jll
      +    670.7 ms  ✓ libpng_jll
      +    641.6 ms  ✓ libfdk_aac_jll
      +    640.5 ms  ✓ Giflib_jll
      +    650.4 ms  ✓ Imath_jll
      +    655.9 ms  ✓ LAME_jll
      +   1585.4 ms  ✓ AdaptivePredicates
      +   1267.9 ms  ✓ SimpleTraits
      +    660.2 ms  ✓ LERC_jll
      +    656.0 ms  ✓ EarCut_jll
      +    665.3 ms  ✓ CRlibm_jll
      +    700.3 ms  ✓ JpegTurbo_jll
      +    644.3 ms  ✓ Ogg_jll
      +    707.1 ms  ✓ XZ_jll
      +   1599.0 ms  ✓ UnicodeFun
      +    678.5 ms  ✓ x265_jll
      +   1977.8 ms  ✓ FixedPointNumbers
      +    654.1 ms  ✓ Xorg_libXdmcp_jll
      +    673.5 ms  ✓ x264_jll
      +    612.0 ms  ✓ Expat_jll
      +    695.7 ms  ✓ libaom_jll
      +    690.1 ms  ✓ Zstd_jll
      +    663.8 ms  ✓ LZO_jll
      +    587.7 ms  ✓ Xorg_xtrans_jll
      +    668.9 ms  ✓ Opus_jll
      +    683.0 ms  ✓ Libiconv_jll
      +    653.9 ms  ✓ Libffi_jll
      +    659.2 ms  ✓ Libgpg_error_jll
      +    660.2 ms  ✓ isoband_jll
      +    572.1 ms  ✓ Xorg_libpthread_stubs_jll
      +    666.0 ms  ✓ FFTW_jll
      +    675.0 ms  ✓ FriBidi_jll
      +    650.2 ms  ✓ Libuuid_jll
      +    449.0 ms  ✓ IntervalSets → IntervalSetsRandomExt
      +    449.8 ms  ✓ IntervalSets → IntervalSetsStatisticsExt
      +    452.8 ms  ✓ ConstructionBase → ConstructionBaseIntervalSetsExt
      +    473.6 ms  ✓ Showoff
      +    508.9 ms  ✓ MosaicViews
      +   1018.6 ms  ✓ FilePathsBase
      +    683.2 ms  ✓ Pixman_jll
      +    743.4 ms  ✓ AxisAlgorithms
      +    695.2 ms  ✓ FreeType2_jll
      +    491.3 ms  ✓ Ratios → RatiosFixedPointNumbersExt
      +    703.4 ms  ✓ libsixel_jll
      +    776.6 ms  ✓ OpenEXR_jll
      +    721.6 ms  ✓ libvorbis_jll
      +   1081.8 ms  ✓ GeoInterface
      +    711.5 ms  ✓ Libtiff_jll
      +    508.4 ms  ✓ Isoband
      +    731.3 ms  ✓ XML2_jll
      +    679.4 ms  ✓ Libgcrypt_jll
      +    582.2 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      +    859.6 ms  ✓ AxisArrays
      +    833.7 ms  ✓ FilePaths
      +   1494.1 ms  ✓ ColorTypes
      +    860.5 ms  ✓ Fontconfig_jll
      +   2502.7 ms  ✓ PkgVersion
      +    744.9 ms  ✓ Gettext_jll
      +   1040.5 ms  ✓ FreeType
      +   1289.6 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      +    751.0 ms  ✓ XSLT_jll
      +    560.8 ms  ✓ ColorTypes → StyledStringsExt
      +   2542.1 ms  ✓ IntervalArithmetic
      +    889.8 ms  ✓ Glib_jll
      +   3468.8 ms  ✓ FileIO
      +   2083.5 ms  ✓ Interpolations
      +    544.9 ms  ✓ IntervalArithmetic → IntervalArithmeticIntervalSetsExt
      +   1177.4 ms  ✓ Xorg_libxcb_jll
      +   1912.1 ms  ✓ ColorVectorSpace
      +    689.0 ms  ✓ Xorg_libX11_jll
      +    838.1 ms  ✓ ColorVectorSpace → SpecialFunctionsExt
      +   1505.2 ms  ✓ QOI
      +    671.4 ms  ✓ Xorg_libXrender_jll
      +    668.6 ms  ✓ Xorg_libXext_jll
      +   4807.9 ms  ✓ FFTW
      +    895.2 ms  ✓ Libglvnd_jll
      +    923.1 ms  ✓ Cairo_jll
      +   4002.3 ms  ✓ Colors
      +   6750.4 ms  ✓ SIMD
      +    647.7 ms  ✓ Graphics
      +    661.9 ms  ✓ Animations
      +   3850.6 ms  ✓ ExactPredicates
      +    903.6 ms  ✓ libwebp_jll
      +    880.6 ms  ✓ HarfBuzz_jll
      +    851.4 ms  ✓ ColorBrewer
      +    784.9 ms  ✓ libass_jll
      +   1817.0 ms  ✓ KernelDensity
      +    850.6 ms  ✓ Pango_jll
      +   1619.6 ms  ✓ OpenEXR
      +    987.5 ms  ✓ FFMPEG_jll
      +   1358.5 ms  ✓ Cairo
      +   3537.3 ms  ✓ ColorSchemes
      +   9513.6 ms  ✓ GeometryBasics
      +   5259.9 ms  ✓ DelaunayTriangulation
      +   1207.8 ms  ✓ Packing
      +   1326.0 ms  ✓ ShaderAbstractions
      +   2112.2 ms  ✓ FreeTypeAbstraction
      +  15568.6 ms  ✓ Unitful
      +    601.0 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      +    612.5 ms  ✓ Unitful → InverseFunctionsUnitfulExt
      +   1287.0 ms  ✓ Interpolations → InterpolationsUnitfulExt
      +   8314.0 ms  ✓ Automa
      +   3920.4 ms  ✓ MakieCore
      +   5349.4 ms  ✓ GridLayoutBase
      +   8321.6 ms  ✓ PlotUtils
      +  14392.5 ms  ✓ ImageCore
      +   1985.2 ms  ✓ ImageBase
      +   2375.0 ms  ✓ WebP
      +   3096.9 ms  ✓ PNGFiles
      +   3252.9 ms  ✓ JpegTurbo
      +   3341.5 ms  ✓ Sixel
      +   8991.4 ms  ✓ MathTeXEngine
      +   2184.2 ms  ✓ ImageAxes
      +   1273.7 ms  ✓ ImageMetadata
      +   1918.1 ms  ✓ Netpbm
      +  43256.2 ms  ✓ TiffImages
      +   1321.2 ms  ✓ ImageIO
      + 106541.5 ms  ✓ Makie
      +  82478.8 ms  ✓ CairoMakie
      +  153 dependencies successfully precompiled in 243 seconds. 118 already precompiled.
      +Precompiling SparseMatrixColoringsColorsExt...
      +    959.5 ms  ✓ SparseMatrixColorings → SparseMatrixColoringsColorsExt
      +  1 dependency successfully precompiled in 1 seconds. 29 already precompiled.
      +Precompiling ZygoteColorsExt...
      +   1822.0 ms  ✓ Zygote → ZygoteColorsExt
      +  1 dependency successfully precompiled in 2 seconds. 105 already precompiled.
      +Precompiling IntervalSetsExt...
      +   1039.0 ms  ✓ Accessors → IntervalSetsExt
      +  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      +Precompiling IntervalSetsRecipesBaseExt...
      +    637.6 ms  ✓ IntervalSets → IntervalSetsRecipesBaseExt
      +  1 dependency successfully precompiled in 1 seconds. 9 already precompiled.
      +Precompiling UnitfulExt...
      +    656.6 ms  ✓ Accessors → UnitfulExt
      +  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      +Precompiling DiffEqBaseUnitfulExt...
      +   1543.3 ms  ✓ DiffEqBase → DiffEqBaseUnitfulExt
      +  1 dependency successfully precompiled in 2 seconds. 123 already precompiled.
      +Precompiling NNlibFFTWExt...
      +    932.7 ms  ✓ NNlib → NNlibFFTWExt
      +  1 dependency successfully precompiled in 1 seconds. 54 already precompiled.
      +Precompiling IntervalArithmeticForwardDiffExt...
      +    553.9 ms  ✓ IntervalArithmetic → IntervalArithmeticDiffRulesExt
      +    745.8 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
      +  2 dependencies successfully precompiled in 1 seconds. 42 already precompiled.
      +Precompiling IntervalArithmeticRecipesBaseExt...
      +    891.2 ms  ✓ IntervalArithmetic → IntervalArithmeticRecipesBaseExt
      +  1 dependency successfully precompiled in 1 seconds. 31 already precompiled.
      +Precompiling SciMLBaseMakieExt...
      +   8282.2 ms  ✓ SciMLBase → SciMLBaseMakieExt
      +  1 dependency successfully precompiled in 9 seconds. 304 already precompiled.

      Define some Utility Functions

      Tip

      This section can be skipped. It defines functions to simulate the model; however, from a scientific machine learning perspective, it isn't particularly relevant.

      `,8)),s("p",null,[A[6]||(A[6]=e("We need a very crude 2-body path. Assume the 1-body motion is a newtonian 2-body position vector ")),s("mjx-container",t,[(n(),a("svg",h,A[0]||(A[0]=[i('',1)]))),A[1]||(A[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"r"),s("mo",null,"="),s("msub",null,[s("mi",null,"r"),s("mn",null,"1")]),s("mo",null,"−"),s("msub",null,[s("mi",null,"r"),s("mn",null,"2")])])],-1))]),A[7]||(A[7]=e(" and use Newtonian formulas to get ")),s("mjx-container",r,[(n(),a("svg",k,A[2]||(A[2]=[i('',1)]))),A[3]||(A[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"r"),s("mn",null,"1")])])],-1))]),A[8]||(A[8]=e(", ")),s("mjx-container",E,[(n(),a("svg",d,A[4]||(A[4]=[i('',1)]))),A[5]||(A[5]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"r"),s("mn",null,"2")])])],-1))]),A[9]||(A[9]=e(" (e.g. Theoretical Mechanics of Particles and Continua 4.3)"))]),A[42]||(A[42]=i(`
      julia
      function one2two(path, m₁, m₂)
      +    M = m₁ + m₂
      +    r₁ = m₂ / M .* path
      +    r₂ = -m₁ / M .* path
      +    return r₁, r₂
      +end
      one2two (generic function with 1 method)
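
The split above follows directly from the definition of the center of mass. As a quick sanity check, here is a hedged Python sketch of the same mass-weighted decomposition (not the tutorial's code; names mirror the Julia function):

```python
# Sketch of the center-of-mass split performed by `one2two`: given the
# relative separation r = r1 - r2 and masses m1, m2, Newtonian two-body
# mechanics places each body at a mass-weighted fraction of r.
def one2two(path, m1, m2):
    M = m1 + m2
    r1 = [m2 / M * p for p in path]
    r2 = [-m1 / M * p for p in path]
    return r1, r2

r1, r2 = one2two([1.0, 2.0], 0.25, 0.75)
# The center of mass stays at the origin: m1*r1 + m2*r2 == 0 componentwise.
```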
      `,2)),s("p",null,[A[12]||(A[12]=e("Next we define a function to perform the change of variables: ")),s("mjx-container",o,[(n(),a("svg",Q,A[10]||(A[10]=[i('',1)]))),A[11]||(A[11]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{stretchy:"false"},"("),s("mi",null,"χ"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"ϕ"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},"↦"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"y"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},")")])],-1))])]),A[43]||(A[43]=i(`
      julia
      @views function soln2orbit(soln, model_params=nothing)
      +    @assert size(soln, 1) ∈ [2, 4] "size(soln,1) must be either 2 or 4"
      +
      +    if size(soln, 1) == 2
      +        χ = soln[1, :]
      +        ϕ = soln[2, :]
      +
      +        @assert length(model_params)==3 "model_params must have length 3 when size(soln,1) == 2"
      +        p, M, e = model_params
      +    else
      +        χ = soln[1, :]
      +        ϕ = soln[2, :]
      +        p = soln[3, :]
      +        e = soln[4, :]
      +    end
      +
      +    r = p ./ (1 .+ e .* cos.(χ))
      +    x = r .* cos.(ϕ)
      +    y = r .* sin.(ϕ)
      +
      +    orbit = vcat(x', y')
      +    return orbit
      +end
      soln2orbit (generic function with 2 methods)
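
The coordinate change is the standard conic-section parameterization. A minimal Python sketch of a single point of that mapping (an illustration, not the tutorial's code):

```python
import math

# One point of the change of variables in `soln2orbit`: the radius follows
# the conic-section formula r = p / (1 + e*cos(chi)), which is then mapped
# to Cartesian coordinates via the azimuthal angle phi.
def soln2orbit_point(chi, phi, p, e):
    r = p / (1 + e * math.cos(chi))
    return r * math.cos(phi), r * math.sin(phi)

# At chi = 0 the radius is the periapsis p/(1+e);
# at chi = pi it is the apoapsis p/(1-e).
```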

      This function computes the first time derivative using central differences in the interior and second-order one-sided difference stencils at the endpoints; see https://doi.org/10.1090/S0025-5718-1988-0935077-0

      julia
      function d_dt(v::AbstractVector, dt)
      +    a = -3 / 2 * v[1] + 2 * v[2] - 1 / 2 * v[3]
      +    b = (v[3:end] .- v[1:(end - 2)]) / 2
      +    c = 3 / 2 * v[end] - 2 * v[end - 1] + 1 / 2 * v[end - 2]
      +    return [a; b; c] / dt
      +end
      d_dt (generic function with 1 method)
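
Since all of these stencils are second-order accurate, they are exact on quadratics, which makes a convenient sanity check. A Python sketch of the same first-derivative scheme (mirroring the Julia `d_dt`, not replacing it):

```python
# First-derivative scheme as in `d_dt`: central differences in the interior,
# second-order one-sided stencils at the two endpoints.
def d_dt(v, dt):
    a = -1.5 * v[0] + 2.0 * v[1] - 0.5 * v[2]
    b = [(v[i + 1] - v[i - 1]) / 2.0 for i in range(1, len(v) - 1)]
    c = 1.5 * v[-1] - 2.0 * v[-2] + 0.5 * v[-3]
    return [x / dt for x in [a] + b + [c]]

dt = 0.1
ts = [i * dt for i in range(6)]
v = [t**2 for t in ts]        # f(t) = t^2, so f'(t) = 2t exactly
deriv = d_dt(v, dt)
```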

      This function computes the second time derivative analogously, again with second-order one-sided difference stencils at the endpoints; see https://doi.org/10.1090/S0025-5718-1988-0935077-0

      julia
      function d2_dt2(v::AbstractVector, dt)
      +    a = 2 * v[1] - 5 * v[2] + 4 * v[3] - v[4]
      +    b = v[1:(end - 2)] .- 2 * v[2:(end - 1)] .+ v[3:end]
      +    c = 2 * v[end] - 5 * v[end - 1] + 4 * v[end - 2] - v[end - 3]
      +    return [a; b; c] / (dt^2)
      +end
      d2_dt2 (generic function with 1 method)
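
A matching Python sketch for the second derivative (again an illustration of the Julia `d2_dt2`): the interior uses the standard three-point central second difference, and the endpoints use second-order one-sided four-point stencils, so the scheme is also exact on quadratics.

```python
# Second-derivative scheme as in `d2_dt2`.
def d2_dt2(v, dt):
    a = 2 * v[0] - 5 * v[1] + 4 * v[2] - v[3]
    b = [v[i - 1] - 2 * v[i] + v[i + 1] for i in range(1, len(v) - 1)]
    c = 2 * v[-1] - 5 * v[-2] + 4 * v[-3] - v[-4]
    return [x / dt**2 for x in [a] + b + [c]]

dt = 0.1
v = [3.0 * (i * dt) ** 2 for i in range(6)]   # f(t) = 3t^2, so f''(t) = 6
second = d2_dt2(v, dt)
```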

      Now we define a function to compute the trace-free moment tensor from the orbit

      julia
      function orbit2tensor(orbit, component, mass=1)
      +    x = orbit[1, :]
      +    y = orbit[2, :]
      +
      +    Ixx = x .^ 2
      +    Iyy = y .^ 2
      +    Ixy = x .* y
      +    trace = Ixx .+ Iyy
      +
      +    if component[1] == 1 && component[2] == 1
      +        tmp = Ixx .- trace ./ 3
      +    elseif component[1] == 2 && component[2] == 2
      +        tmp = Iyy .- trace ./ 3
      +    else
      +        tmp = Ixy
      +    end
      +
      +    return mass .* tmp
      +end
      +
      +function h_22_quadrupole_components(dt, orbit, component, mass=1)
      +    mtensor = orbit2tensor(orbit, component, mass)
      +    mtensor_ddot = d2_dt2(mtensor, dt)
      +    return 2 * mtensor_ddot
      +end
      +
      +function h_22_quadrupole(dt, orbit, mass=1)
      +    h11 = h_22_quadrupole_components(dt, orbit, (1, 1), mass)
      +    h22 = h_22_quadrupole_components(dt, orbit, (2, 2), mass)
      +    h12 = h_22_quadrupole_components(dt, orbit, (1, 2), mass)
      +    return h11, h12, h22
      +end
      +
      +function h_22_strain_one_body(dt::T, orbit) where {T}
      +    h11, h12, h22 = h_22_quadrupole(dt, orbit)
      +
      +    h₊ = h11 - h22
      +    hₓ = T(2) * h12
      +
      +    scaling_const = √(T(π) / 5)
      +    return scaling_const * h₊, -scaling_const * hₓ
      +end
      +
      +function h_22_quadrupole_two_body(dt, orbit1, mass1, orbit2, mass2)
      +    h11_1, h12_1, h22_1 = h_22_quadrupole(dt, orbit1, mass1)
      +    h11_2, h12_2, h22_2 = h_22_quadrupole(dt, orbit2, mass2)
      +    h11 = h11_1 + h11_2
      +    h12 = h12_1 + h12_2
      +    h22 = h22_1 + h22_2
      +    return h11, h12, h22
      +end
      +
      +function h_22_strain_two_body(dt::T, orbit1, mass1, orbit2, mass2) where {T}
      +    # compute (2,2) mode strain from orbits of BH 1 of mass1 and BH2 of mass 2
      +
      +    @assert abs(mass1 + mass2 - 1.0)<1e-12 "Masses do not sum to unity"
      +
      +    h11, h12, h22 = h_22_quadrupole_two_body(dt, orbit1, mass1, orbit2, mass2)
      +
      +    h₊ = h11 - h22
      +    hₓ = T(2) * h12
      +
      +    scaling_const = √(T(π) / 5)
      +    return scaling_const * h₊, -scaling_const * hₓ
      +end
      +
      +function compute_waveform(dt::T, soln, mass_ratio, model_params=nothing) where {T}
      +    @assert mass_ratio ≤ 1 "mass_ratio must be <= 1"
      +    @assert mass_ratio ≥ 0 "mass_ratio must be non-negative"
      +
      +    orbit = soln2orbit(soln, model_params)
      +    if mass_ratio > 0
      +        m₂ = inv(T(1) + mass_ratio)
      +        m₁ = mass_ratio * m₂
      +
      +        orbit₁, orbit₂ = one2two(orbit, m₁, m₂)
      +        waveform = h_22_strain_two_body(dt, orbit₁, m₁, orbit₂, m₂)
      +    else
      +        waveform = h_22_strain_one_body(dt, orbit)
      +    end
      +    return waveform
      +end
      compute_waveform (generic function with 2 methods)
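
The mass bookkeeping inside `compute_waveform` deserves a note: with the total mass normalized to one and mass ratio q = m₁/m₂ ≤ 1, the component masses follow from m₂ = 1/(1+q) and m₁ = q·m₂. A small Python sketch of just that arithmetic (the helper name is hypothetical):

```python
# Component masses from a mass ratio q <= 1, with unit total mass —
# the same arithmetic `compute_waveform` performs before calling `one2two`.
def masses_from_ratio(q):
    m2 = 1.0 / (1.0 + q)
    m1 = q * m2
    return m1, m2

m1, m2 = masses_from_ratio(0.5)
# m1 + m2 == 1 and m1/m2 == q, satisfying the unit-total-mass assertion
# in h_22_strain_two_body.
```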

      Simulating the True Model

      RelativisticOrbitModel defines the system of ODEs describing the motion of a point-like particle in a Schwarzschild background, using

      `,13)),s("mjx-container",g,[(n(),a("svg",C,A[13]||(A[13]=[i('',1)]))),A[14]||(A[14]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"1"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"χ")])],-1))]),s("mjx-container",f,[(n(),a("svg",c,A[15]||(A[15]=[i('',1)]))),A[16]||(A[16]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"2"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"ϕ")])],-1))]),s("p",null,[A[23]||(A[23]=e("where, ")),s("mjx-container",y,[(n(),a("svg",v,A[17]||(A[17]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D45D",d:"M23 287Q24 290 25 295T30 317T40 348T55 381T75 411T101 433T134 442Q209 442 230 378L240 387Q302 442 358 442Q423 442 460 395T497 281Q497 173 421 82T249 -10Q227 -10 210 -4Q199 1 187 11T168 28L161 36Q160 35 139 -51T118 -138Q118 -144 126 -145T163 -148H188Q194 -155 194 -157T191 -175Q188 -187 185 -190T172 -194Q170 -194 161 -194T127 -193T65 -192Q-5 -192 -24 
-194H-32Q-39 -187 -39 -183Q-37 -156 -26 -148H-6Q28 -147 33 -136Q36 -130 94 103T155 350Q156 355 156 364Q156 405 131 405Q109 405 94 377T71 316T59 280Q57 278 43 278H29Q23 284 23 287ZM178 102Q200 26 252 26Q282 26 310 49T356 107Q374 141 392 215T411 325V331Q411 405 350 405Q339 405 328 402T306 393T286 380T269 365T254 350T243 336T235 326L232 322Q232 321 229 308T218 264T204 212Q178 106 178 102Z",style:{"stroke-width":"3"}})])])],-1)]))),A[18]||(A[18]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"p")])],-1))]),A[24]||(A[24]=e(", ")),s("mjx-container",I,[(n(),a("svg",m,A[19]||(A[19]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D440",d:"M289 629Q289 635 232 637Q208 637 201 638T194 648Q194 649 196 659Q197 662 198 666T199 671T201 676T203 679T207 681T212 683T220 683T232 684Q238 684 262 684T307 683Q386 683 398 683T414 678Q415 674 451 396L487 117L510 154Q534 190 574 254T662 394Q837 673 839 675Q840 676 842 678T846 681L852 683H948Q965 683 988 683T1017 684Q1051 684 1051 673Q1051 668 1048 656T1045 643Q1041 637 1008 637Q968 636 957 634T939 623Q936 618 867 340T797 59Q797 55 798 54T805 50T822 48T855 46H886Q892 37 892 35Q892 19 885 5Q880 0 869 0Q864 0 828 1T736 2Q675 2 644 2T609 1Q592 1 592 11Q592 13 594 25Q598 41 602 43T625 46Q652 46 685 49Q699 52 704 61Q706 65 742 207T813 490T848 631L654 322Q458 10 453 5Q451 4 449 3Q444 0 433 0Q418 0 415 7Q413 11 374 317L335 624L267 354Q200 88 200 79Q206 46 272 46H282Q288 41 289 37T286 19Q282 3 278 1Q274 0 267 0Q265 0 255 
0T221 1T157 2Q127 2 95 1T58 0Q43 0 39 2T35 11Q35 13 38 25T43 40Q45 46 65 46Q135 46 154 86Q158 92 223 354T289 629Z",style:{"stroke-width":"3"}})])])],-1)]))),A[20]||(A[20]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"M")])],-1))]),A[25]||(A[25]=e(", and ")),s("mjx-container",u,[(n(),a("svg",F,A[21]||(A[21]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D452",d:"M39 168Q39 225 58 272T107 350T174 402T244 433T307 442H310Q355 442 388 420T421 355Q421 265 310 237Q261 224 176 223Q139 223 138 221Q138 219 132 186T125 128Q125 81 146 54T209 26T302 45T394 111Q403 121 406 121Q410 121 419 112T429 98T420 82T390 55T344 24T281 -1T205 -11Q126 -11 83 42T39 168ZM373 353Q367 405 305 405Q272 405 244 391T199 357T170 316T154 280T149 261Q149 260 169 260Q282 260 327 284T373 353Z",style:{"stroke-width":"3"}})])])],-1)]))),A[22]||(A[22]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"e")])],-1))]),A[26]||(A[26]=e(" are constants"))]),A[44]||(A[44]=i(`
      julia
      function RelativisticOrbitModel(u, (p, M, e), t)
      +    χ, ϕ = u
      +
      +    numer = (p - 2 - 2 * e * cos(χ)) * (1 + e * cos(χ))^2
      +    denom = sqrt((p - 2)^2 - 4 * e^2)
      +
      +    χ̇ = numer * sqrt(p - 6 - 2 * e * cos(χ)) / (M * (p^2) * denom)
      +    ϕ̇ = numer / (M * (p^(3 / 2)) * denom)
      +
      +    return [χ̇, ϕ̇]
      +end
      +
      +mass_ratio = 0.0         # test particle
      +u0 = Float64[π, 0.0]     # initial conditions
      +datasize = 250
      +tspan = (0.0f0, 6.0f4)   # timespan for GW waveform
      +tsteps = range(tspan[1], tspan[2]; length=datasize)  # time at each timestep
      +dt_data = tsteps[2] - tsteps[1]
      +dt = 100.0
      +const ode_model_params = [100.0, 1.0, 0.5]; # p, M, e

      Let's simulate the true model and plot the results using OrdinaryDiffEq.jl.

      julia
      prob = ODEProblem(RelativisticOrbitModel, u0, tspan, ode_model_params)
      +soln = Array(solve(prob, RK4(); saveat=tsteps, dt, adaptive=false))
      +waveform = first(compute_waveform(dt_data, soln, mass_ratio, ode_model_params))
      +
      +begin
      +    fig = Figure()
      +    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Time", ylabel="Waveform")
      +
      +    l = lines!(ax, tsteps, waveform; linewidth=2, alpha=0.75)
      +    s = scatter!(ax, tsteps, waveform; marker=:circle, markersize=12, alpha=0.5)
      +
      +    axislegend(ax, [[l, s]], ["Waveform Data"])
      +
      +    fig
      +end

      Defining a Neural Network Model

      Next, we define the neural network model, which takes one input (time) and has two outputs. We'll make a function ODE_model that takes the initial conditions, the neural network parameters, and a time as inputs and returns the derivatives.

      It is generally not recommended to use globals, but in case you do use them, make sure to mark them as const.

      We will deviate from the standard neural network initialization and use WeightInitializers.jl.
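
To see what `truncated_normal(; std=1e-4)` buys us, here is a hedged Python sketch of a truncated-normal draw (an illustration of the idea, not WeightInitializers.jl's implementation; the two-standard-deviation cutoff is an assumption):

```python
import random

# Truncated-normal sampling by rejection: resample until the draw falls
# within `bound` standard deviations, so the initial weights are tiny
# and bounded — useful when the network should start near the identity.
def truncated_normal(n, std=1e-4, bound=2.0):
    out = []
    while len(out) < n:
        x = random.gauss(0.0, std)
        if abs(x) <= bound * std:
            out.append(x)
    return out

w = truncated_normal(32)   # every entry lies in [-2e-4, 2e-4]
```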

      julia
      const nn = Chain(Base.Fix1(fast_activation, cos),
      +    Dense(1 => 32, cos; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32),
      +    Dense(32 => 32, cos; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32),
      +    Dense(32 => 2; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32))
      +ps, st = Lux.setup(Random.default_rng(), nn)
      ((layer_1 = NamedTuple(), layer_2 = (weight = Float32[-3.4446337f-5; 0.00010394415; 1.0407738f-5; -6.452793f-5; 4.626418f-5; -1.9925024f-5; -9.464696f-5; 0.00011828133; -0.0001320208; -2.9316854f-5; 0.0002301521; 0.00014383887; -0.000110991656; 0.0001545633; -8.774544f-5; 5.4517077f-5; 1.6845874f-5; -7.0865055f-5; -2.4955572f-5; -0.00011097498; 5.276532f-5; 5.01855f-5; -0.00018031502; 4.723036f-6; 0.00010012918; -2.184352f-5; 0.00017082684; 2.1053067f-5; 2.5283302f-5; 3.8338072f-5; -1.7683138f-5; -6.7184796f-5;;], bias = Float32[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]), layer_3 = (weight = Float32[-4.2053907f-5 4.3386925f-5 -3.131377f-5 -6.60632f-5 -3.9905488f-5 -8.437089f-6 7.8690624f-5 -0.00021180407 3.879939f-5 3.3297678f-5 -5.6983095f-5 6.549813f-5 -4.6091885f-5 0.00023636897 -9.8516684f-5 0.00014633292 5.217717f-5 -5.8151585f-5 2.6223264f-5 0.00020843332 0.00017904844 0.00014736797 7.529488f-5 -6.913635f-6 0.00014460477 -1.672923f-5 7.8827085f-5 6.582538f-5 0.00020819326 5.399849f-6 -0.00014811082 -0.00018036878; 9.777582f-5 -4.8093298f-5 1.485744f-5 -9.0358415f-5 1.924479f-5 3.6809688f-5 0.00014026738 -5.393586f-5 -1.7883678f-5 -0.00014276597 9.19728f-5 -0.00023419409 0.00011673116 -1.031767f-5 8.732497f-5 3.536482f-5 -0.0001028683 -3.4174761f-6 0.0002528165 -4.212384f-5 -2.2573255f-5 0.00022161243 7.783719f-5 9.593876f-6 -0.00014780935 -4.7141395f-5 -1.852038f-5 -1.7244594f-5 0.00010680739 -6.692015f-5 7.6503886f-5 6.023557f-5; -6.1795204f-6 -1.6390075f-5 -0.00013397221 0.00017982828 -2.661647f-5 0.00012682185 -3.583649f-5 -4.9254697f-5 -1.5946554f-5 0.00015366152 5.8358964f-5 -2.492977f-5 2.3328961f-5 0.00010656902 -6.600941f-5 5.650403f-5 -0.00017425556 8.164926f-5 -7.088771f-5 2.4597326f-5 -5.5395718f-5 0.00011095441 3.942657f-5 -1.2046675f-5 8.3262465f-5 -0.00027824836 5.448429f-5 -6.66772f-5 -4.5188946f-5 -4.6224945f-5 
-9.3539435f-5 -0.00015698108; -4.262534f-5 0.00010919985 7.178612f-5 4.447823f-5 0.000120952864 7.292193f-5 2.883672f-5 -3.7112368f-5 8.072319f-5 -9.920693f-5 0.00012860054 -2.1822912f-5 0.0001404181 -4.395641f-5 7.241428f-5 4.3160548f-5 -0.00019252152 0.00019974002 -7.287874f-5 2.611762f-5 -0.00010422342 0.00013939405 -5.4051612f-5 -3.291926f-5 -0.00024160316 1.90957f-5 0.0001583958 0.00018530748 -0.00017939544 -2.2220735f-5 -0.00012709879 -0.00016358988; 8.364984f-6 -6.221388f-5 -1.9876286f-5 -0.0001026845 -5.8145673f-5 -7.792514f-5 0.00012760228 -0.00016144144 0.000165219 -7.008369f-5 6.7995854f-5 7.501381f-5 -6.8149544f-5 6.0617996f-5 -0.00012804705 -2.4683293f-6 5.5424996f-5 3.3211414f-5 9.4243966f-5 0.00012305002 0.00013420079 -3.9417897f-5 8.604986f-5 0.00012585316 -0.0001318502 1.9441803f-5 -6.224895f-5 0.00011242075 -9.341854f-5 -3.9140647f-5 0.00016922234 -1.3684947f-5; 0.00012998351 0.00014052702 2.1539017f-5 -0.00019176696 -4.541933f-5 -9.383414f-5 9.159751f-5 -5.3487583f-9 2.4755604f-5 8.439207f-5 -0.0001435018 -1.0001287f-5 5.3363196f-6 -8.822155f-5 -0.00024739647 -1.9378853f-5 -0.00021555215 -7.5717224f-5 -0.000108057386 -6.8901034f-5 -3.297576f-5 0.00012801001 0.00010306175 -0.00011335675 7.893225f-5 3.4183755f-5 6.409453f-5 2.8854318f-5 7.307147f-5 -2.2687107f-6 4.64625f-5 0.000112042806; 3.4868877f-5 4.411143f-5 5.9462025f-5 3.8976737f-5 -8.417206f-6 -3.9721705f-5 -1.27392295f-5 0.00014519217 -0.00012774303 3.1241507f-5 6.61726f-5 3.7206657f-5 -0.00010260069 0.00017139669 -9.503973f-5 -0.00011928401 -0.00011322229 0.00012576614 -1.4027162f-5 3.3692395f-5 0.00024676963 -5.91134f-5 5.9847254f-5 -5.7607062f-6 0.00013581832 -0.00017533799 1.6051288f-5 -2.7718395f-5 6.2843574f-5 0.00015492218 6.3895364f-5 -1.9570874f-5; 4.300263f-5 -3.759355f-5 0.00018775418 8.594025f-6 -3.54082f-5 -0.00011055921 -0.00020306083 -0.00015270218 -0.00014485297 3.8106497f-5 -6.783617f-5 -9.451611f-5 3.7598531f-6 0.00013164424 1.3968381f-5 0.00013892472 0.00015254486 
0.00015852142 5.2956682f-5 -5.0795556f-5 6.262729f-5 -9.733962f-5 0.000107884654 6.811489f-5 2.2295862f-6 2.1777894f-6 5.725284f-5 5.3181444f-5 0.0001883264 0.00011238701 1.2314043f-5 -4.316324f-5; 1.20511f-5 -8.1092134f-5 4.6308156f-5 0.000119998535 2.4881458f-5 6.384344f-5 3.1215386f-5 -0.00010765867 1.1283094f-5 0.0001406692 0.00011210508 -0.00021599686 2.718951f-5 -5.5339577f-5 -9.8123564f-5 0.00023663025 -2.7639313f-5 -0.00010749312 -2.789301f-5 6.23893f-5 -0.00010606541 0.00012061757 -9.2707014f-5 -0.000105748826 -6.368685f-5 2.9856978f-5 8.330723f-5 4.174993f-6 8.206204f-5 -0.00014328213 0.00014588062 6.408158f-5; 0.000120730605 7.0018046f-5 0.00013248438 -7.533903f-7 -1.1465964f-5 -7.088935f-5 -1.1740324f-5 1.5477656f-6 -8.777102f-5 -0.000113697504 2.9643372f-5 4.524983f-5 -0.00016962066 -9.1323294f-5 3.8243717f-5 0.00012625684 8.521389f-5 -0.00010420004 4.760375f-5 -0.0001815075 1.0406746f-5 9.0288624f-5 -4.1407417f-5 -0.00010618409 -0.00013016822 9.881702f-5 2.1220458f-5 3.136785f-5 0.00016577568 -4.3345535f-6 -0.00012734522 7.391863f-5; 0.00013549681 4.478237f-5 3.712353f-6 -0.00011569401 -7.741665f-5 -2.2039007f-5 3.0153607f-5 0.00027885364 6.2282474f-5 1.6433369f-5 5.2078617f-6 -4.7000918f-5 -4.681582f-5 -6.339667f-5 -1.0516141f-5 -9.503237f-5 -4.2336775f-5 4.9513845f-5 0.00013973672 0.000126315 -4.4466865f-6 0.00011439561 -7.2713294f-5 -4.5347075f-5 1.0357716f-5 -9.1666f-5 -7.803404f-5 -9.524844f-5 9.770333f-5 -1.4143775f-5 -0.00014762214 -1.2561137f-5; -8.769051f-5 -1.9699017f-5 0.00013252866 4.734714f-5 4.9325303f-5 -0.0001566244 2.5678337f-5 3.595747f-5 -0.00013119343 -0.0001315169 -2.5327217f-5 0.00025388983 -0.00010227109 -1.8038461f-5 1.2503847f-7 2.5749885f-5 6.381259f-5 2.0629448f-6 -9.804977f-5 4.0389128f-5 -1.1470597f-5 6.467271f-6 8.575985f-6 -6.3617685f-5 5.7763566f-5 -3.609813f-5 0.00013409156 -5.992536f-5 -8.75678f-5 -0.00015813061 -1.4344347f-5 0.00010358078; -4.5211193f-5 -8.982089f-5 -9.538837f-5 -3.780156f-5 -5.5938493f-5 
1.5488362f-5 1.812237f-5 -4.2632495f-5 0.00012876061 0.00012243056 -0.00017869765 -9.8027405f-5 5.991346f-5 5.9273414f-5 2.3993003f-5 -7.3010124f-5 6.414427f-5 1.2411233f-6 -5.5126857f-5 -1.1964921f-6 4.2523938f-5 5.6468507f-5 -8.10907f-5 -2.4996172f-5 -3.19213f-5 0.00012697169 -0.00020041059 -2.0566074f-5 6.253308f-7 7.127307f-5 6.3227635f-5 1.4143306f-5; 5.7190555f-5 5.2213036f-5 0.000110101304 -3.0514471f-5 2.1582293f-5 -4.849053f-5 6.5118846f-5 9.516466f-5 -3.110828f-5 6.826293f-5 -5.3967105f-5 0.00012646428 9.023525f-5 4.9806326f-6 -9.399997f-5 -4.8966704f-5 -1.910033f-5 -6.380421f-5 6.0419025f-5 3.608888f-5 -2.6495205f-5 1.6526057f-5 -2.20853f-5 -7.925446f-5 -0.00016420889 5.1522966f-6 5.9229023f-5 -8.79329f-5 7.976851f-5 -5.341451f-6 4.5772078f-5 0.00010451297; 8.830987f-5 0.00013208737 -6.5613574f-5 7.393352f-5 8.9889574f-5 -0.00011885097 1.1837484f-5 -0.00010001018 0.00013785157 -8.159959f-5 -0.00011262807 0.0001376406 2.6677524f-5 0.00014831097 3.827098f-5 -9.3074916f-5 -2.5761932f-5 -0.00016038121 4.318028f-5 -1.9102656f-5 -0.00014921126 0.000115171286 -8.063735f-5 1.1720706f-5 -0.00022359965 -6.249485f-5 4.6211785f-6 6.785435f-5 4.7423113f-5 -0.00013142708 1.7368713f-5 -6.1622326f-5; 0.00010093819 -2.0439998f-5 -3.0679537f-5 -6.0320803f-5 -5.984598f-5 -3.1480556f-6 -2.1526082f-6 3.2503962f-5 -1.4426741f-5 -4.334f-5 4.984604f-5 0.00015791321 0.00024633258 0.0001353566 9.062983f-5 -5.7063633f-5 5.6170902f-5 0.00012129234 -0.0001340508 7.869632f-5 -1.0902069f-5 -0.00018631369 -0.00032723838 -0.00015855787 -4.975448f-5 -4.43156f-5 -2.5760326f-5 -8.0857084f-5 8.673312f-5 -0.00025890366 -0.00015582202 -3.5932255f-5; 8.574478f-5 7.658151f-5 7.7282035f-5 0.000102018974 3.833785f-5 -1.5135049f-5 -2.4218472f-5 -4.685639f-5 -8.222127f-5 4.6414843f-7 7.137507f-5 6.276277f-5 -0.0001706207 -0.00023203099 4.7585643f-5 4.725823f-5 4.0834977f-5 6.72172f-5 -4.836937f-5 -0.00011875496 -9.7569085f-5 -5.2672967f-5 1.989273f-5 -0.00010841036 -5.591725f-5 1.5667161f-5 
-0.00013297427 -2.483655f-5 1.0715118f-5 -8.723244f-5 0.00012441004 0.00011524323; 2.2642746f-5 -0.00015272667 -1.4285558f-6 0.0001678838 -1.94947f-5 -0.00019399663 -0.00021315523 -0.00020095686 -4.553039f-5 -7.425489f-5 -8.793752f-5 0.00011267003 1.7942952f-5 -0.00023747998 4.44874f-5 -4.9098144f-5 -3.0323032f-5 5.286811f-6 -0.00013527626 -0.00015747514 -9.665816f-5 4.0802694f-5 -7.731109f-6 -2.1128842f-5 1.92424f-5 3.2279077f-5 -2.6483664f-5 -1.851474f-5 3.1695192f-5 0.00012824128 -0.00010954416 -6.127681f-5; 0.00016396995 -1.7849494f-5 -0.00012519558 -0.00011603789 5.5768614f-5 1.9930843f-5 -3.0362457f-6 4.6149562f-5 -0.0001846106 5.4274f-5 5.7559217f-5 -2.5676858f-5 -2.5646903f-5 -0.00016830383 0.00013469068 4.334508f-5 -0.00019906907 5.6317964f-5 -0.00019370418 -9.5345895f-6 -6.809125f-5 -1.0433996f-5 1.514967f-6 6.4279543f-6 1.0697642f-5 7.3860414f-5 -0.00020137579 5.3130923f-5 -0.0001328794 0.0001391857 -2.1289025f-5 -5.7824684f-5; 1.0808964f-5 0.00010476551 -0.000117880474 0.000117220006 1.9460107f-5 0.00012584326 0.00018816844 7.8853016f-5 -8.625334f-5 3.9968258f-5 -5.603305f-6 -9.644718f-5 6.119011f-5 -0.00010118461 -0.00020499196 -0.00010931679 7.1884504f-5 9.989259f-5 0.00012033464 -0.00013321554 -7.3677445f-5 0.0004115561 -5.7697f-6 1.3060458f-5 1.9810932f-5 4.3456563f-5 -3.0828727f-5 1.5608524f-7 3.5201556f-5 -9.530237f-5 7.480124f-5 3.8174334f-5; 1.8344706f-5 -2.252781f-5 0.00012561919 -8.813143f-5 -5.2716914f-5 -7.3666626f-5 -1.6210448f-5 -0.00012577978 5.61369f-5 5.7660477f-6 9.220057f-5 0.00012333161 -0.00012610656 -6.308743f-5 -0.00014293577 -9.992773f-6 -0.00013108953 -5.112995f-5 -8.967487f-5 -2.3644614f-5 -0.00011522683 0.00010042471 -4.9962324f-5 -8.47632f-5 -0.000110084686 -0.0002513219 -0.00018713776 0.0001360887 -8.912003f-5 -6.544659f-5 4.6231955f-5 -4.6639077f-5; -2.197381f-6 -0.00013310836 -4.532192f-5 2.9993416f-5 1.8022136f-5 0.00018153591 -9.603031f-5 -5.413648f-5 4.2869993f-5 -1.7720167f-5 -0.00014343837 0.00015344907 -0.00018855205 
-2.7983097f-5 5.3430977f-6 8.0071106f-5 -0.0001849432 2.559237f-5 -0.00013660316 -0.00015405445 1.15452995f-5 0.0001793029 0.00028733205 1.5900598f-5 0.00010199915 -8.186377f-5 8.4180174f-5 0.00015379698 -3.5117813f-5 2.8186989f-5 -4.3052518f-5 1.9419773f-5; 0.00013047179 4.817894f-5 4.0732393f-5 -0.00014995539 0.00014710434 -3.67937f-5 -1.0262098f-5 0.00010308393 1.1467258f-5 -3.3329135f-5 8.338237f-6 -3.0274683f-5 -4.1485044f-5 9.424386f-6 -3.7485945f-5 -0.00012771913 0.00024751297 -5.3930344f-5 3.390676f-5 -0.00015537383 7.248057f-5 -3.891489f-5 1.9789954f-5 7.569962f-5 2.4107821f-5 -4.2702188f-5 -4.8002308f-5 7.070319f-5 -0.0001293727 -4.302074f-5 8.705469f-5 8.415172f-5; -0.00016218047 -8.946247f-5 -2.5866031f-5 -1.4645909f-5 8.638298f-5 1.6371056f-5 8.870913f-6 -0.00012135565 -1.7863642f-5 -5.6675755f-5 3.62417f-5 0.00015631424 8.173435f-5 -7.158864f-5 0.00013528141 0.00010242883 -0.0001272267 0.000109258086 6.720632f-6 0.00011892835 3.4085126f-5 6.105397f-5 -0.00011939068 0.00012109862 9.136028f-5 -3.0875275f-5 2.0644777f-5 -3.479381f-5 -5.334923f-5 -0.00019021974 -1.6745504f-5 1.3943094f-5; 0.00013323785 -0.00017627855 -9.576856f-5 -2.5931658f-5 -6.515243f-5 7.440905f-5 -3.0850835f-5 -5.1631956f-5 0.0002543031 0.00012153343 4.9543978f-5 -0.00010156559 3.5479672f-5 4.177699f-6 -0.00020312569 0.00017418849 -0.00013027941 0.00011281384 0.00019560126 4.458472f-5 9.118938f-5 -8.003967f-5 7.2361865f-5 3.296879f-5 -5.325159f-5 -2.0788357f-5 -7.615318f-5 -0.00010978453 -5.7122175f-5 -6.846378f-6 1.4307137f-5 5.8990106f-5; -8.06886f-5 4.4959514f-5 -0.00021278443 -3.2261687f-5 9.764534f-5 -8.423573f-5 0.0001125497 -1.4034379f-6 3.0579333f-5 -3.161048f-5 5.7838985f-5 0.0001788286 0.00011405091 -0.000102145015 4.392712f-5 -0.00014379322 5.8394504f-5 -0.00016583968 1.0214576f-5 1.25842525f-5 -6.653386f-5 0.00017337779 -0.00017892485 -6.4587985f-5 1.6112404f-5 0.00017418951 2.5661135f-5 9.1543625f-5 0.00010427124 -4.1046784f-5 2.4212042f-5 2.0537844f-5; 0.00013231862 
-0.00016531718 -3.903797f-5 0.0001162607 -0.00010705715 -7.781918f-5 -0.00018513748 4.8409518f-5 0.00011539138 9.923989f-6 -6.161437f-5 5.9833288f-5 1.0861265f-5 -7.660697f-5 0.00016022213 0.00017887045 -9.2765746f-5 -0.00014601879 0.00012357271 -0.00015698257 -2.4156461f-5 4.0419607f-5 1.0189255f-5 -6.369725f-5 -0.000120019686 6.6423318f-6 5.8290443f-5 2.9998882f-5 -2.6212781f-5 -3.004527f-6 -6.9801877f-7 3.105117f-5; 0.00020986063 0.00012419584 -0.00020825498 -0.00010945621 -9.314001f-5 8.488033f-5 0.00014737055 9.028464f-5 -4.44054f-5 3.640222f-5 5.627555f-5 -9.140203f-5 -4.810568f-6 -5.4950953f-5 -0.00018010757 8.046163f-5 1.3352965f-5 -0.0001818743 9.104892f-5 5.82443f-6 9.931345f-5 2.152354f-7 6.684913f-5 -8.968921f-5 -2.0080251f-5 -0.00012493068 0.00013471753 5.068596f-5 0.00012188146 -0.000106949046 0.00016678589 3.056754f-6; -3.5848538f-5 -0.000112701135 -0.00012224929 0.00015687355 -8.6629385f-5 -0.0001651372 9.708392f-5 4.934294f-5 -8.735572f-5 -3.124433f-5 -7.913138f-5 0.00010581072 -0.00018598446 -7.34899f-5 -7.338275f-5 -2.5021853f-5 0.00014257689 -5.907165f-5 1.5547153f-5 -7.50885f-5 -5.0319333f-5 -0.00019138037 -0.00016423872 6.0671366f-5 2.242492f-5 0.000106785905 -2.6482256f-5 0.00018647601 -0.0001572809 8.017354f-5 -9.49135f-5 0.00013588318; 4.2787244f-5 -9.1943675f-6 6.915148f-5 -4.7917685f-5 -3.960024f-5 -4.1653864f-5 -1.4677493f-6 0.00011214721 2.8500568f-5 3.611232f-5 -4.3273107f-5 -5.8955353f-5 -1.2186722f-5 0.00026288297 3.117711f-5 -2.2876113f-5 6.70718f-5 -8.035303f-6 -3.3185188f-5 3.947568f-5 -4.163493f-5 -0.00013158418 9.082587f-5 4.1714437f-5 -4.3842498f-5 0.00021099804 1.51678905f-5 -1.5583524f-5 -4.9487164f-5 -3.5895315f-5 0.0001567502 -6.5042135f-5; 0.00011516511 0.00018854189 0.0001457063 1.7245823f-6 -3.5729056f-5 -5.6654986f-5 7.7222874f-5 -0.00013445372 -5.4964265f-5 3.152649f-5 3.8416994f-5 -5.3257634f-5 -0.00010185206 -8.960718f-5 7.329333f-5 -7.270124f-5 5.957602f-6 5.323332f-5 0.00012447026 8.431008f-5 -5.629363f-5 
-4.5606484f-5 8.826203f-5 2.4222798f-5 -0.00018978622 3.625876f-5 6.363001f-5 9.625782f-5 -3.218031f-5 -1.1284627f-6 6.951898f-5 -4.9665214f-5; 1.531859f-5 2.3127225f-6 -5.1700008f-5 0.00015922752 -0.00016962092 -4.6529734f-5 -4.5478373f-5 0.00012428919 2.8450175f-5 0.00018228179 6.1834864f-5 -2.695066f-5 -0.00020513852 8.121125f-5 0.00013299647 0.00011844542 -6.963209f-5 -9.7503f-5 0.00014458733 0.0001557746 -5.863989f-5 -5.3325788f-5 2.6907826f-5 5.709038f-5 8.540226f-5 7.1194113f-6 -0.00018346768 -1.1921413f-5 6.308156f-5 -5.4122836f-7 -1.6140875f-5 -0.0001383969], bias = Float32[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]), layer_4 = (weight = Float32[5.1893516f-5 3.7079637f-5 0.00017634654 3.0440993f-5 2.5401716f-6 4.476671f-5 2.6396589f-5 -8.747219f-6 5.344276f-5 0.00016784511 -0.00010095129 -3.5999135f-5 8.177836f-5 0.00011939605 0.00014665237 0.00013870627 1.8158957f-5 3.6122958f-5 6.211551f-5 -0.00024395707 -0.00012368748 0.00015478948 9.400265f-5 -0.00014458633 8.1357575f-5 -3.682935f-5 5.6512847f-5 0.0001463257 -4.6914683f-5 1.6573602f-5 -0.000104994106 8.469015f-5; -4.6238537f-7 -0.00015144216 -1.2149378f-5 -0.00022305499 1.599908f-5 5.0801737f-5 -0.00017307313 -0.00010968822 -0.00021031218 -6.3836946f-5 0.00012531393 -6.862671f-5 -4.3170407f-5 1.3885872f-5 2.8827359f-5 0.00010167457 -0.00017535916 0.00022037337 0.00020127723 -0.00011085212 8.650909f-5 0.00017946545 -3.855777f-5 7.952302f-5 -1.6478636f-5 -8.0736776f-5 -0.000106744425 6.852565f-5 -0.00012882442 -1.2838407f-5 0.00014893794 -1.1102453f-5], bias = Float32[0.0, 0.0])), (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple(), layer_4 = NamedTuple()))

      Similar to most DL frameworks, Lux defaults to using Float32; however, in this case we need Float64.

      julia
      const params = ComponentArray(ps |> f64)
      +
      +const nn_model = StatefulLuxLayer{true}(nn, nothing, st)
      StatefulLuxLayer{true}(
      +    Chain(
      +        layer_1 = WrappedFunction(Base.Fix1{typeof(LuxLib.API.fast_activation), typeof(cos)}(LuxLib.API.fast_activation, cos)),
      +        layer_2 = Dense(1 => 32, cos),  # 64 parameters
      +        layer_3 = Dense(32 => 32, cos),  # 1_056 parameters
      +        layer_4 = Dense(32 => 2),       # 66 parameters
      +    ),
      +)         # Total: 1_186 parameters,
      +          #        plus 0 states.

      Now we define a system of ODEs which describes the motion of a point-like particle with Newtonian physics, and uses

      `,14)),s("mjx-container",V,[(n(),a("svg",B,A[27]||(A[27]=[i('',1)]))),A[28]||(A[28]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"1"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"χ")])],-1))]),s("mjx-container",T,[(n(),a("svg",q,A[29]||(A[29]=[i('',1)]))),A[30]||(A[30]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"2"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"ϕ")])],-1))]),s("p",null,[A[37]||(A[37]=e("where, ")),s("mjx-container",b,[(n(),a("svg",D,A[31]||(A[31]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D45D",d:"M23 287Q24 290 25 295T30 317T40 348T55 381T75 411T101 433T134 442Q209 442 230 378L240 387Q302 442 358 442Q423 442 460 395T497 281Q497 173 421 82T249 -10Q227 -10 210 -4Q199 1 187 11T168 28L161 36Q160 35 139 -51T118 -138Q118 -144 126 -145T163 -148H188Q194 -155 194 -157T191 -175Q188 -187 185 -190T172 -194Q170 -194 161 -194T127 -193T65 -192Q-5 -192 -24 
-194H-32Q-39 -187 -39 -183Q-37 -156 -26 -148H-6Q28 -147 33 -136Q36 -130 94 103T155 350Q156 355 156 364Q156 405 131 405Q109 405 94 377T71 316T59 280Q57 278 43 278H29Q23 284 23 287ZM178 102Q200 26 252 26Q282 26 310 49T356 107Q374 141 392 215T411 325V331Q411 405 350 405Q339 405 328 402T306 393T286 380T269 365T254 350T243 336T235 326L232 322Q232 321 229 308T218 264T204 212Q178 106 178 102Z",style:{"stroke-width":"3"}})])])],-1)]))),A[32]||(A[32]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"p")])],-1))]),A[38]||(A[38]=e(", ")),s("mjx-container",R,[(n(),a("svg",K,A[33]||(A[33]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D440",d:"M289 629Q289 635 232 637Q208 637 201 638T194 648Q194 649 196 659Q197 662 198 666T199 671T201 676T203 679T207 681T212 683T220 683T232 684Q238 684 262 684T307 683Q386 683 398 683T414 678Q415 674 451 396L487 117L510 154Q534 190 574 254T662 394Q837 673 839 675Q840 676 842 678T846 681L852 683H948Q965 683 988 683T1017 684Q1051 684 1051 673Q1051 668 1048 656T1045 643Q1041 637 1008 637Q968 636 957 634T939 623Q936 618 867 340T797 59Q797 55 798 54T805 50T822 48T855 46H886Q892 37 892 35Q892 19 885 5Q880 0 869 0Q864 0 828 1T736 2Q675 2 644 2T609 1Q592 1 592 11Q592 13 594 25Q598 41 602 43T625 46Q652 46 685 49Q699 52 704 61Q706 65 742 207T813 490T848 631L654 322Q458 10 453 5Q451 4 449 3Q444 0 433 0Q418 0 415 7Q413 11 374 317L335 624L267 354Q200 88 200 79Q206 46 272 46H282Q288 41 289 37T286 19Q282 3 278 1Q274 0 267 0Q265 0 255 
0T221 1T157 2Q127 2 95 1T58 0Q43 0 39 2T35 11Q35 13 38 25T43 40Q45 46 65 46Q135 46 154 86Q158 92 223 354T289 629Z",style:{"stroke-width":"3"}})])])],-1)]))),A[34]||(A[34]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"M")])],-1))]),A[39]||(A[39]=e(", and ")),s("mjx-container",z,[(n(),a("svg",P,A[35]||(A[35]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D452",d:"M39 168Q39 225 58 272T107 350T174 402T244 433T307 442H310Q355 442 388 420T421 355Q421 265 310 237Q261 224 176 223Q139 223 138 221Q138 219 132 186T125 128Q125 81 146 54T209 26T302 45T394 111Q403 121 406 121Q410 121 419 112T429 98T420 82T390 55T344 24T281 -1T205 -11Q126 -11 83 42T39 168ZM373 353Q367 405 305 405Q272 405 244 391T199 357T170 316T154 280T149 261Q149 260 169 260Q282 260 327 284T373 353Z",style:{"stroke-width":"3"}})])])],-1)]))),A[36]||(A[36]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"e")])],-1))]),A[40]||(A[40]=e(" are constants"))]),A[45]||(A[45]=i(`
      julia
      function ODE_model(u, nn_params, t)
      +    χ, ϕ = u
      +    p, M, e = ode_model_params
      +
      +    # In this example we know that \`st\` is an empty NamedTuple, hence we can safely ignore
      +    # it; however, in general, we should use \`st\` to store the state of the neural network.
      +    y = 1 .+ nn_model([first(u)], nn_params)
      +
      +    numer = (1 + e * cos(χ))^2
      +    denom = M * (p^(3 / 2))
      +
      +    χ̇ = (numer / denom) * y[1]
      +    ϕ̇ = (numer / denom) * y[2]
      +
      +    return [χ̇, ϕ̇]
      +end
      ODE_model (generic function with 1 method)

      Let us now simulate the neural network model and plot the results. We'll use the untrained neural network parameters to simulate the model.

      julia
      prob_nn = ODEProblem(ODE_model, u0, tspan, params)
      +soln_nn = Array(solve(prob_nn, RK4(); u0, p=params, saveat=tsteps, dt, adaptive=false))
      +waveform_nn = first(compute_waveform(dt_data, soln_nn, mass_ratio, ode_model_params))
      +
      +begin
      +    fig = Figure()
      +    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Time", ylabel="Waveform")
      +
      +    l1 = lines!(ax, tsteps, waveform; linewidth=2, alpha=0.75)
      +    s1 = scatter!(
      +        ax, tsteps, waveform; marker=:circle, markersize=12, alpha=0.5, strokewidth=2)
      +
      +    l2 = lines!(ax, tsteps, waveform_nn; linewidth=2, alpha=0.75)
      +    s2 = scatter!(
      +        ax, tsteps, waveform_nn; marker=:circle, markersize=12, alpha=0.5, strokewidth=2)
      +
      +    axislegend(ax, [[l1, s1], [l2, s2]],
      +        ["Waveform Data", "Waveform Neural Net (Untrained)"]; position=:lb)
      +
      +    fig
      +end

      Setting Up for Training the Neural Network

      Next, we define the objective (loss) function to be minimized when training the neural differential equations.

      julia
      const mseloss = MSELoss()
      +
      +function loss(θ)
      +    pred = Array(solve(prob_nn, RK4(); u0, p=θ, saveat=tsteps, dt, adaptive=false))
      +    pred_waveform = first(compute_waveform(dt_data, pred, mass_ratio, ode_model_params))
      +    return mseloss(pred_waveform, waveform)
      +end
      loss (generic function with 1 method)

      Warm up the loss function (the first call compiles it)

      julia
      loss(params)
      0.0007518903236338871

      Now let us define a callback function to store the loss over time

      julia
      const losses = Float64[]
      +
      +function callback(θ, l)
      +    push!(losses, l)
      +    @printf "Training \\t Iteration: %5d \\t Loss: %.10f\\n" θ.iter l
      +    return false
      +end
      callback (generic function with 1 method)
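The callback returns false so the optimizer keeps iterating; returning true would halt training early. The same pattern can be sketched standalone with a hypothetical driver loop (no Optimization.jl needed):

```julia
using Printf

losses_demo = Float64[]

function demo_callback(iter, l)
    push!(losses_demo, l)
    @printf "Training \t Iteration: %5d \t Loss: %.10f\n" iter l
    return false  # returning true would stop the optimization early
end

# A driver (like Optimization.solve) invokes the callback once per iteration
# and stops as soon as it returns true.
for (i, l) in enumerate([0.5, 0.25, 0.1])
    demo_callback(i, l) && break
end

@assert losses_demo == [0.5, 0.25, 0.1]
```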

      Training the Neural Network

      Training uses the BFGS optimizer. This seems to give good results because the Newtonian model provides a very good initial guess.

      julia
      adtype = Optimization.AutoZygote()
      +optf = Optimization.OptimizationFunction((x, p) -> loss(x), adtype)
      +optprob = Optimization.OptimizationProblem(optf, params)
      +res = Optimization.solve(
      +    optprob, BFGS(; initial_stepnorm=0.01, linesearch=LineSearches.BackTracking());
      +    callback, maxiters=1000)
      retcode: Success
      +u: ComponentVector{Float64}(layer_1 = Float64[], layer_2 = (weight = [-3.444633694017027e-5; 0.0001039441485771382; 1.0407738045610902e-5; -6.452792877103415e-5; 4.626418012775144e-5; -1.9925024389491433e-5; -9.46469590415659e-5; 0.00011828132846848188; -0.0001320207957178097; -2.9316854124754463e-5; 0.0002301521017214964; 0.00014383887173639653; -0.00011099165567425477; 0.0001545632985649061; -8.774543675833383e-5; 5.451707693279354e-5; 1.684587368797963e-5; -7.086505502225862e-5; -2.4955572371183748e-5; -0.00011097497917925826; 5.276532101549441e-5; 5.018549927622104e-5; -0.00018031502258926727; 4.723035999626726e-6; 0.00010012918210118536; -2.184351978938409e-5; 0.00017082683916638447; 2.1053067030159004e-5; 2.5283301511037967e-5; 3.833807204501072e-5; -1.7683138139520778e-5; -6.71847956255333e-5;;], bias = [6.9355129382313795e-18, 1.688039926016194e-16, 4.315277203181784e-18, 5.559977517444896e-17, 1.52001508315218e-17, -3.5540923167334566e-17, -1.6651759850291906e-16, 1.1573865574959218e-16, -2.6674011431911173e-17, -2.0666490256237582e-17, 1.3796747490796324e-16, -3.693878051564067e-16, -5.380525838099007e-17, 4.959567142444426e-16, 7.516426352893122e-17, 3.5831402597941594e-17, 2.1393543540180377e-17, -4.532685340028104e-17, -8.180383189070195e-17, -2.1535290509728552e-16, 1.3723640324634174e-16, 5.611635649463381e-17, -4.301449155436491e-16, 6.064214923865348e-18, 4.886005136988138e-17, -1.8002180440631902e-17, 3.57115207947323e-16, -8.016960934672413e-18, 5.0422985921064145e-17, -3.1155996263835813e-18, -2.8990184509333085e-17, 3.127250447859456e-17]), layer_3 = (weight = [-4.205001286278172e-5 4.339081889086414e-5 -3.1309877241035206e-5 -6.605930510955933e-5 -3.990159356431851e-5 -8.433194207789489e-6 7.869451829649755e-5 -0.00021180017764961743 3.880328395936734e-5 3.330157224740251e-5 -5.697920104542528e-5 6.550202116087691e-5 -4.608799107770618e-5 0.00023637286531921677 -9.851279000101203e-5 0.0001463368188608353 5.218106335564008e-5 
-5.8147690734371506e-5 2.6227158659132254e-5 0.0002084372189612855 0.00017905233281384326 0.00014737186748719914 7.529877794622476e-5 -6.909740560110761e-6 0.00014460866251213585 -1.672533479973743e-5 7.883097888152574e-5 6.582927190651587e-5 0.000208197156015778 5.403743588650047e-6 -0.00014810692490824082 -0.0001803648829907284; 9.777762491315963e-5 -4.8091490017820345e-5 1.4859247611038911e-5 -9.035660738996977e-5 1.9246597301697616e-5 3.681149567304587e-5 0.00014026918827615586 -5.3934051240202256e-5 -1.7881870436823375e-5 -0.00014276416593171276 9.197460488240333e-5 -0.00023419227770470874 0.00011673296424153897 -1.031586222932255e-5 8.732677409019296e-5 3.536662873406219e-5 -0.00010286648873412568 -3.415668490254715e-6 0.0002528183136094774 -4.2122032456671046e-5 -2.2571447035863186e-5 0.00022161423433166277 7.78389981208593e-5 9.595683448733404e-6 -0.00014780753960773508 -4.7139587397087056e-5 -1.851857311674287e-5 -1.7242786699771552e-5 0.00010680919834006971 -6.691833957271307e-5 7.65056934399944e-5 6.02373761724604e-5; -6.180848922318268e-6 -1.6391403514363544e-5 -0.00013397353610309097 0.00017982694702403543 -2.661779832089018e-5 0.0001268205189629806 -3.5837817231014925e-5 -4.92560254857468e-5 -1.5947882966072905e-5 0.00015366018782550243 5.835763589894574e-5 -2.4931098559687974e-5 2.3327632552891016e-5 0.00010656769449305567 -6.601073688181039e-5 5.650270326476416e-5 -0.00017425688998593265 8.164793256618043e-5 -7.088904090743741e-5 2.459599749307891e-5 -5.539704648946548e-5 0.0001109530814941399 3.9425241302887315e-5 -1.2048003336182092e-5 8.326113606052651e-5 -0.0002782496928917827 5.448296291469673e-5 -6.667853154760091e-5 -4.5190274940628986e-5 -4.622627307946056e-5 -9.354076315468418e-5 -0.00015698241328174312; -4.262435397975399e-5 0.00010920083438833616 7.178710363661654e-5 4.4479215987684456e-5 0.00012095384879359694 7.292291699997855e-5 2.8837704278719262e-5 -3.7111382723364154e-5 8.072417692602771e-5 -9.92059475058155e-5 0.0001286015278063151 
-2.1821927318511916e-5 0.0001404190799556239 -4.3955424941657595e-5 7.241526616127701e-5 4.31615327877986e-5 -0.00019252053318921704 0.00019974101012196015 -7.287775663569804e-5 2.6118604339732293e-5 -0.00010422243651844865 0.00013939503257717642 -5.40506269564419e-5 -3.291827381615741e-5 -0.0002416021787459457 1.9096684966662476e-5 0.00015839677796321706 0.00018530846425421303 -0.00017939445827067134 -2.221974939600825e-5 -0.0001270977998748817 -0.00016358889554538193; 8.366520085372586e-6 -6.221234696760506e-5 -1.987475014367264e-5 -0.00010268296217155972 -5.81441371488035e-5 -7.792360600625302e-5 0.000127603817525491 -0.00016143989967724203 0.00016522053294273028 -7.008215369033789e-5 6.799739014046271e-5 7.501534957362108e-5 -6.814800770563814e-5 6.0619532739907314e-5 -0.00012804550977499733 -2.466793041196418e-6 5.542653263354542e-5 3.321295036392848e-5 9.424550182110481e-5 0.00012305155643651454 0.00013420232527813612 -3.9416360670759e-5 8.60513983915291e-5 0.00012585469186696105 -0.00013184866737184743 1.9443338822283293e-5 -6.22474170833147e-5 0.0001124222845809427 -9.341700104606488e-5 -3.9139110305870014e-5 0.00016922388123738445 -1.3683410436090745e-5; 0.0001299821471066502 0.0001405256499738763 2.1537651397968044e-5 -0.00019176832379611924 -4.5420694598712656e-5 -9.38355059160567e-5 9.15961446908625e-5 -6.714685618071617e-9 2.4754237662873872e-5 8.439070565787211e-5 -0.00014350317165092021 -1.0002652609671686e-5 5.334953718427553e-6 -8.822291952395642e-5 -0.00024739783644049055 -1.938021873952194e-5 -0.000215553516548378 -7.571858973546631e-5 -0.00010805875158707556 -6.890239988291987e-5 -3.297712633224983e-5 0.00012800864546721013 0.00010306038361438175 -0.00011335811527983256 7.89308852718539e-5 3.418238897852649e-5 6.409316315245289e-5 2.8852951933960912e-5 7.307010141362492e-5 -2.270076586635192e-6 4.6461134977154854e-5 0.00011204144006756364; 3.487177087047751e-5 4.4114325240316644e-5 5.946491869802371e-5 3.8979631020290376e-5 -8.414312512077747e-6 
-3.971881091890168e-5 -1.2736335926799843e-5 0.0001451950643542815 -0.00012774013380002224 3.124440040958759e-5 6.61754943877683e-5 3.720955060381146e-5 -0.0001025977930537748 0.00017139958577282128 -9.503683617983947e-5 -0.00011928112002967413 -0.00011321939605001596 0.0001257690287759273 -1.4024268645488513e-5 3.369528854202042e-5 0.00024677252581862267 -5.911050761530619e-5 5.985014791185486e-5 -5.7578126386655354e-6 0.00013582121532928907 -0.00017533509748934086 1.6054181790606307e-5 -2.7715501752726425e-5 6.284646729293942e-5 0.00015492507150450375 6.389825789777324e-5 -1.9567980822806048e-5; 4.30052654065131e-5 -3.7590913625494985e-5 0.00018775681507582261 8.596659763544528e-6 -3.54055652041523e-5 -0.00011055657399078439 -0.0002030581951020879 -0.0001526995462245013 -0.0001448503304618146 3.810913164675587e-5 -6.783353331229137e-5 -9.451347307728101e-5 3.762488286196048e-6 0.0001316468709501647 1.3971015841932906e-5 0.00013892735421000216 0.00015254749575438378 0.00015852405465016132 5.2959317481558755e-5 -5.07929204377037e-5 6.262992371380514e-5 -9.733698846498297e-5 0.00010788728956870217 6.81175237060008e-5 2.232221386335118e-6 2.1804245262090085e-6 5.725547577642607e-5 5.318407908821865e-5 0.0001883290400383472 0.00011238964486442804 1.2316677721223581e-5 -4.316060460946486e-5; 1.2051990163880824e-5 -8.109124398922551e-5 4.6309045795196e-5 0.00011999942428067517 2.4882347427618372e-5 6.384433283804282e-5 3.1216275895555067e-5 -0.00010765778067886924 1.12839837338686e-5 0.00014067009244447858 0.00011210597069567706 -0.00021599596737085198 2.7190399417145638e-5 -5.5338686867779225e-5 -9.812267460526811e-5 0.00023663114036301372 -2.7638423195613334e-5 -0.00010749223081527115 -2.789211946671537e-5 6.239018905309913e-5 -0.00010606452058434258 0.00012061846275449036 -9.270612433918453e-5 -0.00010574793639259105 -6.36859601190498e-5 2.9857867341915306e-5 8.330812336045713e-5 4.175882916596657e-6 8.206293052628927e-5 -0.00014328124023174605 0.00014588151163755285 
6.408246765479624e-5; 0.00012073029734927432 7.001773861737288e-5 0.00013248406845412403 -7.536979088568644e-7 -1.1466271862099168e-5 -7.088965707191072e-5 -1.174063185282471e-5 1.5474579507849773e-6 -8.77713266523798e-5 -0.00011369781190261977 2.964306419051972e-5 4.524952397367383e-5 -0.00016962096764438475 -9.132360195473337e-5 3.824340979952633e-5 0.00012625653267639905 8.521358337055153e-5 -0.00010420034435775822 4.760344178101243e-5 -0.0001815078014620828 1.040643815975388e-5 9.02883164238858e-5 -4.140772420179515e-5 -0.00010618439613573993 -0.00013016852812108295 9.881671372602889e-5 2.1220150083885688e-5 3.136754254263088e-5 0.0001657753725888879 -4.334861150363281e-6 -0.0001273455293263569 7.391832128667507e-5; 0.00013549696292397282 4.4782520729532884e-5 3.712504179240339e-6 -0.00011569386127735784 -7.741649618470397e-5 -2.20388553650612e-5 3.015375865583452e-5 0.0002788537879161968 6.228262487933607e-5 1.6433519943567903e-5 5.208012920082947e-6 -4.700076691438824e-5 -4.68156701906618e-5 -6.339651894155353e-5 -1.0515989503581901e-5 -9.50322153009969e-5 -4.233662342517915e-5 4.951399593050289e-5 0.00013973686715257073 0.00012631515552424488 -4.4465352737082865e-6 0.00011439576179622385 -7.271314255991458e-5 -4.534692355878928e-5 1.0357867052554138e-5 -9.166584799164253e-5 -7.803389029209924e-5 -9.524828941426518e-5 9.770347811649274e-5 -1.4143623717935339e-5 -0.00014762198564927372 -1.2560985612034067e-5; -8.769151501091377e-5 -1.9700021231792414e-5 0.00013252765310263048 4.734613647834317e-5 4.932429836863505e-5 -0.00015662540719112584 2.5677332440145157e-5 3.5956465206446694e-5 -0.0001311944364822806 -0.00013151791100589748 -2.532822182890829e-5 0.0002538888216697285 -0.00010227209751227683 -1.8039465805291092e-5 1.2403401179566026e-7 2.5748880569342254e-5 6.381158854147211e-5 2.0619403023447958e-6 -9.805077607942942e-5 4.0388123659987015e-5 -1.147160165764285e-5 6.466266393733491e-6 8.57498033675986e-6 -6.361868910225326e-5 5.776256110785641e-5 
-3.6099132932389313e-5 0.00013409055790199713 -5.992636616774106e-5 -8.756880598575071e-5 -0.00015813161772875286 -1.434535116622432e-5 0.00010357977552140063; -4.521244771291087e-5 -8.982214420080307e-5 -9.538962693164685e-5 -3.780281620543845e-5 -5.59397485454242e-5 1.5487106811123998e-5 1.8121114425815524e-5 -4.2633750109011224e-5 0.00012875935864890145 0.0001224293046283855 -0.00017869890505656856 -9.802866041404844e-5 5.9912203159406166e-5 5.927215899596406e-5 2.3991748111659956e-5 -7.301137882728573e-5 6.41430124409814e-5 1.239868215126814e-6 -5.5128111835583855e-5 -1.197747206383522e-6 4.252268265139791e-5 5.6467251869835914e-5 -8.109195549017914e-5 -2.4997427299750096e-5 -3.1922554608678126e-5 0.00012697043350221043 -0.00020041184687265496 -2.0567329539117285e-5 6.240757279009177e-7 7.127181381241847e-5 6.322638002478898e-5 1.4142050549200193e-5; 5.71921015585537e-5 5.221458257489788e-5 0.00011010285077698089 -3.051292453112726e-5 2.1583839662139467e-5 -4.8488981810281866e-5 6.512039242859044e-5 9.516620473784184e-5 -3.1106733621921516e-5 6.826447923283547e-5 -5.3965558726716256e-5 0.00012646582461508374 9.02367943859367e-5 4.982179398723983e-6 -9.399842685473047e-5 -4.89651568563434e-5 -1.9098782394779683e-5 -6.380266433500444e-5 6.042057128122542e-5 3.6090427493554216e-5 -2.6493658260210587e-5 1.6527603912049874e-5 -2.2083753100961748e-5 -7.925291294616152e-5 -0.00016420734329533772 5.153843340464007e-6 5.923057022351725e-5 -8.793135504622469e-5 7.977006014750804e-5 -5.339904254855787e-6 4.577362477045988e-5 0.000104514515151617; 8.83084594331987e-5 0.00013208596099301942 -6.561498414358847e-5 7.393211306206346e-5 8.988816441868734e-5 -0.00011885238020314611 1.1836073988698543e-5 -0.00010001158627547407 0.00013785016378987656 -8.160099789366813e-5 -0.00011262947921039688 0.0001376391901228972 2.66761144101045e-5 0.00014830955658708142 3.826957070286014e-5 -9.307632537657983e-5 -2.5763341929826572e-5 -0.000160382622218286 4.317887030791724e-5 
-1.9104065379185334e-5 -0.00014921267395239928 0.00011516987613302975 -8.063875977513542e-5 1.1719296687976183e-5 -0.0002236010605176565 -6.249626132759039e-5 4.619768767771663e-6 6.785294133604728e-5 4.742170310945483e-5 -0.00013142848731350815 1.7367303351226586e-5 -6.162373576645326e-5; 0.00010093530811966179 -2.0442878912279912e-5 -3.068241773624291e-5 -6.032368316416563e-5 -5.9848861446223545e-5 -3.1509360444755223e-6 -2.155488636408763e-6 3.250108141467968e-5 -1.4429620995780787e-5 -4.334287866120473e-5 4.984315904109752e-5 0.00015791032722585596 0.0002463296995581305 0.00013535372357250256 9.062694965062129e-5 -5.706651342668618e-5 5.6168021665799215e-5 0.00012128945757533569 -0.00013405368287683735 7.869344056483262e-5 -1.0904949767133873e-5 -0.00018631656627738009 -0.00032724125691788327 -0.00015856075248782845 -4.9757359255881376e-5 -4.431847907588169e-5 -2.5763206457895183e-5 -8.085996439083766e-5 8.673023779076256e-5 -0.00025890653984722865 -0.00015582490511340387 -3.593513526473096e-5; 8.574338144268075e-5 7.658011318295868e-5 7.728063510608977e-5 0.0001020175740459871 3.8336450325969847e-5 -1.5136448472676424e-5 -2.421987219253033e-5 -4.685778831020177e-5 -8.222266760543011e-5 4.62748625805241e-7 7.137367074491195e-5 6.276136709954718e-5 -0.0001706220965378943 -0.00023203238661056113 4.758424338530282e-5 4.725682893064122e-5 4.083357716905078e-5 6.721580293839207e-5 -4.837076912639522e-5 -0.00011875635673180477 -9.757048528530397e-5 -5.267436708794679e-5 1.989133024933483e-5 -0.0001084117567178875 -5.591865110448923e-5 1.5665761476211666e-5 -0.00013297566610974678 -2.4837950239943873e-5 1.0713718348652142e-5 -8.723383789305397e-5 0.00012440864238766146 0.00011524183073657884; 2.2636593513605062e-5 -0.00015273282426144847 -1.4347078545641728e-6 0.00016787764960231856 -1.9500852630290257e-5 -0.00019400278344487945 -0.00021316138392515777 -0.0002009630081543254 -4.5536540909339406e-5 -7.42610393158913e-5 -8.7943674060113e-5 0.00011266387790009605 
1.79367996510995e-5 -0.00023748613259022187 4.448124653341628e-5 -4.9104295811547196e-5 -3.0329183772259492e-5 5.280658825955565e-6 -0.00013528241490217056 -0.00015748128882344428 -9.666430943848252e-5 4.0796541526515755e-5 -7.73726080158895e-6 -2.1134994511662673e-5 1.9236247482210526e-5 3.227292452852085e-5 -2.6489815553295507e-5 -1.852089209801681e-5 3.1689039843919384e-5 0.00012823513296229805 -0.00010955031413555335 -6.128296465116822e-5; 0.0001639669261108711 -1.7852515311823013e-5 -0.00012519860517423711 -0.00011604091029803909 5.576559290565641e-5 1.9927821840237917e-5 -3.03926724009262e-6 4.6146540471095255e-5 -0.00018461362394226968 5.42709802444648e-5 5.7556195160621146e-5 -2.567987955479808e-5 -2.5649924437308832e-5 -0.00016830684960076193 0.00013468765776415066 4.334205898848534e-5 -0.000199072086771057 5.631494225744535e-5 -0.00019370720307998422 -9.537610976983005e-6 -6.809427047338184e-5 -1.0437017563533625e-5 1.5119455425749796e-6 6.42493280755817e-6 1.0694620713912318e-5 7.385739248055888e-5 -0.00020137881271728322 5.312790183742453e-5 -0.0001328824238684407 0.00013918267161861086 -2.129204601297713e-5 -5.782770510486932e-5; 1.0811757892775233e-5 0.00010476830313545685 -0.00011787768060475499 0.00011722279989026201 1.9462900599868895e-5 0.00012584605385182484 0.00018817123738681168 7.885580955170764e-5 -8.62505480474385e-5 3.997105115541275e-5 -5.600511467716606e-6 -9.64443829436409e-5 6.11929058502883e-5 -0.00010118181715389958 -0.00020498916317610683 -0.00010931399490818558 7.188729752652418e-5 9.989538331847273e-5 0.00012033743361808402 -0.00013321274353093578 -7.367465193983327e-5 0.0004115588958664411 -5.7669062518711875e-6 1.3063251135287365e-5 1.9813725448144703e-5 4.3459356238423456e-5 -3.0825933645449456e-5 1.5887876639323259e-7 3.520434940565415e-5 -9.529957649664991e-5 7.480403248319472e-5 3.8177127425844396e-5; 1.8338686637164163e-5 -2.2533830637509336e-5 0.00012561317016270552 -8.813745335309354e-5 -5.272293383704488e-5 
-7.367264588929118e-5 -1.6216467706072427e-5 -0.0001257858046340239 5.613088178118954e-5 5.760027939415189e-6 9.219455132171401e-5 0.00012332559453693195 -0.00012611258244239548 -6.309344717396738e-5 -0.00014294178509251964 -9.998792669915691e-6 -0.00013109555111832344 -5.1135971083830777e-5 -8.968088768988207e-5 -2.365063374261197e-5 -0.00011523285029788361 0.00010041868990420451 -4.9968343595745904e-5 -8.47692233985983e-5 -0.00011009070549723973 -0.00025132792762114984 -0.0001871437805714198 0.00013608267654099173 -8.912605226605487e-5 -6.545261094679247e-5 4.6225935144048385e-5 -4.664509724924789e-5; -2.1970263511415903e-6 -0.00013310800752116086 -4.532156710375631e-5 2.999377100458256e-5 1.8022490603825117e-5 0.00018153626841048174 -9.602995307396797e-5 -5.413612418088823e-5 4.287034787956824e-5 -1.771981262173578e-5 -0.00014343801924007225 0.00015344942765331058 -0.00018855169220806766 -2.7982741830431546e-5 5.343452340083523e-6 8.007146059789129e-5 -0.0001849428463352618 2.5592724323713863e-5 -0.00013660281003431953 -0.00015405409250853318 1.154565414289149e-5 0.00017930324791485817 0.00028733240285908827 1.5900952330766533e-5 0.00010199950154891523 -8.186341621659298e-5 8.418052855666789e-5 0.0001537973348425927 -3.511745855540573e-5 2.8187343541857908e-5 -4.305216318208326e-5 1.9420127482943934e-5; 0.00013047291307106862 4.8180064396950286e-5 4.073351831852744e-5 -0.00014995426587898148 0.00014710546113878698 -3.679257403614656e-5 -1.0260972856991384e-5 0.00010308505173401682 1.1468383519459738e-5 -3.332801040283146e-5 8.3393624949406e-6 -3.0273557939436337e-5 -4.1483918692899636e-5 9.425511081133681e-6 -3.7484819557499064e-5 -0.00012771800808746188 0.00024751409822006056 -5.392921863727165e-5 3.390788427300087e-5 -0.0001553727006180973 7.248169734470736e-5 -3.891376670121949e-5 1.9791078585908083e-5 7.570074834024823e-5 2.410894651933961e-5 -4.270106271046873e-5 -4.800118310282789e-5 7.070431186680027e-5 -0.0001293715713186908 -4.301961504875446e-5 
8.705581151975598e-5 8.415284267488991e-5; -0.00016218015626896033 -8.94621549489345e-5 -2.5865717918577946e-5 -1.4645595558477864e-5 8.638329053405804e-5 1.637136882686671e-5 8.871226225663575e-6 -0.00012135534017238758 -1.7863328696196246e-5 -5.667544211269212e-5 3.624201452816441e-5 0.00015631455643210496 8.173466666243995e-5 -7.158832851223743e-5 0.0001352817274780579 0.00010242914175955702 -0.00012722638313084693 0.00010925839923197505 6.72094519154548e-6 0.00011892866349421466 3.408543888121968e-5 6.105428509522358e-5 -0.00011939036595527266 0.00012109893613137458 9.136059123897974e-5 -3.087496189641618e-5 2.064508988746292e-5 -3.479349799113199e-5 -5.334891794723312e-5 -0.00019021942681524766 -1.674519045286662e-5 1.394340723686492e-5; 0.00013323872651408686 -0.00017627767276980928 -9.576768109372526e-5 -2.59307770899759e-5 -6.515155012284174e-5 7.440993165101207e-5 -3.084995380749891e-5 -5.1631074645864936e-5 0.00025430398188500445 0.00012153430822973407 4.954485945953685e-5 -0.00010156470573538287 3.5480553444632644e-5 4.178580249843196e-6 -0.00020312480676484222 0.0001741893730216401 -0.0001302785303120752 0.00011281472058936172 0.00019560213793038666 4.458560312950922e-5 9.11902616938318e-5 -8.003878513381596e-5 7.236274638880334e-5 3.2969671403684794e-5 -5.325071008673865e-5 -2.078747541208472e-5 -7.615229599441948e-5 -0.00010978364928423146 -5.7121293982795475e-5 -6.845496434557529e-6 1.4308017918943016e-5 5.8990987847486295e-5; -8.068773107643253e-5 4.496038442320833e-5 -0.00021278356374514274 -3.226081620359892e-5 9.76462083259314e-5 -8.423486227675334e-5 0.00011255057094112734 -1.4025670565429145e-6 3.058020362240716e-5 -3.160961072117094e-5 5.783985531032455e-5 0.00017882947346387792 0.00011405178289587279 -0.0001021441444549124 4.392799152137513e-5 -0.0001437923515708472 5.839537467418251e-5 -0.00016583880873204335 1.0215446730034912e-5 1.2585123352069816e-5 -6.653298854635155e-5 0.0001733786607133871 -0.00017892397821689 -6.45871137179712e-5 
1.6113275216954894e-5 0.00017419038109674125 2.5662005521181286e-5 9.154449568322866e-5 0.00010427211005899125 -4.104591292834862e-5 2.4212913069822937e-5 2.0538714545726616e-5; 0.00013231759597882783 -0.00016531819770708206 -3.903898747186057e-5 0.00011625968121560546 -0.00010705817172666824 -7.782020171536004e-5 -0.0001851385028766427 4.840849882211186e-5 0.00011539036435178243 9.922969491606616e-6 -6.161539264384642e-5 5.983226886937267e-5 1.0860245618543714e-5 -7.660799079704119e-5 0.00016022111170795186 0.0001788694347288481 -9.276676488472023e-5 -0.0001460198115291046 0.00012357169119976596 -0.00015698358826003438 -2.415748045851735e-5 4.041858786703286e-5 1.0188235444820348e-5 -6.369826647816108e-5 -0.00012002070494591742 6.641312518775373e-6 5.828942388516075e-5 2.9997863146570874e-5 -2.621380068550216e-5 -3.005546124688226e-6 -6.990380024146575e-7 3.1050152154502304e-5; 0.0002098619435244305 0.00012419714688146732 -0.00020825366743085208 -0.0001094549014553375 -9.313870474872496e-5 8.488164088046866e-5 0.00014737185768606465 9.028594570988895e-5 -4.440408962839323e-5 3.6403528731881814e-5 5.627685744253288e-5 -9.140071858130575e-5 -4.809259159832102e-6 -5.494964431566503e-5 -0.00018010626160250601 8.046293726945874e-5 1.335427362644808e-5 -0.00018187299507846564 9.105022684960374e-5 5.825738874388089e-6 9.931476154886771e-5 2.165442864734179e-7 6.68504354805617e-5 -8.968789995125722e-5 -2.0078942604616312e-5 -0.00012492937172939317 0.00013471883642775397 5.0687270352484084e-5 0.00012188276662780651 -0.0001069477374287258 0.00016678719760673813 3.058062823383844e-6; -3.585199986335995e-5 -0.00011270459683944482 -0.00012225274870505853 0.00015687008387541565 -8.663284709544917e-5 -0.00016514066258313396 9.708045515364746e-5 4.933947754691703e-5 -8.735918411618291e-5 -3.124779207499906e-5 -7.913484180673843e-5 0.0001058072605439852 -0.0001859879255141768 -7.349336258699577e-5 -7.338620955923004e-5 -2.5025314906802652e-5 0.00014257342379206514 
-5.9075113975980495e-5 1.554369125297988e-5 -7.509196323442113e-5 -5.0322795440284445e-5 -0.0001913838338885272 -0.0001642421836811525 6.066790344558352e-5 2.2421457534822557e-5 0.00010678244259107603 -2.648571783425076e-5 0.00018647255025936032 -0.00015728436003772776 8.017007464442261e-5 -9.491696139665593e-5 0.00013587971741000395; 4.278961614077199e-5 -9.191995109320312e-6 6.91538521049109e-5 -4.791531273908925e-5 -3.959786727605242e-5 -4.165149176073911e-5 -1.465376845004567e-6 0.00011214958112290285 2.8502940490249345e-5 3.611469264328724e-5 -4.327073429994369e-5 -5.895298045758524e-5 -1.2184350002970956e-5 0.0002628853403574167 3.117948335315745e-5 -2.2873740367823404e-5 6.707417241600633e-5 -8.03293051390289e-6 -3.3182815518693234e-5 3.947805134415912e-5 -4.163255608105006e-5 -0.00013158180764107847 9.082824589752237e-5 4.171680893237082e-5 -4.384012527694463e-5 0.00021100041385111674 1.517026293625028e-5 -1.5581151733521386e-5 -4.948479136614798e-5 -3.5892942382952084e-5 0.00015675257645384697 -6.503976283525156e-5; 0.00011516668527934832 0.00018854346635367353 0.00014570788116276455 1.7261589592279639e-6 -3.5727478828392096e-5 -5.665340941317205e-5 7.722445079439977e-5 -0.00013445213917561414 -5.496268786641555e-5 3.1528068208956726e-5 3.841857105456514e-5 -5.325605743832841e-5 -0.00010185048668330986 -8.960560299142385e-5 7.329490981379316e-5 -7.269966081634124e-5 5.959178540309124e-6 5.3234898220635266e-5 0.00012447183471255168 8.431165551615428e-5 -5.629205261624368e-5 -4.5604907793802504e-5 8.826360644620271e-5 2.4224374644348466e-5 -0.00018978464723017397 3.62603364893066e-5 6.36315868151055e-5 9.625939637504688e-5 -3.217873236547994e-5 -1.1268860161522646e-6 6.952055773314264e-5 -4.966363703982947e-5; 1.531988234051425e-5 2.314014654576705e-6 -5.169871586116995e-5 0.00015922881427245944 -0.00016961962976933575 -4.6528442208404164e-5 -4.5477080885073506e-5 0.00012429048071639927 2.8451466977956513e-5 0.0001822830788747122 6.183615620969391e-5 
-2.6949367775572175e-5 -0.00020513723141489166 8.121254065330963e-5 0.00013299776471244751 0.00011844671532303873 -6.96308008904852e-5 -9.750170577758233e-5 0.0001445886271263188 0.00015577589217132342 -5.8638596993155374e-5 -5.332449584825098e-5 2.6909118577842545e-5 5.70916715965324e-5 8.54035504467461e-5 7.1207034757716905e-6 -0.00018346638828957787 -1.1920120623875877e-5 6.30828551671183e-5 -5.399361775239503e-7 -1.6139583049556656e-5 -0.00013839561436766043], bias = [3.894373769798364e-9, 1.8076464949467985e-9, -1.3285531315531493e-9, 9.851399415276393e-10, 1.5362437709569422e-9, -1.365927299203859e-9, 2.8936058075982555e-9, 2.635142714310196e-9, 8.89779611791719e-10, -3.0762714144371656e-10, 1.512155374745555e-10, -1.0044566749965056e-9, -1.2550850082862792e-9, 1.5467599735927566e-9, -1.4097369907469507e-9, -2.8804327300928263e-9, -1.3998023316198665e-9, -6.152020937505254e-9, -3.021496386569071e-9, 2.793530990416719e-9, -6.019767443146926e-9, 3.5468509822384623e-10, 1.1250735475854338e-9, 3.132188335458648e-10, 8.813661273152041e-10, 8.708071617554962e-10, -1.0192360717498933e-9, 1.3088836405277338e-9, -3.4621996649483235e-9, 2.372412246981049e-9, 1.5766973440404484e-9, 1.2921823837799265e-9]), layer_4 = (weight = [-0.000671515387886372 -0.0006863294593320278 -0.000547062580375197 -0.0006929681398377475 -0.0007208689380831659 -0.0006786424064657307 -0.0006970124261547597 -0.0007321562545599134 -0.0006699663759985883 -0.000555564038915714 -0.0008243604406214613 -0.0007594082672826099 -0.0006416307649416789 -0.0006040130617433301 -0.0005767567490700625 -0.000584702750938777 -0.0007052501604153043 -0.0006872855625840169 -0.000661293486180718 -0.0009673660790493529 -0.0008470960064196446 -0.0005686196674731176 -0.0006294064772027735 -0.0008679954776256519 -0.0006420515608012984 -0.0007602384872897437 -0.0006668962852593428 -0.0005770834146635521 -0.0007703236351394529 -0.0007068354548441024 -0.0008284032113006212 -0.0006387189687910598; 0.00024534184506921366 
9.436213549089752e-5 0.0002336549263445295 2.2749320911528578e-5 0.00026180337971381206 0.0002966060395107406 7.273113699313643e-5 0.0001361160562741851 3.549212847121244e-5 0.00018196736714670334 0.00037111824754254026 0.00017717759912452464 0.00020263389821266977 0.00025969017269987537 0.0002746316613115953 0.0003474788350635215 7.044513852118433e-5 0.00046617747547874494 0.0004470814936101512 0.000134952149724482 0.0003323131938604503 0.0004252697652671806 0.00020724653688725098 0.0003253273352741355 0.00022932567297481477 0.00016506753305175245 0.00013905988311074547 0.00031432995686404174 0.00011697982280827048 0.00023296587503694646 0.0003947422343828539 0.00023470185140701995], bias = [-0.0007234091484616137, 0.000245804313270806]))

      Visualizing the Results

      Let us now plot the loss over time.

      julia
      begin
      +    fig = Figure()
      +    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Iteration", ylabel="Loss")
      +
      +    lines!(ax, losses; linewidth=4, alpha=0.75)
      +    scatter!(ax, 1:length(losses), losses; marker=:circle, markersize=12, strokewidth=2)
      +
      +    fig
      +end

      Finally, let us visualize the results.

      julia
      prob_nn = ODEProblem(ODE_model, u0, tspan, res.u)
      +soln_nn = Array(solve(prob_nn, RK4(); u0, p=res.u, saveat=tsteps, dt, adaptive=false))
      +waveform_nn_trained = first(compute_waveform(
      +    dt_data, soln_nn, mass_ratio, ode_model_params))
      +
      +begin
      +    fig = Figure()
      +    ax = CairoMakie.Axis(fig[1, 1]; xlabel="Time", ylabel="Waveform")
      +
      +    l1 = lines!(ax, tsteps, waveform; linewidth=2, alpha=0.75)
      +    s1 = scatter!(
      +        ax, tsteps, waveform; marker=:circle, alpha=0.5, strokewidth=2, markersize=12)
      +
      +    l2 = lines!(ax, tsteps, waveform_nn; linewidth=2, alpha=0.75)
      +    s2 = scatter!(
      +        ax, tsteps, waveform_nn; marker=:circle, alpha=0.5, strokewidth=2, markersize=12)
      +
      +    l3 = lines!(ax, tsteps, waveform_nn_trained; linewidth=2, alpha=0.75)
      +    s3 = scatter!(ax, tsteps, waveform_nn_trained; marker=:circle,
      +        alpha=0.5, strokewidth=2, markersize=12)
      +
      +    axislegend(ax, [[l1, s1], [l2, s2], [l3, s3]],
      +        ["Waveform Data", "Waveform Neural Net (Untrained)", "Waveform Neural Net"];
      +        position=:lb)
      +
      +    fig
      +end

      Appendix

      julia
      using InteractiveUtils
      +InteractiveUtils.versioninfo()
      +
      +if @isdefined(MLDataDevices)
      +    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      +        println()
      +        CUDA.versioninfo()
      +    end
      +
      +    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      +        println()
      +        AMDGPU.versioninfo()
      +    end
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
      +Build Info:
      +  Official https://julialang.org/ release
      +Platform Info:
      +  OS: Linux (x86_64-linux-gnu)
      +  CPU: 128 × AMD EPYC 7502 32-Core Processor
      +  WORD_SIZE: 64
      +  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      +Threads: 128 default, 0 interactive, 64 GC (on 128 virtual cores)
      +Environment:
      +  JULIA_CPU_THREADS = 128
      +  JULIA_DEPOT_PATH = /cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      +  JULIA_PKG_SERVER = 
      +  JULIA_NUM_THREADS = 128
      +  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      +  JULIA_PKG_PRECOMPILE_AUTO = 0
      +  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,31))])}const M=p(l,[["render",L]]);export{x as __pageData,M as default}; diff --git a/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.iiGvMAnB.lean.js b/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.iiGvMAnB.lean.js new file mode 100644 index 0000000000..c379ba1937 --- /dev/null +++ b/dev/assets/tutorials_advanced_1_GravitationalWaveForm.md.iiGvMAnB.lean.js @@ -0,0 +1 @@ +import{_ as p,c as a,a2 as i,j as s,a as e,o as n}from"./chunks/framework.BetCMmtc.js";const x=JSON.parse('{"title":"Training a Neural ODE to Model Gravitational Waveforms","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/advanced/1_GravitationalWaveForm.md","filePath":"tutorials/advanced/1_GravitationalWaveForm.md","lastUpdated":null}'),l={name:"tutorials/advanced/1_GravitationalWaveForm.md"},t={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.819ex",height:"1.658ex",role:"img",focusable:"false",viewBox:"0 -583 4782.1 733","aria-hidden":"true"},r={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},k={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.008ex",height:"1.339ex",role:"img",focusable:"false",viewBox:"0 -442 887.6 592","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.339ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.008ex",height:"1.339ex",role:"img",focusable:"false",viewBox:"0 -442 887.6 
592","aria-hidden":"true"},o={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"24.527ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 10840.9 1000","aria-hidden":"true"},g={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},C={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.117ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3587.6 1000","aria-hidden":"true"},f={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.049ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3557.6 1000","aria-hidden":"true"},y={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},v={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.138ex",height:"1.439ex",role:"img",focusable:"false",viewBox:"0 -442 503 636","aria-hidden":"true"},I={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},m={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.378ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 1051 
683","aria-hidden":"true"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},F={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.054ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 466 453","aria-hidden":"true"},V={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},B={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.117ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3587.6 1000","aria-hidden":"true"},T={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.049ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3557.6 1000","aria-hidden":"true"},b={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},D={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.439ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.138ex",height:"1.439ex",role:"img",focusable:"false",viewBox:"0 -442 503 636","aria-hidden":"true"},R={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},K={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"0"},xmlns:"http://www.w3.org/2000/svg",width:"2.378ex",height:"1.545ex",role:"img",focusable:"false",viewBox:"0 -683 1051 
683","aria-hidden":"true"},z={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},P={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"1.054ex",height:"1.025ex",role:"img",focusable:"false",viewBox:"0 -442 466 453","aria-hidden":"true"};function L(U,A,Z,X,S,j){return n(),a("div",null,[A[41]||(A[41]=i("",8)),s("p",null,[A[6]||(A[6]=e("We need a very crude 2-body path. Assume the 1-body motion is a newtonian 2-body position vector ")),s("mjx-container",t,[(n(),a("svg",h,A[0]||(A[0]=[i("",1)]))),A[1]||(A[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"r"),s("mo",null,"="),s("msub",null,[s("mi",null,"r"),s("mn",null,"1")]),s("mo",null,"−"),s("msub",null,[s("mi",null,"r"),s("mn",null,"2")])])],-1))]),A[7]||(A[7]=e(" and use Newtonian formulas to get ")),s("mjx-container",r,[(n(),a("svg",k,A[2]||(A[2]=[i("",1)]))),A[3]||(A[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"r"),s("mn",null,"1")])])],-1))]),A[8]||(A[8]=e(", 
")),s("mjx-container",E,[(n(),a("svg",d,A[4]||(A[4]=[i("",1)]))),A[5]||(A[5]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"r"),s("mn",null,"2")])])],-1))]),A[9]||(A[9]=e(" (e.g. Theoretical Mechanics of Particles and Continua 4.3)"))]),A[42]||(A[42]=i("",2)),s("p",null,[A[12]||(A[12]=e("Next we define a function to perform the change of variables: ")),s("mjx-container",o,[(n(),a("svg",Q,A[10]||(A[10]=[i("",1)]))),A[11]||(A[11]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{stretchy:"false"},"("),s("mi",null,"χ"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"ϕ"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},"↦"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"y"),s("mo",{stretchy:"false"},"("),s("mi",null,"t"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},")")])],-1))])]),A[43]||(A[43]=i("",13)),s("mjx-container",g,[(n(),a("svg",C,A[13]||(A[13]=[i("",1)]))),A[14]||(A[14]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"1"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"χ")])],-1))]),s("mjx-container",f,[(n(),a("svg",c,A[15]||(A[15]=[i("",1)]))),A[16]||(A[16]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"2"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"ϕ")])],-1))]),s("p",null,[A[23]||(A[23]=e("where, ")),s("mjx-container",y,[(n(),a("svg",v,A[17]||(A[17]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D45D",d:"M23 287Q24 290 25 295T30 317T40 348T55 381T75 411T101 433T134 442Q209 442 230 378L240 387Q302 442 358 442Q423 442 460 395T497 281Q497 173 421 82T249 -10Q227 -10 210 -4Q199 1 187 11T168 28L161 36Q160 35 139 -51T118 -138Q118 -144 126 -145T163 -148H188Q194 -155 194 -157T191 -175Q188 -187 185 -190T172 -194Q170 -194 161 -194T127 -193T65 -192Q-5 -192 -24 -194H-32Q-39 -187 -39 -183Q-37 -156 -26 -148H-6Q28 -147 33 -136Q36 -130 94 103T155 350Q156 355 156 364Q156 405 131 405Q109 405 94 377T71 316T59 280Q57 278 43 278H29Q23 284 23 287ZM178 102Q200 26 252 26Q282 26 310 49T356 107Q374 141 392 215T411 325V331Q411 405 350 405Q339 405 328 402T306 393T286 380T269 365T254 350T243 336T235 326L232 322Q232 321 229 308T218 264T204 212Q178 106 178 102Z",style:{"stroke-width":"3"}})])])],-1)]))),A[18]||(A[18]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"p")])],-1))]),A[24]||(A[24]=e(", 
")),s("mjx-container",I,[(n(),a("svg",m,A[19]||(A[19]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D440",d:"M289 629Q289 635 232 637Q208 637 201 638T194 648Q194 649 196 659Q197 662 198 666T199 671T201 676T203 679T207 681T212 683T220 683T232 684Q238 684 262 684T307 683Q386 683 398 683T414 678Q415 674 451 396L487 117L510 154Q534 190 574 254T662 394Q837 673 839 675Q840 676 842 678T846 681L852 683H948Q965 683 988 683T1017 684Q1051 684 1051 673Q1051 668 1048 656T1045 643Q1041 637 1008 637Q968 636 957 634T939 623Q936 618 867 340T797 59Q797 55 798 54T805 50T822 48T855 46H886Q892 37 892 35Q892 19 885 5Q880 0 869 0Q864 0 828 1T736 2Q675 2 644 2T609 1Q592 1 592 11Q592 13 594 25Q598 41 602 43T625 46Q652 46 685 49Q699 52 704 61Q706 65 742 207T813 490T848 631L654 322Q458 10 453 5Q451 4 449 3Q444 0 433 0Q418 0 415 7Q413 11 374 317L335 624L267 354Q200 88 200 79Q206 46 272 46H282Q288 41 289 37T286 19Q282 3 278 1Q274 0 267 0Q265 0 255 0T221 1T157 2Q127 2 95 1T58 0Q43 0 39 2T35 11Q35 13 38 25T43 40Q45 46 65 46Q135 46 154 86Q158 92 223 354T289 629Z",style:{"stroke-width":"3"}})])])],-1)]))),A[20]||(A[20]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"M")])],-1))]),A[25]||(A[25]=e(", and ")),s("mjx-container",u,[(n(),a("svg",F,A[21]||(A[21]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D452",d:"M39 168Q39 225 58 272T107 
350T174 402T244 433T307 442H310Q355 442 388 420T421 355Q421 265 310 237Q261 224 176 223Q139 223 138 221Q138 219 132 186T125 128Q125 81 146 54T209 26T302 45T394 111Q403 121 406 121Q410 121 419 112T429 98T420 82T390 55T344 24T281 -1T205 -11Q126 -11 83 42T39 168ZM373 353Q367 405 305 405Q272 405 244 391T199 357T170 316T154 280T149 261Q149 260 169 260Q282 260 327 284T373 353Z",style:{"stroke-width":"3"}})])])],-1)]))),A[22]||(A[22]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"e")])],-1))]),A[26]||(A[26]=e(" are constants"))]),A[44]||(A[44]=i("",14)),s("mjx-container",V,[(n(),a("svg",B,A[27]||(A[27]=[i("",1)]))),A[28]||(A[28]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"1"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"χ")])],-1))]),s("mjx-container",T,[(n(),a("svg",q,A[29]||(A[29]=[i("",1)]))),A[30]||(A[30]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 
1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"u"),s("mo",{stretchy:"false"},"["),s("mn",null,"2"),s("mo",{stretchy:"false"},"]"),s("mo",null,"="),s("mi",null,"ϕ")])],-1))]),s("p",null,[A[37]||(A[37]=e("where, ")),s("mjx-container",b,[(n(),a("svg",D,A[31]||(A[31]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D45D",d:"M23 287Q24 290 25 295T30 317T40 348T55 381T75 411T101 433T134 442Q209 442 230 378L240 387Q302 442 358 442Q423 442 460 395T497 281Q497 173 421 82T249 -10Q227 -10 210 -4Q199 1 187 11T168 28L161 36Q160 35 139 -51T118 -138Q118 -144 126 -145T163 -148H188Q194 -155 194 -157T191 -175Q188 -187 185 -190T172 -194Q170 -194 161 -194T127 -193T65 -192Q-5 -192 -24 -194H-32Q-39 -187 -39 -183Q-37 -156 -26 -148H-6Q28 -147 33 -136Q36 -130 94 103T155 350Q156 355 156 364Q156 405 131 405Q109 405 94 377T71 316T59 280Q57 278 43 278H29Q23 284 23 287ZM178 102Q200 26 252 26Q282 26 310 49T356 107Q374 141 392 215T411 325V331Q411 405 350 405Q339 405 328 402T306 393T286 380T269 365T254 350T243 336T235 326L232 322Q232 321 229 308T218 264T204 212Q178 106 178 102Z",style:{"stroke-width":"3"}})])])],-1)]))),A[32]||(A[32]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"p")])],-1))]),A[38]||(A[38]=e(", ")),s("mjx-container",R,[(n(),a("svg",K,A[33]||(A[33]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D440",d:"M289 629Q289 635 232 637Q208 637 201 638T194 648Q194 649 196 659Q197 662 198 666T199 671T201 676T203 679T207 681T212 683T220 683T232 684Q238 684 262 684T307 683Q386 683 398 683T414 678Q415 674 451 396L487 117L510 154Q534 190 574 254T662 394Q837 673 839 675Q840 676 842 678T846 681L852 683H948Q965 683 988 683T1017 684Q1051 684 1051 673Q1051 668 1048 656T1045 643Q1041 637 1008 637Q968 636 957 634T939 623Q936 618 867 340T797 59Q797 55 798 54T805 50T822 48T855 46H886Q892 37 892 35Q892 19 885 5Q880 0 869 0Q864 0 828 1T736 2Q675 2 644 2T609 1Q592 1 592 11Q592 13 594 25Q598 41 602 43T625 46Q652 46 685 49Q699 52 704 61Q706 65 742 207T813 490T848 631L654 322Q458 10 453 5Q451 4 449 3Q444 0 433 0Q418 0 415 7Q413 11 374 317L335 624L267 354Q200 88 200 79Q206 46 272 46H282Q288 41 289 37T286 19Q282 3 278 1Q274 0 267 0Q265 0 255 0T221 1T157 2Q127 2 95 1T58 0Q43 0 39 2T35 11Q35 13 38 25T43 40Q45 46 65 46Q135 46 154 86Q158 92 223 354T289 629Z",style:{"stroke-width":"3"}})])])],-1)]))),A[34]||(A[34]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"M")])],-1))]),A[39]||(A[39]=e(", and 
")),s("mjx-container",z,[(n(),a("svg",P,A[35]||(A[35]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D452",d:"M39 168Q39 225 58 272T107 350T174 402T244 433T307 442H310Q355 442 388 420T421 355Q421 265 310 237Q261 224 176 223Q139 223 138 221Q138 219 132 186T125 128Q125 81 146 54T209 26T302 45T394 111Q403 121 406 121Q410 121 419 112T429 98T420 82T390 55T344 24T281 -1T205 -11Q126 -11 83 42T39 168ZM373 353Q367 405 305 405Q272 405 244 391T199 357T170 316T154 280T149 261Q149 260 169 260Q282 260 327 284T373 353Z",style:{"stroke-width":"3"}})])])],-1)]))),A[36]||(A[36]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"e")])],-1))]),A[40]||(A[40]=e(" are constants"))]),A[45]||(A[45]=i("",31))])}const M=p(l,[["render",L]]);export{x as __pageData,M as default}; diff --git a/dev/assets/tutorials_beginner_1_Basics.md.BYin9je8.js b/dev/assets/tutorials_beginner_1_Basics.md.BIkM597J.js similarity index 81% rename from dev/assets/tutorials_beginner_1_Basics.md.BYin9je8.js rename to dev/assets/tutorials_beginner_1_Basics.md.BIkM597J.js index c36de999f4..7363ed0102 100644 --- a/dev/assets/tutorials_beginner_1_Basics.md.BYin9je8.js +++ b/dev/assets/tutorials_beginner_1_Basics.md.BIkM597J.js @@ -1,4 +1,99 @@ -import{_ as l,c as i,a2 as t,j as s,a as e,o as n}from"./chunks/framework.I-x9Gl6h.js";const D=JSON.parse('{"title":"Julia & Lux for the 
Uninitiated","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/1_Basics.md","filePath":"tutorials/beginner/1_Basics.md","lastUpdated":null}'),p={name:"tutorials/beginner/1_Basics.md"},h={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.781ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.013ex",height:"2.737ex",role:"img",focusable:"false",viewBox:"0 -864.9 5751.9 1209.9","aria-hidden":"true"},r={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},o={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.494ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 4638.6 1000","aria-hidden":"true"},k={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.51ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 17905.2 1000","aria-hidden":"true"},T={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},g={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.05ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.371ex",height:"1.595ex",role:"img",focusable:"false",viewBox:"0 -683 1048 705","aria-hidden":"true"},c={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},m={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.971ex",height:"1.595ex",role:"img",focusable:"false",viewBox:"0 -694 429 
705","aria-hidden":"true"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.65ex"},xmlns:"http://www.w3.org/2000/svg",width:"17.577ex",height:"2.347ex",role:"img",focusable:"false",viewBox:"0 -750 7769 1037.2","aria-hidden":"true"},y={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},b={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"34.144ex",height:"6.757ex",role:"img",focusable:"false",viewBox:"0 -1740.7 15091.8 2986.6","aria-hidden":"true"};function F(v,a,C,f,x,w){return n(),i("div",null,[a[21]||(a[21]=t(`

      Julia & Lux for the Uninitiated

      This is a quick intro to Lux loosely based on:

      1. PyTorch's tutorial.

      2. Flux's tutorial (the link for which has now been lost to the abyss).

      3. Jax's tutorial.

      It introduces basic Julia programming, as well as Zygote, a source-to-source automatic differentiation (AD) framework in Julia. We'll use these tools to build a very simple neural network. Let's start by importing Lux.jl

      julia
      using Lux, Random

      Now let us control the randomness in our code using a proper Pseudo Random Number Generator (PRNG)

      julia
      rng = Random.default_rng()
      +import{_ as l,c as i,a2 as n,j as s,a as e,o as t}from"./chunks/framework.BetCMmtc.js";const D=JSON.parse('{"title":"Julia & Lux for the Uninitiated","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/1_Basics.md","filePath":"tutorials/beginner/1_Basics.md","lastUpdated":null}'),p={name:"tutorials/beginner/1_Basics.md"},h={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.781ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.013ex",height:"2.737ex",role:"img",focusable:"false",viewBox:"0 -864.9 5751.9 1209.9","aria-hidden":"true"},d={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},o={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.494ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 4638.6 1000","aria-hidden":"true"},k={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.51ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 17905.2 1000","aria-hidden":"true"},Q={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},T={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.05ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.371ex",height:"1.595ex",role:"img",focusable:"false",viewBox:"0 -683 1048 705","aria-hidden":"true"},g={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},m={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.971ex",height:"1.595ex",role:"img",focusable:"false",viewBox:"0 -694 429 
705","aria-hidden":"true"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.65ex"},xmlns:"http://www.w3.org/2000/svg",width:"17.577ex",height:"2.347ex",role:"img",focusable:"false",viewBox:"0 -750 7769 1037.2","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},C={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"34.144ex",height:"6.757ex",role:"img",focusable:"false",viewBox:"0 -1740.7 15091.8 2986.6","aria-hidden":"true"};function b(F,a,v,x,f,A){return t(),i("div",null,[a[21]||(a[21]=n(`

      Julia & Lux for the Uninitiated

      This is a quick intro to Lux loosely based on:

      1. PyTorch's tutorial.

      2. Flux's tutorial (the link for which has now been lost to the abyss).

      3. Jax's tutorial.

      It introduces basic Julia programming, as well as Zygote, a source-to-source automatic differentiation (AD) framework in Julia. We'll use these tools to build a very simple neural network. Let's start by importing Lux.jl

      julia
      using Lux, Random
      Precompiling Lux...
      +    588.7 ms  ✓ ConcreteStructs
      +    557.1 ms  ✓ Future
      +    542.0 ms  ✓ SIMDTypes
      +    554.5 ms  ✓ Reexport
      +    578.7 ms  ✓ CEnum
      +    603.2 ms  ✓ ManualMemory
      +    634.8 ms  ✓ OpenLibm_jll
      +    655.7 ms  ✓ ArgCheck
      +    772.8 ms  ✓ CompilerSupportLibraries_jll
      +    774.2 ms  ✓ Requires
      +    837.0 ms  ✓ Statistics
      +    919.0 ms  ✓ EnzymeCore
      +    978.0 ms  ✓ ADTypes
      +    514.9 ms  ✓ IfElse
      +    537.2 ms  ✓ CommonWorldInvalidations
      +    532.9 ms  ✓ FastClosures
      +    593.8 ms  ✓ StaticArraysCore
      +    674.2 ms  ✓ ConstructionBase
      +   1416.1 ms  ✓ IrrationalConstants
      +    852.3 ms  ✓ Compat
      +    762.3 ms  ✓ JLLWrappers
      +    612.2 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      +    749.7 ms  ✓ NaNMath
      +    652.1 ms  ✓ Adapt
      +   1012.4 ms  ✓ CpuId
      +   1007.3 ms  ✓ DocStringExtensions
      +    614.9 ms  ✓ DiffResults
      +    585.7 ms  ✓ ConstructionBase → ConstructionBaseLinearAlgebraExt
      +    664.1 ms  ✓ ADTypes → ADTypesConstructionBaseExt
      +   1279.3 ms  ✓ ThreadingUtilities
      +    589.5 ms  ✓ Compat → CompatLinearAlgebraExt
      +    543.6 ms  ✓ EnzymeCore → AdaptExt
      +    578.2 ms  ✓ GPUArraysCore
      +   1086.6 ms  ✓ Static
      +    696.2 ms  ✓ ArrayInterface
      +    913.3 ms  ✓ Hwloc_jll
      +    952.7 ms  ✓ OpenSpecFun_jll
      +    891.9 ms  ✓ LogExpFunctions
      +   2737.1 ms  ✓ UnsafeAtomics
      +    616.6 ms  ✓ BitTwiddlingConvenienceFunctions
      +    524.7 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      +    526.1 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      +    820.5 ms  ✓ Functors
      +   3060.7 ms  ✓ MacroTools
      +    623.6 ms  ✓ Atomix
      +   1626.6 ms  ✓ CPUSummary
      +   1755.2 ms  ✓ ChainRulesCore
      +    958.7 ms  ✓ MLDataDevices
      +    933.9 ms  ✓ CommonSubexpressions
      +   1882.9 ms  ✓ StaticArrayInterface
      +    600.4 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      +    609.1 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      +    936.4 ms  ✓ PolyesterWeave
      +    900.5 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      +    661.5 ms  ✓ CloseOpenIntervals
      +   1817.0 ms  ✓ Setfield
      +    810.8 ms  ✓ LayoutPointers
      +   2126.8 ms  ✓ DispatchDoctor
      +   1388.2 ms  ✓ Optimisers
      +   2862.1 ms  ✓ Hwloc
      +   1666.2 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      +    482.1 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      +    492.2 ms  ✓ Optimisers → OptimisersAdaptExt
      +    555.6 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      +    742.3 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      +   3385.6 ms  ✓ SpecialFunctions
      +   1098.7 ms  ✓ StrideArraysCore
      +    654.6 ms  ✓ DiffRules
      +   1405.9 ms  ✓ LuxCore
      +    818.2 ms  ✓ Polyester
      +    502.0 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      +    518.2 ms  ✓ LuxCore → LuxCoreFunctorsExt
      +    583.9 ms  ✓ LuxCore → LuxCoreSetfieldExt
      +    646.2 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      +    764.0 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      +   1915.1 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      +   3077.6 ms  ✓ WeightInitializers
      +   7932.7 ms  ✓ StaticArrays
      +    630.8 ms  ✓ Adapt → AdaptStaticArraysExt
      +    641.1 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      +    666.2 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      +    686.3 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      +    713.7 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      +   1037.4 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      +   3930.4 ms  ✓ ForwardDiff
      +    875.2 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      +   3408.4 ms  ✓ KernelAbstractions
      +    670.5 ms  ✓ KernelAbstractions → LinearAlgebraExt
      +    727.6 ms  ✓ KernelAbstractions → EnzymeExt
      +   5502.1 ms  ✓ NNlib
      +    845.7 ms  ✓ NNlib → NNlibEnzymeCoreExt
      +    929.2 ms  ✓ NNlib → NNlibForwardDiffExt
      +   5954.0 ms  ✓ LuxLib
      +  10166.3 ms  ✓ Lux
      +  94 dependencies successfully precompiled in 37 seconds. 15 already precompiled.

      Now let us control the randomness in our code using a proper Pseudo Random Number Generator (PRNG)

      julia
      rng = Random.default_rng()
       Random.seed!(rng, 0)
      Random.TaskLocalRNG()

      Arrays

      The starting point for all of our models is the Array (sometimes referred to as a Tensor in other frameworks). This is really just a list of numbers, which might be arranged into a shape like a square. Let's write down an array with three elements.

      julia
      x = [1, 2, 3]
      3-element Vector{Int64}:
        1
        2
      @@ -89,7 +184,85 @@ import{_ as l,c as i,a2 as t,j as s,a as e,o as n}from"./chunks/framework.I-x9Gl
           println("Iteration $i ", rand(rng, 10))
       end
      Iteration 1 [0.4552384158732863, 0.5476424498276177, 0.7733535276924052, 0.9405848223512736, 0.02964765308691042, 0.74694291453392, 0.7468008914093891, 0.9766699015845924, 0.08694684883050086, 0.35149138733595564]
       Iteration 2 [0.018743665453639813, 0.8601828553599953, 0.6556360448565952, 0.7746656838366666, 0.7817315740767116, 0.5553797706980106, 0.1261990389976131, 0.4488101521328277, 0.624383955429775, 0.05657739601024536]
      -Iteration 3 [0.19597391412112541, 0.6830945313415872, 0.6776220912718907, 0.6456416023530093, 0.6340362477836592, 0.5595843665394066, 0.5675557670686644, 0.34351700231383653, 0.7237308297251812, 0.3691778381831775]

      Automatic Differentiation

      Julia has quite a few (maybe too many) AD tools. For the purpose of this tutorial, we will use:

      1. ForwardDiff.jl – For Jacobian-Vector Product (JVP)

      2. Zygote.jl – For Vector-Jacobian Product (VJP)

      Slight Detour: We have had several questions regarding whether we will consider any other AD system for the reverse-diff backend. For now we will stick to Zygote.jl; however, once we have tested Lux extensively with Enzyme.jl, we will make the switch.

      Even though, theoretically, a VJP (Vector-Jacobian product - reverse autodiff) and a JVP (Jacobian-Vector product - forward-mode autodiff) are similar—they compute a product of a Jacobian and a vector—they differ by the computational complexity of the operation. In short, when you have a large number of parameters (hence a wide matrix), a JVP is less efficient computationally than a VJP, and, conversely, a JVP is more efficient when the Jacobian matrix is a tall matrix.

      julia
      using ComponentArrays, ForwardDiff, Zygote

      Gradients

      `,89)),s("p",null,[a[4]||(a[4]=e("For our first example, consider a simple function computing ")),s("mjx-container",h,[(n(),i("svg",d,a[0]||(a[0]=[t('',1)]))),a[1]||(a[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"f"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mfrac",null,[s("mn",null,"1"),s("mn",null,"2")]),s("msup",null,[s("mi",null,"x"),s("mi",null,"T")]),s("mi",null,"x")])],-1))]),a[5]||(a[5]=e(", where ")),s("mjx-container",r,[(n(),i("svg",o,a[2]||(a[2]=[t('',1)]))),a[3]||(a[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",{mathvariant:"normal"},"∇"),s("mi",null,"f"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mi",null,"x")])],-1))])]),a[22]||(a[22]=t(`
      julia
      f(x) = x' * x / 2
      +Iteration 3 [0.19597391412112541, 0.6830945313415872, 0.6776220912718907, 0.6456416023530093, 0.6340362477836592, 0.5595843665394066, 0.5675557670686644, 0.34351700231383653, 0.7237308297251812, 0.3691778381831775]

      Automatic Differentiation

      Julia has quite a few (maybe too many) AD tools. For the purpose of this tutorial, we will use:

      1. ForwardDiff.jl – For Jacobian-Vector Product (JVP)

      2. Zygote.jl – For Vector-Jacobian Product (VJP)

      Slight Detour: We have had several questions regarding whether we will consider any other AD system for the reverse-diff backend. For now we will stick to Zygote.jl; however, once we have tested Lux extensively with Enzyme.jl, we will make the switch.

      Even though, theoretically, a VJP (Vector-Jacobian product - reverse autodiff) and a JVP (Jacobian-Vector product - forward-mode autodiff) are similar—they compute a product of a Jacobian and a vector—they differ by the computational complexity of the operation. In short, when you have a large number of parameters (hence a wide matrix), a JVP is less efficient computationally than a VJP, and, conversely, a JVP is more efficient when the Jacobian matrix is a tall matrix.

      julia
      using ComponentArrays, ForwardDiff, Zygote
      Precompiling ComponentArrays...
      +    985.3 ms  ✓ ComponentArrays
      +  1 dependency successfully precompiled in 1 seconds. 45 already precompiled.
      +Precompiling MLDataDevicesComponentArraysExt...
      +    570.3 ms  ✓ MLDataDevices → MLDataDevicesComponentArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.
      +Precompiling LuxComponentArraysExt...
      +    573.6 ms  ✓ ComponentArrays → ComponentArraysOptimisersExt
      +   1664.0 ms  ✓ Lux → LuxComponentArraysExt
      +   2431.9 ms  ✓ ComponentArrays → ComponentArraysKernelAbstractionsExt
      +  3 dependencies successfully precompiled in 3 seconds. 111 already precompiled.
      +Precompiling Zygote...
      +    451.4 ms  ✓ DataValueInterfaces
      +    494.4 ms  ✓ IteratorInterfaceExtensions
      +    565.1 ms  ✓ RealDot
      +    581.1 ms  ✓ DataAPI
      +    649.9 ms  ✓ OrderedCollections
      +    651.1 ms  ✓ SuiteSparse_jll
      +    615.1 ms  ✓ HashArrayMappedTries
      +    634.7 ms  ✓ Zlib_jll
      +    810.0 ms  ✓ AbstractFFTs
      +    833.0 ms  ✓ Serialization
      +    495.5 ms  ✓ TableTraits
      +    471.4 ms  ✓ ScopedValues
      +   1251.5 ms  ✓ FillArrays
      +    602.5 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      +   1413.6 ms  ✓ ZygoteRules
      +    510.4 ms  ✓ FillArrays → FillArraysStatisticsExt
      +   1297.5 ms  ✓ LazyArtifacts
      +    986.1 ms  ✓ Tables
      +   2402.0 ms  ✓ IRTools
      +    967.7 ms  ✓ StructArrays
      +   2223.1 ms  ✓ Distributed
      +    479.3 ms  ✓ StructArrays → StructArraysAdaptExt
      +    538.1 ms  ✓ StructArrays → StructArraysLinearAlgebraExt
      +   1696.3 ms  ✓ LLVMExtra_jll
      +    829.4 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      +    830.8 ms  ✓ StructArrays → StructArraysStaticArraysExt
      +   4402.7 ms  ✓ SparseArrays
      +    712.3 ms  ✓ SuiteSparse
      +    782.0 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      +    785.0 ms  ✓ Statistics → SparseArraysExt
      +    794.0 ms  ✓ StructArrays → StructArraysSparseArraysExt
      +    836.5 ms  ✓ FillArrays → FillArraysSparseArraysExt
      +   1140.7 ms  ✓ KernelAbstractions → SparseArraysExt
      +    719.6 ms  ✓ SparseInverseSubset
      +   7074.3 ms  ✓ LLVM
      +   2035.0 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      +   6428.2 ms  ✓ ChainRules
      +   5380.2 ms  ✓ GPUArrays
      +  30733.7 ms  ✓ Zygote
      +  39 dependencies successfully precompiled in 49 seconds. 63 already precompiled.
      +Precompiling ArrayInterfaceSparseArraysExt...
      +    701.6 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 7 already precompiled.
      +Precompiling MLDataDevicesSparseArraysExt...
      +    761.5 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      +Precompiling ArrayInterfaceChainRulesExt...
      +    880.9 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      +  1 dependency successfully precompiled in 1 seconds. 39 already precompiled.
      +Precompiling MLDataDevicesChainRulesExt...
      +    955.2 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      +  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      +Precompiling MLDataDevicesFillArraysExt...
      +    505.2 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      +Precompiling MLDataDevicesZygoteExt...
      +   1901.7 ms  ✓ MLDataDevices → MLDataDevicesGPUArraysExt
      +   1906.4 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
      +  2 dependencies successfully precompiled in 2 seconds. 108 already precompiled.
      +Precompiling LuxZygoteExt...
      +   2066.3 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      +   3444.9 ms  ✓ Lux → LuxZygoteExt
      +  2 dependencies successfully precompiled in 4 seconds. 165 already precompiled.
      +Precompiling ComponentArraysZygoteExt...
      +   1944.2 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      +   2246.4 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
      +  2 dependencies successfully precompiled in 2 seconds. 116 already precompiled.

      Gradients

      `,91)),s("p",null,[a[4]||(a[4]=e("For our first example, consider a simple function computing ")),s("mjx-container",h,[(t(),i("svg",r,a[0]||(a[0]=[n('',1)]))),a[1]||(a[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"f"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mfrac",null,[s("mn",null,"1"),s("mn",null,"2")]),s("msup",null,[s("mi",null,"x"),s("mi",null,"T")]),s("mi",null,"x")])],-1))]),a[5]||(a[5]=e(", where ")),s("mjx-container",d,[(t(),i("svg",o,a[2]||(a[2]=[n('',1)]))),a[3]||(a[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",{mathvariant:"normal"},"∇"),s("mi",null,"f"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mi",null,"x")])],-1))])]),a[22]||(a[22]=n(`
      julia
      f(x) = x' * x / 2
       ∇f(x) = x  # \`∇\` can be typed as \`\\nabla<TAB>\`
       v = randn(rng, Float32, 4)
      4-element Vector{Float32}:
        -0.4051151
      @@ -108,7 +281,7 @@ import{_ as l,c as i,a2 as t,j as s,a as e,o as n}from"./chunks/framework.I-x9Gl
        1.0
        1.0

      Using DifferentiationInterface

      While DifferentiationInterface provides these functions for a wider range of backends, we currently don't recommend using them with Lux models, since the functions presented here come with additional goodies like fast second-order derivatives.

      Compute the JVP. AutoForwardDiff specifies that we want to use ForwardDiff.jl for the Jacobian-Vector Product.

      julia
      jvp = jacobian_vector_product(f, AutoForwardDiff(), x, v)
       println("JVP: ", jvp)
      JVP: Float32[-0.877497, 1.1953009, -0.057005208, 0.25055695, 0.09351656]

      Vector-Jacobian Product

      Using the same function and inputs, let us compute the VJP.

      julia
      vjp = vector_jacobian_product(f, AutoZygote(), x, v)
      -println("VJP: ", vjp)
      VJP: Float32[-0.877497, 1.1953009, -0.057005208, 0.25055695, 0.09351656]

      Linear Regression

      `,19)),s("p",null,[a[14]||(a[14]=e("Finally, now let us consider a linear regression problem. From a set of data-points ")),s("mjx-container",k,[(n(),i("svg",Q,a[6]||(a[6]=[t('',1)]))),a[7]||(a[7]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{fence:"false",stretchy:"false"},"{"),s("mo",{stretchy:"false"},"("),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",null,","),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"i"),s("mo",null,"∈"),s("mo",{fence:"false",stretchy:"false"},"{"),s("mn",null,"1"),s("mo",null,","),s("mo",null,"…"),s("mo",null,","),s("mi",null,"k"),s("mo",{fence:"false",stretchy:"false"},"}"),s("mo",null,","),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mi",null,"n")]),s("mo",null,","),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mi",null,"m")]),s("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[15]||(a[15]=e(", we try to find a set of parameters ")),s("mjx-container",T,[(n(),i("svg",g,a[8]||(a[8]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D44A",d:"M436 683Q450 683 486 682T553 680Q604 680 638 681T677 682Q695 682 695 674Q695 670 692 659Q687 641 683 639T661 637Q636 636 621 632T600 624T597 615Q597 603 
613 377T629 138L631 141Q633 144 637 151T649 170T666 200T690 241T720 295T759 362Q863 546 877 572T892 604Q892 619 873 628T831 637Q817 637 817 647Q817 650 819 660Q823 676 825 679T839 682Q842 682 856 682T895 682T949 681Q1015 681 1034 683Q1048 683 1048 672Q1048 666 1045 655T1038 640T1028 637Q1006 637 988 631T958 617T939 600T927 584L923 578L754 282Q586 -14 585 -15Q579 -22 561 -22Q546 -22 542 -17Q539 -14 523 229T506 480L494 462Q472 425 366 239Q222 -13 220 -15T215 -19Q210 -22 197 -22Q178 -22 176 -15Q176 -12 154 304T131 622Q129 631 121 633T82 637H58Q51 644 51 648Q52 671 64 683H76Q118 680 176 680Q301 680 313 683H323Q329 677 329 674T327 656Q322 641 318 637H297Q236 634 232 620Q262 160 266 136L501 550L499 587Q496 629 489 632Q483 636 447 637Q428 637 422 639T416 648Q416 650 418 660Q419 664 420 669T421 676T424 680T428 682T436 683Z",style:{"stroke-width":"3"}})])])],-1)]))),a[9]||(a[9]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"W")])],-1))]),a[16]||(a[16]=e(" and ")),s("mjx-container",c,[(n(),i("svg",m,a[10]||(a[10]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D44F",d:"M73 647Q73 657 77 670T89 683Q90 683 161 688T234 694Q246 694 246 685T212 542Q204 508 195 472T180 418L176 399Q176 396 182 402Q231 442 283 442Q345 442 383 396T422 280Q422 169 343 79T173 -11Q123 -11 82 27T40 150V159Q40 180 48 217T97 414Q147 611 147 623T109 637Q104 637 101 637H96Q86 637 83 637T76 640T73 647ZM336 325V331Q336 405 275 405Q258 405 240 397T207 376T181 352T163 330L157 322L136 
236Q114 150 114 114Q114 66 138 42Q154 26 178 26Q211 26 245 58Q270 81 285 114T318 219Q336 291 336 325Z",style:{"stroke-width":"3"}})])])],-1)]))),a[11]||(a[11]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"b")])],-1))]),a[17]||(a[17]=e(", s.t. ")),s("mjx-container",u,[(n(),i("svg",E,a[12]||(a[12]=[t('',1)]))),a[13]||(a[13]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"W"),s("mo",null,","),s("mi",null,"b")])]),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mi",null,"W"),s("mi",null,"x"),s("mo",null,"+"),s("mi",null,"b")])],-1))]),a[18]||(a[18]=e(", which minimizes the mean squared error:"))]),s("mjx-container",y,[(n(),i("svg",b,a[19]||(a[19]=[t('',1)]))),a[20]||(a[20]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"L"),s("mo",{stretchy:"false"},"("),s("mi",null,"W"),s("mo",null,","),s("mi",null,"b"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},"⟶"),s("munderover",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"i"),s("mo",null,"="),s("mn",null,"1")]),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"k")])]),s("mfrac",null,[s("mn",null,"1"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"ORD"},"∥"),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",null,"−"),s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"W"),s("mo",null,","),s("mi",null,"b")])]),s("mo",{stretchy:"false"},"("),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",{stretchy:"false"},")"),s("msubsup",null,[s("mo",{"data-mjx-texclass":"ORD"},"∥"),s("mn",null,"2"),s("mn",null,"2")])])],-1))]),a[23]||(a[23]=t(`

      We can write f from scratch, but to demonstrate Lux, let us use the Dense layer.

      julia
      model = Dense(10 => 5)
      +println("VJP: ", vjp)
      VJP: Float32[-0.877497, 1.1953009, -0.057005208, 0.25055695, 0.09351656]

      Linear Regression

      `,19)),s("p",null,[a[14]||(a[14]=e("Finally, now let us consider a linear regression problem. From a set of data-points ")),s("mjx-container",k,[(t(),i("svg",c,a[6]||(a[6]=[n('',1)]))),a[7]||(a[7]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{fence:"false",stretchy:"false"},"{"),s("mo",{stretchy:"false"},"("),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",null,","),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"i"),s("mo",null,"∈"),s("mo",{fence:"false",stretchy:"false"},"{"),s("mn",null,"1"),s("mo",null,","),s("mo",null,"…"),s("mo",null,","),s("mi",null,"k"),s("mo",{fence:"false",stretchy:"false"},"}"),s("mo",null,","),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mi",null,"n")]),s("mo",null,","),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mi",null,"m")]),s("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[15]||(a[15]=e(", we try to find a set of parameters ")),s("mjx-container",Q,[(t(),i("svg",T,a[8]||(a[8]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D44A",d:"M436 683Q450 683 486 682T553 680Q604 680 638 681T677 682Q695 682 695 674Q695 670 692 659Q687 641 683 639T661 637Q636 636 621 632T600 624T597 615Q597 603 
613 377T629 138L631 141Q633 144 637 151T649 170T666 200T690 241T720 295T759 362Q863 546 877 572T892 604Q892 619 873 628T831 637Q817 637 817 647Q817 650 819 660Q823 676 825 679T839 682Q842 682 856 682T895 682T949 681Q1015 681 1034 683Q1048 683 1048 672Q1048 666 1045 655T1038 640T1028 637Q1006 637 988 631T958 617T939 600T927 584L923 578L754 282Q586 -14 585 -15Q579 -22 561 -22Q546 -22 542 -17Q539 -14 523 229T506 480L494 462Q472 425 366 239Q222 -13 220 -15T215 -19Q210 -22 197 -22Q178 -22 176 -15Q176 -12 154 304T131 622Q129 631 121 633T82 637H58Q51 644 51 648Q52 671 64 683H76Q118 680 176 680Q301 680 313 683H323Q329 677 329 674T327 656Q322 641 318 637H297Q236 634 232 620Q262 160 266 136L501 550L499 587Q496 629 489 632Q483 636 447 637Q428 637 422 639T416 648Q416 650 418 660Q419 664 420 669T421 676T424 680T428 682T436 683Z",style:{"stroke-width":"3"}})])])],-1)]))),a[9]||(a[9]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"W")])],-1))]),a[16]||(a[16]=e(" and ")),s("mjx-container",g,[(t(),i("svg",m,a[10]||(a[10]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D44F",d:"M73 647Q73 657 77 670T89 683Q90 683 161 688T234 694Q246 694 246 685T212 542Q204 508 195 472T180 418L176 399Q176 396 182 402Q231 442 283 442Q345 442 383 396T422 280Q422 169 343 79T173 -11Q123 -11 82 27T40 150V159Q40 180 48 217T97 414Q147 611 147 623T109 637Q104 637 101 637H96Q86 637 83 637T76 640T73 647ZM336 325V331Q336 405 275 405Q258 405 240 397T207 376T181 352T163 330L157 322L136 
236Q114 150 114 114Q114 66 138 42Q154 26 178 26Q211 26 245 58Q270 81 285 114T318 219Q336 291 336 325Z",style:{"stroke-width":"3"}})])])],-1)]))),a[11]||(a[11]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"b")])],-1))]),a[17]||(a[17]=e(", s.t. ")),s("mjx-container",u,[(t(),i("svg",y,a[12]||(a[12]=[n('',1)]))),a[13]||(a[13]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"W"),s("mo",null,","),s("mi",null,"b")])]),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mi",null,"W"),s("mi",null,"x"),s("mo",null,"+"),s("mi",null,"b")])],-1))]),a[18]||(a[18]=e(", which minimizes the mean squared error:"))]),s("mjx-container",E,[(t(),i("svg",C,a[19]||(a[19]=[n('',1)]))),a[20]||(a[20]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"L"),s("mo",{stretchy:"false"},"("),s("mi",null,"W"),s("mo",null,","),s("mi",null,"b"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},"⟶"),s("munderover",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"i"),s("mo",null,"="),s("mn",null,"1")]),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"k")])]),s("mfrac",null,[s("mn",null,"1"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"ORD"},"∥"),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",null,"−"),s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"W"),s("mo",null,","),s("mi",null,"b")])]),s("mo",{stretchy:"false"},"("),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",{stretchy:"false"},")"),s("msubsup",null,[s("mo",{"data-mjx-texclass":"ORD"},"∥"),s("mn",null,"2"),s("mn",null,"2")])])],-1))]),a[23]||(a[23]=n(`

      We can write f from scratch, but to demonstrate Lux, let us use the Dense layer.

      julia
      model = Dense(10 => 5)
       
       rng = Random.default_rng()
       Random.seed!(rng, 0)
      Random.TaskLocalRNG()

      Let us initialize the parameters and states (in this case the state is empty) for the model.

      julia
      ps, st = Lux.setup(rng, model)
      @@ -162,8 +335,8 @@ import{_ as l,c as i,a2 as t,j as s,a as e,o as n}from"./chunks/framework.I-x9Gl
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -179,4 +352,4 @@ import{_ as l,c as i,a2 as t,j as s,a as e,o as n}from"./chunks/framework.I-x9Gl
         JULIA_NUM_THREADS = 16
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,28))])}const H=l(p,[["render",F]]);export{D as __pageData,H as default}; + JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,28))])}const L=l(p,[["render",b]]);export{D as __pageData,L as default}; diff --git a/dev/assets/tutorials_beginner_1_Basics.md.BIkM597J.lean.js b/dev/assets/tutorials_beginner_1_Basics.md.BIkM597J.lean.js new file mode 100644 index 0000000000..bfcf14c201 --- /dev/null +++ b/dev/assets/tutorials_beginner_1_Basics.md.BIkM597J.lean.js @@ -0,0 +1 @@ +import{_ as l,c as i,a2 as n,j as s,a as e,o as t}from"./chunks/framework.BetCMmtc.js";const D=JSON.parse('{"title":"Julia & Lux for the Uninitiated","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/1_Basics.md","filePath":"tutorials/beginner/1_Basics.md","lastUpdated":null}'),p={name:"tutorials/beginner/1_Basics.md"},h={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.781ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.013ex",height:"2.737ex",role:"img",focusable:"false",viewBox:"0 -864.9 5751.9 1209.9","aria-hidden":"true"},d={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},o={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.494ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 4638.6 1000","aria-hidden":"true"},k={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},c={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.51ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 17905.2 1000","aria-hidden":"true"},Q={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},T={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.05ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.371ex",height:"1.595ex",role:"img",focusable:"false",viewBox:"0 -683 1048 
705","aria-hidden":"true"},g={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},m={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.971ex",height:"1.595ex",role:"img",focusable:"false",viewBox:"0 -694 429 705","aria-hidden":"true"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.65ex"},xmlns:"http://www.w3.org/2000/svg",width:"17.577ex",height:"2.347ex",role:"img",focusable:"false",viewBox:"0 -750 7769 1037.2","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},C={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"34.144ex",height:"6.757ex",role:"img",focusable:"false",viewBox:"0 -1740.7 15091.8 2986.6","aria-hidden":"true"};function b(F,a,v,x,f,A){return t(),i("div",null,[a[21]||(a[21]=n("",91)),s("p",null,[a[4]||(a[4]=e("For our first example, consider a simple function computing ")),s("mjx-container",h,[(t(),i("svg",r,a[0]||(a[0]=[n("",1)]))),a[1]||(a[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"f"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mfrac",null,[s("mn",null,"1"),s("mn",null,"2")]),s("msup",null,[s("mi",null,"x"),s("mi",null,"T")]),s("mi",null,"x")])],-1))]),a[5]||(a[5]=e(", where 
")),s("mjx-container",d,[(t(),i("svg",o,a[2]||(a[2]=[n("",1)]))),a[3]||(a[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",{mathvariant:"normal"},"∇"),s("mi",null,"f"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mi",null,"x")])],-1))])]),a[22]||(a[22]=n("",19)),s("p",null,[a[14]||(a[14]=e("Finally, now let us consider a linear regression problem. From a set of data-points ")),s("mjx-container",k,[(t(),i("svg",c,a[6]||(a[6]=[n("",1)]))),a[7]||(a[7]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{fence:"false",stretchy:"false"},"{"),s("mo",{stretchy:"false"},"("),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",null,","),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"i"),s("mo",null,"∈"),s("mo",{fence:"false",stretchy:"false"},"{"),s("mn",null,"1"),s("mo",null,","),s("mo",null,"…"),s("mo",null,","),s("mi",null,"k"),s("mo",{fence:"false",stretchy:"false"},"}"),s("mo",null,","),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mi",null,"n")]),s("mo",null,","),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mi",null,"m")]),s("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[15]||(a[15]=e(", we try to find a set of parameters ")),s("mjx-container",Q,[(t(),i("svg",T,a[8]||(a[8]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D44A",d:"M436 683Q450 683 486 682T553 680Q604 680 638 681T677 682Q695 682 695 674Q695 670 692 659Q687 641 683 639T661 637Q636 636 621 632T600 624T597 615Q597 603 613 377T629 138L631 141Q633 144 637 151T649 170T666 200T690 241T720 295T759 362Q863 546 877 572T892 604Q892 619 873 628T831 637Q817 637 817 647Q817 650 819 660Q823 676 825 679T839 682Q842 682 856 682T895 682T949 681Q1015 681 1034 683Q1048 683 1048 672Q1048 666 1045 655T1038 640T1028 637Q1006 637 988 631T958 617T939 600T927 584L923 578L754 282Q586 -14 585 -15Q579 -22 561 -22Q546 -22 542 -17Q539 -14 523 229T506 480L494 462Q472 425 366 239Q222 -13 220 -15T215 -19Q210 -22 197 -22Q178 -22 176 -15Q176 -12 154 304T131 
622Q129 631 121 633T82 637H58Q51 644 51 648Q52 671 64 683H76Q118 680 176 680Q301 680 313 683H323Q329 677 329 674T327 656Q322 641 318 637H297Q236 634 232 620Q262 160 266 136L501 550L499 587Q496 629 489 632Q483 636 447 637Q428 637 422 639T416 648Q416 650 418 660Q419 664 420 669T421 676T424 680T428 682T436 683Z",style:{"stroke-width":"3"}})])])],-1)]))),a[9]||(a[9]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"W")])],-1))]),a[16]||(a[16]=e(" and ")),s("mjx-container",g,[(t(),i("svg",m,a[10]||(a[10]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D44F",d:"M73 647Q73 657 77 670T89 683Q90 683 161 688T234 694Q246 694 246 685T212 542Q204 508 195 472T180 418L176 399Q176 396 182 402Q231 442 283 442Q345 442 383 396T422 280Q422 169 343 79T173 -11Q123 -11 82 27T40 150V159Q40 180 48 217T97 414Q147 611 147 623T109 637Q104 637 101 637H96Q86 637 83 637T76 640T73 647ZM336 325V331Q336 405 275 405Q258 405 240 397T207 376T181 352T163 330L157 322L136 236Q114 150 114 114Q114 66 138 42Q154 26 178 26Q211 26 245 58Q270 81 285 114T318 219Q336 291 336 325Z",style:{"stroke-width":"3"}})])])],-1)]))),a[11]||(a[11]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"b")])],-1))]),a[17]||(a[17]=e(", s.t. ")),s("mjx-container",u,[(t(),i("svg",y,a[12]||(a[12]=[n("",1)]))),a[13]||(a[13]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"W"),s("mo",null,","),s("mi",null,"b")])]),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mi",null,"W"),s("mi",null,"x"),s("mo",null,"+"),s("mi",null,"b")])],-1))]),a[18]||(a[18]=e(", which minimizes the mean squared error:"))]),s("mjx-container",E,[(t(),i("svg",C,a[19]||(a[19]=[n("",1)]))),a[20]||(a[20]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"L"),s("mo",{stretchy:"false"},"("),s("mi",null,"W"),s("mo",null,","),s("mi",null,"b"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},"⟶"),s("munderover",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"i"),s("mo",null,"="),s("mn",null,"1")]),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"k")])]),s("mfrac",null,[s("mn",null,"1"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"ORD"},"∥"),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",null,"−"),s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"W"),s("mo",null,","),s("mi",null,"b")])]),s("mo",{stretchy:"false"},"("),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",{stretchy:"false"},")"),s("msubsup",null,[s("mo",{"data-mjx-texclass":"ORD"},"∥"),s("mn",null,"2"),s("mn",null,"2")])])],-1))]),a[23]||(a[23]=n("",28))])}const L=l(p,[["render",b]]);export{D as __pageData,L as default}; diff --git a/dev/assets/tutorials_beginner_1_Basics.md.BYin9je8.lean.js b/dev/assets/tutorials_beginner_1_Basics.md.BYin9je8.lean.js deleted file mode 100644 index c36de999f4..0000000000 --- a/dev/assets/tutorials_beginner_1_Basics.md.BYin9je8.lean.js +++ /dev/null @@ -1,182 +0,0 @@ -import{_ as l,c as i,a2 as t,j as s,a as e,o as n}from"./chunks/framework.I-x9Gl6h.js";const D=JSON.parse('{"title":"Julia & Lux for the 
Uninitiated","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/1_Basics.md","filePath":"tutorials/beginner/1_Basics.md","lastUpdated":null}'),p={name:"tutorials/beginner/1_Basics.md"},h={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.781ex"},xmlns:"http://www.w3.org/2000/svg",width:"13.013ex",height:"2.737ex",role:"img",focusable:"false",viewBox:"0 -864.9 5751.9 1209.9","aria-hidden":"true"},r={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},o={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"10.494ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 4638.6 1000","aria-hidden":"true"},k={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},Q={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"40.51ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 17905.2 1000","aria-hidden":"true"},T={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},g={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.05ex"},xmlns:"http://www.w3.org/2000/svg",width:"2.371ex",height:"1.595ex",role:"img",focusable:"false",viewBox:"0 -683 1048 705","aria-hidden":"true"},c={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},m={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.025ex"},xmlns:"http://www.w3.org/2000/svg",width:"0.971ex",height:"1.595ex",role:"img",focusable:"false",viewBox:"0 -694 429 
705","aria-hidden":"true"},u={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.65ex"},xmlns:"http://www.w3.org/2000/svg",width:"17.577ex",height:"2.347ex",role:"img",focusable:"false",viewBox:"0 -750 7769 1037.2","aria-hidden":"true"},y={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},b={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-2.819ex"},xmlns:"http://www.w3.org/2000/svg",width:"34.144ex",height:"6.757ex",role:"img",focusable:"false",viewBox:"0 -1740.7 15091.8 2986.6","aria-hidden":"true"};function F(v,a,C,f,x,w){return n(),i("div",null,[a[21]||(a[21]=t(`

      Julia & Lux for the Uninitiated

      This is a quick intro to Lux loosely based on:

      1. PyTorch's tutorial.

      2. Flux's tutorial (the link for which has now been lost to abyss).

      3. Jax's tutorial.

      It introduces basic Julia programming, as well as Zygote, a source-to-source automatic differentiation (AD) framework in Julia. We'll use these tools to build a very simple neural network. Let's start with importing Lux.jl

      julia
      using Lux, Random

      Now let us control the randomness in our code using a proper Pseudo Random Number Generator (PRNG)

      julia
      rng = Random.default_rng()
      -Random.seed!(rng, 0)
      Random.TaskLocalRNG()

      Arrays

      The starting point for all of our models is the Array (sometimes referred to as a Tensor in other frameworks). This is really just a list of numbers, which might be arranged into a shape like a square. Let's write down an array with three elements.

      julia
      x = [1, 2, 3]
      3-element Vector{Int64}:
      - 1
      - 2
      - 3

      Here's a matrix – a square array with four elements.

      julia
      x = [1 2; 3 4]
      2×2 Matrix{Int64}:
      - 1  2
      - 3  4

      We often work with arrays of thousands of elements, and don't usually write them down by hand. Here's how we can create an array of 5×3 = 15 elements, each a random number from zero to one.

      julia
      x = rand(rng, 5, 3)
      5×3 Matrix{Float64}:
      - 0.455238   0.746943   0.193291
      - 0.547642   0.746801   0.116989
      - 0.773354   0.97667    0.899766
      - 0.940585   0.0869468  0.422918
      - 0.0296477  0.351491   0.707534

      There are a few functions like this; try replacing rand with ones, zeros, or randn.
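For instance, a quick sketch of those alternatives (the `Xoshiro(0)` seed here is just an arbitrary choice for reproducibility):

```julia
using Random

rng = Xoshiro(0)

ones(5, 3)        # a 5×3 matrix filled with 1.0
zeros(5, 3)       # a 5×3 matrix filled with 0.0
randn(rng, 5, 3)  # a 5×3 matrix of standard-normal samples
```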

      By default, Julia stores numbers in a high-precision format called Float64. In ML we often don't need all those digits, and can ask Julia to work with Float32 instead. We can even ask for more digits using BigFloat.

      julia
      x = rand(BigFloat, 5, 3)
      5×3 Matrix{BigFloat}:
      - 0.981339    0.793159  0.459019
      - 0.043883    0.624384  0.56055
      - 0.164786    0.524008  0.0355555
      - 0.414769    0.577181  0.621958
      - 0.00823197  0.30215   0.655881
      julia
      x = rand(Float32, 5, 3)
      5×3 Matrix{Float32}:
      - 0.567794   0.369178   0.342539
      - 0.0985227  0.201145   0.587206
      - 0.776598   0.148248   0.0851708
      - 0.723731   0.0770206  0.839303
      - 0.404728   0.230954   0.679087

      We can ask the array how many elements it has.

      julia
      length(x)
      15

      Or, more specifically, what size it has.

      julia
      size(x)
      (5, 3)

      We sometimes want to see some elements of the array on their own.

      julia
      x
      5×3 Matrix{Float32}:
      - 0.567794   0.369178   0.342539
      - 0.0985227  0.201145   0.587206
      - 0.776598   0.148248   0.0851708
      - 0.723731   0.0770206  0.839303
      - 0.404728   0.230954   0.679087
      julia
      x[2, 3]
      0.58720636f0

      This means: get the element in the second row and the third column. We can also get every row of the third column.

      julia
      x[:, 3]
      5-element Vector{Float32}:
      - 0.34253937
      - 0.58720636
      - 0.085170805
      - 0.8393034
      - 0.67908657

      We can add arrays, and subtract them, which adds or subtracts each element of the array.

      julia
      x + x
      5×3 Matrix{Float32}:
      - 1.13559   0.738356  0.685079
      - 0.197045  0.40229   1.17441
      - 1.5532    0.296496  0.170342
      - 1.44746   0.154041  1.67861
      - 0.809456  0.461908  1.35817
      julia
      x - x
      5×3 Matrix{Float32}:
      - 0.0  0.0  0.0
      - 0.0  0.0  0.0
      - 0.0  0.0  0.0
      - 0.0  0.0  0.0
      - 0.0  0.0  0.0

      Julia supports a feature called broadcasting, using the . syntax. This tiles small arrays (or single numbers) to fill bigger ones.

      julia
      x .+ 1
      5×3 Matrix{Float32}:
      - 1.56779  1.36918  1.34254
      - 1.09852  1.20114  1.58721
      - 1.7766   1.14825  1.08517
      - 1.72373  1.07702  1.8393
      - 1.40473  1.23095  1.67909

      We can see Julia tile the column vector 1:5 across all rows of the larger array.

      julia
      zeros(5, 5) .+ (1:5)
      5×5 Matrix{Float64}:
      - 1.0  1.0  1.0  1.0  1.0
      - 2.0  2.0  2.0  2.0  2.0
      - 3.0  3.0  3.0  3.0  3.0
      - 4.0  4.0  4.0  4.0  4.0
      - 5.0  5.0  5.0  5.0  5.0

      The x' syntax is used to transpose a column 1:5 into an equivalent row, and Julia will tile that across columns.

      julia
      zeros(5, 5) .+ (1:5)'
      5×5 Matrix{Float64}:
      - 1.0  2.0  3.0  4.0  5.0
      - 1.0  2.0  3.0  4.0  5.0
      - 1.0  2.0  3.0  4.0  5.0
      - 1.0  2.0  3.0  4.0  5.0
      - 1.0  2.0  3.0  4.0  5.0

      We can use this to make a times table.

      julia
      (1:5) .* (1:5)'
      5×5 Matrix{Int64}:
      - 1   2   3   4   5
      - 2   4   6   8  10
      - 3   6   9  12  15
      - 4   8  12  16  20
      - 5  10  15  20  25

      Finally, and importantly for machine learning, we can conveniently do things like matrix multiply.

      julia
      W = randn(5, 10)
      -x = rand(10)
      -W * x
      5-element Vector{Float64}:
      -  1.2197981041108443
      - -2.62625877100596
      - -2.8573820474674845
      - -2.4319346874291314
      -  1.0108668577150213

      Julia's arrays are very powerful, and you can learn more about what they can do here.

      CUDA Arrays

      CUDA functionality is provided separately by the CUDA.jl package. If you have a GPU and LuxCUDA is installed, Lux will provide CUDA capabilities. For additional details on backends see the manual section.

      You can manually add CUDA. Once CUDA is loaded you can move any array to the GPU with the cu function (or the gpu function exported by \`Lux\`), and it supports all of the above operations with the same syntax.

      julia
      using LuxCUDA
      -
      -if LuxCUDA.functional()
      -    x_cu = cu(rand(5, 3))
      -    @show x_cu
      -end

      (Im)mutability

      Lux, as you might have read, is immutable by convention, which means that the core library is built without any form of mutation and all functions are pure. However, we don't enforce this in any form. We do strongly recommend that users extending this framework for their respective applications don't mutate their arrays.

      julia
      x = reshape(1:8, 2, 4)
      2×4 reshape(::UnitRange{Int64}, 2, 4) with eltype Int64:
      - 1  3  5  7
      - 2  4  6  8

      To update this array, we should first copy the array.

      julia
      x_copy = copy(x)
      -view(x_copy, :, 1) .= 0
      -
      -println("Original Array ", x)
      -println("Mutated Array ", x_copy)
      Original Array [1 3 5 7; 2 4 6 8]
      -Mutated Array [0 3 5 7; 0 4 6 8]

      Note that our current default AD engine (Zygote) is unable to differentiate through this mutation; however, for these specialized cases it is quite straightforward to write custom backward passes. (This problem will be fixed once we move towards Enzyme.jl.)
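As a hedged sketch of what such a custom backward pass could look like (using ChainRulesCore, which Zygote picks up automatically; `sum_mutating` is a hypothetical helper, not part of Lux):

```julia
using ChainRulesCore, Zygote

# A hypothetical function that mutates an internal buffer.
# Zygote alone cannot differentiate through the `.=` mutation.
function sum_mutating(x)
    buf = similar(x)
    buf .= 2 .* x
    return sum(buf)
end

# A hand-written reverse rule: d(sum(2x))/dx = 2 for every element,
# so Zygote never has to trace through the mutation.
function ChainRulesCore.rrule(::typeof(sum_mutating), x)
    pullback(ȳ) = (NoTangent(), fill(2ȳ, size(x)))
    return sum_mutating(x), pullback
end

only(Zygote.gradient(sum_mutating, ones(3)))  # ≈ [2.0, 2.0, 2.0]
```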

      Managing Randomness

      We rely on the Julia StdLib Random for managing the randomness in our execution. First, we create a PRNG (pseudorandom number generator) and seed it.

      julia
      rng = Xoshiro(0)     # Creates a Xoshiro PRNG with seed 0
      Random.Xoshiro(0xdb2fa90498613fdf, 0x48d73dc42d195740, 0x8c49bc52dc8a77ea, 0x1911b814c02405e8, 0x22a21880af5dc689)

      If we call any function that relies on rng and uses it via randn, rand, etc., rng will be mutated. As we have already established, we care a lot about immutability, hence we should use Lux.replicate on PRNGs before using them.

      First, let us run a random number generator 3 times with the replicated rng.

      julia
      random_vectors = Vector{Vector{Float64}}(undef, 3)
      -for i in 1:3
      -    random_vectors[i] = rand(Lux.replicate(rng), 10)
      -    println("Iteration $i ", random_vectors[i])
      -end
      -@assert random_vectors[1] ≈ random_vectors[2] ≈ random_vectors[3]
      Iteration 1 [0.4552384158732863, 0.5476424498276177, 0.7733535276924052, 0.9405848223512736, 0.02964765308691042, 0.74694291453392, 0.7468008914093891, 0.9766699015845924, 0.08694684883050086, 0.35149138733595564]
      -Iteration 2 [0.4552384158732863, 0.5476424498276177, 0.7733535276924052, 0.9405848223512736, 0.02964765308691042, 0.74694291453392, 0.7468008914093891, 0.9766699015845924, 0.08694684883050086, 0.35149138733595564]
      -Iteration 3 [0.4552384158732863, 0.5476424498276177, 0.7733535276924052, 0.9405848223512736, 0.02964765308691042, 0.74694291453392, 0.7468008914093891, 0.9766699015845924, 0.08694684883050086, 0.35149138733595564]

      As expected, we get the same output. If we remove the replicate call, we will get different outputs.

      julia
      for i in 1:3
      -    println("Iteration $i ", rand(rng, 10))
      -end
      Iteration 1 [0.4552384158732863, 0.5476424498276177, 0.7733535276924052, 0.9405848223512736, 0.02964765308691042, 0.74694291453392, 0.7468008914093891, 0.9766699015845924, 0.08694684883050086, 0.35149138733595564]
      -Iteration 2 [0.018743665453639813, 0.8601828553599953, 0.6556360448565952, 0.7746656838366666, 0.7817315740767116, 0.5553797706980106, 0.1261990389976131, 0.4488101521328277, 0.624383955429775, 0.05657739601024536]
      -Iteration 3 [0.19597391412112541, 0.6830945313415872, 0.6776220912718907, 0.6456416023530093, 0.6340362477836592, 0.5595843665394066, 0.5675557670686644, 0.34351700231383653, 0.7237308297251812, 0.3691778381831775]

      Automatic Differentiation

      Julia has quite a few (maybe too many) AD tools. For the purpose of this tutorial, we will use:

      1. ForwardDiff.jl – For Jacobian-Vector Product (JVP)

      2. Zygote.jl – For Vector-Jacobian Product (VJP)

      Slight Detour: We have had several questions regarding whether we will consider any other AD system for the reverse-diff backend. For now we will stick to Zygote.jl; however, once we have tested Lux extensively with Enzyme.jl, we will make the switch.

      Even though, theoretically, a VJP (Vector-Jacobian product, reverse-mode autodiff) and a JVP (Jacobian-vector product, forward-mode autodiff) are similar (each computes a product of a Jacobian and a vector), they differ in the computational complexity of the operation. In short, when you have a large number of parameters (hence a wide Jacobian matrix), a JVP is computationally less efficient than a VJP; conversely, a JVP is more efficient when the Jacobian is a tall matrix.
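To make the complexity argument concrete, here is a hedged mini-sketch: recovering the full Jacobian of a map ℝⁿ → ℝᵐ takes n JVPs (one per input basis vector) but only m VJPs (one per output basis vector), so reverse mode wins when m ≪ n. The function `h` below is an arbitrary example, not from the tutorial:

```julia
using ForwardDiff, Zygote

h(x) = [sum(abs2, x), prod(x)]   # an arbitrary example map ℝ³ → ℝ²

x = [1.0, 2.0, 3.0]

# Forward mode builds the Jacobian column by column: one JVP per input (n = 3).
J_fwd = ForwardDiff.jacobian(h, x)

# Reverse mode builds it row by row: one VJP per output (m = 2).
J_rev = only(Zygote.jacobian(h, x))

J_fwd ≈ J_rev  # the two modes agree on the result; they differ only in cost
```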

      julia
      using ComponentArrays, ForwardDiff, Zygote

      Gradients

      `,89)),s("p",null,[a[4]||(a[4]=e("For our first example, consider a simple function computing ")),s("mjx-container",h,[(n(),i("svg",d,a[0]||(a[0]=[t('',1)]))),a[1]||(a[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"f"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mfrac",null,[s("mn",null,"1"),s("mn",null,"2")]),s("msup",null,[s("mi",null,"x"),s("mi",null,"T")]),s("mi",null,"x")])],-1))]),a[5]||(a[5]=e(", where ")),s("mjx-container",r,[(n(),i("svg",o,a[2]||(a[2]=[t('',1)]))),a[3]||(a[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",{mathvariant:"normal"},"∇"),s("mi",null,"f"),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mi",null,"x")])],-1))])]),a[22]||(a[22]=t(`
      julia
      f(x) = x' * x / 2
      -∇f(x) = x  # \`∇\` can be typed as \`\\nabla<TAB>\`
      -v = randn(rng, Float32, 4)
      4-element Vector{Float32}:
      - -0.4051151
      - -0.4593922
      -  0.92155594
      -  1.1871622

      Let's use ForwardDiff and Zygote to compute the gradients.

      julia
      println("Actual Gradient: ", ∇f(v))
      -println("Computed Gradient via Reverse Mode AD (Zygote): ", only(Zygote.gradient(f, v)))
      -println("Computed Gradient via Forward Mode AD (ForwardDiff): ", ForwardDiff.gradient(f, v))
      Actual Gradient: Float32[-0.4051151, -0.4593922, 0.92155594, 1.1871622]
      -Computed Gradient via Reverse Mode AD (Zygote): Float32[-0.4051151, -0.4593922, 0.92155594, 1.1871622]
      -Computed Gradient via Forward Mode AD (ForwardDiff): Float32[-0.4051151, -0.4593922, 0.92155594, 1.1871622]

      Note that AD.gradient will only work for scalar-valued outputs.
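For vector-valued outputs, the jacobian functions can be used instead. A small sketch (the function `g` is an arbitrary example):

```julia
using Zygote

g(x) = x .^ 2   # vector-valued, so Zygote.gradient would error here

# Zygote.jacobian returns a tuple with one Jacobian per argument.
J = only(Zygote.jacobian(g, [1.0, 2.0, 3.0]))
# J is a 3×3 diagonal matrix with 2x on the diagonal
```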

      Jacobian-Vector Product

      I will defer the discussion on forward-mode AD to https://book.sciml.ai/notes/08-Forward-Mode_Automatic_Differentiation_(AD)_via_High_Dimensional_Algebras/. Here, let us just look at a small example of how to use it.

      julia
      f(x) = x .* x ./ 2
      -x = randn(rng, Float32, 5)
      -v = ones(Float32, 5)
      5-element Vector{Float32}:
      - 1.0
      - 1.0
      - 1.0
      - 1.0
      - 1.0

      Using DifferentiationInterface

      While DifferentiationInterface provides these functions for a wider range of backends, we currently don't recommend using it with Lux models, since the functions presented here come with additional goodies like fast second-order derivatives.

      Compute the JVP. AutoForwardDiff specifies that we want to use ForwardDiff.jl for the Jacobian-Vector Product.

      julia
      jvp = jacobian_vector_product(f, AutoForwardDiff(), x, v)
      -println("JVP: ", jvp)
      JVP: Float32[-0.877497, 1.1953009, -0.057005208, 0.25055695, 0.09351656]

      Vector-Jacobian Product

      Using the same function and inputs, let us compute the VJP.

      julia
      vjp = vector_jacobian_product(f, AutoZygote(), x, v)
      -println("VJP: ", vjp)
      VJP: Float32[-0.877497, 1.1953009, -0.057005208, 0.25055695, 0.09351656]

      Linear Regression

      `,19)),s("p",null,[a[14]||(a[14]=e("Finally, now let us consider a linear regression problem. From a set of data-points ")),s("mjx-container",k,[(n(),i("svg",Q,a[6]||(a[6]=[t('',1)]))),a[7]||(a[7]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mo",{fence:"false",stretchy:"false"},"{"),s("mo",{stretchy:"false"},"("),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",null,","),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",{stretchy:"false"},")"),s("mo",null,","),s("mi",null,"i"),s("mo",null,"∈"),s("mo",{fence:"false",stretchy:"false"},"{"),s("mn",null,"1"),s("mo",null,","),s("mo",null,"…"),s("mo",null,","),s("mi",null,"k"),s("mo",{fence:"false",stretchy:"false"},"}"),s("mo",null,","),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mi",null,"n")]),s("mo",null,","),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",null,"∈"),s("msup",null,[s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",{mathvariant:"double-struck"},"R")]),s("mi",null,"m")]),s("mo",{fence:"false",stretchy:"false"},"}")])],-1))]),a[15]||(a[15]=e(", we try to find a set of parameters ")),s("mjx-container",T,[(n(),i("svg",g,a[8]||(a[8]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D44A",d:"M436 683Q450 683 486 682T553 680Q604 680 638 681T677 682Q695 682 695 674Q695 670 692 659Q687 641 683 639T661 637Q636 636 621 632T600 624T597 615Q597 603 
613 377T629 138L631 141Q633 144 637 151T649 170T666 200T690 241T720 295T759 362Q863 546 877 572T892 604Q892 619 873 628T831 637Q817 637 817 647Q817 650 819 660Q823 676 825 679T839 682Q842 682 856 682T895 682T949 681Q1015 681 1034 683Q1048 683 1048 672Q1048 666 1045 655T1038 640T1028 637Q1006 637 988 631T958 617T939 600T927 584L923 578L754 282Q586 -14 585 -15Q579 -22 561 -22Q546 -22 542 -17Q539 -14 523 229T506 480L494 462Q472 425 366 239Q222 -13 220 -15T215 -19Q210 -22 197 -22Q178 -22 176 -15Q176 -12 154 304T131 622Q129 631 121 633T82 637H58Q51 644 51 648Q52 671 64 683H76Q118 680 176 680Q301 680 313 683H323Q329 677 329 674T327 656Q322 641 318 637H297Q236 634 232 620Q262 160 266 136L501 550L499 587Q496 629 489 632Q483 636 447 637Q428 637 422 639T416 648Q416 650 418 660Q419 664 420 669T421 676T424 680T428 682T436 683Z",style:{"stroke-width":"3"}})])])],-1)]))),a[9]||(a[9]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"W")])],-1))]),a[16]||(a[16]=e(" and ")),s("mjx-container",c,[(n(),i("svg",m,a[10]||(a[10]=[s("g",{stroke:"currentColor",fill:"currentColor","stroke-width":"0",transform:"scale(1,-1)"},[s("g",{"data-mml-node":"math"},[s("g",{"data-mml-node":"mi"},[s("path",{"data-c":"1D44F",d:"M73 647Q73 657 77 670T89 683Q90 683 161 688T234 694Q246 694 246 685T212 542Q204 508 195 472T180 418L176 399Q176 396 182 402Q231 442 283 442Q345 442 383 396T422 280Q422 169 343 79T173 -11Q123 -11 82 27T40 150V159Q40 180 48 217T97 414Q147 611 147 623T109 637Q104 637 101 637H96Q86 637 83 637T76 640T73 647ZM336 325V331Q336 405 275 405Q258 405 240 397T207 376T181 352T163 330L157 322L136 
236Q114 150 114 114Q114 66 138 42Q154 26 178 26Q211 26 245 58Q270 81 285 114T318 219Q336 291 336 325Z",style:{"stroke-width":"3"}})])])],-1)]))),a[11]||(a[11]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"b")])],-1))]),a[17]||(a[17]=e(", s.t. ")),s("mjx-container",u,[(n(),i("svg",E,a[12]||(a[12]=[t('',1)]))),a[13]||(a[13]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"W"),s("mo",null,","),s("mi",null,"b")])]),s("mo",{stretchy:"false"},"("),s("mi",null,"x"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("mi",null,"W"),s("mi",null,"x"),s("mo",null,"+"),s("mi",null,"b")])],-1))]),a[18]||(a[18]=e(", which minimizes the mean squared error:"))]),s("mjx-container",y,[(n(),i("svg",b,a[19]||(a[19]=[t('',1)]))),a[20]||(a[20]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"L"),s("mo",{stretchy:"false"},"("),s("mi",null,"W"),s("mo",null,","),s("mi",null,"b"),s("mo",{stretchy:"false"},")"),s("mo",{stretchy:"false"},"⟶"),s("munderover",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"i"),s("mo",null,"="),s("mn",null,"1")]),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"k")])]),s("mfrac",null,[s("mn",null,"1"),s("mn",null,"2")]),s("mo",{"data-mjx-texclass":"ORD"},"∥"),s("msub",null,[s("mi",null,"y"),s("mi",null,"i")]),s("mo",null,"−"),s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"W"),s("mo",null,","),s("mi",null,"b")])]),s("mo",{stretchy:"false"},"("),s("msub",null,[s("mi",null,"x"),s("mi",null,"i")]),s("mo",{stretchy:"false"},")"),s("msubsup",null,[s("mo",{"data-mjx-texclass":"ORD"},"∥"),s("mn",null,"2"),s("mn",null,"2")])])],-1))]),a[23]||(a[23]=t(`

      We can write f from scratch, but to demonstrate Lux, let us use the Dense layer.

      julia
      model = Dense(10 => 5)
      -
      -rng = Random.default_rng()
      -Random.seed!(rng, 0)
      Random.TaskLocalRNG()

      Let us initialize the parameters and states (in this case the state is empty) for the model.

      julia
      ps, st = Lux.setup(rng, model)
      -ps = ps |> ComponentArray
      ComponentVector{Float32}(weight = Float32[-0.48351598 0.29944375 0.44048917 0.5221656 0.20001543 0.1437841 4.8317274f-6 0.5310851 -0.30674052 0.034259234; -0.04903387 -0.4242767 0.27051234 0.40789893 -0.43846482 -0.17706361 -0.03258145 0.46514034 0.1958431 0.23992883; 0.45016125 0.48263642 -0.2990853 -0.18695377 -0.11023762 -0.4418456 0.40354207 0.25278285 0.18056087 -0.3523193; 0.05218964 -0.09701932 0.27035674 0.12589 -0.29561827 0.34717593 -0.42189494 -0.13073668 0.36829436 -0.3097294; 0.20277858 -0.51524514 -0.22635892 0.18841726 0.29828635 0.21690917 -0.04265762 -0.41919118 0.071482725 -0.45247704], bias = Float32[-0.04199602, -0.093925126, -0.0007736237, -0.19397983, 0.0066712513])

      Set problem dimensions.

      julia
      n_samples = 20
      -x_dim = 10
      -y_dim = 5
      5

      Generate random ground truth W and b.

      julia
      W = randn(rng, Float32, y_dim, x_dim)
      -b = randn(rng, Float32, y_dim)
      5-element Vector{Float32}:
      - -0.9436797
      -  1.5164032
      -  0.011937321
      -  1.4339262
      - -0.2771789

      Generate samples with additional noise.

      julia
      x_samples = randn(rng, Float32, x_dim, n_samples)
      -y_samples = W * x_samples .+ b .+ 0.01f0 .* randn(rng, Float32, y_dim, n_samples)
      -println("x shape: ", size(x_samples), "; y shape: ", size(y_samples))
      x shape: (10, 20); y shape: (5, 20)

      To update our parameters, let's use Optimisers.jl. We will use Stochastic Gradient Descent (SGD) with a learning rate of 0.01.

      julia
      using Optimisers, Printf
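Before wiring this into Lux's training loop below, here is a hedged mini-sketch of the bare Optimisers.jl update cycle on its own (`ps_toy` and `grads` are toy placeholders, not the model parameters):

```julia
using Optimisers

ps_toy = (w = ones(2),)                            # toy parameters
opt_state = Optimisers.setup(Descent(0.01), ps_toy)
grads = (w = ones(2),)                             # pretend gradients
opt_state, ps_toy = Optimisers.update(opt_state, ps_toy, grads)
# each entry of ps_toy.w moved by -0.01 * gradient, i.e. 1.0 -> 0.99
```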

      Define the loss function

      julia
      lossfn = MSELoss()
      -
      -println("Loss Value with ground truth parameters: ", lossfn(W * x_samples .+ b, y_samples))
      Loss Value with ground truth parameters: 9.3742405e-5

      We will train the model using our training API.

      julia
      function train_model!(model, ps, st, opt, nepochs::Int)
      -    tstate = Training.TrainState(model, ps, st, opt)
      -    for i in 1:nepochs
      -        grads, loss, _, tstate = Training.single_train_step!(
      -            AutoZygote(), lossfn, (x_samples, y_samples), tstate)
      -        if i % 1000 == 1 || i == nepochs
      -            @printf "Loss Value after %6d iterations: %.8f\\n" i loss
      -        end
      -    end
      -    return tstate.model, tstate.parameters, tstate.states
      -end
      -
      -model, ps, st = train_model!(model, ps, st, Descent(0.01f0), 10000)
      -
      -println("Loss Value after training: ", lossfn(first(model(x_samples, ps, st)), y_samples))
      Loss Value after      1 iterations: 7.80465555
      -Loss Value after   1001 iterations: 0.12477568
      -Loss Value after   2001 iterations: 0.02535537
      -Loss Value after   3001 iterations: 0.00914141
      -Loss Value after   4001 iterations: 0.00407581
      -Loss Value after   5001 iterations: 0.00198415
      -Loss Value after   6001 iterations: 0.00101147
      -Loss Value after   7001 iterations: 0.00053332
      -Loss Value after   8001 iterations: 0.00029203
      -Loss Value after   9001 iterations: 0.00016878
      -Loss Value after  10000 iterations: 0.00010551
      -Loss Value after training: 0.00010546855

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 128 × AMD EPYC 7502 32-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 16 default, 0 interactive, 8 GC (on 16 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 16
      -  JULIA_DEPOT_PATH = /cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 16
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,28))])}const H=l(p,[["render",F]]);export{D as __pageData,H as default}; diff --git a/dev/assets/tutorials_beginner_2_PolynomialFitting.md.BnLeWgHX.js b/dev/assets/tutorials_beginner_2_PolynomialFitting.md.BCdraYWF.js similarity index 95% rename from dev/assets/tutorials_beginner_2_PolynomialFitting.md.BnLeWgHX.js rename to dev/assets/tutorials_beginner_2_PolynomialFitting.md.BCdraYWF.js index 94d9131b89..9e0658a021 100644 --- a/dev/assets/tutorials_beginner_2_PolynomialFitting.md.BnLeWgHX.js +++ b/dev/assets/tutorials_beginner_2_PolynomialFitting.md.BCdraYWF.js @@ -1,160 +1,46 @@ -import{_ as p,c as i,a2 as a,j as s,a as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const c=JSON.parse('{"title":"Fitting a Polynomial using MLP","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/2_PolynomialFitting.md","filePath":"tutorials/beginner/2_PolynomialFitting.md","lastUpdated":null}'),t={name:"tutorials/beginner/2_PolynomialFitting.md"},l={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"11.599ex",height:"2.351ex",role:"img",focusable:"false",viewBox:"0 -833.9 5126.6 1038.9","aria-hidden":"true"};function r(E,A,d,k,o,g){return e(),i("div",null,[A[4]||(A[4]=a(`

      Fitting a Polynomial using MLP

      In this tutorial we will fit a MultiLayer Perceptron (MLP) on data generated from a polynomial.

      Package Imports

      julia
      using Lux, ADTypes, Optimisers, Printf, Random, Reactant, Statistics, CairoMakie
      Precompiling Lux...
      -    529.0 ms  ✓ Requires
      -    530.5 ms  ✓ Compat
      -    386.7 ms  ✓ Compat → CompatLinearAlgebraExt
      -   1151.1 ms  ✓ LuxCore
      -   1200.9 ms  ✓ ChainRulesCore
      -    610.8 ms  ✓ Functors
      -    437.8 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    462.7 ms  ✓ LuxCore → LuxCoreSetfieldExt
      -   1521.2 ms  ✓ StaticArrayInterface
      -    399.8 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      -    642.7 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      -   4009.4 ms  ✓ KernelAbstractions
      -    646.5 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      -   1360.1 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      -    415.8 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    621.8 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      -   1701.0 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      -    962.1 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      -    451.1 ms  ✓ LuxCore → LuxCoreFunctorsExt
      -    788.0 ms  ✓ MLDataDevices
      -    492.9 ms  ✓ CloseOpenIntervals
      -    646.9 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -   1097.0 ms  ✓ Optimisers
      -    623.9 ms  ✓ LayoutPointers
      -    673.7 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    729.6 ms  ✓ KernelAbstractions → EnzymeExt
      -    472.8 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      -    652.4 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      -    430.9 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    414.9 ms  ✓ Optimisers → OptimisersAdaptExt
      -    941.3 ms  ✓ StrideArraysCore
      -    776.8 ms  ✓ Polyester
      -   5261.6 ms  ✓ NNlib
      -    829.6 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    990.3 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5674.2 ms  ✓ LuxLib
      -   9238.1 ms  ✓ Lux
      -  37 dependencies successfully precompiled in 30 seconds. 72 already precompiled.
      -Precompiling Reactant...
      -    990.3 ms  ✓ LazyArtifacts
      -   1394.9 ms  ✓ Enzyme_jll
      -   1420.0 ms  ✓ LLVMExtra_jll
      -   2244.4 ms  ✓ Reactant_jll
      -   6311.8 ms  ✓ LLVM
      -  26586.6 ms  ✓ GPUCompiler
      - 223529.4 ms  ✓ Enzyme
      -   6414.6 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
      -Info Given Reactant was explicitly requested, output will be shown live \x1B[0K
      -\x1B[0K2025-01-20 22:36:53.782407: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 5524933572539585731
      -  64426.5 ms  ✓ Reactant
      -  9 dependencies successfully precompiled in 330 seconds. 52 already precompiled.
      -  1 dependency had output during precompilation:
      -┌ Reactant
      -│  [Output was shown above]
      -
      -Precompiling ChainRulesCoreSparseArraysExt...
      -    637.5 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 11 already precompiled.
      -Precompiling SparseArraysExt...
      -    903.9 ms  ✓ KernelAbstractions → SparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 26 already precompiled.
      -Precompiling MLDataDevicesSparseArraysExt...
      -    678.9 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      -Precompiling UnsafeAtomicsLLVM...
      -   1762.2 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      -  1 dependency successfully precompiled in 2 seconds. 30 already precompiled.
      +import{_ as e,c as a,a2 as i,j as s,a as n,o as t}from"./chunks/framework.BetCMmtc.js";const v=JSON.parse('{"title":"Fitting a Polynomial using MLP","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/2_PolynomialFitting.md","filePath":"tutorials/beginner/2_PolynomialFitting.md","lastUpdated":null}'),l={name:"tutorials/beginner/2_PolynomialFitting.md"},p={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"11.599ex",height:"2.351ex",role:"img",focusable:"false",viewBox:"0 -833.9 5126.6 1038.9","aria-hidden":"true"};function r(k,A,E,d,g,o){return t(),a("div",null,[A[4]||(A[4]=i(`

      Fitting a Polynomial using MLP

      In this tutorial we will fit a MultiLayer Perceptron (MLP) on data generated from a polynomial.

      Package Imports

      julia
      using Lux, ADTypes, Optimisers, Printf, Random, Reactant, Statistics, CairoMakie
      Precompiling Lux...
      +    901.4 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      +   6123.7 ms  ✓ LuxLib
      +   9384.6 ms  ✓ Lux
      +  3 dependencies successfully precompiled in 17 seconds. 106 already precompiled.
       Precompiling LuxLibEnzymeExt...
      -   6930.7 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
      -   6478.8 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
      -   1422.9 ms  ✓ LuxLib → LuxLibEnzymeExt
      -  17255.9 ms  ✓ Enzyme → EnzymeStaticArraysExt
      -  18958.9 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
      -  5 dependencies successfully precompiled in 19 seconds. 126 already precompiled.
      +   1293.8 ms  ✓ LuxLib → LuxLibEnzymeExt
      +  1 dependency successfully precompiled in 2 seconds. 130 already precompiled.
       Precompiling LuxEnzymeExt...
      -   7693.1 ms  ✓ Lux → LuxEnzymeExt
      -  1 dependency successfully precompiled in 8 seconds. 146 already precompiled.
      -Precompiling LuxCoreReactantExt...
      -  20495.7 ms  ✓ LuxCore → LuxCoreReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 66 already precompiled.
      -Precompiling MLDataDevicesReactantExt...
      -  20685.0 ms  ✓ MLDataDevices → MLDataDevicesReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 63 already precompiled.
      +   6868.0 ms  ✓ Lux → LuxEnzymeExt
      +  1 dependency successfully precompiled in 7 seconds. 146 already precompiled.
       Precompiling LuxLibReactantExt...
      -  20460.8 ms  ✓ Reactant → ReactantStatisticsExt
      -  20780.8 ms  ✓ Reactant → ReactantNNlibExt
      -  21412.0 ms  ✓ LuxLib → LuxLibReactantExt
      -  20497.7 ms  ✓ Reactant → ReactantSpecialFunctionsExt
      -  20385.4 ms  ✓ Reactant → ReactantArrayInterfaceExt
      -  5 dependencies successfully precompiled in 42 seconds. 139 already precompiled.
      -Precompiling WeightInitializersReactantExt...
      -  20824.7 ms  ✓ WeightInitializers → WeightInitializersReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 77 already precompiled.
      +  12883.5 ms  ✓ LuxLib → LuxLibReactantExt
      +  1 dependency successfully precompiled in 13 seconds. 143 already precompiled.
       Precompiling LuxReactantExt...
      -   9886.8 ms  ✓ Lux → LuxReactantExt
      -  1 dependency successfully precompiled in 10 seconds. 161 already precompiled.
      +   8417.6 ms  ✓ Lux → LuxReactantExt
      +  1 dependency successfully precompiled in 9 seconds. 161 already precompiled.
       Precompiling CairoMakie...
      -    439.5 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      -   1626.9 ms  ✓ DataStructures
      -   3537.7 ms  ✓ Test
      -    749.1 ms  ✓ FilePaths
      -   3961.4 ms  ✓ PkgVersion
      -   1185.8 ms  ✓ IntelOpenMP_jll
      -   4150.2 ms  ✓ FileIO
      -   1960.0 ms  ✓ Interpolations
      -   1181.2 ms  ✓ ColorBrewer
      -    513.0 ms  ✓ SortingAlgorithms
      -   1497.7 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -    968.4 ms  ✓ QuadGK
      -    586.4 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      -   1294.5 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      -   1182.8 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -   1948.3 ms  ✓ Netpbm
      -   3206.6 ms  ✓ JpegTurbo
      -   8126.4 ms  ✓ MKL_jll
      -   1508.5 ms  ✓ QOI
      -   4115.3 ms  ✓ Sixel
      -   1493.5 ms  ✓ OpenEXR
      -  13826.7 ms  ✓ MathTeXEngine
      -   2539.9 ms  ✓ WebP
      -   1188.2 ms  ✓ Interpolations → InterpolationsUnitfulExt
      -   2240.7 ms  ✓ StatsBase
      -   4921.5 ms  ✓ Distributions
      -   1405.9 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   1413.8 ms  ✓ Distributions → DistributionsTestExt
      -   1760.2 ms  ✓ KernelDensity
      -  60596.8 ms  ✓ TiffImages
      -   1229.7 ms  ✓ ImageIO
      - 153721.8 ms  ✓ Makie
      -  88146.2 ms  ✓ CairoMakie
      -  33 dependencies successfully precompiled in 325 seconds. 238 already precompiled.
      -  1 dependency had output during precompilation:
      -┌ MKL_jll
      -│  \x1B[32m\x1B[1m Downloading\x1B[22m\x1B[39m artifact: IntelOpenMP
      -
      +    600.9 ms  ✓ Graphics
      +   1224.7 ms  ✓ HypergeometricFunctions
      +   1225.4 ms  ✓ IntelOpenMP_jll
      +   1421.3 ms  ✓ Cairo
      +   1268.6 ms  ✓ MKL_jll
      +   1918.0 ms  ✓ StatsFuns
      +    701.8 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      +   1650.1 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      +   5239.5 ms  ✓ Distributions
      +   1476.6 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      +   1532.2 ms  ✓ Distributions → DistributionsTestExt
      +   9676.7 ms  ✓ FFTW
      +   1771.5 ms  ✓ KernelDensity
      + 153295.0 ms  ✓ Makie
      +  96578.4 ms  ✓ CairoMakie
      +  15 dependencies successfully precompiled in 265 seconds. 256 already precompiled.
       Precompiling StructArraysGPUArraysCoreExt...
      -    716.4 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      +    721.7 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
         1 dependency successfully precompiled in 1 seconds. 34 already precompiled.
      -Precompiling StaticArrayInterfaceOffsetArraysExt...
      -    455.2 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      -Precompiling ReactantOffsetArraysExt...
      -  20631.6 ms  ✓ Reactant → ReactantOffsetArraysExt
      -  1 dependency successfully precompiled in 21 seconds. 63 already precompiled.
      -Precompiling QuadGKEnzymeExt...
      -   6457.9 ms  ✓ QuadGK → QuadGKEnzymeExt
      -  1 dependency successfully precompiled in 7 seconds. 49 already precompiled.
      -Precompiling MLDataDevicesFillArraysExt...
      -    414.6 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling ReactantAbstractFFTsExt...
      -  20623.8 ms  ✓ Reactant → ReactantAbstractFFTsExt
      -  1 dependency successfully precompiled in 21 seconds. 62 already precompiled.
       Precompiling NNlibFFTWExt...
      -    899.9 ms  ✓ NNlib → NNlibFFTWExt
      -  1 dependency successfully precompiled in 1 seconds. 54 already precompiled.

      Dataset

      `,6)),s("p",null,[A[2]||(A[2]=n("Generate 128 datapoints from the polynomial ")),s("mjx-container",l,[(e(),i("svg",h,A[0]||(A[0]=[a('',1)]))),A[1]||(A[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y"),s("mo",null,"="),s("msup",null,[s("mi",null,"x"),s("mn",null,"2")]),s("mo",null,"−"),s("mn",null,"2"),s("mi",null,"x")])],-1))]),A[3]||(A[3]=n("."))]),A[5]||(A[5]=a(`
      julia
      function generate_data(rng::AbstractRNG)
      +    923.3 ms  ✓ NNlib → NNlibFFTWExt
      +  1 dependency successfully precompiled in 1 seconds. 54 already precompiled.
      +Precompiling IntervalArithmeticForwardDiffExt...
      +    712.0 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
      +  1 dependency successfully precompiled in 1 seconds. 43 already precompiled.

      Dataset

      `,6)),s("p",null,[A[2]||(A[2]=n("Generate 128 datapoints from the polynomial ")),s("mjx-container",p,[(t(),a("svg",h,A[0]||(A[0]=[i('',1)]))),A[1]||(A[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y"),s("mo",null,"="),s("msup",null,[s("mi",null,"x"),s("mn",null,"2")]),s("mo",null,"−"),s("mn",null,"2"),s("mi",null,"x")])],-1))]),A[3]||(A[3]=n("."))]),A[5]||(A[5]=i(`
      julia
      function generate_data(rng::AbstractRNG)
           x = reshape(collect(range(-2.0f0, 2.0f0, 128)), (1, 128))
           y = evalpoly.(x, ((0, -2, 1),)) .+ randn(rng, Float32, (1, 128)) .* 0.1f0
           return (x, y)
      @@ -236,8 +122,8 @@ import{_ as p,c as i,a2 as a,j as s,a as n,o as e}from"./chunks/framework.I-x9Gl
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -254,4 +140,4 @@ import{_ as p,c as i,a2 as a,j as s,a as n,o as e}from"./chunks/framework.I-x9Gl
         JULIA_NUM_THREADS = 48
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,40))])}const v=p(t,[["render",r]]);export{c as __pageData,v as default}; + JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,40))])}const C=e(l,[["render",r]]);export{v as __pageData,C as default}; diff --git a/dev/assets/tutorials_beginner_2_PolynomialFitting.md.BCdraYWF.lean.js b/dev/assets/tutorials_beginner_2_PolynomialFitting.md.BCdraYWF.lean.js new file mode 100644 index 0000000000..4451ddfe43 --- /dev/null +++ b/dev/assets/tutorials_beginner_2_PolynomialFitting.md.BCdraYWF.lean.js @@ -0,0 +1 @@ +import{_ as e,c as a,a2 as i,j as s,a as n,o as t}from"./chunks/framework.BetCMmtc.js";const v=JSON.parse('{"title":"Fitting a Polynomial using MLP","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/2_PolynomialFitting.md","filePath":"tutorials/beginner/2_PolynomialFitting.md","lastUpdated":null}'),l={name:"tutorials/beginner/2_PolynomialFitting.md"},p={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"11.599ex",height:"2.351ex",role:"img",focusable:"false",viewBox:"0 -833.9 5126.6 1038.9","aria-hidden":"true"};function r(k,A,E,d,g,o){return t(),a("div",null,[A[4]||(A[4]=i("",6)),s("p",null,[A[2]||(A[2]=n("Generate 128 datapoints from the polynomial ")),s("mjx-container",p,[(t(),a("svg",h,A[0]||(A[0]=[i("",1)]))),A[1]||(A[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y"),s("mo",null,"="),s("msup",null,[s("mi",null,"x"),s("mn",null,"2")]),s("mo",null,"−"),s("mn",null,"2"),s("mi",null,"x")])],-1))]),A[3]||(A[3]=n("."))]),A[5]||(A[5]=i("",40))])}const C=e(l,[["render",r]]);export{v as 
__pageData,C as default}; diff --git a/dev/assets/tutorials_beginner_2_PolynomialFitting.md.BnLeWgHX.lean.js b/dev/assets/tutorials_beginner_2_PolynomialFitting.md.BnLeWgHX.lean.js deleted file mode 100644 index 94d9131b89..0000000000 --- a/dev/assets/tutorials_beginner_2_PolynomialFitting.md.BnLeWgHX.lean.js +++ /dev/null @@ -1,257 +0,0 @@ -import{_ as p,c as i,a2 as a,j as s,a as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const c=JSON.parse('{"title":"Fitting a Polynomial using MLP","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/2_PolynomialFitting.md","filePath":"tutorials/beginner/2_PolynomialFitting.md","lastUpdated":null}'),t={name:"tutorials/beginner/2_PolynomialFitting.md"},l={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},h={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.464ex"},xmlns:"http://www.w3.org/2000/svg",width:"11.599ex",height:"2.351ex",role:"img",focusable:"false",viewBox:"0 -833.9 5126.6 1038.9","aria-hidden":"true"};function r(E,A,d,k,o,g){return e(),i("div",null,[A[4]||(A[4]=a(`

      Fitting a Polynomial using MLP

In this tutorial, we will fit a multilayer perceptron (MLP) to data generated from a polynomial.

      Package Imports

      julia
      using Lux, ADTypes, Optimisers, Printf, Random, Reactant, Statistics, CairoMakie
      Precompiling Lux...
      -    529.0 ms  ✓ Requires
      -    530.5 ms  ✓ Compat
      -    386.7 ms  ✓ Compat → CompatLinearAlgebraExt
      -   1151.1 ms  ✓ LuxCore
      -   1200.9 ms  ✓ ChainRulesCore
      -    610.8 ms  ✓ Functors
      -    437.8 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    462.7 ms  ✓ LuxCore → LuxCoreSetfieldExt
      -   1521.2 ms  ✓ StaticArrayInterface
      -    399.8 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      -    642.7 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      -   4009.4 ms  ✓ KernelAbstractions
      -    646.5 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      -   1360.1 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      -    415.8 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    621.8 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      -   1701.0 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      -    962.1 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      -    451.1 ms  ✓ LuxCore → LuxCoreFunctorsExt
      -    788.0 ms  ✓ MLDataDevices
      -    492.9 ms  ✓ CloseOpenIntervals
      -    646.9 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -   1097.0 ms  ✓ Optimisers
      -    623.9 ms  ✓ LayoutPointers
      -    673.7 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    729.6 ms  ✓ KernelAbstractions → EnzymeExt
      -    472.8 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      -    652.4 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      -    430.9 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    414.9 ms  ✓ Optimisers → OptimisersAdaptExt
      -    941.3 ms  ✓ StrideArraysCore
      -    776.8 ms  ✓ Polyester
      -   5261.6 ms  ✓ NNlib
      -    829.6 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    990.3 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5674.2 ms  ✓ LuxLib
      -   9238.1 ms  ✓ Lux
      -  37 dependencies successfully precompiled in 30 seconds. 72 already precompiled.
      -Precompiling Reactant...
      -    990.3 ms  ✓ LazyArtifacts
      -   1394.9 ms  ✓ Enzyme_jll
      -   1420.0 ms  ✓ LLVMExtra_jll
      -   2244.4 ms  ✓ Reactant_jll
      -   6311.8 ms  ✓ LLVM
      -  26586.6 ms  ✓ GPUCompiler
      - 223529.4 ms  ✓ Enzyme
      -   6414.6 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
      -Info Given Reactant was explicitly requested, output will be shown live \x1B[0K
      -\x1B[0K2025-01-20 22:36:53.782407: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 5524933572539585731
      -  64426.5 ms  ✓ Reactant
      -  9 dependencies successfully precompiled in 330 seconds. 52 already precompiled.
      -  1 dependency had output during precompilation:
      -┌ Reactant
      -│  [Output was shown above]
      -
      -Precompiling ChainRulesCoreSparseArraysExt...
      -    637.5 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 11 already precompiled.
      -Precompiling SparseArraysExt...
      -    903.9 ms  ✓ KernelAbstractions → SparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 26 already precompiled.
      -Precompiling MLDataDevicesSparseArraysExt...
      -    678.9 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      -Precompiling UnsafeAtomicsLLVM...
      -   1762.2 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      -  1 dependency successfully precompiled in 2 seconds. 30 already precompiled.
      -Precompiling LuxLibEnzymeExt...
      -   6930.7 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
      -   6478.8 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
      -   1422.9 ms  ✓ LuxLib → LuxLibEnzymeExt
      -  17255.9 ms  ✓ Enzyme → EnzymeStaticArraysExt
      -  18958.9 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
      -  5 dependencies successfully precompiled in 19 seconds. 126 already precompiled.
      -Precompiling LuxEnzymeExt...
      -   7693.1 ms  ✓ Lux → LuxEnzymeExt
      -  1 dependency successfully precompiled in 8 seconds. 146 already precompiled.
      -Precompiling LuxCoreReactantExt...
      -  20495.7 ms  ✓ LuxCore → LuxCoreReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 66 already precompiled.
      -Precompiling MLDataDevicesReactantExt...
      -  20685.0 ms  ✓ MLDataDevices → MLDataDevicesReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 63 already precompiled.
      -Precompiling LuxLibReactantExt...
      -  20460.8 ms  ✓ Reactant → ReactantStatisticsExt
      -  20780.8 ms  ✓ Reactant → ReactantNNlibExt
      -  21412.0 ms  ✓ LuxLib → LuxLibReactantExt
      -  20497.7 ms  ✓ Reactant → ReactantSpecialFunctionsExt
      -  20385.4 ms  ✓ Reactant → ReactantArrayInterfaceExt
      -  5 dependencies successfully precompiled in 42 seconds. 139 already precompiled.
      -Precompiling WeightInitializersReactantExt...
      -  20824.7 ms  ✓ WeightInitializers → WeightInitializersReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 77 already precompiled.
      -Precompiling LuxReactantExt...
      -   9886.8 ms  ✓ Lux → LuxReactantExt
      -  1 dependency successfully precompiled in 10 seconds. 161 already precompiled.
      -Precompiling CairoMakie...
      -    439.5 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      -   1626.9 ms  ✓ DataStructures
      -   3537.7 ms  ✓ Test
      -    749.1 ms  ✓ FilePaths
      -   3961.4 ms  ✓ PkgVersion
      -   1185.8 ms  ✓ IntelOpenMP_jll
      -   4150.2 ms  ✓ FileIO
      -   1960.0 ms  ✓ Interpolations
      -   1181.2 ms  ✓ ColorBrewer
      -    513.0 ms  ✓ SortingAlgorithms
      -   1497.7 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -    968.4 ms  ✓ QuadGK
      -    586.4 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      -   1294.5 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      -   1182.8 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -   1948.3 ms  ✓ Netpbm
      -   3206.6 ms  ✓ JpegTurbo
      -   8126.4 ms  ✓ MKL_jll
      -   1508.5 ms  ✓ QOI
      -   4115.3 ms  ✓ Sixel
      -   1493.5 ms  ✓ OpenEXR
      -  13826.7 ms  ✓ MathTeXEngine
      -   2539.9 ms  ✓ WebP
      -   1188.2 ms  ✓ Interpolations → InterpolationsUnitfulExt
      -   2240.7 ms  ✓ StatsBase
      -   4921.5 ms  ✓ Distributions
      -   1405.9 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   1413.8 ms  ✓ Distributions → DistributionsTestExt
      -   1760.2 ms  ✓ KernelDensity
      -  60596.8 ms  ✓ TiffImages
      -   1229.7 ms  ✓ ImageIO
      - 153721.8 ms  ✓ Makie
      -  88146.2 ms  ✓ CairoMakie
      -  33 dependencies successfully precompiled in 325 seconds. 238 already precompiled.
      -  1 dependency had output during precompilation:
      -┌ MKL_jll
      -│  \x1B[32m\x1B[1m Downloading\x1B[22m\x1B[39m artifact: IntelOpenMP
      -
      -Precompiling StructArraysGPUArraysCoreExt...
      -    716.4 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      -  1 dependency successfully precompiled in 1 seconds. 34 already precompiled.
      -Precompiling StaticArrayInterfaceOffsetArraysExt...
      -    455.2 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      -Precompiling ReactantOffsetArraysExt...
      -  20631.6 ms  ✓ Reactant → ReactantOffsetArraysExt
      -  1 dependency successfully precompiled in 21 seconds. 63 already precompiled.
      -Precompiling QuadGKEnzymeExt...
      -   6457.9 ms  ✓ QuadGK → QuadGKEnzymeExt
      -  1 dependency successfully precompiled in 7 seconds. 49 already precompiled.
      -Precompiling MLDataDevicesFillArraysExt...
      -    414.6 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling ReactantAbstractFFTsExt...
      -  20623.8 ms  ✓ Reactant → ReactantAbstractFFTsExt
      -  1 dependency successfully precompiled in 21 seconds. 62 already precompiled.
      -Precompiling NNlibFFTWExt...
      -    899.9 ms  ✓ NNlib → NNlibFFTWExt
      -  1 dependency successfully precompiled in 1 seconds. 54 already precompiled.

      Dataset

      `,6)),s("p",null,[A[2]||(A[2]=n("Generate 128 datapoints from the polynomial ")),s("mjx-container",l,[(e(),i("svg",h,A[0]||(A[0]=[a('',1)]))),A[1]||(A[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y"),s("mo",null,"="),s("msup",null,[s("mi",null,"x"),s("mn",null,"2")]),s("mo",null,"−"),s("mn",null,"2"),s("mi",null,"x")])],-1))]),A[3]||(A[3]=n("."))]),A[5]||(A[5]=a(`
      julia
      function generate_data(rng::AbstractRNG)
      -    x = reshape(collect(range(-2.0f0, 2.0f0, 128)), (1, 128))
      -    y = evalpoly.(x, ((0, -2, 1),)) .+ randn(rng, Float32, (1, 128)) .* 0.1f0
      -    return (x, y)
      -end
      generate_data (generic function with 1 method)

      Initialize the random number generator and fetch the dataset.

      julia
      rng = MersenneTwister()
      -Random.seed!(rng, 12345)
      -
      -(x, y) = generate_data(rng)
      (Float32[-2.0 -1.968504 -1.9370079 -1.9055119 -1.8740157 -1.8425196 -1.8110236 -1.7795275 -1.7480315 -1.7165354 -1.6850394 -1.6535434 -1.6220472 -1.5905511 -1.5590551 -1.527559 -1.496063 -1.464567 -1.4330709 -1.4015749 -1.3700787 -1.3385826 -1.3070866 -1.2755905 -1.2440945 -1.2125984 -1.1811024 -1.1496063 -1.1181102 -1.0866141 -1.0551181 -1.023622 -0.992126 -0.96062994 -0.92913383 -0.8976378 -0.86614174 -0.8346457 -0.8031496 -0.77165353 -0.7401575 -0.70866144 -0.6771653 -0.6456693 -0.61417323 -0.5826772 -0.5511811 -0.51968503 -0.48818898 -0.4566929 -0.42519686 -0.39370078 -0.36220473 -0.33070865 -0.2992126 -0.26771653 -0.23622048 -0.20472442 -0.17322835 -0.14173229 -0.11023622 -0.07874016 -0.047244094 -0.015748031 0.015748031 0.047244094 0.07874016 0.11023622 0.14173229 0.17322835 0.20472442 0.23622048 0.26771653 0.2992126 0.33070865 0.36220473 0.39370078 0.42519686 0.4566929 0.48818898 0.51968503 0.5511811 0.5826772 0.61417323 0.6456693 0.6771653 0.70866144 0.7401575 0.77165353 0.8031496 0.8346457 0.86614174 0.8976378 0.92913383 0.96062994 0.992126 1.023622 1.0551181 1.0866141 1.1181102 1.1496063 1.1811024 1.2125984 1.2440945 1.2755905 1.3070866 1.3385826 1.3700787 1.4015749 1.4330709 1.464567 1.496063 1.527559 1.5590551 1.5905511 1.6220472 1.6535434 1.6850394 1.7165354 1.7480315 1.7795275 1.8110236 1.8425196 1.8740157 1.9055119 1.9370079 1.968504 2.0], Float32[8.080871 7.562357 7.451749 7.5005703 7.295229 7.2245107 6.8731666 6.7092047 6.5385857 6.4631066 6.281978 5.960991 5.963052 5.68927 5.3667717 5.519665 5.2999034 5.0238676 5.174298 4.6706038 4.570324 4.439068 4.4462147 4.299262 3.9799082 3.9492173 3.8747025 3.7264304 3.3844414 3.2934628 3.1180353 3.0698316 3.0491123 2.592982 2.8164148 2.3875027 2.3781595 2.4269633 2.2763796 2.3316176 2.0829067 1.9049499 1.8581494 1.7632381 1.7745113 1.5406592 1.3689325 1.2614254 1.1482575 1.2801026 0.9070533 0.91188717 0.9415703 0.85747254 0.6692604 0.7172643 0.48259094 0.48990166 0.35299227 0.31578436 0.25483933 
0.37486005 0.19847682 -0.042415008 -0.05951088 0.014774345 -0.114184186 -0.15978265 -0.29916334 -0.22005874 -0.17161606 -0.3613516 -0.5489093 -0.7267406 -0.5943626 -0.62129945 -0.50063384 -0.6346849 -0.86081326 -0.58715504 -0.5171875 -0.6575044 -0.71243864 -0.78395927 -0.90537953 -0.9515314 -0.8603811 -0.92880917 -1.0078154 -0.90215015 -1.0109437 -1.0764086 -1.1691734 -1.0740278 -1.1429857 -1.104191 -0.948015 -0.9233653 -0.82379496 -0.9810639 -0.92863405 -0.9360056 -0.92652786 -0.847396 -1.115507 -1.0877254 -0.92295444 -0.86975616 -0.81879705 -0.8482455 -0.6524158 -0.6184501 -0.7483137 -0.60395515 -0.67555165 -0.6288941 -0.6774449 -0.49889082 -0.43817532 -0.46497717 -0.30316323 -0.36745527 -0.3227286 -0.20977046 -0.09777648 -0.053120755 -0.15877295 -0.06777584])

      Let's visualize the dataset

      julia
      begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="x", ylabel="y")
      -
      -    l = lines!(ax, x[1, :], x -> evalpoly(x, (0, -2, 1)); linewidth=3, color=:blue)
      -    s = scatter!(ax, x[1, :], y[1, :]; markersize=12, alpha=0.5,
      -        color=:orange, strokecolor=:black, strokewidth=2)
      -
      -    axislegend(ax, [l, s], ["True Quadratic Function", "Data Points"])
      -
      -    fig
      -end

      Neural Network

      For this problem, you should not be using a neural network. But let's still do that!

      julia
      model = Chain(Dense(1 => 16, relu), Dense(16 => 1))
      Chain(
      -    layer_1 = Dense(1 => 16, relu),     # 32 parameters
      -    layer_2 = Dense(16 => 1),           # 17 parameters
      -)         # Total: 49 parameters,
      -          #        plus 0 states.

      Optimizer

      We will use Adam from Optimisers.jl

      julia
      opt = Adam(0.03f0)
      Adam(0.03, (0.9, 0.999), 1.0e-8)

      Loss Function

We will use the Training API, so we need to ensure that our loss function takes 4 inputs – model, parameters, states, and data. The function must return 3 values – loss, updated_state, and any computed statistics. This is already satisfied by the loss functions provided by Lux.
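For illustration, a hand-written loss with the same contract could look like the sketch below (the names `custom_mse` and `dummy_model` are hypothetical, used only to demonstrate the 4-input/3-output signature; they are not part of Lux):

```julia
# Any loss usable with the Training API must accept
# (model, parameters, states, data) and return
# (loss, updated_states, statistics):
function custom_mse(model, ps, st, (x, y))
    y_pred, st_new = model(x, ps, st)           # forward pass
    loss = sum(abs2, y_pred .- y) / length(y)   # mean squared error
    return loss, st_new, (;)                    # (;) = no extra statistics
end

# Minimal check with a stand-in "model" that ignores ps/st:
dummy_model(x, ps, st) = (2 .* x, st)
l, st_out, stats = custom_mse(dummy_model, nothing, (;), ([1.0 2.0], [2.0 4.0]))
# l == 0.0 since the stand-in model reproduces y exactly
```

In the tutorial itself we simply use the built-in `MSELoss()`, which already satisfies this contract.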

      julia
      const loss_function = MSELoss()
      -
      -const cdev = cpu_device()
      -const xdev = reactant_device()
      -
      -ps, st = Lux.setup(rng, model) |> xdev
      ((layer_1 = (weight = Reactant.ConcreteRArray{Float32, 2}(Float32[2.2569513; 1.8385266; 1.8834435; -1.4215803; -0.1289033; -1.4116536; -1.4359436; -2.3610642; -0.847535; 1.6091344; -0.34999675; 1.9372884; -0.41628727; 1.1786895; -1.4312565; 0.34652048;;]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[0.9155488, -0.005158901, 0.5026965, -0.84174657, -0.9167142, -0.14881086, -0.8202727, 0.19286752, 0.60171676, 0.951689, 0.4595859, -0.33281517, -0.692657, 0.4369135, 0.3800323, 0.61768365])), layer_2 = (weight = Reactant.ConcreteRArray{Float32, 2}(Float32[0.20061705 0.22529833 0.07667785 0.115506485 0.22827768 0.22680467 0.0035893882 -0.39495495 0.18033011 -0.02850357 -0.08613788 -0.3103005 0.12508307 -0.087390475 -0.13759731 0.08034529]), bias = Reactant.ConcreteRArray{Float32, 1}(Float32[0.06066203]))), (layer_1 = NamedTuple(), layer_2 = NamedTuple()))

      Training

      First we will create a Training.TrainState which is essentially a convenience wrapper over parameters, states and optimizer states.

      julia
      tstate = Training.TrainState(model, ps, st, opt)
      TrainState
      -    model: Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(NNlib.relu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(1 => 16, relu), layer_2 = Dense(16 => 1)), nothing)
      -    # of parameters: 49
      -    # of states: 0
      -    optimizer: Adam(0.03, (0.9, 0.999), 1.0e-8)
      -    step: 0

Now we will use Enzyme (via Reactant) for our AD requirements.

      julia
      vjp_rule = AutoEnzyme()
      ADTypes.AutoEnzyme()

      Finally the training loop.

      julia
      function main(tstate::Training.TrainState, vjp, data, epochs)
      -    data = data |> xdev
      -    for epoch in 1:epochs
      -        _, loss, _, tstate = Training.single_train_step!(vjp, loss_function, data, tstate)
      -        if epoch % 50 == 1 || epoch == epochs
      -            @printf "Epoch: %3d \\t Loss: %.5g\\n" epoch loss
      -        end
      -    end
      -    return tstate
      -end
      -
      -tstate = main(tstate, vjp_rule, (x, y), 250)
      TrainState
      -    model: Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(NNlib.relu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(1 => 16, relu), layer_2 = Dense(16 => 1)), nothing)
      -    # of parameters: 49
      -    # of states: 0
      -    optimizer: Adam(0.03, (0.9, 0.999), 1.0e-8)
      -    step: 250
      -    cache: TrainingBackendCache(Lux.Training.ReactantBackend{Static.True}(static(true)))
      -    objective_function: GenericLossFunction

      Since we are using Reactant, we need to compile the model before we can use it.

      julia
      forward_pass = @compile Lux.apply(
      -    tstate.model, xdev(x), tstate.parameters, Lux.testmode(tstate.states)
      -)
      -
      -y_pred = cdev(first(forward_pass(
      -    tstate.model, xdev(x), tstate.parameters, Lux.testmode(tstate.states)
      -)))

      Let's plot the results

      julia
      begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="x", ylabel="y")
      -
      -    l = lines!(ax, x[1, :], x -> evalpoly(x, (0, -2, 1)); linewidth=3)
      -    s1 = scatter!(ax, x[1, :], y[1, :]; markersize=12, alpha=0.5,
      -        color=:orange, strokecolor=:black, strokewidth=2)
      -    s2 = scatter!(ax, x[1, :], y_pred[1, :]; markersize=12, alpha=0.5,
      -        color=:green, strokecolor=:black, strokewidth=2)
      -
      -    axislegend(ax, [l, s1, s2], ["True Quadratic Function", "Actual Data", "Predictions"])
      -
      -    fig
      -end

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 48 × AMD EPYC 7402 24-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 2
      -  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 48
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,40))])}const v=p(t,[["render",r]]);export{c as __pageData,v as default}; diff --git a/dev/assets/tutorials_beginner_3_SimpleRNN.md.DmnqxtSU.lean.js b/dev/assets/tutorials_beginner_3_SimpleRNN.md.DmnqxtSU.lean.js deleted file mode 100644 index c48d3ed566..0000000000 --- a/dev/assets/tutorials_beginner_3_SimpleRNN.md.DmnqxtSU.lean.js +++ /dev/null @@ -1,347 +0,0 @@ -import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E=JSON.parse('{"title":"Training a Simple LSTM","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/3_SimpleRNN.md","filePath":"tutorials/beginner/3_SimpleRNN.md","lastUpdated":null}'),l={name:"tutorials/beginner/3_SimpleRNN.md"};function t(e,s,h,k,r,d){return p(),i("div",null,s[0]||(s[0]=[n(`

      Training a Simple LSTM

      In this tutorial we will go over using a recurrent neural network to classify clockwise and anticlockwise spirals. By the end of this tutorial you will be able to:

      1. Create custom Lux models.

      2. Become familiar with the Lux recurrent neural network API.

3. Train a model using Optimisers.jl and Zygote.jl.

      Package Imports

      julia
      using ADTypes, Lux, JLD2, MLUtils, Optimisers, Printf, Reactant, Random
      Precompiling Lux...
      -    332.2 ms  ✓ SIMDTypes
      -    380.9 ms  ✓ ManualMemory
      -    324.8 ms  ✓ FastClosures
      -    544.2 ms  ✓ EnzymeCore
      -    533.0 ms  ✓ ArrayInterface
      -   1791.7 ms  ✓ UnsafeAtomics
      -    827.1 ms  ✓ ThreadingUtilities
      -    514.5 ms  ✓ EnzymeCore → AdaptExt
      -    378.5 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      -    434.8 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      -    443.2 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    435.4 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    371.2 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      -    410.5 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    367.8 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      -    497.5 ms  ✓ Atomix
      -    651.3 ms  ✓ PolyesterWeave
      -   1495.3 ms  ✓ StaticArrayInterface
      -    505.8 ms  ✓ CloseOpenIntervals
      -    657.3 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -    590.9 ms  ✓ LayoutPointers
      -    961.8 ms  ✓ StrideArraysCore
      -    789.0 ms  ✓ Polyester
      -   3904.2 ms  ✓ KernelAbstractions
      -    670.4 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    781.2 ms  ✓ KernelAbstractions → EnzymeExt
      -   5249.4 ms  ✓ NNlib
      -    859.7 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    989.9 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5584.1 ms  ✓ LuxLib
      -   9258.3 ms  ✓ Lux
      -  31 dependencies successfully precompiled in 30 seconds. 78 already precompiled.
      -Precompiling MLUtils...
      -    331.5 ms  ✓ PtrArrays
      -    450.2 ms  ✓ AliasTables
      -    941.8 ms  ✓ KernelAbstractions → SparseArraysExt
      -   2234.7 ms  ✓ StatsBase
      -   6135.5 ms  ✓ MLUtils
      -  5 dependencies successfully precompiled in 10 seconds. 93 already precompiled.
      -Precompiling ArrayInterfaceSparseArraysExt...
      -    651.6 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 7 already precompiled.
      -Precompiling MLDataDevicesMLUtilsExt...
      -   1676.4 ms  ✓ MLDataDevices → MLDataDevicesMLUtilsExt
      -  1 dependency successfully precompiled in 2 seconds. 102 already precompiled.
      -Precompiling LuxMLUtilsExt...
      -   2335.3 ms  ✓ Lux → LuxMLUtilsExt
      -  1 dependency successfully precompiled in 3 seconds. 167 already precompiled.
      -Precompiling Reactant...
      -   2459.2 ms  ✓ Reactant_jll
      - 224139.2 ms  ✓ Enzyme
      -   6428.0 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
      -Info Given Reactant was explicitly requested, output will be shown live \x1B[0K
      -\x1B[0K2025-01-20 22:37:27.775820: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 12032411168801079817
      -  65476.8 ms  ✓ Reactant
      -  4 dependencies successfully precompiled in 297 seconds. 57 already precompiled.
      -  1 dependency had output during precompilation:
      -┌ Reactant
      -│  [Output was shown above]
      -
      -Precompiling UnsafeAtomicsLLVM...
      -   1774.4 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      -  1 dependency successfully precompiled in 2 seconds. 30 already precompiled.
      -Precompiling LuxLibEnzymeExt...
      -   6628.4 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
      -   6521.8 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
      -   1380.4 ms  ✓ LuxLib → LuxLibEnzymeExt
      -  17170.7 ms  ✓ Enzyme → EnzymeStaticArraysExt
      -  19038.5 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
      -  5 dependencies successfully precompiled in 20 seconds. 126 already precompiled.
      -Precompiling LuxEnzymeExt...
      -   7795.4 ms  ✓ Lux → LuxEnzymeExt
      -  1 dependency successfully precompiled in 8 seconds. 146 already precompiled.
      -Precompiling LuxCoreReactantExt...
      -  20461.9 ms  ✓ LuxCore → LuxCoreReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 66 already precompiled.
      -Precompiling MLDataDevicesReactantExt...
      -  20508.9 ms  ✓ MLDataDevices → MLDataDevicesReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 63 already precompiled.
      -Precompiling LuxLibReactantExt...
      -  20586.0 ms  ✓ Reactant → ReactantStatisticsExt
      -  21062.9 ms  ✓ Reactant → ReactantNNlibExt
      -  21496.8 ms  ✓ LuxLib → LuxLibReactantExt
      -  20521.6 ms  ✓ Reactant → ReactantSpecialFunctionsExt
      -  20457.3 ms  ✓ Reactant → ReactantArrayInterfaceExt
      -  5 dependencies successfully precompiled in 42 seconds. 139 already precompiled.
      -Precompiling WeightInitializersReactantExt...
      -  20610.5 ms  ✓ WeightInitializers → WeightInitializersReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 77 already precompiled.
      -Precompiling LuxReactantExt...
      -   9920.4 ms  ✓ Lux → LuxReactantExt
      -  1 dependency successfully precompiled in 10 seconds. 161 already precompiled.

      Dataset

We will use MLUtils to generate 500 (noisy) clockwise and 500 (noisy) anticlockwise spirals. Using this data we will create an MLUtils.DataLoader. Our dataloader will give us sequences of size 2 × seq_len × batch_size, and we need to predict a binary value indicating whether the sequence is clockwise or anticlockwise.

      julia
      function get_dataloaders(; dataset_size=1000, sequence_length=50)
      -    # Create the spirals
      -    data = [MLUtils.Datasets.make_spiral(sequence_length) for _ in 1:dataset_size]
      -    # Get the labels
      -    labels = vcat(repeat([0.0f0], dataset_size ÷ 2), repeat([1.0f0], dataset_size ÷ 2))
      -    clockwise_spirals = [reshape(d[1][:, 1:sequence_length], :, sequence_length, 1)
      -                         for d in data[1:(dataset_size ÷ 2)]]
      -    anticlockwise_spirals = [reshape(
      -                                 d[1][:, (sequence_length + 1):end], :, sequence_length, 1)
      -                             for d in data[((dataset_size ÷ 2) + 1):end]]
      -    x_data = Float32.(cat(clockwise_spirals..., anticlockwise_spirals...; dims=3))
      -    # Split the dataset
      -    (x_train, y_train), (x_val, y_val) = splitobs((x_data, labels); at=0.8, shuffle=true)
      -    # Create DataLoaders
      -    return (
      -        # Use DataLoader to automatically minibatch and shuffle the data
      -        DataLoader(
      -            collect.((x_train, y_train)); batchsize=128, shuffle=true, partial=false),
      -        # Don't shuffle the validation data
      -        DataLoader(collect.((x_val, y_val)); batchsize=128, shuffle=false, partial=false)
      -    )
      -end
      get_dataloaders (generic function with 1 method)
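
As a quick sanity check (a sketch, assuming the get_dataloaders definition above has been evaluated), each training minibatch should have the shape described earlier: 2 × sequence_length × batchsize.

julia
# Sketch: inspect one minibatch from the training loader defined above.
train_loader, val_loader = get_dataloaders()
x, y = first(train_loader)
size(x)  # expected (2, 50, 128): features × sequence_length × batchsize
size(y)  # expected (128,): one binary label per sequence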

      Creating a Classifier

We will be extending the Lux.AbstractLuxContainerLayer type for our custom model since it will contain an LSTM block and a classifier head.

We pass the field names lstm_cell and classifier to the type so that the parameters and states are automatically populated and we don't have to define Lux.initialparameters and Lux.initialstates.

      To understand more about container layers, please look at Container Layer.

      julia
      struct SpiralClassifier{L, C} <: AbstractLuxContainerLayer{(:lstm_cell, :classifier)}
      -    lstm_cell::L
      -    classifier::C
      -end
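
Because the field names are registered in the type parameter, Lux.setup populates the nested parameters and states for us. A quick sketch (constructing the struct directly with hypothetical dimensions, since the convenience constructor is defined below):

julia
# Sketch: parameters/states are keyed by the registered field names.
using Lux, Random
model = SpiralClassifier(LSTMCell(2 => 8), Dense(8 => 1, sigmoid))
ps, st = Lux.setup(Random.default_rng(), model)
keys(ps)  # (:lstm_cell, :classifier)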

We won't define the LSTM cell from scratch but rather compose the model from Lux.LSTMCell and Lux.Dense.

      julia
      function SpiralClassifier(in_dims, hidden_dims, out_dims)
      -    return SpiralClassifier(
      -        LSTMCell(in_dims => hidden_dims), Dense(hidden_dims => out_dims, sigmoid))
      -end
      Main.var"##230".SpiralClassifier

We could instead use the built-in Lux blocks – Recurrence(LSTMCell(in_dims => hidden_dims)) – rather than writing the loop below, but we define it manually here for illustration.

      Now we need to define the behavior of the Classifier when it is invoked.

      julia
      function (s::SpiralClassifier)(
      -        x::AbstractArray{T, 3}, ps::NamedTuple, st::NamedTuple) where {T}
      -    # First we will have to run the sequence through the LSTM Cell
      -    # The first call to LSTM Cell will create the initial hidden state
      -    # See that the parameters and states are automatically populated into a field called
      -    # \`lstm_cell\` We use \`eachslice\` to get the elements in the sequence without copying,
      -    # and \`Iterators.peel\` to split out the first element for LSTM initialization.
      -    x_init, x_rest = Iterators.peel(LuxOps.eachslice(x, Val(2)))
      -    (y, carry), st_lstm = s.lstm_cell(x_init, ps.lstm_cell, st.lstm_cell)
      -    # Now that we have the hidden state and memory in \`carry\` we will pass the input and
      -    # \`carry\` jointly
      -    for x in x_rest
      -        (y, carry), st_lstm = s.lstm_cell((x, carry), ps.lstm_cell, st_lstm)
      -    end
      -    # After running through the sequence we will pass the output through the classifier
      -    y, st_classifier = s.classifier(y, ps.classifier, st.classifier)
      -    # Finally remember to create the updated state
      -    st = merge(st, (classifier=st_classifier, lstm_cell=st_lstm))
      -    return vec(y), st
      -end

      Using the @compact API

We can also define the model using the Lux.@compact API, which is a more concise way of defining models. This macro automatically handles the boilerplate code for you, so we recommend this way of defining custom layers.

      julia
      function SpiralClassifierCompact(in_dims, hidden_dims, out_dims)
      -    lstm_cell = LSTMCell(in_dims => hidden_dims)
      -    classifier = Dense(hidden_dims => out_dims, sigmoid)
      -    return @compact(; lstm_cell, classifier) do x::AbstractArray{T, 3} where {T}
      -        x_init, x_rest = Iterators.peel(LuxOps.eachslice(x, Val(2)))
      -        y, carry = lstm_cell(x_init)
      -        for x in x_rest
      -            y, carry = lstm_cell((x, carry))
      -        end
      -        @return vec(classifier(y))
      -    end
      -end
      SpiralClassifierCompact (generic function with 1 method)

      Defining Accuracy, Loss and Optimiser

      Now let's define the binarycrossentropy loss. Typically it is recommended to use logitbinarycrossentropy since it is more numerically stable, but for the sake of simplicity we will use binarycrossentropy.

      julia
      const lossfn = BinaryCrossEntropyLoss()
      -
      -function compute_loss(model, ps, st, (x, y))
      -    ŷ, st_ = model(x, ps, st)
      -    loss = lossfn(ŷ, y)
      -    return loss, st_, (; y_pred=ŷ)
      -end
      -
      -matches(y_pred, y_true) = sum((y_pred .> 0.5f0) .== y_true)
      -accuracy(y_pred, y_true) = matches(y_pred, y_true) / length(y_pred)
      accuracy (generic function with 1 method)
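
If you do want the numerically stable variant mentioned above, one option (a sketch; note that the sigmoid must then be dropped from the classifier head so it outputs raw logits) is to compute the loss on logits:

julia
# Sketch: logit-based BCE is more numerically stable than applying sigmoid
# followed by BinaryCrossEntropyLoss. The classifier head must then be
# Dense(hidden_dims => out_dims) *without* the sigmoid activation.
const logit_lossfn = BinaryCrossEntropyLoss(; logits=Val(true))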

      Training the Model

      julia
      function main(model_type)
      -    dev = reactant_device()
      -    cdev = cpu_device()
      -
      -    # Get the dataloaders
      -    train_loader, val_loader = get_dataloaders() |> dev
      -
      -    # Create the model
      -    model = model_type(2, 8, 1)
      -    ps, st = Lux.setup(Random.default_rng(), model) |> dev
      -
      -    train_state = Training.TrainState(model, ps, st, Adam(0.01f0))
      -    model_compiled = if dev isa ReactantDevice
      -        @compile model(first(train_loader)[1], ps, Lux.testmode(st))
      -    else
      -        model
      -    end
      -    ad = dev isa ReactantDevice ? AutoEnzyme() : AutoZygote()
      -
      -    for epoch in 1:25
      -        # Train the model
      -        total_loss = 0.0f0
      -        total_samples = 0
      -        for (x, y) in train_loader
      -            (_, loss, _, train_state) = Training.single_train_step!(
      -                ad, lossfn, (x, y), train_state
      -            )
      -            total_loss += loss * length(y)
      -            total_samples += length(y)
      -        end
      -        @printf "Epoch [%3d]: Loss %4.5f\\n" epoch (total_loss/total_samples)
      -
      -        # Validate the model
      -        total_acc = 0.0f0
      -        total_loss = 0.0f0
      -        total_samples = 0
      -
      -        st_ = Lux.testmode(train_state.states)
      -        for (x, y) in val_loader
      -            ŷ, st_ = model_compiled(x, train_state.parameters, st_)
      -            ŷ, y = cdev(ŷ), cdev(y)
      -            total_acc += accuracy(ŷ, y) * length(y)
      -            total_loss += lossfn(ŷ, y) * length(y)
      -            total_samples += length(y)
      -        end
      -
      -        @printf "Validation:\\tLoss %4.5f\\tAccuracy %4.5f\\n" (total_loss/total_samples) (total_acc/total_samples)
      -    end
      -
      -    return (train_state.parameters, train_state.states) |> cpu_device()
      -end
      -
      -ps_trained, st_trained = main(SpiralClassifier)
      ┌ Warning: \`replicate\` doesn't work for \`TaskLocalRNG\`. Returning the same \`TaskLocalRNG\`.
      -└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-5/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      -2025-01-20 22:46:45.587352: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 15513715445056030835
      -Epoch [  1]: Loss 0.67334
      -Validation:	Loss 0.61507	Accuracy 1.00000
      -Epoch [  2]: Loss 0.57744
      -Validation:	Loss 0.52876	Accuracy 1.00000
      -Epoch [  3]: Loss 0.49384
      -Validation:	Loss 0.43874	Accuracy 1.00000
      -Epoch [  4]: Loss 0.39211
      -Validation:	Loss 0.31860	Accuracy 1.00000
      -Epoch [  5]: Loss 0.27281
      -Validation:	Loss 0.21501	Accuracy 1.00000
      -Epoch [  6]: Loss 0.18506
      -Validation:	Loss 0.14783	Accuracy 1.00000
      -Epoch [  7]: Loss 0.12820
      -Validation:	Loss 0.10396	Accuracy 1.00000
      -Epoch [  8]: Loss 0.09103
      -Validation:	Loss 0.07621	Accuracy 1.00000
      -Epoch [  9]: Loss 0.06807
      -Validation:	Loss 0.05899	Accuracy 1.00000
      -Epoch [ 10]: Loss 0.05358
      -Validation:	Loss 0.04783	Accuracy 1.00000
      -Epoch [ 11]: Loss 0.04407
      -Validation:	Loss 0.04008	Accuracy 1.00000
      -Epoch [ 12]: Loss 0.03731
      -Validation:	Loss 0.03443	Accuracy 1.00000
      -Epoch [ 13]: Loss 0.03229
      -Validation:	Loss 0.03012	Accuracy 1.00000
      -Epoch [ 14]: Loss 0.02840
      -Validation:	Loss 0.02670	Accuracy 1.00000
      -Epoch [ 15]: Loss 0.02528
      -Validation:	Loss 0.02391	Accuracy 1.00000
      -Epoch [ 16]: Loss 0.02279
      -Validation:	Loss 0.02157	Accuracy 1.00000
      -Epoch [ 17]: Loss 0.02055
      -Validation:	Loss 0.01954	Accuracy 1.00000
      -Epoch [ 18]: Loss 0.01861
      -Validation:	Loss 0.01774	Accuracy 1.00000
      -Epoch [ 19]: Loss 0.01699
      -Validation:	Loss 0.01614	Accuracy 1.00000
      -Epoch [ 20]: Loss 0.01549
      -Validation:	Loss 0.01472	Accuracy 1.00000
      -Epoch [ 21]: Loss 0.01413
      -Validation:	Loss 0.01346	Accuracy 1.00000
      -Epoch [ 22]: Loss 0.01294
      -Validation:	Loss 0.01233	Accuracy 1.00000
      -Epoch [ 23]: Loss 0.01186
      -Validation:	Loss 0.01129	Accuracy 1.00000
      -Epoch [ 24]: Loss 0.01086
      -Validation:	Loss 0.01033	Accuracy 1.00000
      -Epoch [ 25]: Loss 0.00996
      -Validation:	Loss 0.00945	Accuracy 1.00000

      We can also train the compact model with the exact same code!

      julia
      ps_trained2, st_trained2 = main(SpiralClassifierCompact)
      ┌ Warning: \`replicate\` doesn't work for \`TaskLocalRNG\`. Returning the same \`TaskLocalRNG\`.
      -└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-5/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      -Epoch [  1]: Loss 0.90661
      -Validation:	Loss 0.81372	Accuracy 0.46094
      -Epoch [  2]: Loss 0.71494
      -Validation:	Loss 0.63222	Accuracy 0.46094
      -Epoch [  3]: Loss 0.56538
      -Validation:	Loss 0.50507	Accuracy 1.00000
      -Epoch [  4]: Loss 0.45855
      -Validation:	Loss 0.40424	Accuracy 1.00000
      -Epoch [  5]: Loss 0.37246
      -Validation:	Loss 0.33160	Accuracy 1.00000
      -Epoch [  6]: Loss 0.31438
      -Validation:	Loss 0.28433	Accuracy 1.00000
      -Epoch [  7]: Loss 0.27284
      -Validation:	Loss 0.24691	Accuracy 1.00000
      -Epoch [  8]: Loss 0.23912
      -Validation:	Loss 0.21678	Accuracy 1.00000
      -Epoch [  9]: Loss 0.21161
      -Validation:	Loss 0.19076	Accuracy 1.00000
      -Epoch [ 10]: Loss 0.18482
      -Validation:	Loss 0.16652	Accuracy 1.00000
      -Epoch [ 11]: Loss 0.16043
      -Validation:	Loss 0.14203	Accuracy 1.00000
      -Epoch [ 12]: Loss 0.13420
      -Validation:	Loss 0.11569	Accuracy 1.00000
      -Epoch [ 13]: Loss 0.10542
      -Validation:	Loss 0.09126	Accuracy 1.00000
      -Epoch [ 14]: Loss 0.08311
      -Validation:	Loss 0.07461	Accuracy 1.00000
      -Epoch [ 15]: Loss 0.06884
      -Validation:	Loss 0.06360	Accuracy 1.00000
      -Epoch [ 16]: Loss 0.05892
      -Validation:	Loss 0.05550	Accuracy 1.00000
      -Epoch [ 17]: Loss 0.05177
      -Validation:	Loss 0.04936	Accuracy 1.00000
      -Epoch [ 18]: Loss 0.04635
      -Validation:	Loss 0.04460	Accuracy 1.00000
      -Epoch [ 19]: Loss 0.04199
      -Validation:	Loss 0.04063	Accuracy 1.00000
      -Epoch [ 20]: Loss 0.03849
      -Validation:	Loss 0.03719	Accuracy 1.00000
      -Epoch [ 21]: Loss 0.03532
      -Validation:	Loss 0.03413	Accuracy 1.00000
      -Epoch [ 22]: Loss 0.03245
      -Validation:	Loss 0.03123	Accuracy 1.00000
      -Epoch [ 23]: Loss 0.02946
      -Validation:	Loss 0.02835	Accuracy 1.00000
      -Epoch [ 24]: Loss 0.02653
      -Validation:	Loss 0.02533	Accuracy 1.00000
      -Epoch [ 25]: Loss 0.02337
      -Validation:	Loss 0.02250	Accuracy 1.00000

      Saving the Model

We can save the model using JLD2 (or any other serialization library of your choice). Note that we transfer the model to CPU before saving. Additionally, we recommend that you don't save the model struct; save only the parameters and states.

      julia
      @save "trained_model.jld2" ps_trained st_trained

Let's try loading the model:

      julia
      @load "trained_model.jld2" ps_trained st_trained
      2-element Vector{Symbol}:
      - :ps_trained
      - :st_trained
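
Since only the parameters and states were saved, inference requires rebuilding the model struct first. A sketch, assuming the loaded ps_trained/st_trained and a hypothetical input batch:

julia
# Sketch: rebuild the architecture, then run it with the restored
# parameters and states in test mode.
model = SpiralClassifier(2, 8, 1)
x = randn(Float32, 2, 50, 16)  # hypothetical batch: 2 × seq_len × batchsize
ŷ, _ = model(x, ps_trained, Lux.testmode(st_trained))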

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 48 × AMD EPYC 7402 24-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 2
      -  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 48
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,46)]))}const o=a(l,[["render",t]]);export{E as __pageData,o as default}; diff --git a/dev/assets/tutorials_beginner_3_SimpleRNN.md.DmnqxtSU.js b/dev/assets/tutorials_beginner_3_SimpleRNN.md.mE6TZyjC.js similarity index 79% rename from dev/assets/tutorials_beginner_3_SimpleRNN.md.DmnqxtSU.js rename to dev/assets/tutorials_beginner_3_SimpleRNN.md.mE6TZyjC.js index c48d3ed566..d25f77b5a8 100644 --- a/dev/assets/tutorials_beginner_3_SimpleRNN.md.DmnqxtSU.js +++ b/dev/assets/tutorials_beginner_3_SimpleRNN.md.mE6TZyjC.js @@ -1,96 +1,22 @@ -import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E=JSON.parse('{"title":"Training a Simple LSTM","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/3_SimpleRNN.md","filePath":"tutorials/beginner/3_SimpleRNN.md","lastUpdated":null}'),l={name:"tutorials/beginner/3_SimpleRNN.md"};function t(e,s,h,k,r,d){return p(),i("div",null,s[0]||(s[0]=[n(`

      Training a Simple LSTM

      In this tutorial we will go over using a recurrent neural network to classify clockwise and anticlockwise spirals. By the end of this tutorial you will be able to:

      1. Create custom Lux models.

      2. Become familiar with the Lux recurrent neural network API.

3. Train a model using Optimisers.jl and Zygote.jl.

      Package Imports

      julia
      using ADTypes, Lux, JLD2, MLUtils, Optimisers, Printf, Reactant, Random
      Precompiling Lux...
      -    332.2 ms  ✓ SIMDTypes
      -    380.9 ms  ✓ ManualMemory
      -    324.8 ms  ✓ FastClosures
      -    544.2 ms  ✓ EnzymeCore
      -    533.0 ms  ✓ ArrayInterface
      -   1791.7 ms  ✓ UnsafeAtomics
      -    827.1 ms  ✓ ThreadingUtilities
      -    514.5 ms  ✓ EnzymeCore → AdaptExt
      -    378.5 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      -    434.8 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      -    443.2 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    435.4 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    371.2 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      -    410.5 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    367.8 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      -    497.5 ms  ✓ Atomix
      -    651.3 ms  ✓ PolyesterWeave
      -   1495.3 ms  ✓ StaticArrayInterface
      -    505.8 ms  ✓ CloseOpenIntervals
      -    657.3 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -    590.9 ms  ✓ LayoutPointers
      -    961.8 ms  ✓ StrideArraysCore
      -    789.0 ms  ✓ Polyester
      -   3904.2 ms  ✓ KernelAbstractions
      -    670.4 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    781.2 ms  ✓ KernelAbstractions → EnzymeExt
      -   5249.4 ms  ✓ NNlib
      -    859.7 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    989.9 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5584.1 ms  ✓ LuxLib
      -   9258.3 ms  ✓ Lux
      -  31 dependencies successfully precompiled in 30 seconds. 78 already precompiled.
      -Precompiling MLUtils...
      -    331.5 ms  ✓ PtrArrays
      -    450.2 ms  ✓ AliasTables
      -    941.8 ms  ✓ KernelAbstractions → SparseArraysExt
      -   2234.7 ms  ✓ StatsBase
      -   6135.5 ms  ✓ MLUtils
      -  5 dependencies successfully precompiled in 10 seconds. 93 already precompiled.
      -Precompiling ArrayInterfaceSparseArraysExt...
      -    651.6 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 7 already precompiled.
      -Precompiling MLDataDevicesMLUtilsExt...
      -   1676.4 ms  ✓ MLDataDevices → MLDataDevicesMLUtilsExt
      -  1 dependency successfully precompiled in 2 seconds. 102 already precompiled.
      +import{_ as i,c as a,a2 as n,o as p}from"./chunks/framework.BetCMmtc.js";const c=JSON.parse('{"title":"Training a Simple LSTM","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/3_SimpleRNN.md","filePath":"tutorials/beginner/3_SimpleRNN.md","lastUpdated":null}'),l={name:"tutorials/beginner/3_SimpleRNN.md"};function t(h,s,e,k,d,r){return p(),a("div",null,s[0]||(s[0]=[n(`

      Training a Simple LSTM

      In this tutorial we will go over using a recurrent neural network to classify clockwise and anticlockwise spirals. By the end of this tutorial you will be able to:

      1. Create custom Lux models.

      2. Become familiar with the Lux recurrent neural network API.

3. Train a model using Optimisers.jl and Zygote.jl.

      Package Imports

      julia
      using ADTypes, Lux, JLD2, MLUtils, Optimisers, Printf, Reactant, Random
      Precompiling Lux...
      +   5868.9 ms  ✓ LuxLib
      +   9340.6 ms  ✓ Lux
      +  2 dependencies successfully precompiled in 15 seconds. 107 already precompiled.
       Precompiling LuxMLUtilsExt...
      -   2335.3 ms  ✓ Lux → LuxMLUtilsExt
      -  1 dependency successfully precompiled in 3 seconds. 167 already precompiled.
      -Precompiling Reactant...
      -   2459.2 ms  ✓ Reactant_jll
      - 224139.2 ms  ✓ Enzyme
      -   6428.0 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
      -Info Given Reactant was explicitly requested, output will be shown live \x1B[0K
      -\x1B[0K2025-01-20 22:37:27.775820: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 12032411168801079817
      -  65476.8 ms  ✓ Reactant
      -  4 dependencies successfully precompiled in 297 seconds. 57 already precompiled.
      -  1 dependency had output during precompilation:
      -┌ Reactant
      -│  [Output was shown above]
      -
      -Precompiling UnsafeAtomicsLLVM...
      -   1774.4 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      -  1 dependency successfully precompiled in 2 seconds. 30 already precompiled.
      +   2224.8 ms  ✓ Lux → LuxMLUtilsExt
      +  1 dependency successfully precompiled in 2 seconds. 167 already precompiled.
       Precompiling LuxLibEnzymeExt...
      -   6628.4 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
      -   6521.8 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
      -   1380.4 ms  ✓ LuxLib → LuxLibEnzymeExt
      -  17170.7 ms  ✓ Enzyme → EnzymeStaticArraysExt
      -  19038.5 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
      -  5 dependencies successfully precompiled in 20 seconds. 126 already precompiled.
      +   1341.8 ms  ✓ LuxLib → LuxLibEnzymeExt
      +  1 dependency successfully precompiled in 2 seconds. 130 already precompiled.
       Precompiling LuxEnzymeExt...
      -   7795.4 ms  ✓ Lux → LuxEnzymeExt
      -  1 dependency successfully precompiled in 8 seconds. 146 already precompiled.
      -Precompiling LuxCoreReactantExt...
      -  20461.9 ms  ✓ LuxCore → LuxCoreReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 66 already precompiled.
      -Precompiling MLDataDevicesReactantExt...
      -  20508.9 ms  ✓ MLDataDevices → MLDataDevicesReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 63 already precompiled.
      +   6784.3 ms  ✓ Lux → LuxEnzymeExt
      +  1 dependency successfully precompiled in 7 seconds. 146 already precompiled.
       Precompiling LuxLibReactantExt...
      -  20586.0 ms  ✓ Reactant → ReactantStatisticsExt
      -  21062.9 ms  ✓ Reactant → ReactantNNlibExt
      -  21496.8 ms  ✓ LuxLib → LuxLibReactantExt
      -  20521.6 ms  ✓ Reactant → ReactantSpecialFunctionsExt
      -  20457.3 ms  ✓ Reactant → ReactantArrayInterfaceExt
      -  5 dependencies successfully precompiled in 42 seconds. 139 already precompiled.
      -Precompiling WeightInitializersReactantExt...
      -  20610.5 ms  ✓ WeightInitializers → WeightInitializersReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 77 already precompiled.
      +  12962.1 ms  ✓ LuxLib → LuxLibReactantExt
      +  1 dependency successfully precompiled in 13 seconds. 143 already precompiled.
       Precompiling LuxReactantExt...
      -   9920.4 ms  ✓ Lux → LuxReactantExt
      -  1 dependency successfully precompiled in 10 seconds. 161 already precompiled.

      Dataset

We will use MLUtils to generate 500 (noisy) clockwise and 500 (noisy) anticlockwise spirals. Using this data we will create an MLUtils.DataLoader. Our dataloader will give us sequences of size 2 × seq_len × batch_size, and we need to predict a binary value indicating whether the sequence is clockwise or anticlockwise.

      julia
      function get_dataloaders(; dataset_size=1000, sequence_length=50)
      +   8587.6 ms  ✓ Lux → LuxReactantExt
      +  1 dependency successfully precompiled in 9 seconds. 161 already precompiled.

      Dataset

We will use MLUtils to generate 500 (noisy) clockwise and 500 (noisy) anticlockwise spirals. Using this data we will create an MLUtils.DataLoader. Our dataloader will give us sequences of size 2 × seq_len × batch_size, and we need to predict a binary value indicating whether the sequence is clockwise or anticlockwise.

      julia
      function get_dataloaders(; dataset_size=1000, sequence_length=50)
           # Create the spirals
           data = [MLUtils.Datasets.make_spiral(sequence_length) for _ in 1:dataset_size]
           # Get the labels
      @@ -209,109 +135,109 @@ import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E
       end
       
       ps_trained, st_trained = main(SpiralClassifier)
      ┌ Warning: \`replicate\` doesn't work for \`TaskLocalRNG\`. Returning the same \`TaskLocalRNG\`.
      -└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-5/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      -2025-01-20 22:46:45.587352: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 15513715445056030835
      -Epoch [  1]: Loss 0.67334
      -Validation:	Loss 0.61507	Accuracy 1.00000
      -Epoch [  2]: Loss 0.57744
      -Validation:	Loss 0.52876	Accuracy 1.00000
      -Epoch [  3]: Loss 0.49384
      -Validation:	Loss 0.43874	Accuracy 1.00000
      -Epoch [  4]: Loss 0.39211
      -Validation:	Loss 0.31860	Accuracy 1.00000
      -Epoch [  5]: Loss 0.27281
      -Validation:	Loss 0.21501	Accuracy 1.00000
      -Epoch [  6]: Loss 0.18506
      -Validation:	Loss 0.14783	Accuracy 1.00000
      -Epoch [  7]: Loss 0.12820
      -Validation:	Loss 0.10396	Accuracy 1.00000
      -Epoch [  8]: Loss 0.09103
      -Validation:	Loss 0.07621	Accuracy 1.00000
      -Epoch [  9]: Loss 0.06807
      -Validation:	Loss 0.05899	Accuracy 1.00000
      -Epoch [ 10]: Loss 0.05358
      -Validation:	Loss 0.04783	Accuracy 1.00000
      -Epoch [ 11]: Loss 0.04407
      -Validation:	Loss 0.04008	Accuracy 1.00000
      -Epoch [ 12]: Loss 0.03731
      -Validation:	Loss 0.03443	Accuracy 1.00000
      -Epoch [ 13]: Loss 0.03229
      -Validation:	Loss 0.03012	Accuracy 1.00000
      -Epoch [ 14]: Loss 0.02840
      -Validation:	Loss 0.02670	Accuracy 1.00000
      -Epoch [ 15]: Loss 0.02528
      -Validation:	Loss 0.02391	Accuracy 1.00000
      -Epoch [ 16]: Loss 0.02279
      -Validation:	Loss 0.02157	Accuracy 1.00000
      -Epoch [ 17]: Loss 0.02055
      -Validation:	Loss 0.01954	Accuracy 1.00000
      -Epoch [ 18]: Loss 0.01861
      -Validation:	Loss 0.01774	Accuracy 1.00000
      -Epoch [ 19]: Loss 0.01699
      -Validation:	Loss 0.01614	Accuracy 1.00000
      -Epoch [ 20]: Loss 0.01549
      -Validation:	Loss 0.01472	Accuracy 1.00000
      -Epoch [ 21]: Loss 0.01413
      -Validation:	Loss 0.01346	Accuracy 1.00000
      -Epoch [ 22]: Loss 0.01294
      -Validation:	Loss 0.01233	Accuracy 1.00000
      -Epoch [ 23]: Loss 0.01186
      -Validation:	Loss 0.01129	Accuracy 1.00000
      -Epoch [ 24]: Loss 0.01086
      -Validation:	Loss 0.01033	Accuracy 1.00000
      -Epoch [ 25]: Loss 0.00996
      -Validation:	Loss 0.00945	Accuracy 1.00000

      We can also train the compact model with the exact same code!

      julia
      ps_trained2, st_trained2 = main(SpiralClassifierCompact)
      ┌ Warning: \`replicate\` doesn't work for \`TaskLocalRNG\`. Returning the same \`TaskLocalRNG\`.
      -└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-5/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      -Epoch [  1]: Loss 0.90661
      -Validation:	Loss 0.81372	Accuracy 0.46094
      -Epoch [  2]: Loss 0.71494
      -Validation:	Loss 0.63222	Accuracy 0.46094
      -Epoch [  3]: Loss 0.56538
      -Validation:	Loss 0.50507	Accuracy 1.00000
      -Epoch [  4]: Loss 0.45855
      -Validation:	Loss 0.40424	Accuracy 1.00000
      -Epoch [  5]: Loss 0.37246
      -Validation:	Loss 0.33160	Accuracy 1.00000
      -Epoch [  6]: Loss 0.31438
      -Validation:	Loss 0.28433	Accuracy 1.00000
      -Epoch [  7]: Loss 0.27284
      -Validation:	Loss 0.24691	Accuracy 1.00000
      -Epoch [  8]: Loss 0.23912
      -Validation:	Loss 0.21678	Accuracy 1.00000
      -Epoch [  9]: Loss 0.21161
      -Validation:	Loss 0.19076	Accuracy 1.00000
      -Epoch [ 10]: Loss 0.18482
      -Validation:	Loss 0.16652	Accuracy 1.00000
      -Epoch [ 11]: Loss 0.16043
      -Validation:	Loss 0.14203	Accuracy 1.00000
      -Epoch [ 12]: Loss 0.13420
      -Validation:	Loss 0.11569	Accuracy 1.00000
      -Epoch [ 13]: Loss 0.10542
      -Validation:	Loss 0.09126	Accuracy 1.00000
      -Epoch [ 14]: Loss 0.08311
      -Validation:	Loss 0.07461	Accuracy 1.00000
      -Epoch [ 15]: Loss 0.06884
      -Validation:	Loss 0.06360	Accuracy 1.00000
      -Epoch [ 16]: Loss 0.05892
      -Validation:	Loss 0.05550	Accuracy 1.00000
      -Epoch [ 17]: Loss 0.05177
      -Validation:	Loss 0.04936	Accuracy 1.00000
      -Epoch [ 18]: Loss 0.04635
      -Validation:	Loss 0.04460	Accuracy 1.00000
      -Epoch [ 19]: Loss 0.04199
      -Validation:	Loss 0.04063	Accuracy 1.00000
      -Epoch [ 20]: Loss 0.03849
      -Validation:	Loss 0.03719	Accuracy 1.00000
      -Epoch [ 21]: Loss 0.03532
      -Validation:	Loss 0.03413	Accuracy 1.00000
      -Epoch [ 22]: Loss 0.03245
      -Validation:	Loss 0.03123	Accuracy 1.00000
      -Epoch [ 23]: Loss 0.02946
      -Validation:	Loss 0.02835	Accuracy 1.00000
      -Epoch [ 24]: Loss 0.02653
      -Validation:	Loss 0.02533	Accuracy 1.00000
      -Epoch [ 25]: Loss 0.02337
      -Validation:	Loss 0.02250	Accuracy 1.00000

      Saving the Model

We can save the model using JLD2 (or any other serialization library of your choice). Note that we transfer the model to CPU before saving. Additionally, we recommend that you don't save the model struct, and only save the parameters and states.

      julia
      @save "trained_model.jld2" ps_trained st_trained

      Let's try loading the model

      julia
      @load "trained_model.jld2" ps_trained st_trained
      2-element Vector{Symbol}:
      +└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-6/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      +2025-01-24 04:36:23.325653: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 3095950067347489316
      +Epoch [  1]: Loss 0.74550
      +Validation:	Loss 0.70002	Accuracy 0.35938
      +Epoch [  2]: Loss 0.66010
      +Validation:	Loss 0.63599	Accuracy 1.00000
      +Epoch [  3]: Loss 0.59519
      +Validation:	Loss 0.57492	Accuracy 1.00000
      +Epoch [  4]: Loss 0.52772
      +Validation:	Loss 0.50520	Accuracy 1.00000
      +Epoch [  5]: Loss 0.45341
      +Validation:	Loss 0.43112	Accuracy 1.00000
      +Epoch [  6]: Loss 0.38036
      +Validation:	Loss 0.36367	Accuracy 1.00000
      +Epoch [  7]: Loss 0.31730
      +Validation:	Loss 0.30521	Accuracy 1.00000
      +Epoch [  8]: Loss 0.25966
      +Validation:	Loss 0.25263	Accuracy 1.00000
      +Epoch [  9]: Loss 0.20685
      +Validation:	Loss 0.20167	Accuracy 1.00000
      +Epoch [ 10]: Loss 0.15741
      +Validation:	Loss 0.16060	Accuracy 1.00000
      +Epoch [ 11]: Loss 0.12361
      +Validation:	Loss 0.12724	Accuracy 1.00000
      +Epoch [ 12]: Loss 0.09638
      +Validation:	Loss 0.09984	Accuracy 1.00000
      +Epoch [ 13]: Loss 0.07572
      +Validation:	Loss 0.07530	Accuracy 1.00000
      +Epoch [ 14]: Loss 0.05634
      +Validation:	Loss 0.05227	Accuracy 1.00000
      +Epoch [ 15]: Loss 0.03886
      +Validation:	Loss 0.03394	Accuracy 1.00000
      +Epoch [ 16]: Loss 0.02640
      +Validation:	Loss 0.02209	Accuracy 1.00000
      +Epoch [ 17]: Loss 0.01860
      +Validation:	Loss 0.01601	Accuracy 1.00000
      +Epoch [ 18]: Loss 0.01482
      +Validation:	Loss 0.01309	Accuracy 1.00000
      +Epoch [ 19]: Loss 0.01229
      +Validation:	Loss 0.01084	Accuracy 1.00000
      +Epoch [ 20]: Loss 0.01058
      +Validation:	Loss 0.00959	Accuracy 1.00000
      +Epoch [ 21]: Loss 0.00944
      +Validation:	Loss 0.00863	Accuracy 1.00000
      +Epoch [ 22]: Loss 0.00851
      +Validation:	Loss 0.00785	Accuracy 1.00000
      +Epoch [ 23]: Loss 0.00778
      +Validation:	Loss 0.00722	Accuracy 1.00000
      +Epoch [ 24]: Loss 0.00715
      +Validation:	Loss 0.00664	Accuracy 1.00000
      +Epoch [ 25]: Loss 0.00659
      +Validation:	Loss 0.00610	Accuracy 1.00000

      We can also train the compact model with the exact same code!

      julia
      ps_trained2, st_trained2 = main(SpiralClassifierCompact)
      ┌ Warning: \`replicate\` doesn't work for \`TaskLocalRNG\`. Returning the same \`TaskLocalRNG\`.
      +└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-6/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      +Epoch [  1]: Loss 0.81240
      +Validation:	Loss 0.76507	Accuracy 0.45312
      +Epoch [  2]: Loss 0.71654
      +Validation:	Loss 0.67404	Accuracy 0.45312
      +Epoch [  3]: Loss 0.63550
      +Validation:	Loss 0.59338	Accuracy 1.00000
      +Epoch [  4]: Loss 0.56080
      +Validation:	Loss 0.51291	Accuracy 1.00000
      +Epoch [  5]: Loss 0.48194
      +Validation:	Loss 0.42384	Accuracy 1.00000
      +Epoch [  6]: Loss 0.39456
      +Validation:	Loss 0.32968	Accuracy 1.00000
      +Epoch [  7]: Loss 0.30701
      +Validation:	Loss 0.24736	Accuracy 1.00000
      +Epoch [  8]: Loss 0.22990
      +Validation:	Loss 0.17724	Accuracy 1.00000
      +Epoch [  9]: Loss 0.16442
      +Validation:	Loss 0.12688	Accuracy 1.00000
      +Epoch [ 10]: Loss 0.12165
      +Validation:	Loss 0.09625	Accuracy 1.00000
      +Epoch [ 11]: Loss 0.09444
      +Validation:	Loss 0.07708	Accuracy 1.00000
      +Epoch [ 12]: Loss 0.07802
      +Validation:	Loss 0.06419	Accuracy 1.00000
      +Epoch [ 13]: Loss 0.06459
      +Validation:	Loss 0.05486	Accuracy 1.00000
      +Epoch [ 14]: Loss 0.05622
      +Validation:	Loss 0.04781	Accuracy 1.00000
      +Epoch [ 15]: Loss 0.04940
      +Validation:	Loss 0.04230	Accuracy 1.00000
      +Epoch [ 16]: Loss 0.04372
      +Validation:	Loss 0.03781	Accuracy 1.00000
      +Epoch [ 17]: Loss 0.03940
      +Validation:	Loss 0.03394	Accuracy 1.00000
      +Epoch [ 18]: Loss 0.03520
      +Validation:	Loss 0.03044	Accuracy 1.00000
      +Epoch [ 19]: Loss 0.03168
      +Validation:	Loss 0.02703	Accuracy 1.00000
      +Epoch [ 20]: Loss 0.02804
      +Validation:	Loss 0.02359	Accuracy 1.00000
      +Epoch [ 21]: Loss 0.02441
      +Validation:	Loss 0.02034	Accuracy 1.00000
      +Epoch [ 22]: Loss 0.02077
      +Validation:	Loss 0.01761	Accuracy 1.00000
      +Epoch [ 23]: Loss 0.01803
      +Validation:	Loss 0.01546	Accuracy 1.00000
      +Epoch [ 24]: Loss 0.01596
      +Validation:	Loss 0.01381	Accuracy 1.00000
      +Epoch [ 25]: Loss 0.01431
      +Validation:	Loss 0.01252	Accuracy 1.00000

      Saving the Model

We can save the model using JLD2 (or any other serialization library of your choice). Note that we transfer the model to CPU before saving. Additionally, we recommend that you don't save the model struct, and only save the parameters and states.

      julia
      @save "trained_model.jld2" ps_trained st_trained

      Let's try loading the model

      julia
      @load "trained_model.jld2" ps_trained st_trained
      2-element Vector{Symbol}:
        :ps_trained
        :st_trained
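Loading gives back only the raw parameter and state containers; to run inference you still need to reconstruct the model from code and pair it with the loaded values. A minimal sketch of what that might look like (the `SpiralClassifier` constructor arguments and the input batch `x` are assumptions standing in for the definitions from earlier in the tutorial):

julia
using JLD2, Lux
@load "trained_model.jld2" ps_trained st_trained
# Rebuild the model definition in code; it must match the trained architecture
model = SpiralClassifier(2, 8, 1)  # hypothetical constructor arguments
# Run inference with the loaded parameters and states (testmode disables training-only behavior)
y_pred, _ = model(x, ps_trained, Lux.testmode(st_trained))

This is the payoff of saving only parameters and states: the model struct is always reconstructed from code, so the saved file stays usable even if internal layer types change between package versions.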

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
      @@ -326,8 +252,8 @@ import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -344,4 +270,4 @@ import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E
         JULIA_NUM_THREADS = 48
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,46)]))}const o=a(l,[["render",t]]);export{E as __pageData,o as default}; + JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,46)]))}const g=i(l,[["render",t]]);export{c as __pageData,g as default}; diff --git a/dev/assets/tutorials_beginner_3_SimpleRNN.md.mE6TZyjC.lean.js b/dev/assets/tutorials_beginner_3_SimpleRNN.md.mE6TZyjC.lean.js new file mode 100644 index 0000000000..bbe2025468 --- /dev/null +++ b/dev/assets/tutorials_beginner_3_SimpleRNN.md.mE6TZyjC.lean.js @@ -0,0 +1 @@ +import{_ as i,c as a,a2 as n,o as p}from"./chunks/framework.BetCMmtc.js";const c=JSON.parse('{"title":"Training a Simple LSTM","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/3_SimpleRNN.md","filePath":"tutorials/beginner/3_SimpleRNN.md","lastUpdated":null}'),l={name:"tutorials/beginner/3_SimpleRNN.md"};function t(h,s,e,k,d,r){return p(),a("div",null,s[0]||(s[0]=[n("",46)]))}const g=i(l,[["render",t]]);export{c as __pageData,g as default}; diff --git a/dev/assets/tutorials_beginner_4_SimpleChains.md.snvLiqaS.js b/dev/assets/tutorials_beginner_4_SimpleChains.md.B9RS1fYd.js similarity index 94% rename from dev/assets/tutorials_beginner_4_SimpleChains.md.snvLiqaS.js rename to dev/assets/tutorials_beginner_4_SimpleChains.md.B9RS1fYd.js index 38c222b3cc..a1af8dc652 100644 --- a/dev/assets/tutorials_beginner_4_SimpleChains.md.snvLiqaS.js +++ b/dev/assets/tutorials_beginner_4_SimpleChains.md.B9RS1fYd.js @@ -1,8 +1,8 @@ -import{_ as i,c as a,a2 as n,o as l}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"MNIST Classification with SimpleChains","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/4_SimpleChains.md","filePath":"tutorials/beginner/4_SimpleChains.md","lastUpdated":null}'),p={name:"tutorials/beginner/4_SimpleChains.md"};function t(h,s,e,k,r,E){return l(),a("div",null,s[0]||(s[0]=[n(`

      MNIST Classification with SimpleChains

      SimpleChains.jl is an excellent framework for training small neural networks. In this tutorial we will demonstrate how to use the same API as Lux.jl to train a model using SimpleChains.jl. We will use the tutorial from SimpleChains.jl as a reference.

      Package Imports

      julia
      using Lux, MLUtils, Optimisers, Zygote, OneHotArrays, Random, Statistics, Printf, Reactant
      +import{_ as i,c as a,a2 as n,o as l}from"./chunks/framework.BetCMmtc.js";const g=JSON.parse('{"title":"MNIST Classification with SimpleChains","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/4_SimpleChains.md","filePath":"tutorials/beginner/4_SimpleChains.md","lastUpdated":null}'),p={name:"tutorials/beginner/4_SimpleChains.md"};function t(h,s,e,k,r,E){return l(),a("div",null,s[0]||(s[0]=[n(`

      MNIST Classification with SimpleChains

      SimpleChains.jl is an excellent framework for training small neural networks. In this tutorial we will demonstrate how to use the same API as Lux.jl to train a model using SimpleChains.jl. We will use the tutorial from SimpleChains.jl as a reference.

      Package Imports

      julia
      using Lux, MLUtils, Optimisers, Zygote, OneHotArrays, Random, Statistics, Printf, Reactant
       using MLDatasets: MNIST
       using SimpleChains: SimpleChains
       
      -Reactant.set_default_backend("cpu")
      Reactant.XLA.Client(Ptr{Nothing} @0x000000000705d7b0)

      Loading MNIST

      julia
      function loadmnist(batchsize, train_split)
      +Reactant.set_default_backend("cpu")
      Reactant.XLA.Client(Ptr{Nothing} @0x00000000086f4710)

      Loading MNIST

      julia
      function loadmnist(batchsize, train_split)
           # Load MNIST
           N = parse(Bool, get(ENV, "CI", "false")) ? 1500 : nothing
           dataset = MNIST(; split=:train)
      @@ -114,26 +114,26 @@ import{_ as i,c as a,a2 as n,o as l}from"./chunks/framework.I-x9Gl6h.js";const g
           end
       
           return tr_acc, te_acc
      -end
      train (generic function with 2 methods)

      Finally Training the Model

      First we will train the Lux model

      julia
      tr_acc, te_acc = train(lux_model, reactant_device())
      2025-01-20 22:46:34.664977: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 14589407905380107767
      -[ 1/10] 	 Time 480.95s 	 Training Accuracy: 15.23% 	 Test Accuracy: 17.19%
      -[ 2/10] 	 Time 0.59s 	 Training Accuracy: 33.75% 	 Test Accuracy: 32.03%
      -[ 3/10] 	 Time 0.60s 	 Training Accuracy: 47.27% 	 Test Accuracy: 46.09%
      -[ 4/10] 	 Time 0.60s 	 Training Accuracy: 54.69% 	 Test Accuracy: 53.91%
      -[ 5/10] 	 Time 0.59s 	 Training Accuracy: 64.38% 	 Test Accuracy: 61.72%
      -[ 6/10] 	 Time 0.58s 	 Training Accuracy: 73.75% 	 Test Accuracy: 70.31%
      -[ 7/10] 	 Time 0.57s 	 Training Accuracy: 78.20% 	 Test Accuracy: 75.78%
      -[ 8/10] 	 Time 0.59s 	 Training Accuracy: 80.00% 	 Test Accuracy: 81.25%
      -[ 9/10] 	 Time 0.60s 	 Training Accuracy: 81.95% 	 Test Accuracy: 84.38%
      -[10/10] 	 Time 0.59s 	 Training Accuracy: 83.75% 	 Test Accuracy: 85.16%

      Now we will train the SimpleChains model

      julia
      tr_acc, te_acc = train(simple_chains_model)
      [ 1/10] 	 Time 869.66s 	 Training Accuracy: 28.36% 	 Test Accuracy: 28.12%
      -[ 2/10] 	 Time 13.19s 	 Training Accuracy: 36.48% 	 Test Accuracy: 32.81%
      -[ 3/10] 	 Time 13.20s 	 Training Accuracy: 46.64% 	 Test Accuracy: 44.53%
      -[ 4/10] 	 Time 13.21s 	 Training Accuracy: 59.53% 	 Test Accuracy: 52.34%
      -[ 5/10] 	 Time 13.19s 	 Training Accuracy: 71.09% 	 Test Accuracy: 65.62%
      -[ 6/10] 	 Time 13.19s 	 Training Accuracy: 75.23% 	 Test Accuracy: 67.19%
      -[ 7/10] 	 Time 13.19s 	 Training Accuracy: 79.53% 	 Test Accuracy: 70.31%
      -[ 8/10] 	 Time 13.19s 	 Training Accuracy: 80.62% 	 Test Accuracy: 74.22%
      -[ 9/10] 	 Time 13.22s 	 Training Accuracy: 84.22% 	 Test Accuracy: 81.25%
      -[10/10] 	 Time 13.20s 	 Training Accuracy: 85.00% 	 Test Accuracy: 82.03%

On my local machine we see a 3-4x speedup when using SimpleChains.jl. The server on which this documentation is built is not well suited for CPU benchmarking; hence, the speedup may not be as significant, and there might even be regressions.

      Appendix

      julia
      using InteractiveUtils
      +end
      train (generic function with 2 methods)

      Finally Training the Model

      First we will train the Lux model

      julia
      tr_acc, te_acc = train(lux_model, reactant_device())
      2025-01-24 04:33:37.063340: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 2039916195886688483
      +[ 1/10] 	 Time 498.21s 	 Training Accuracy: 18.98% 	 Test Accuracy: 14.06%
      +[ 2/10] 	 Time 0.53s 	 Training Accuracy: 18.98% 	 Test Accuracy: 17.19%
      +[ 3/10] 	 Time 0.55s 	 Training Accuracy: 34.06% 	 Test Accuracy: 31.25%
      +[ 4/10] 	 Time 0.57s 	 Training Accuracy: 46.88% 	 Test Accuracy: 35.94%
      +[ 5/10] 	 Time 0.56s 	 Training Accuracy: 56.41% 	 Test Accuracy: 41.41%
      +[ 6/10] 	 Time 0.55s 	 Training Accuracy: 64.14% 	 Test Accuracy: 48.44%
      +[ 7/10] 	 Time 0.57s 	 Training Accuracy: 69.69% 	 Test Accuracy: 62.50%
      +[ 8/10] 	 Time 0.58s 	 Training Accuracy: 75.08% 	 Test Accuracy: 69.53%
      +[ 9/10] 	 Time 0.59s 	 Training Accuracy: 78.44% 	 Test Accuracy: 75.78%
      +[10/10] 	 Time 0.56s 	 Training Accuracy: 82.19% 	 Test Accuracy: 77.34%

      Now we will train the SimpleChains model

      julia
      tr_acc, te_acc = train(simple_chains_model)
      [ 1/10] 	 Time 877.38s 	 Training Accuracy: 33.67% 	 Test Accuracy: 31.25%
      +[ 2/10] 	 Time 12.46s 	 Training Accuracy: 54.61% 	 Test Accuracy: 50.78%
      +[ 3/10] 	 Time 12.47s 	 Training Accuracy: 63.83% 	 Test Accuracy: 60.16%
      +[ 4/10] 	 Time 12.47s 	 Training Accuracy: 70.31% 	 Test Accuracy: 60.16%
      +[ 5/10] 	 Time 12.48s 	 Training Accuracy: 76.09% 	 Test Accuracy: 69.53%
      +[ 6/10] 	 Time 12.45s 	 Training Accuracy: 80.31% 	 Test Accuracy: 78.12%
      +[ 7/10] 	 Time 12.47s 	 Training Accuracy: 82.97% 	 Test Accuracy: 80.47%
      +[ 8/10] 	 Time 12.47s 	 Training Accuracy: 85.23% 	 Test Accuracy: 81.25%
      +[ 9/10] 	 Time 12.46s 	 Training Accuracy: 87.42% 	 Test Accuracy: 82.03%
      +[10/10] 	 Time 12.46s 	 Training Accuracy: 88.67% 	 Test Accuracy: 82.81%

On my local machine we see a 3-4x speedup when using SimpleChains.jl. The server on which this documentation is built is not well suited for CPU benchmarking; hence, the speedup may not be as significant, and there might even be regressions.

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
       
       if @isdefined(MLDataDevices)
      @@ -146,8 +146,8 @@ import{_ as i,c as a,a2 as n,o as l}from"./chunks/framework.I-x9Gl6h.js";const g
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      diff --git a/dev/assets/tutorials_beginner_4_SimpleChains.md.B9RS1fYd.lean.js b/dev/assets/tutorials_beginner_4_SimpleChains.md.B9RS1fYd.lean.js
      new file mode 100644
      index 0000000000..baaa155ec7
      --- /dev/null
      +++ b/dev/assets/tutorials_beginner_4_SimpleChains.md.B9RS1fYd.lean.js
      @@ -0,0 +1 @@
      +import{_ as i,c as a,a2 as n,o as l}from"./chunks/framework.BetCMmtc.js";const g=JSON.parse('{"title":"MNIST Classification with SimpleChains","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/4_SimpleChains.md","filePath":"tutorials/beginner/4_SimpleChains.md","lastUpdated":null}'),p={name:"tutorials/beginner/4_SimpleChains.md"};function t(h,s,e,k,r,E){return l(),a("div",null,s[0]||(s[0]=[n("",33)]))}const c=i(p,[["render",t]]);export{g as __pageData,c as default};
      diff --git a/dev/assets/tutorials_beginner_4_SimpleChains.md.snvLiqaS.lean.js b/dev/assets/tutorials_beginner_4_SimpleChains.md.snvLiqaS.lean.js
      deleted file mode 100644
      index 38c222b3cc..0000000000
      --- a/dev/assets/tutorials_beginner_4_SimpleChains.md.snvLiqaS.lean.js
      +++ /dev/null
      @@ -1,167 +0,0 @@
      -import{_ as i,c as a,a2 as n,o as l}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"MNIST Classification with SimpleChains","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/4_SimpleChains.md","filePath":"tutorials/beginner/4_SimpleChains.md","lastUpdated":null}'),p={name:"tutorials/beginner/4_SimpleChains.md"};function t(h,s,e,k,r,E){return l(),a("div",null,s[0]||(s[0]=[n(`

      MNIST Classification with SimpleChains

      SimpleChains.jl is an excellent framework for training small neural networks. In this tutorial we will demonstrate how to use the same API as Lux.jl to train a model using SimpleChains.jl. We will use the tutorial from SimpleChains.jl as a reference.

      Package Imports

      julia
      using Lux, MLUtils, Optimisers, Zygote, OneHotArrays, Random, Statistics, Printf, Reactant
      -using MLDatasets: MNIST
      -using SimpleChains: SimpleChains
      -
      -Reactant.set_default_backend("cpu")
      Reactant.XLA.Client(Ptr{Nothing} @0x000000000705d7b0)

      Loading MNIST

      julia
      function loadmnist(batchsize, train_split)
      -    # Load MNIST
      -    N = parse(Bool, get(ENV, "CI", "false")) ? 1500 : nothing
      -    dataset = MNIST(; split=:train)
      -    if N !== nothing
      -        imgs = dataset.features[:, :, 1:N]
      -        labels_raw = dataset.targets[1:N]
      -    else
      -        imgs = dataset.features
      -        labels_raw = dataset.targets
      -    end
      -
      -    # Process images into (H, W, C, BS) batches
      -    x_data = Float32.(reshape(imgs, size(imgs, 1), size(imgs, 2), 1, size(imgs, 3)))
      -    y_data = onehotbatch(labels_raw, 0:9)
      -    (x_train, y_train), (x_test, y_test) = splitobs((x_data, y_data); at=train_split)
      -
      -    return (
      -        # Use DataLoader to automatically minibatch and shuffle the data
      -        DataLoader(collect.((x_train, y_train)); batchsize, shuffle=true, partial=false),
      -        # Don't shuffle the test data
      -        DataLoader(collect.((x_test, y_test)); batchsize, shuffle=false, partial=false)
      -    )
      -end
      loadmnist (generic function with 1 method)

      Define the Model

      julia
      lux_model = Chain(
      -    Conv((5, 5), 1 => 6, relu),
      -    MaxPool((2, 2)),
      -    Conv((5, 5), 6 => 16, relu),
      -    MaxPool((2, 2)),
      -    FlattenLayer(3),
      -    Chain(
      -        Dense(256 => 128, relu),
      -        Dense(128 => 84, relu),
      -        Dense(84 => 10)
      -    )
      -)
      Chain(
      -    layer_1 = Conv((5, 5), 1 => 6, relu),  # 156 parameters
      -    layer_2 = MaxPool((2, 2)),
      -    layer_3 = Conv((5, 5), 6 => 16, relu),  # 2_416 parameters
      -    layer_4 = MaxPool((2, 2)),
      -    layer_5 = Lux.FlattenLayer{Static.StaticInt{3}}(static(3)),
      -    layer_6 = Chain(
      -        layer_1 = Dense(256 => 128, relu),  # 32_896 parameters
      -        layer_2 = Dense(128 => 84, relu),  # 10_836 parameters
      -        layer_3 = Dense(84 => 10),      # 850 parameters
      -    ),
      -)         # Total: 47_154 parameters,
      -          #        plus 0 states.

      We now need to convert the lux_model to SimpleChains.jl. We need to do this by defining the ToSimpleChainsAdaptor and providing the input dimensions.

      julia
      adaptor = ToSimpleChainsAdaptor((28, 28, 1))
      -simple_chains_model = adaptor(lux_model)
      SimpleChainsLayer(
      -    Chain(
      -        layer_1 = Conv((5, 5), 1 => 6, relu),  # 156 parameters
      -        layer_2 = MaxPool((2, 2)),
      -        layer_3 = Conv((5, 5), 6 => 16, relu),  # 2_416 parameters
      -        layer_4 = MaxPool((2, 2)),
      -        layer_5 = Lux.FlattenLayer{Static.StaticInt{3}}(static(3)),
      -        layer_6 = Chain(
      -            layer_1 = Dense(256 => 128, relu),  # 32_896 parameters
      -            layer_2 = Dense(128 => 84, relu),  # 10_836 parameters
      -            layer_3 = Dense(84 => 10),  # 850 parameters
      -        ),
      -    ),
      -)         # Total: 47_154 parameters,
      -          #        plus 0 states.

      Helper Functions

      julia
      const lossfn = CrossEntropyLoss(; logits=Val(true))
      -
      -function accuracy(model, ps, st, dataloader)
      -    total_correct, total = 0, 0
      -    st = Lux.testmode(st)
      -    for (x, y) in dataloader
      -        target_class = onecold(y)
      -        predicted_class = onecold(Array(first(model(x, ps, st))))
      -        total_correct += sum(target_class .== predicted_class)
      -        total += length(target_class)
      -    end
      -    return total_correct / total
      -end
      accuracy (generic function with 1 method)

      Define the Training Loop

      julia
      function train(model, dev=cpu_device(); rng=Random.default_rng(), kwargs...)
      -    train_dataloader, test_dataloader = loadmnist(128, 0.9) |> dev
      -    ps, st = Lux.setup(rng, model) |> dev
      -
      -    vjp = dev isa ReactantDevice ? AutoEnzyme() : AutoZygote()
      -
      -    train_state = Training.TrainState(model, ps, st, Adam(3.0f-4))
      -
      -    if dev isa ReactantDevice
      -        x_ra = first(test_dataloader)[1]
      -        model_compiled = @compile model(x_ra, ps, Lux.testmode(st))
      -    else
      -        model_compiled = model
      -    end
      -
      -    ### Lets train the model
      -    nepochs = 10
      -    tr_acc, te_acc = 0.0, 0.0
      -    for epoch in 1:nepochs
      -        stime = time()
      -        for (x, y) in train_dataloader
      -            _, _, _, train_state = Training.single_train_step!(
      -                vjp, lossfn, (x, y), train_state
      -            )
      -        end
      -        ttime = time() - stime
      -
      -        tr_acc = accuracy(
      -            model_compiled, train_state.parameters, train_state.states, train_dataloader) *
      -                 100
      -        te_acc = accuracy(
      -            model_compiled, train_state.parameters, train_state.states, test_dataloader) *
      -                 100
      -
      -        @printf "[%2d/%2d] \\t Time %.2fs \\t Training Accuracy: %.2f%% \\t Test Accuracy: \\
      -                 %.2f%%\\n" epoch nepochs ttime tr_acc te_acc
      -    end
      -
      -    return tr_acc, te_acc
      -end
      train (generic function with 2 methods)

      Finally Training the Model

      First we will train the Lux model

      julia
      tr_acc, te_acc = train(lux_model, reactant_device())
      2025-01-20 22:46:34.664977: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 14589407905380107767
      -[ 1/10] 	 Time 480.95s 	 Training Accuracy: 15.23% 	 Test Accuracy: 17.19%
      -[ 2/10] 	 Time 0.59s 	 Training Accuracy: 33.75% 	 Test Accuracy: 32.03%
      -[ 3/10] 	 Time 0.60s 	 Training Accuracy: 47.27% 	 Test Accuracy: 46.09%
      -[ 4/10] 	 Time 0.60s 	 Training Accuracy: 54.69% 	 Test Accuracy: 53.91%
      -[ 5/10] 	 Time 0.59s 	 Training Accuracy: 64.38% 	 Test Accuracy: 61.72%
      -[ 6/10] 	 Time 0.58s 	 Training Accuracy: 73.75% 	 Test Accuracy: 70.31%
      -[ 7/10] 	 Time 0.57s 	 Training Accuracy: 78.20% 	 Test Accuracy: 75.78%
      -[ 8/10] 	 Time 0.59s 	 Training Accuracy: 80.00% 	 Test Accuracy: 81.25%
      -[ 9/10] 	 Time 0.60s 	 Training Accuracy: 81.95% 	 Test Accuracy: 84.38%
      -[10/10] 	 Time 0.59s 	 Training Accuracy: 83.75% 	 Test Accuracy: 85.16%

      Now we will train the SimpleChains model

      julia
      tr_acc, te_acc = train(simple_chains_model)
      [ 1/10] 	 Time 869.66s 	 Training Accuracy: 28.36% 	 Test Accuracy: 28.12%
      -[ 2/10] 	 Time 13.19s 	 Training Accuracy: 36.48% 	 Test Accuracy: 32.81%
      -[ 3/10] 	 Time 13.20s 	 Training Accuracy: 46.64% 	 Test Accuracy: 44.53%
      -[ 4/10] 	 Time 13.21s 	 Training Accuracy: 59.53% 	 Test Accuracy: 52.34%
      -[ 5/10] 	 Time 13.19s 	 Training Accuracy: 71.09% 	 Test Accuracy: 65.62%
      -[ 6/10] 	 Time 13.19s 	 Training Accuracy: 75.23% 	 Test Accuracy: 67.19%
      -[ 7/10] 	 Time 13.19s 	 Training Accuracy: 79.53% 	 Test Accuracy: 70.31%
      -[ 8/10] 	 Time 13.19s 	 Training Accuracy: 80.62% 	 Test Accuracy: 74.22%
      -[ 9/10] 	 Time 13.22s 	 Training Accuracy: 84.22% 	 Test Accuracy: 81.25%
      -[10/10] 	 Time 13.20s 	 Training Accuracy: 85.00% 	 Test Accuracy: 82.03%

On my local machine we see a 3-4x speedup when using SimpleChains.jl. The server on which this documentation is built is not well suited for CPU benchmarking; hence, the speedup may not be as significant, and there might even be regressions.

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 48 × AMD EPYC 7402 24-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 2
      -  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 48
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,33)]))}const c=i(p,[["render",t]]);export{g as __pageData,c as default}; diff --git a/dev/assets/tutorials_beginner_5_OptimizationIntegration.md.audP0w97.js b/dev/assets/tutorials_beginner_5_OptimizationIntegration.md.DhkgwbPj.js similarity index 59% rename from dev/assets/tutorials_beginner_5_OptimizationIntegration.md.audP0w97.js rename to dev/assets/tutorials_beginner_5_OptimizationIntegration.md.DhkgwbPj.js index 90c43978d3..de3c4529dc 100644 --- a/dev/assets/tutorials_beginner_5_OptimizationIntegration.md.audP0w97.js +++ b/dev/assets/tutorials_beginner_5_OptimizationIntegration.md.DhkgwbPj.js @@ -1,4 +1,4 @@ -import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.I-x9Gl6h.js";const d=JSON.parse('{"title":"Training Lux Models using Optimization.jl","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/5_OptimizationIntegration.md","filePath":"tutorials/beginner/5_OptimizationIntegration.md","lastUpdated":null}'),t={name:"tutorials/beginner/5_OptimizationIntegration.md"};function h(p,A,l,e,k,E){return n(),i("div",null,A[0]||(A[0]=[a(`

      Training Lux Models using Optimization.jl

Lux's native Training.TrainState is a great API for gradient-based learning of neural networks; however, it is geared towards using Optimisers.jl as the backend. Often, we want to train neural networks with other optimization methods such as BFGS, LBFGS, etc. In this tutorial, we will show how to train Lux models with Optimization.jl, which provides a simple unified interface to various optimization methods.

      We will base our tutorial on the minibatching tutorial from the official Optimization.jl docs.

      Neural ODE

      This tutorial uses a Neural ODE, however, we won't discuss that part in this tutorial. Please refer to the Neural ODE tutorial for more information.

      Imports packages

      julia
      using Lux, Optimization, OptimizationOptimisers, OptimizationOptimJL, OrdinaryDiffEqTsit5,
      +import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.BetCMmtc.js";const d=JSON.parse('{"title":"Training Lux Models using Optimization.jl","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/5_OptimizationIntegration.md","filePath":"tutorials/beginner/5_OptimizationIntegration.md","lastUpdated":null}'),t={name:"tutorials/beginner/5_OptimizationIntegration.md"};function h(p,A,l,e,k,E){return n(),i("div",null,A[0]||(A[0]=[a(`

      Training Lux Models using Optimization.jl

      Lux's native Training.TrainState is a great API for gradient-based learning of neural networks; however, it is geared towards using Optimisers.jl as the backend. Often, we want to train neural networks with other optimization methods like BFGS, LBFGS, etc. In this tutorial, we will show how to train Lux models with Optimization.jl, which provides a simple unified interface to various optimization methods.

      We will base our tutorial on the minibatching tutorial from the official Optimization.jl docs.

      Neural ODE

      This tutorial uses a Neural ODE; however, we won't discuss that part here. Please refer to the Neural ODE tutorial for more information.

      Import packages

      julia
      using Lux, Optimization, OptimizationOptimisers, OptimizationOptimJL, OrdinaryDiffEqTsit5,
             SciMLSensitivity, Random, MLUtils, CairoMakie, ComponentArrays, Printf
       
       const gdev = gpu_device()
      @@ -82,31 +82,31 @@ import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.I-x9Gl6h.js";const d
           return StatefulLuxLayer{true}(model, res.u, smodel.st)
       end
       
      -trained_model = train_model(dataloader)
      Iteration:     1, Loss: 2.565469e-01
      -Iteration:    26, Loss: 2.280865e-01
      -Iteration:    51, Loss: 4.032271e-01
      -Iteration:    76, Loss: 1.888162e-01
      -Iteration:     1, Loss: 1.996490e-01
      -Iteration:     1, Loss: 2.400779e-02
      -Iteration:    26, Loss: 4.619161e-02
      -Iteration:    51, Loss: 2.679822e-02
      -Iteration:    76, Loss: 2.340804e-02
      -Iteration:   101, Loss: 2.250691e-02
      -Iteration:   126, Loss: 2.183978e-02
      -Iteration:   151, Loss: 2.115316e-02
      -Iteration:   176, Loss: 2.031358e-02
      -Iteration:   201, Loss: 1.912147e-02
      -Iteration:   226, Loss: 1.786558e-02
      -Iteration:   251, Loss: 1.672475e-02
      -Iteration:   276, Loss: 2.473499e-02
      -Iteration:   301, Loss: 1.778203e-02
      -Iteration:   326, Loss: 1.589140e-02
      -Iteration:   351, Loss: 1.458615e-02
      -Iteration:   376, Loss: 1.336883e-02
      -Iteration:   401, Loss: 1.203184e-02
      -Iteration:   426, Loss: 2.416629e-02
      -Iteration:   451, Loss: 1.476158e-02
      -Iteration:   476, Loss: 1.223332e-02

      Plotting the results

      julia
      dudt(u, p, t) = trained_model(u, p)
      +trained_model = train_model(dataloader)
      Iteration:     1, Loss: 2.275711e-01
      +Iteration:    26, Loss: 7.558612e-02
      +Iteration:    51, Loss: 1.401070e-01
      +Iteration:    76, Loss: 2.368127e-01
      +Iteration:     1, Loss: 3.022343e-01
      +Iteration:     1, Loss: 2.887229e-02
      +Iteration:    26, Loss: 3.702169e-02
      +Iteration:    51, Loss: 3.170453e-02
      +Iteration:    76, Loss: 2.796158e-02
      +Iteration:   101, Loss: 2.620041e-02
      +Iteration:   126, Loss: 2.488948e-02
      +Iteration:   151, Loss: 2.385878e-02
      +Iteration:   176, Loss: 2.295489e-02
      +Iteration:   201, Loss: 2.205519e-02
      +Iteration:   226, Loss: 2.108170e-02
      +Iteration:   251, Loss: 1.997719e-02
      +Iteration:   276, Loss: 1.870240e-02
      +Iteration:   301, Loss: 2.226003e-02
      +Iteration:   326, Loss: 1.756263e-02
      +Iteration:   351, Loss: 1.611929e-02
      +Iteration:   376, Loss: 1.807818e-02
      +Iteration:   401, Loss: 1.606911e-02
      +Iteration:   426, Loss: 1.444251e-02
      +Iteration:   451, Loss: 1.293867e-02
      +Iteration:   476, Loss: 4.631519e-02

      Plotting the results

      julia
      dudt(u, p, t) = trained_model(u, p)
       prob = ODEProblem(dudt, gdev(u0), (tspan[1], tspan[2]), trained_model.ps)
       sol = solve(prob, Tsit5(); saveat=t)
       pred = convert(AbstractArray, sol) |> cdev
      @@ -120,7 +120,7 @@ import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.I-x9Gl6h.js";const d
           lines!(ax, t, pred[2, :]; label=L"\\hat{u}_2(t)", color=:red, linewidth=4)
           axislegend(ax; position=:lt)
           fig
      -end

      Appendix

      julia
      using InteractiveUtils
      +end

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
       
       if @isdefined(MLDataDevices)
      @@ -133,8 +133,8 @@ import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.I-x9Gl6h.js";const d
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      diff --git a/dev/assets/tutorials_beginner_5_OptimizationIntegration.md.DhkgwbPj.lean.js b/dev/assets/tutorials_beginner_5_OptimizationIntegration.md.DhkgwbPj.lean.js
      new file mode 100644
      index 0000000000..030f1e909b
      --- /dev/null
      +++ b/dev/assets/tutorials_beginner_5_OptimizationIntegration.md.DhkgwbPj.lean.js
      @@ -0,0 +1 @@
      +import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.BetCMmtc.js";const d=JSON.parse('{"title":"Training Lux Models using Optimization.jl","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/5_OptimizationIntegration.md","filePath":"tutorials/beginner/5_OptimizationIntegration.md","lastUpdated":null}'),t={name:"tutorials/beginner/5_OptimizationIntegration.md"};function h(p,A,l,e,k,E){return n(),i("div",null,A[0]||(A[0]=[a("",29)]))}const g=s(t,[["render",h]]);export{d as __pageData,g as default};
      diff --git a/dev/assets/tutorials_beginner_5_OptimizationIntegration.md.audP0w97.lean.js b/dev/assets/tutorials_beginner_5_OptimizationIntegration.md.audP0w97.lean.js
      deleted file mode 100644
      index 90c43978d3..0000000000
      --- a/dev/assets/tutorials_beginner_5_OptimizationIntegration.md.audP0w97.lean.js
      +++ /dev/null
      @@ -1,153 +0,0 @@
      -import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.I-x9Gl6h.js";const d=JSON.parse('{"title":"Training Lux Models using Optimization.jl","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/beginner/5_OptimizationIntegration.md","filePath":"tutorials/beginner/5_OptimizationIntegration.md","lastUpdated":null}'),t={name:"tutorials/beginner/5_OptimizationIntegration.md"};function h(p,A,l,e,k,E){return n(),i("div",null,A[0]||(A[0]=[a(`

      Training Lux Models using Optimization.jl

      Lux's native Training.TrainState is a great API for gradient-based learning of neural networks; however, it is geared towards using Optimisers.jl as the backend. Often, we want to train neural networks with other optimization methods like BFGS, LBFGS, etc. In this tutorial, we will show how to train Lux models with Optimization.jl, which provides a simple unified interface to various optimization methods.

      We will base our tutorial on the minibatching tutorial from the official Optimization.jl docs.

      Neural ODE

      This tutorial uses a Neural ODE; however, we won't discuss that part here. Please refer to the Neural ODE tutorial for more information.

      Import packages

      julia
      using Lux, Optimization, OptimizationOptimisers, OptimizationOptimJL, OrdinaryDiffEqTsit5,
      -      SciMLSensitivity, Random, MLUtils, CairoMakie, ComponentArrays, Printf
      -
      -const gdev = gpu_device()
      -const cdev = cpu_device()
      (::MLDataDevices.CPUDevice) (generic function with 1 method)

      Generate some training data

      julia
      function lotka_volterra(du, u, p, t)
      -    x, y = u
      -    α, β, δ, γ = p
      -    du[1] = α * x - β * x * y
      -    du[2] = -δ * y + γ * x * y
      -    return nothing
      -end
      -
      -u0 = [1.0f0, 1.0f0]
      -
      -datasize = 32
      -tspan = (0.0f0, 2.0f0)
      -
      -const t = range(tspan[1], tspan[2]; length=datasize)
      -true_prob = ODEProblem(lotka_volterra, u0, (tspan[1], tspan[2]), [1.5, 1.0, 3.0, 1.0])
      -const ode_data = Array(solve(true_prob, Tsit5(); saveat=t))
      -
      -begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1])
      -    lines!(ax, t, ode_data[1, :]; label=L"u_1(t)", color=:blue, linestyle=:dot, linewidth=4)
      -    lines!(ax, t, ode_data[2, :]; label=L"u_2(t)", color=:red, linestyle=:dot, linewidth=4)
      -    axislegend(ax; position=:lt)
      -    fig
      -end

      Define the DataLoader

      We will define the DataLoader to batch over the data; additionally, we will pipe it through the gdev device to move the data to the GPU on each iteration.

      By default, gdev will move all objects to the GPU. But we don't want to move the time vector to the GPU, so we will wrap it in a struct and mark it as a leaf using MLDataDevices.isleaf.

      julia
      struct TimeWrapper{T}
      -    t::T
      -end
      -
      -MLDataDevices.isleaf(::TimeWrapper) = true
      -
      -Base.length(t::TimeWrapper) = length(t.t)
      -
      -Base.getindex(t::TimeWrapper, i) = TimeWrapper(t.t[i])
      -
      -dataloader = DataLoader((ode_data, TimeWrapper(t)); batchsize=8) |> gdev
      MLDataDevices.DeviceIterator{MLDataDevices.CPUDevice, MLUtils.DataLoader{Tuple{Matrix{Float32}, Main.var"##230".TimeWrapper{StepRangeLen{Float32, Float64, Float64, Int64}}}, Random.TaskLocalRNG, Val{nothing}}}(MLDataDevices.CPUDevice(), DataLoader(::Tuple{Matrix{Float32}, Main.var"##230".TimeWrapper{StepRangeLen{Float32, Float64, Float64, Int64}}}, batchsize=8))

      Training the model

      Here we are using different optimization methods for demonstration purposes. This problem is trivial enough to not require this.

      Optimization.jl requires an abstract array as the parameters, hence we will construct a ComponentArray to store the parameters.

      Parameter Estimation vs State Estimation

      Optimization.jl performs state estimation, which effectively means that for a function f(u, p), it tries to compute the optimal u for a given p. This terminology might be confusing to ML practitioners, since in the ML world we usually do parameter estimation. This effectively means that the u in Optimization.jl corresponds to our model parameters that are being optimized.

      julia
      function train_model(dataloader)
      -    model = Chain(Dense(2, 32, tanh), Dense(32, 32, tanh), Dense(32, 2))
      -    ps, st = Lux.setup(Random.default_rng(), model)
      -
      -    ps_ca = ComponentArray(ps) |> gdev
      -    st = st |> gdev
      -
      -    function callback(state, l)
      -        state.iter % 25 == 1 && @printf "Iteration: %5d, Loss: %.6e\\n" state.iter l
      -        return l < 1e-8 ## Terminate if loss is small
      -    end
      -
      -    smodel = StatefulLuxLayer{true}(model, nothing, st)
      -
      -    function loss_adjoint(θ, (u_batch, t_batch))
      -        t_batch = t_batch.t
      -        u0 = u_batch[:, 1]
      -        dudt(u, p, t) = smodel(u, p)
      -        prob = ODEProblem(dudt, u0, (t_batch[1], t_batch[end]), θ)
      -        sol = solve(prob, Tsit5(); sensealg=InterpolatingAdjoint(), saveat=t_batch)
      -        pred = stack(sol.u)
      -        return MSELoss()(pred, u_batch)
      -    end
      -
      -    # Define the Optimization Function that takes in the optimization state (our parameters)
      -    # and optimization parameters (nothing in our case) and data from the dataloader and
      -    # returns the loss.
      -    opt_func = OptimizationFunction(loss_adjoint, Optimization.AutoZygote())
      -    opt_prob = OptimizationProblem(opt_func, ps_ca, dataloader)
      -
      -    epochs = 25
      -    res_adam = solve(opt_prob, Optimisers.Adam(0.001); callback, epochs)
      -
      -    # Let's finetune a bit with L-BFGS
      -    opt_prob = OptimizationProblem(opt_func, res_adam.u, (gdev(ode_data), TimeWrapper(t)))
      -    res_lbfgs = solve(opt_prob, LBFGS(); callback, maxiters=epochs)
      -
      -    # Now that we have a good fit, let's train it on the entire dataset without
      -    # Minibatching. We need to do this since ODE solves can lead to accumulated errors if
      -    # the model was trained on individual parts (without a data-shooting approach).
      -    opt_prob = remake(opt_prob; u0=res_lbfgs.u)
      -    res = solve(opt_prob, Optimisers.Adam(0.005); maxiters=500, callback)
      -
      -    return StatefulLuxLayer{true}(model, res.u, smodel.st)
      -end
      -
      -trained_model = train_model(dataloader)
      Iteration:     1, Loss: 2.565469e-01
      -Iteration:    26, Loss: 2.280865e-01
      -Iteration:    51, Loss: 4.032271e-01
      -Iteration:    76, Loss: 1.888162e-01
      -Iteration:     1, Loss: 1.996490e-01
      -Iteration:     1, Loss: 2.400779e-02
      -Iteration:    26, Loss: 4.619161e-02
      -Iteration:    51, Loss: 2.679822e-02
      -Iteration:    76, Loss: 2.340804e-02
      -Iteration:   101, Loss: 2.250691e-02
      -Iteration:   126, Loss: 2.183978e-02
      -Iteration:   151, Loss: 2.115316e-02
      -Iteration:   176, Loss: 2.031358e-02
      -Iteration:   201, Loss: 1.912147e-02
      -Iteration:   226, Loss: 1.786558e-02
      -Iteration:   251, Loss: 1.672475e-02
      -Iteration:   276, Loss: 2.473499e-02
      -Iteration:   301, Loss: 1.778203e-02
      -Iteration:   326, Loss: 1.589140e-02
      -Iteration:   351, Loss: 1.458615e-02
      -Iteration:   376, Loss: 1.336883e-02
      -Iteration:   401, Loss: 1.203184e-02
      -Iteration:   426, Loss: 2.416629e-02
      -Iteration:   451, Loss: 1.476158e-02
      -Iteration:   476, Loss: 1.223332e-02

      Plotting the results

      julia
      dudt(u, p, t) = trained_model(u, p)
      -prob = ODEProblem(dudt, gdev(u0), (tspan[1], tspan[2]), trained_model.ps)
      -sol = solve(prob, Tsit5(); saveat=t)
      -pred = convert(AbstractArray, sol) |> cdev
      -
      -begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1])
      -    lines!(ax, t, ode_data[1, :]; label=L"u_1(t)", color=:blue, linestyle=:dot, linewidth=4)
      -    lines!(ax, t, ode_data[2, :]; label=L"u_2(t)", color=:red, linestyle=:dot, linewidth=4)
      -    lines!(ax, t, pred[1, :]; label=L"\\hat{u}_1(t)", color=:blue, linewidth=4)
      -    lines!(ax, t, pred[2, :]; label=L"\\hat{u}_2(t)", color=:red, linewidth=4)
      -    axislegend(ax; position=:lt)
      -    fig
      -end

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 128 × AMD EPYC 7502 32-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 16 default, 0 interactive, 8 GC (on 16 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 16
      -  JULIA_DEPOT_PATH = /cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 16
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,29)]))}const g=s(t,[["render",h]]);export{d as __pageData,g as default}; diff --git a/dev/assets/tutorials_index.md.DjU0cWXL.js b/dev/assets/tutorials_index.md.B-_qyAzm.js similarity index 98% rename from dev/assets/tutorials_index.md.DjU0cWXL.js rename to dev/assets/tutorials_index.md.B-_qyAzm.js index 0f9026e756..62694fde98 100644 --- a/dev/assets/tutorials_index.md.DjU0cWXL.js +++ b/dev/assets/tutorials_index.md.B-_qyAzm.js @@ -1 +1 @@ -import{d,o as n,c as r,j as e,k as f,g as b,t as p,_ as m,F as _,C as w,b as v,K as N,a as t,G as o}from"./chunks/framework.I-x9Gl6h.js";const y={class:"img-box"},x=["href"],D=["src"],T={class:"transparent-box1"},P={class:"caption"},L={class:"transparent-box2"},I={class:"subcaption"},k={class:"opacity-low"},C=d({__name:"GalleryImage",props:{href:{},src:{},caption:{},desc:{}},setup(u){return(i,l)=>(n(),r("div",y,[e("a",{href:i.href},[e("img",{src:f(b)(i.src),height:"150px",alt:""},null,8,D),e("div",T,[e("div",P,[e("h2",null,p(i.caption),1)])]),e("div",L,[e("div",I,[e("p",k,p(i.desc),1)])])],8,x)]))}}),E=m(C,[["__scopeId","data-v-06a0366f"]]),j={class:"gallery-image"},S=d({__name:"Gallery",props:{images:{}},setup(u){return(i,l)=>(n(),r("div",j,[(n(!0),r(_,null,w(i.images,c=>(n(),v(E,N({ref_for:!0},c),null,16))),256))]))}}),s=m(S,[["__scopeId","data-v-578d61bc"]]),F=JSON.parse('{"title":"Tutorials","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/index.md","filePath":"tutorials/index.md","lastUpdated":null}'),M={name:"tutorials/index.md"},z=d({...M,setup(u){const i=[{href:"beginner/1_Basics",src:"https://picsum.photos/350/250?image=444",caption:"Julia & Lux for the Uninitiated",desc:"How to get started with Julia and Lux for those who have never used Julia before."},{href:"beginner/2_PolynomialFitting",src:"../mlp.webp",caption:"Fitting a Polynomial using MLP",desc:"Learn the Basics of Lux by fitting a Multi-Layer Perceptron to a 
Polynomial."},{href:"beginner/3_SimpleRNN",src:"../lstm-illustrative.webp",caption:"Training a Simple LSTM",desc:"Learn how to define custom layers and train an RNN on time-series data."},{href:"beginner/4_SimpleChains",src:"../blas_optimizations.jpg",caption:"Use SimpleChains.jl as a Backend",desc:"Learn how to train small neural networks really fast on CPU."},{href:"beginner/5_OptimizationIntegration",src:"../optimization_integration.png",caption:"Fitting with Optimization.jl",desc:"Learn how to use Optimization.jl with Lux (on GPUs)."},{href:"https://luxdl.github.io/Boltz.jl/stable/tutorials/1_GettingStarted",src:"https://production-media.paperswithcode.com/datasets/ImageNet-0000000008-f2e87edd_Y0fT5zg.jpg",caption:"Pre-Built Deep Learning Models",desc:"Use Boltz.jl to load pre-built DL and SciML models."}],l=[{href:"intermediate/1_NeuralODE",src:"../mnist.jpg",caption:"MNIST Classification using Neural ODE",desc:"Train a Neural Ordinary Differential Equations to classify MNIST Images."},{href:"intermediate/2_BayesianNN",src:"https://github.com/TuringLang.png",caption:"Bayesian Neural Networks",desc:"Figure out how to use Probabilistic Programming Frameworks like Turing with Lux."},{href:"intermediate/3_HyperNet",src:"../hypernet.jpg",caption:"Training a HyperNetwork",desc:"Train a hypernetwork to work on multiple datasets by predicting NN parameters."},{href:"intermediate/4_PINN2DPDE",src:"../pinn_nested_ad.gif",caption:"Training a PINN",desc:"Train a PINN to solve 2D PDEs (using Nested AD)."},{href:"intermediate/5_ConvolutionalVAE",src:"../conditional_vae.png",caption:"Convolutional VAE for MNIST using Reactant",desc:"Train a Convolutional VAE to generate images from a latent space."},{href:"intermediate/6_GCN_Cora",src:"../gcn_cora.jpg",caption:"Graph Convolutional Network on Cora",desc:"Train a Graph Convolutional Network on Cora dataset."},{href:"intermediate/7_RealNVP",src:"../realnvp.png",caption:"Normalizing Flows for Density Estimation",desc:"Train a 
normalizing flow for density estimation on the Moons dataset."}],c=[{href:"advanced/1_GravitationalWaveForm",src:"../gravitational_waveform.png",caption:"Neural ODE to Model Gravitational Waveforms",desc:"Training a Neural ODE to fit simulated data of gravitational waveforms."},{href:"https://luxdl.github.io/Boltz.jl/stable/tutorials/2_SymbolicOptimalControl",src:"../symbolic_optimal_control.png",caption:"Optimal Control with Symbolic UDE",desc:"Train a UDE and replace a part of it with Symbolic Regression."}],h=[{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/ImageNet",src:"https://production-media.paperswithcode.com/datasets/ImageNet-0000000008-f2e87edd_Y0fT5zg.jpg",caption:"ImageNet Classification",desc:"Train Large Image Classifiers using Lux (on Distributed GPUs)."},{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/DDIM",src:"https://raw.githubusercontent.com/LuxDL/Lux.jl/main/examples/DDIM/assets/flowers_generated.png",caption:"Denoising Diffusion Implicit Model (DDIM)",desc:"Train a Diffusion Model to generate images from Gaussian noises."},{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/CIFAR10",src:"https://datasets.activeloop.ai/wp-content/uploads/2022/09/CIFAR-10-dataset-Activeloop-Platform-visualization-image-1.webp",caption:"Vision Models on CIFAR-10",desc:"Train different vision models on CIFAR-10 to 90% accuracy within 10 minutes."}],g=[{href:"https://docs.sciml.ai/Overview/stable/showcase/pinngpu/",src:"../pinn.gif",caption:"GPU-Accelerated Physics-Informed Neural Networks",desc:"Use Machine Learning (PINNs) to solve the Heat Equation PDE on a GPU."},{href:"https://docs.sciml.ai/DiffEqFlux/stable/examples/neural_ode_weather_forecast/",src:"../weather-neural-ode.gif",caption:"Weather Forecasting with Neural ODEs",desc:"Train a neural ODEs to a multidimensional weather dataset and use it for weather 
forecasting."},{href:"https://docs.sciml.ai/SciMLSensitivity/stable/examples/sde/SDE_control/",src:"../neural-sde.png",caption:"Controlling Stochastic Differential Equations",desc:"Control the time evolution of a continuously monitored qubit described by an SDE with multiplicative scalar noise."},{href:"https://github.com/Dale-Black/ComputerVisionTutorials.jl/",src:"https://raw.githubusercontent.com/Dale-Black/ComputerVisionTutorials.jl/main/assets/image-seg-green.jpeg",caption:"Medical Image Segmentation",desc:"Explore various aspects of deep learning for medical imaging and a comprehensive overview of Julia packages."},{href:"https://github.com/agdestein/NeuralClosureTutorials",src:"https://raw.githubusercontent.com/agdestein/NeuralClosureTutorials/main/assets/navier_stokes.gif",caption:"Neural PDE closures",desc:"Learn an unknown term in a PDE using convolutional neural networks and Fourier neural operators."}];return(B,a)=>(n(),r("div",null,[a[0]||(a[0]=e("h1",{id:"tutorials",tabindex:"-1"},[t("Tutorials "),e("a",{class:"header-anchor",href:"#tutorials","aria-label":'Permalink to "Tutorials"'},"​")],-1)),a[1]||(a[1]=e("h2",{id:"beginner-tutorials",tabindex:"-1"},[t("Beginner Tutorials "),e("a",{class:"header-anchor",href:"#beginner-tutorials","aria-label":'Permalink to "Beginner Tutorials"'},"​")],-1)),o(s,{images:i}),a[2]||(a[2]=e("h2",{id:"intermediate-tutorials",tabindex:"-1"},[t("Intermediate Tutorials "),e("a",{class:"header-anchor",href:"#intermediate-tutorials","aria-label":'Permalink to "Intermediate Tutorials"'},"​")],-1)),o(s,{images:l}),a[3]||(a[3]=e("h2",{id:"advanced-tutorials",tabindex:"-1"},[t("Advanced Tutorials "),e("a",{class:"header-anchor",href:"#advanced-tutorials","aria-label":'Permalink to "Advanced Tutorials"'},"​")],-1)),o(s,{images:c}),a[4]||(a[4]=e("h2",{id:"larger-models",tabindex:"-1"},[t("Larger Models "),e("a",{class:"header-anchor",href:"#larger-models","aria-label":'Permalink to "Larger 
Models"'},"​")],-1)),a[5]||(a[5]=e("div",{class:"warning custom-block"},[e("p",{class:"custom-block-title"},"WARNING"),e("p",null,"These models are part of the Lux examples, however, these are larger model that cannot be run on CI and aren't frequently tested. If you find a bug in one of these models, please open an issue or PR to fix it.")],-1)),o(s,{images:h}),a[6]||(a[6]=e("h2",{id:"selected-3rd-party-tutorials",tabindex:"-1"},[t("Selected 3rd Party Tutorials "),e("a",{class:"header-anchor",href:"#selected-3rd-party-tutorials","aria-label":'Permalink to "Selected 3rd Party Tutorials"'},"​")],-1)),a[7]||(a[7]=e("div",{class:"warning custom-block"},[e("p",{class:"custom-block-title"},"WARNING"),e("p",null,[t("These tutorials are developed by the community and may not be up-to-date with the latest version of "),e("code",null,"Lux.jl"),t(". Please refer to the official documentation for the most up-to-date information.")]),e("p",null,[t("Please open an issue (ideally both at "),e("code",null,"Lux.jl"),t(" and at the downstream linked package) if any of them are non-functional and we will try to get them updated.")])],-1)),o(s,{images:g}),a[8]||(a[8]=e("div",{class:"tip custom-block"},[e("p",{class:"custom-block-title"},"TIP"),e("p",null,[t("If you found an amazing tutorial showcasing "),e("code",null,"Lux.jl"),t(" online, or wrote one yourself, please open an issue or PR to add it to the list!")])],-1))]))}});export{F as __pageData,z as default}; +import{d,o as n,c as r,j as e,k as f,g as b,t as p,_ as m,F as _,C as w,b as v,K as N,a as t,G as o}from"./chunks/framework.BetCMmtc.js";const 
y={class:"img-box"},x=["href"],D=["src"],T={class:"transparent-box1"},P={class:"caption"},L={class:"transparent-box2"},I={class:"subcaption"},k={class:"opacity-low"},C=d({__name:"GalleryImage",props:{href:{},src:{},caption:{},desc:{}},setup(u){return(i,l)=>(n(),r("div",y,[e("a",{href:i.href},[e("img",{src:f(b)(i.src),height:"150px",alt:""},null,8,D),e("div",T,[e("div",P,[e("h2",null,p(i.caption),1)])]),e("div",L,[e("div",I,[e("p",k,p(i.desc),1)])])],8,x)]))}}),E=m(C,[["__scopeId","data-v-06a0366f"]]),j={class:"gallery-image"},S=d({__name:"Gallery",props:{images:{}},setup(u){return(i,l)=>(n(),r("div",j,[(n(!0),r(_,null,w(i.images,c=>(n(),v(E,N({ref_for:!0},c),null,16))),256))]))}}),s=m(S,[["__scopeId","data-v-578d61bc"]]),F=JSON.parse('{"title":"Tutorials","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/index.md","filePath":"tutorials/index.md","lastUpdated":null}'),M={name:"tutorials/index.md"},z=d({...M,setup(u){const i=[{href:"beginner/1_Basics",src:"https://picsum.photos/350/250?image=444",caption:"Julia & Lux for the Uninitiated",desc:"How to get started with Julia and Lux for those who have never used Julia before."},{href:"beginner/2_PolynomialFitting",src:"../mlp.webp",caption:"Fitting a Polynomial using MLP",desc:"Learn the Basics of Lux by fitting a Multi-Layer Perceptron to a Polynomial."},{href:"beginner/3_SimpleRNN",src:"../lstm-illustrative.webp",caption:"Training a Simple LSTM",desc:"Learn how to define custom layers and train an RNN on time-series data."},{href:"beginner/4_SimpleChains",src:"../blas_optimizations.jpg",caption:"Use SimpleChains.jl as a Backend",desc:"Learn how to train small neural networks really fast on CPU."},{href:"beginner/5_OptimizationIntegration",src:"../optimization_integration.png",caption:"Fitting with Optimization.jl",desc:"Learn how to use Optimization.jl with Lux (on 
GPUs)."},{href:"https://luxdl.github.io/Boltz.jl/stable/tutorials/1_GettingStarted",src:"https://production-media.paperswithcode.com/datasets/ImageNet-0000000008-f2e87edd_Y0fT5zg.jpg",caption:"Pre-Built Deep Learning Models",desc:"Use Boltz.jl to load pre-built DL and SciML models."}],l=[{href:"intermediate/1_NeuralODE",src:"../mnist.jpg",caption:"MNIST Classification using Neural ODE",desc:"Train a Neural Ordinary Differential Equations to classify MNIST Images."},{href:"intermediate/2_BayesianNN",src:"https://github.com/TuringLang.png",caption:"Bayesian Neural Networks",desc:"Figure out how to use Probabilistic Programming Frameworks like Turing with Lux."},{href:"intermediate/3_HyperNet",src:"../hypernet.jpg",caption:"Training a HyperNetwork",desc:"Train a hypernetwork to work on multiple datasets by predicting NN parameters."},{href:"intermediate/4_PINN2DPDE",src:"../pinn_nested_ad.gif",caption:"Training a PINN",desc:"Train a PINN to solve 2D PDEs (using Nested AD)."},{href:"intermediate/5_ConvolutionalVAE",src:"../conditional_vae.png",caption:"Convolutional VAE for MNIST using Reactant",desc:"Train a Convolutional VAE to generate images from a latent space."},{href:"intermediate/6_GCN_Cora",src:"../gcn_cora.jpg",caption:"Graph Convolutional Network on Cora",desc:"Train a Graph Convolutional Network on Cora dataset."},{href:"intermediate/7_RealNVP",src:"../realnvp.png",caption:"Normalizing Flows for Density Estimation",desc:"Train a normalizing flow for density estimation on the Moons dataset."}],c=[{href:"advanced/1_GravitationalWaveForm",src:"../gravitational_waveform.png",caption:"Neural ODE to Model Gravitational Waveforms",desc:"Training a Neural ODE to fit simulated data of gravitational waveforms."},{href:"https://luxdl.github.io/Boltz.jl/stable/tutorials/2_SymbolicOptimalControl",src:"../symbolic_optimal_control.png",caption:"Optimal Control with Symbolic UDE",desc:"Train a UDE and replace a part of it with Symbolic 
Regression."}],h=[{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/ImageNet",src:"https://production-media.paperswithcode.com/datasets/ImageNet-0000000008-f2e87edd_Y0fT5zg.jpg",caption:"ImageNet Classification",desc:"Train Large Image Classifiers using Lux (on Distributed GPUs)."},{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/DDIM",src:"https://raw.githubusercontent.com/LuxDL/Lux.jl/main/examples/DDIM/assets/flowers_generated.png",caption:"Denoising Diffusion Implicit Model (DDIM)",desc:"Train a Diffusion Model to generate images from Gaussian noises."},{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/CIFAR10",src:"https://datasets.activeloop.ai/wp-content/uploads/2022/09/CIFAR-10-dataset-Activeloop-Platform-visualization-image-1.webp",caption:"Vision Models on CIFAR-10",desc:"Train different vision models on CIFAR-10 to 90% accuracy within 10 minutes."}],g=[{href:"https://docs.sciml.ai/Overview/stable/showcase/pinngpu/",src:"../pinn.gif",caption:"GPU-Accelerated Physics-Informed Neural Networks",desc:"Use Machine Learning (PINNs) to solve the Heat Equation PDE on a GPU."},{href:"https://docs.sciml.ai/DiffEqFlux/stable/examples/neural_ode_weather_forecast/",src:"../weather-neural-ode.gif",caption:"Weather Forecasting with Neural ODEs",desc:"Train a neural ODEs to a multidimensional weather dataset and use it for weather forecasting."},{href:"https://docs.sciml.ai/SciMLSensitivity/stable/examples/sde/SDE_control/",src:"../neural-sde.png",caption:"Controlling Stochastic Differential Equations",desc:"Control the time evolution of a continuously monitored qubit described by an SDE with multiplicative scalar noise."},{href:"https://github.com/Dale-Black/ComputerVisionTutorials.jl/",src:"https://raw.githubusercontent.com/Dale-Black/ComputerVisionTutorials.jl/main/assets/image-seg-green.jpeg",caption:"Medical Image Segmentation",desc:"Explore various aspects of deep learning for medical imaging and a comprehensive overview of Julia 
packages."},{href:"https://github.com/agdestein/NeuralClosureTutorials",src:"https://raw.githubusercontent.com/agdestein/NeuralClosureTutorials/main/assets/navier_stokes.gif",caption:"Neural PDE closures",desc:"Learn an unknown term in a PDE using convolutional neural networks and Fourier neural operators."}];return(B,a)=>(n(),r("div",null,[a[0]||(a[0]=e("h1",{id:"tutorials",tabindex:"-1"},[t("Tutorials "),e("a",{class:"header-anchor",href:"#tutorials","aria-label":'Permalink to "Tutorials"'},"​")],-1)),a[1]||(a[1]=e("h2",{id:"beginner-tutorials",tabindex:"-1"},[t("Beginner Tutorials "),e("a",{class:"header-anchor",href:"#beginner-tutorials","aria-label":'Permalink to "Beginner Tutorials"'},"​")],-1)),o(s,{images:i}),a[2]||(a[2]=e("h2",{id:"intermediate-tutorials",tabindex:"-1"},[t("Intermediate Tutorials "),e("a",{class:"header-anchor",href:"#intermediate-tutorials","aria-label":'Permalink to "Intermediate Tutorials"'},"​")],-1)),o(s,{images:l}),a[3]||(a[3]=e("h2",{id:"advanced-tutorials",tabindex:"-1"},[t("Advanced Tutorials "),e("a",{class:"header-anchor",href:"#advanced-tutorials","aria-label":'Permalink to "Advanced Tutorials"'},"​")],-1)),o(s,{images:c}),a[4]||(a[4]=e("h2",{id:"larger-models",tabindex:"-1"},[t("Larger Models "),e("a",{class:"header-anchor",href:"#larger-models","aria-label":'Permalink to "Larger Models"'},"​")],-1)),a[5]||(a[5]=e("div",{class:"warning custom-block"},[e("p",{class:"custom-block-title"},"WARNING"),e("p",null,"These models are part of the Lux examples, however, these are larger model that cannot be run on CI and aren't frequently tested. 
If you find a bug in one of these models, please open an issue or PR to fix it.")],-1)),o(s,{images:h}),a[6]||(a[6]=e("h2",{id:"selected-3rd-party-tutorials",tabindex:"-1"},[t("Selected 3rd Party Tutorials "),e("a",{class:"header-anchor",href:"#selected-3rd-party-tutorials","aria-label":'Permalink to "Selected 3rd Party Tutorials"'},"​")],-1)),a[7]||(a[7]=e("div",{class:"warning custom-block"},[e("p",{class:"custom-block-title"},"WARNING"),e("p",null,[t("These tutorials are developed by the community and may not be up-to-date with the latest version of "),e("code",null,"Lux.jl"),t(". Please refer to the official documentation for the most up-to-date information.")]),e("p",null,[t("Please open an issue (ideally both at "),e("code",null,"Lux.jl"),t(" and at the downstream linked package) if any of them are non-functional and we will try to get them updated.")])],-1)),o(s,{images:g}),a[8]||(a[8]=e("div",{class:"tip custom-block"},[e("p",{class:"custom-block-title"},"TIP"),e("p",null,[t("If you found an amazing tutorial showcasing "),e("code",null,"Lux.jl"),t(" online, or wrote one yourself, please open an issue or PR to add it to the list!")])],-1))]))}});export{F as __pageData,z as default}; diff --git a/dev/assets/tutorials_index.md.DjU0cWXL.lean.js b/dev/assets/tutorials_index.md.B-_qyAzm.lean.js similarity index 98% rename from dev/assets/tutorials_index.md.DjU0cWXL.lean.js rename to dev/assets/tutorials_index.md.B-_qyAzm.lean.js index 0f9026e756..62694fde98 100644 --- a/dev/assets/tutorials_index.md.DjU0cWXL.lean.js +++ b/dev/assets/tutorials_index.md.B-_qyAzm.lean.js @@ -1 +1 @@ -import{d,o as n,c as r,j as e,k as f,g as b,t as p,_ as m,F as _,C as w,b as v,K as N,a as t,G as o}from"./chunks/framework.I-x9Gl6h.js";const 
y={class:"img-box"},x=["href"],D=["src"],T={class:"transparent-box1"},P={class:"caption"},L={class:"transparent-box2"},I={class:"subcaption"},k={class:"opacity-low"},C=d({__name:"GalleryImage",props:{href:{},src:{},caption:{},desc:{}},setup(u){return(i,l)=>(n(),r("div",y,[e("a",{href:i.href},[e("img",{src:f(b)(i.src),height:"150px",alt:""},null,8,D),e("div",T,[e("div",P,[e("h2",null,p(i.caption),1)])]),e("div",L,[e("div",I,[e("p",k,p(i.desc),1)])])],8,x)]))}}),E=m(C,[["__scopeId","data-v-06a0366f"]]),j={class:"gallery-image"},S=d({__name:"Gallery",props:{images:{}},setup(u){return(i,l)=>(n(),r("div",j,[(n(!0),r(_,null,w(i.images,c=>(n(),v(E,N({ref_for:!0},c),null,16))),256))]))}}),s=m(S,[["__scopeId","data-v-578d61bc"]]),F=JSON.parse('{"title":"Tutorials","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/index.md","filePath":"tutorials/index.md","lastUpdated":null}'),M={name:"tutorials/index.md"},z=d({...M,setup(u){const i=[{href:"beginner/1_Basics",src:"https://picsum.photos/350/250?image=444",caption:"Julia & Lux for the Uninitiated",desc:"How to get started with Julia and Lux for those who have never used Julia before."},{href:"beginner/2_PolynomialFitting",src:"../mlp.webp",caption:"Fitting a Polynomial using MLP",desc:"Learn the Basics of Lux by fitting a Multi-Layer Perceptron to a Polynomial."},{href:"beginner/3_SimpleRNN",src:"../lstm-illustrative.webp",caption:"Training a Simple LSTM",desc:"Learn how to define custom layers and train an RNN on time-series data."},{href:"beginner/4_SimpleChains",src:"../blas_optimizations.jpg",caption:"Use SimpleChains.jl as a Backend",desc:"Learn how to train small neural networks really fast on CPU."},{href:"beginner/5_OptimizationIntegration",src:"../optimization_integration.png",caption:"Fitting with Optimization.jl",desc:"Learn how to use Optimization.jl with Lux (on 
GPUs)."},{href:"https://luxdl.github.io/Boltz.jl/stable/tutorials/1_GettingStarted",src:"https://production-media.paperswithcode.com/datasets/ImageNet-0000000008-f2e87edd_Y0fT5zg.jpg",caption:"Pre-Built Deep Learning Models",desc:"Use Boltz.jl to load pre-built DL and SciML models."}],l=[{href:"intermediate/1_NeuralODE",src:"../mnist.jpg",caption:"MNIST Classification using Neural ODE",desc:"Train a Neural Ordinary Differential Equations to classify MNIST Images."},{href:"intermediate/2_BayesianNN",src:"https://github.com/TuringLang.png",caption:"Bayesian Neural Networks",desc:"Figure out how to use Probabilistic Programming Frameworks like Turing with Lux."},{href:"intermediate/3_HyperNet",src:"../hypernet.jpg",caption:"Training a HyperNetwork",desc:"Train a hypernetwork to work on multiple datasets by predicting NN parameters."},{href:"intermediate/4_PINN2DPDE",src:"../pinn_nested_ad.gif",caption:"Training a PINN",desc:"Train a PINN to solve 2D PDEs (using Nested AD)."},{href:"intermediate/5_ConvolutionalVAE",src:"../conditional_vae.png",caption:"Convolutional VAE for MNIST using Reactant",desc:"Train a Convolutional VAE to generate images from a latent space."},{href:"intermediate/6_GCN_Cora",src:"../gcn_cora.jpg",caption:"Graph Convolutional Network on Cora",desc:"Train a Graph Convolutional Network on Cora dataset."},{href:"intermediate/7_RealNVP",src:"../realnvp.png",caption:"Normalizing Flows for Density Estimation",desc:"Train a normalizing flow for density estimation on the Moons dataset."}],c=[{href:"advanced/1_GravitationalWaveForm",src:"../gravitational_waveform.png",caption:"Neural ODE to Model Gravitational Waveforms",desc:"Training a Neural ODE to fit simulated data of gravitational waveforms."},{href:"https://luxdl.github.io/Boltz.jl/stable/tutorials/2_SymbolicOptimalControl",src:"../symbolic_optimal_control.png",caption:"Optimal Control with Symbolic UDE",desc:"Train a UDE and replace a part of it with Symbolic 
Regression."}],h=[{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/ImageNet",src:"https://production-media.paperswithcode.com/datasets/ImageNet-0000000008-f2e87edd_Y0fT5zg.jpg",caption:"ImageNet Classification",desc:"Train Large Image Classifiers using Lux (on Distributed GPUs)."},{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/DDIM",src:"https://raw.githubusercontent.com/LuxDL/Lux.jl/main/examples/DDIM/assets/flowers_generated.png",caption:"Denoising Diffusion Implicit Model (DDIM)",desc:"Train a Diffusion Model to generate images from Gaussian noises."},{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/CIFAR10",src:"https://datasets.activeloop.ai/wp-content/uploads/2022/09/CIFAR-10-dataset-Activeloop-Platform-visualization-image-1.webp",caption:"Vision Models on CIFAR-10",desc:"Train different vision models on CIFAR-10 to 90% accuracy within 10 minutes."}],g=[{href:"https://docs.sciml.ai/Overview/stable/showcase/pinngpu/",src:"../pinn.gif",caption:"GPU-Accelerated Physics-Informed Neural Networks",desc:"Use Machine Learning (PINNs) to solve the Heat Equation PDE on a GPU."},{href:"https://docs.sciml.ai/DiffEqFlux/stable/examples/neural_ode_weather_forecast/",src:"../weather-neural-ode.gif",caption:"Weather Forecasting with Neural ODEs",desc:"Train a neural ODEs to a multidimensional weather dataset and use it for weather forecasting."},{href:"https://docs.sciml.ai/SciMLSensitivity/stable/examples/sde/SDE_control/",src:"../neural-sde.png",caption:"Controlling Stochastic Differential Equations",desc:"Control the time evolution of a continuously monitored qubit described by an SDE with multiplicative scalar noise."},{href:"https://github.com/Dale-Black/ComputerVisionTutorials.jl/",src:"https://raw.githubusercontent.com/Dale-Black/ComputerVisionTutorials.jl/main/assets/image-seg-green.jpeg",caption:"Medical Image Segmentation",desc:"Explore various aspects of deep learning for medical imaging and a comprehensive overview of Julia 
packages."},{href:"https://github.com/agdestein/NeuralClosureTutorials",src:"https://raw.githubusercontent.com/agdestein/NeuralClosureTutorials/main/assets/navier_stokes.gif",caption:"Neural PDE closures",desc:"Learn an unknown term in a PDE using convolutional neural networks and Fourier neural operators."}];return(B,a)=>(n(),r("div",null,[a[0]||(a[0]=e("h1",{id:"tutorials",tabindex:"-1"},[t("Tutorials "),e("a",{class:"header-anchor",href:"#tutorials","aria-label":'Permalink to "Tutorials"'},"​")],-1)),a[1]||(a[1]=e("h2",{id:"beginner-tutorials",tabindex:"-1"},[t("Beginner Tutorials "),e("a",{class:"header-anchor",href:"#beginner-tutorials","aria-label":'Permalink to "Beginner Tutorials"'},"​")],-1)),o(s,{images:i}),a[2]||(a[2]=e("h2",{id:"intermediate-tutorials",tabindex:"-1"},[t("Intermediate Tutorials "),e("a",{class:"header-anchor",href:"#intermediate-tutorials","aria-label":'Permalink to "Intermediate Tutorials"'},"​")],-1)),o(s,{images:l}),a[3]||(a[3]=e("h2",{id:"advanced-tutorials",tabindex:"-1"},[t("Advanced Tutorials "),e("a",{class:"header-anchor",href:"#advanced-tutorials","aria-label":'Permalink to "Advanced Tutorials"'},"​")],-1)),o(s,{images:c}),a[4]||(a[4]=e("h2",{id:"larger-models",tabindex:"-1"},[t("Larger Models "),e("a",{class:"header-anchor",href:"#larger-models","aria-label":'Permalink to "Larger Models"'},"​")],-1)),a[5]||(a[5]=e("div",{class:"warning custom-block"},[e("p",{class:"custom-block-title"},"WARNING"),e("p",null,"These models are part of the Lux examples, however, these are larger model that cannot be run on CI and aren't frequently tested. 
If you find a bug in one of these models, please open an issue or PR to fix it.")],-1)),o(s,{images:h}),a[6]||(a[6]=e("h2",{id:"selected-3rd-party-tutorials",tabindex:"-1"},[t("Selected 3rd Party Tutorials "),e("a",{class:"header-anchor",href:"#selected-3rd-party-tutorials","aria-label":'Permalink to "Selected 3rd Party Tutorials"'},"​")],-1)),a[7]||(a[7]=e("div",{class:"warning custom-block"},[e("p",{class:"custom-block-title"},"WARNING"),e("p",null,[t("These tutorials are developed by the community and may not be up-to-date with the latest version of "),e("code",null,"Lux.jl"),t(". Please refer to the official documentation for the most up-to-date information.")]),e("p",null,[t("Please open an issue (ideally both at "),e("code",null,"Lux.jl"),t(" and at the downstream linked package) if any of them are non-functional and we will try to get them updated.")])],-1)),o(s,{images:g}),a[8]||(a[8]=e("div",{class:"tip custom-block"},[e("p",{class:"custom-block-title"},"TIP"),e("p",null,[t("If you found an amazing tutorial showcasing "),e("code",null,"Lux.jl"),t(" online, or wrote one yourself, please open an issue or PR to add it to the list!")])],-1))]))}});export{F as __pageData,z as default}; +import{d,o as n,c as r,j as e,k as f,g as b,t as p,_ as m,F as _,C as w,b as v,K as N,a as t,G as o}from"./chunks/framework.BetCMmtc.js";const 
y={class:"img-box"},x=["href"],D=["src"],T={class:"transparent-box1"},P={class:"caption"},L={class:"transparent-box2"},I={class:"subcaption"},k={class:"opacity-low"},C=d({__name:"GalleryImage",props:{href:{},src:{},caption:{},desc:{}},setup(u){return(i,l)=>(n(),r("div",y,[e("a",{href:i.href},[e("img",{src:f(b)(i.src),height:"150px",alt:""},null,8,D),e("div",T,[e("div",P,[e("h2",null,p(i.caption),1)])]),e("div",L,[e("div",I,[e("p",k,p(i.desc),1)])])],8,x)]))}}),E=m(C,[["__scopeId","data-v-06a0366f"]]),j={class:"gallery-image"},S=d({__name:"Gallery",props:{images:{}},setup(u){return(i,l)=>(n(),r("div",j,[(n(!0),r(_,null,w(i.images,c=>(n(),v(E,N({ref_for:!0},c),null,16))),256))]))}}),s=m(S,[["__scopeId","data-v-578d61bc"]]),F=JSON.parse('{"title":"Tutorials","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/index.md","filePath":"tutorials/index.md","lastUpdated":null}'),M={name:"tutorials/index.md"},z=d({...M,setup(u){const i=[{href:"beginner/1_Basics",src:"https://picsum.photos/350/250?image=444",caption:"Julia & Lux for the Uninitiated",desc:"How to get started with Julia and Lux for those who have never used Julia before."},{href:"beginner/2_PolynomialFitting",src:"../mlp.webp",caption:"Fitting a Polynomial using MLP",desc:"Learn the Basics of Lux by fitting a Multi-Layer Perceptron to a Polynomial."},{href:"beginner/3_SimpleRNN",src:"../lstm-illustrative.webp",caption:"Training a Simple LSTM",desc:"Learn how to define custom layers and train an RNN on time-series data."},{href:"beginner/4_SimpleChains",src:"../blas_optimizations.jpg",caption:"Use SimpleChains.jl as a Backend",desc:"Learn how to train small neural networks really fast on CPU."},{href:"beginner/5_OptimizationIntegration",src:"../optimization_integration.png",caption:"Fitting with Optimization.jl",desc:"Learn how to use Optimization.jl with Lux (on 
GPUs)."},{href:"https://luxdl.github.io/Boltz.jl/stable/tutorials/1_GettingStarted",src:"https://production-media.paperswithcode.com/datasets/ImageNet-0000000008-f2e87edd_Y0fT5zg.jpg",caption:"Pre-Built Deep Learning Models",desc:"Use Boltz.jl to load pre-built DL and SciML models."}],l=[{href:"intermediate/1_NeuralODE",src:"../mnist.jpg",caption:"MNIST Classification using Neural ODE",desc:"Train a Neural Ordinary Differential Equations to classify MNIST Images."},{href:"intermediate/2_BayesianNN",src:"https://github.com/TuringLang.png",caption:"Bayesian Neural Networks",desc:"Figure out how to use Probabilistic Programming Frameworks like Turing with Lux."},{href:"intermediate/3_HyperNet",src:"../hypernet.jpg",caption:"Training a HyperNetwork",desc:"Train a hypernetwork to work on multiple datasets by predicting NN parameters."},{href:"intermediate/4_PINN2DPDE",src:"../pinn_nested_ad.gif",caption:"Training a PINN",desc:"Train a PINN to solve 2D PDEs (using Nested AD)."},{href:"intermediate/5_ConvolutionalVAE",src:"../conditional_vae.png",caption:"Convolutional VAE for MNIST using Reactant",desc:"Train a Convolutional VAE to generate images from a latent space."},{href:"intermediate/6_GCN_Cora",src:"../gcn_cora.jpg",caption:"Graph Convolutional Network on Cora",desc:"Train a Graph Convolutional Network on Cora dataset."},{href:"intermediate/7_RealNVP",src:"../realnvp.png",caption:"Normalizing Flows for Density Estimation",desc:"Train a normalizing flow for density estimation on the Moons dataset."}],c=[{href:"advanced/1_GravitationalWaveForm",src:"../gravitational_waveform.png",caption:"Neural ODE to Model Gravitational Waveforms",desc:"Training a Neural ODE to fit simulated data of gravitational waveforms."},{href:"https://luxdl.github.io/Boltz.jl/stable/tutorials/2_SymbolicOptimalControl",src:"../symbolic_optimal_control.png",caption:"Optimal Control with Symbolic UDE",desc:"Train a UDE and replace a part of it with Symbolic 
Regression."}],h=[{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/ImageNet",src:"https://production-media.paperswithcode.com/datasets/ImageNet-0000000008-f2e87edd_Y0fT5zg.jpg",caption:"ImageNet Classification",desc:"Train Large Image Classifiers using Lux (on Distributed GPUs)."},{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/DDIM",src:"https://raw.githubusercontent.com/LuxDL/Lux.jl/main/examples/DDIM/assets/flowers_generated.png",caption:"Denoising Diffusion Implicit Model (DDIM)",desc:"Train a Diffusion Model to generate images from Gaussian noises."},{href:"https://github.com/LuxDL/Lux.jl/tree/main/examples/CIFAR10",src:"https://datasets.activeloop.ai/wp-content/uploads/2022/09/CIFAR-10-dataset-Activeloop-Platform-visualization-image-1.webp",caption:"Vision Models on CIFAR-10",desc:"Train different vision models on CIFAR-10 to 90% accuracy within 10 minutes."}],g=[{href:"https://docs.sciml.ai/Overview/stable/showcase/pinngpu/",src:"../pinn.gif",caption:"GPU-Accelerated Physics-Informed Neural Networks",desc:"Use Machine Learning (PINNs) to solve the Heat Equation PDE on a GPU."},{href:"https://docs.sciml.ai/DiffEqFlux/stable/examples/neural_ode_weather_forecast/",src:"../weather-neural-ode.gif",caption:"Weather Forecasting with Neural ODEs",desc:"Train a neural ODEs to a multidimensional weather dataset and use it for weather forecasting."},{href:"https://docs.sciml.ai/SciMLSensitivity/stable/examples/sde/SDE_control/",src:"../neural-sde.png",caption:"Controlling Stochastic Differential Equations",desc:"Control the time evolution of a continuously monitored qubit described by an SDE with multiplicative scalar noise."},{href:"https://github.com/Dale-Black/ComputerVisionTutorials.jl/",src:"https://raw.githubusercontent.com/Dale-Black/ComputerVisionTutorials.jl/main/assets/image-seg-green.jpeg",caption:"Medical Image Segmentation",desc:"Explore various aspects of deep learning for medical imaging and a comprehensive overview of Julia 
packages."},{href:"https://github.com/agdestein/NeuralClosureTutorials",src:"https://raw.githubusercontent.com/agdestein/NeuralClosureTutorials/main/assets/navier_stokes.gif",caption:"Neural PDE closures",desc:"Learn an unknown term in a PDE using convolutional neural networks and Fourier neural operators."}];return(B,a)=>(n(),r("div",null,[a[0]||(a[0]=e("h1",{id:"tutorials",tabindex:"-1"},[t("Tutorials "),e("a",{class:"header-anchor",href:"#tutorials","aria-label":'Permalink to "Tutorials"'},"​")],-1)),a[1]||(a[1]=e("h2",{id:"beginner-tutorials",tabindex:"-1"},[t("Beginner Tutorials "),e("a",{class:"header-anchor",href:"#beginner-tutorials","aria-label":'Permalink to "Beginner Tutorials"'},"​")],-1)),o(s,{images:i}),a[2]||(a[2]=e("h2",{id:"intermediate-tutorials",tabindex:"-1"},[t("Intermediate Tutorials "),e("a",{class:"header-anchor",href:"#intermediate-tutorials","aria-label":'Permalink to "Intermediate Tutorials"'},"​")],-1)),o(s,{images:l}),a[3]||(a[3]=e("h2",{id:"advanced-tutorials",tabindex:"-1"},[t("Advanced Tutorials "),e("a",{class:"header-anchor",href:"#advanced-tutorials","aria-label":'Permalink to "Advanced Tutorials"'},"​")],-1)),o(s,{images:c}),a[4]||(a[4]=e("h2",{id:"larger-models",tabindex:"-1"},[t("Larger Models "),e("a",{class:"header-anchor",href:"#larger-models","aria-label":'Permalink to "Larger Models"'},"​")],-1)),a[5]||(a[5]=e("div",{class:"warning custom-block"},[e("p",{class:"custom-block-title"},"WARNING"),e("p",null,"These models are part of the Lux examples, however, these are larger model that cannot be run on CI and aren't frequently tested. 
If you find a bug in one of these models, please open an issue or PR to fix it.")],-1)),o(s,{images:h}),a[6]||(a[6]=e("h2",{id:"selected-3rd-party-tutorials",tabindex:"-1"},[t("Selected 3rd Party Tutorials "),e("a",{class:"header-anchor",href:"#selected-3rd-party-tutorials","aria-label":'Permalink to "Selected 3rd Party Tutorials"'},"​")],-1)),a[7]||(a[7]=e("div",{class:"warning custom-block"},[e("p",{class:"custom-block-title"},"WARNING"),e("p",null,[t("These tutorials are developed by the community and may not be up-to-date with the latest version of "),e("code",null,"Lux.jl"),t(". Please refer to the official documentation for the most up-to-date information.")]),e("p",null,[t("Please open an issue (ideally both at "),e("code",null,"Lux.jl"),t(" and at the downstream linked package) if any of them are non-functional and we will try to get them updated.")])],-1)),o(s,{images:g}),a[8]||(a[8]=e("div",{class:"tip custom-block"},[e("p",{class:"custom-block-title"},"TIP"),e("p",null,[t("If you found an amazing tutorial showcasing "),e("code",null,"Lux.jl"),t(" online, or wrote one yourself, please open an issue or PR to add it to the list!")])],-1))]))}});export{F as __pageData,z as default}; diff --git a/dev/assets/tutorials_intermediate_1_NeuralODE.md.Cq8Z3u1S.js b/dev/assets/tutorials_intermediate_1_NeuralODE.md.BenIrmiX.js similarity index 81% rename from dev/assets/tutorials_intermediate_1_NeuralODE.md.Cq8Z3u1S.js rename to dev/assets/tutorials_intermediate_1_NeuralODE.md.BenIrmiX.js index 2513ae2814..8e65610c8a 100644 --- a/dev/assets/tutorials_intermediate_1_NeuralODE.md.Cq8Z3u1S.js +++ b/dev/assets/tutorials_intermediate_1_NeuralODE.md.BenIrmiX.js @@ -1,156 +1,258 @@ -import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const y=JSON.parse('{"title":"MNIST Classification using Neural 
ODEs","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/1_NeuralODE.md","filePath":"tutorials/intermediate/1_NeuralODE.md","lastUpdated":null}'),t={name:"tutorials/intermediate/1_NeuralODE.md"};function l(p,s,h,r,k,d){return e(),i("div",null,s[0]||(s[0]=[n(`

      MNIST Classification using Neural ODEs

      To understand Neural ODEs, users should look up these lecture notes. We recommend users to directly use DiffEqFlux.jl, instead of implementing Neural ODEs from scratch.

      Package Imports

      julia
      using Lux, ComponentArrays, SciMLSensitivity, LuxCUDA, Optimisers, OrdinaryDiffEqTsit5,
      +import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.BetCMmtc.js";const o=JSON.parse('{"title":"MNIST Classification using Neural ODEs","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/1_NeuralODE.md","filePath":"tutorials/intermediate/1_NeuralODE.md","lastUpdated":null}'),p={name:"tutorials/intermediate/1_NeuralODE.md"};function l(t,s,h,r,k,c){return e(),i("div",null,s[0]||(s[0]=[n(`

      MNIST Classification using Neural ODEs

      To understand Neural ODEs, users should look up these lecture notes. We recommend users to directly use DiffEqFlux.jl, instead of implementing Neural ODEs from scratch.

      Package Imports

      julia
      using Lux, ComponentArrays, SciMLSensitivity, LuxCUDA, Optimisers, OrdinaryDiffEqTsit5,
             Random, Statistics, Zygote, OneHotArrays, InteractiveUtils, Printf
       using MLDatasets: MNIST
       using MLUtils: DataLoader, splitobs
       
      -CUDA.allowscalar(false)
      Precompiling SciMLSensitivity...
      -    459.9 ms  ✓ EnumX
      -    405.0 ms  ✓ Parameters
      -    416.4 ms  ✓ RuntimeGeneratedFunctions
      -    856.1 ms  ✓ DifferentiationInterface
      -    950.2 ms  ✓ KLU
      -    901.9 ms  ✓ PDMats
      -    373.5 ms  ✓ SciMLStructures
      -    496.2 ms  ✓ TruncatedStacktraces
      -   1156.2 ms  ✓ Sparspak
      -   6366.1 ms  ✓ Krylov
      -   1855.0 ms  ✓ SciMLOperators
      -    595.0 ms  ✓ ResettableStacks
      -   1013.6 ms  ✓ QuadGK
      -    493.1 ms  ✓ FunctionProperties
      -   1255.9 ms  ✓ HypergeometricFunctions
      -    805.0 ms  ✓ FastPower → FastPowerForwardDiffExt
      -    759.7 ms  ✓ PreallocationTools
      -   1064.1 ms  ✓ NLSolversBase
      -   1103.1 ms  ✓ FastPower → FastPowerTrackerExt
      -    773.9 ms  ✓ FastBroadcast
      -  11893.3 ms  ✓ ArrayLayouts
      -   3497.7 ms  ✓ FastPower → FastPowerReverseDiffExt
      -   1467.7 ms  ✓ SymbolicIndexingInterface
      -   3983.2 ms  ✓ TriangularSolve
      -    606.4 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      -    423.7 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      -   3680.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceReverseDiffExt
      -    423.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      -   1163.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      -    836.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      -   7196.5 ms  ✓ FastPower → FastPowerEnzymeExt
      -    660.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      -   1736.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceZygoteExt
      -    674.2 ms  ✓ FillArrays → FillArraysPDMatsExt
      -    578.1 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      -   1437.5 ms  ✓ Tracker → TrackerPDMatsExt
      -    828.0 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      -   1865.3 ms  ✓ StatsFuns
      -   7263.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceEnzymeExt
      -   3452.8 ms  ✓ PreallocationTools → PreallocationToolsReverseDiffExt
      -    778.5 ms  ✓ ArrayLayouts → ArrayLayoutsSparseArraysExt
      -   6674.2 ms  ✓ QuadGK → QuadGKEnzymeExt
      -   1848.5 ms  ✓ LineSearches
      -    684.6 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      -   2217.5 ms  ✓ RecursiveArrayTools
      -   1593.9 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -   2524.6 ms  ✓ LazyArrays
      -   5109.7 ms  ✓ Distributions
      -    924.7 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsFastBroadcastExt
      -   3276.1 ms  ✓ Optim
      -    661.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      -    934.5 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      -   1268.1 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      -    789.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      -   3277.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsZygoteExt
      -   1295.5 ms  ✓ LazyArrays → LazyArraysStaticArraysExt
      -   1439.2 ms  ✓ Distributions → DistributionsTestExt
      -  16123.6 ms  ✓ RecursiveFactorization
      -   1410.3 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   5446.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsReverseDiffExt
      -  12207.4 ms  ✓ SciMLBase
      -   1132.2 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      -   2826.2 ms  ✓ SciMLJacobianOperators
      -   3556.3 ms  ✓ SciMLBase → SciMLBaseZygoteExt
      -   6279.1 ms  ✓ DiffEqBase
      -   1580.8 ms  ✓ DiffEqBase → DiffEqBaseChainRulesCoreExt
      -   2457.9 ms  ✓ DiffEqBase → DiffEqBaseTrackerExt
      -   1993.1 ms  ✓ DiffEqBase → DiffEqBaseDistributionsExt
      -   1730.4 ms  ✓ DiffEqBase → DiffEqBaseSparseArraysExt
      -   4864.7 ms  ✓ DiffEqBase → DiffEqBaseReverseDiffExt
      -   4526.5 ms  ✓ DiffEqCallbacks
      -   3928.1 ms  ✓ DiffEqNoiseProcess
      -   5087.3 ms  ✓ DiffEqNoiseProcess → DiffEqNoiseProcessReverseDiffExt
      -  18101.4 ms  ✓ DiffEqBase → DiffEqBaseEnzymeExt
      -  35745.3 ms  ✓ LinearSolve
      -   2708.0 ms  ✓ LinearSolve → LinearSolveRecursiveArrayToolsExt
      -   2747.1 ms  ✓ LinearSolve → LinearSolveEnzymeExt
      -   4383.7 ms  ✓ LinearSolve → LinearSolveKernelAbstractionsExt
      -  30389.4 ms  ✓ SciMLSensitivity
      -  79 dependencies successfully precompiled in 125 seconds. 212 already precompiled.
      +CUDA.allowscalar(false)
      Precompiling Lux...
      +   5857.2 ms  ✓ LuxLib
      +   9238.5 ms  ✓ Lux
      +  2 dependencies successfully precompiled in 16 seconds. 107 already precompiled.
      +Precompiling LuxComponentArraysExt...
      +   1552.9 ms  ✓ Lux → LuxComponentArraysExt
      +  1 dependency successfully precompiled in 2 seconds. 113 already precompiled.
      +Precompiling SciMLSensitivity...
      +    385.5 ms  ✓ MuladdMacro
      +    526.0 ms  ✓ PositiveFactorizations
      +    328.0 ms  ✓ CommonSolve
      +    402.5 ms  ✓ PoissonRandom
      +    935.9 ms  ✓ Cassette
      +    428.7 ms  ✓ RuntimeGeneratedFunctions
      +    344.0 ms  ✓ FastPower
      +    434.7 ms  ✓ Parameters
      +    362.3 ms  ✓ FunctionWrappersWrappers
      +    873.3 ms  ✓ DifferentiationInterface
      +   1244.0 ms  ✓ FastLapackInterface
      +    995.4 ms  ✓ KLU
      +    387.8 ms  ✓ SciMLStructures
      +    504.2 ms  ✓ TruncatedStacktraces
      +   1183.1 ms  ✓ Sparspak
      +   1883.5 ms  ✓ SciMLOperators
      +   6120.6 ms  ✓ Krylov
      +   5447.4 ms  ✓ ChainRules
      +  11841.4 ms  ✓ ArrayLayouts
      +    597.8 ms  ✓ ResettableStacks
      +    743.8 ms  ✓ PreallocationTools
      +   1024.7 ms  ✓ NLSolversBase
      +    713.8 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      +    727.0 ms  ✓ FastBroadcast
      +   1333.3 ms  ✓ Tracker → TrackerPDMatsExt
      +    525.4 ms  ✓ FunctionProperties
      +   8341.0 ms  ✓ Expronicon
      +   1083.1 ms  ✓ FastPower → FastPowerTrackerExt
      +   1507.4 ms  ✓ SymbolicIndexingInterface
      +   3459.6 ms  ✓ TriangularSolve
      +    642.5 ms  ✓ FastPower → FastPowerForwardDiffExt
      +   3357.1 ms  ✓ FastPower → FastPowerReverseDiffExt
      +    598.7 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      +    426.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      +   3519.7 ms  ✓ DifferentiationInterface → DifferentiationInterfaceReverseDiffExt
      +   1109.0 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      +   5732.7 ms  ✓ FastPower → FastPowerEnzymeExt
      +    411.8 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      +    651.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      +    797.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      +    568.2 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      +    824.9 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      +    799.5 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      +    796.6 ms  ✓ ArrayLayouts → ArrayLayoutsSparseArraysExt
      +   1782.5 ms  ✓ LineSearches
      +   3351.8 ms  ✓ PreallocationTools → PreallocationToolsReverseDiffExt
      +   5880.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceEnzymeExt
      +   2130.5 ms  ✓ RecursiveArrayTools
      +   2520.3 ms  ✓ LazyArrays
      +   3391.5 ms  ✓ Optim
      +    805.3 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsFastBroadcastExt
      +    627.5 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      +    886.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      +   1205.0 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      +    771.3 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      +  14634.6 ms  ✓ RecursiveFactorization
      +   1312.4 ms  ✓ LazyArrays → LazyArraysStaticArraysExt
      +  12143.8 ms  ✓ SciMLBase
      +   1101.2 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      +   2787.0 ms  ✓ SciMLJacobianOperators
      +   6309.2 ms  ✓ DiffEqBase
      +  34179.3 ms  ✓ Zygote
      +   1580.3 ms  ✓ DiffEqBase → DiffEqBaseChainRulesCoreExt
      +   2393.5 ms  ✓ DiffEqBase → DiffEqBaseTrackerExt
      +   1907.2 ms  ✓ DiffEqBase → DiffEqBaseDistributionsExt
      +   4967.0 ms  ✓ DiffEqBase → DiffEqBaseReverseDiffExt
      +   1645.7 ms  ✓ DiffEqBase → DiffEqBaseSparseArraysExt
      +   1934.6 ms  ✓ Zygote → ZygoteTrackerExt
      +   1601.0 ms  ✓ DifferentiationInterface → DifferentiationInterfaceZygoteExt
      +   4504.4 ms  ✓ DiffEqCallbacks
      +   3313.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsZygoteExt
      +   3574.1 ms  ✓ SciMLBase → SciMLBaseZygoteExt
      +   3803.9 ms  ✓ DiffEqNoiseProcess
      +   8492.1 ms  ✓ DiffEqBase → DiffEqBaseEnzymeExt
      +   5526.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsReverseDiffExt
      +   4875.9 ms  ✓ DiffEqNoiseProcess → DiffEqNoiseProcessReverseDiffExt
      +  33232.0 ms  ✓ LinearSolve
      +   2533.3 ms  ✓ LinearSolve → LinearSolveRecursiveArrayToolsExt
      +   2584.2 ms  ✓ LinearSolve → LinearSolveEnzymeExt
      +   4250.5 ms  ✓ LinearSolve → LinearSolveKernelAbstractionsExt
      +  21623.0 ms  ✓ SciMLSensitivity
      +  81 dependencies successfully precompiled in 116 seconds. 210 already precompiled.
       Precompiling MLDataDevicesRecursiveArrayToolsExt...
      -    637.4 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
      +    605.5 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
         1 dependency successfully precompiled in 1 seconds. 47 already precompiled.
       Precompiling ComponentArraysRecursiveArrayToolsExt...
      -    726.9 ms  ✓ ComponentArrays → ComponentArraysRecursiveArrayToolsExt
      +    697.3 ms  ✓ ComponentArrays → ComponentArraysRecursiveArrayToolsExt
         1 dependency successfully precompiled in 1 seconds. 69 already precompiled.
       Precompiling ComponentArraysSciMLBaseExt...
      -   1158.7 ms  ✓ ComponentArrays → ComponentArraysSciMLBaseExt
      +   1117.1 ms  ✓ ComponentArrays → ComponentArraysSciMLBaseExt
         1 dependency successfully precompiled in 1 seconds. 98 already precompiled.
      +Precompiling LuxLibSLEEFPiratesExt...
      +   2436.5 ms  ✓ LuxLib → LuxLibSLEEFPiratesExt
      +  1 dependency successfully precompiled in 3 seconds. 97 already precompiled.
      +Precompiling LuxLibLoopVectorizationExt...
      +   3957.6 ms  ✓ LuxLib → LuxLibLoopVectorizationExt
      +  1 dependency successfully precompiled in 4 seconds. 105 already precompiled.
      +Precompiling LuxLibEnzymeExt...
      +   1322.0 ms  ✓ LuxLib → LuxLibEnzymeExt
      +  1 dependency successfully precompiled in 2 seconds. 130 already precompiled.
      +Precompiling LuxEnzymeExt...
      +   6746.2 ms  ✓ Lux → LuxEnzymeExt
      +  1 dependency successfully precompiled in 7 seconds. 146 already precompiled.
      +Precompiling LuxLibTrackerExt...
      +   1136.8 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      +   3346.0 ms  ✓ LuxLib → LuxLibTrackerExt
      +  2 dependencies successfully precompiled in 4 seconds. 100 already precompiled.
      +Precompiling LuxTrackerExt...
      +   2194.9 ms  ✓ Lux → LuxTrackerExt
      +  1 dependency successfully precompiled in 2 seconds. 114 already precompiled.
      +Precompiling LuxLibReverseDiffExt...
      +   4156.4 ms  ✓ LuxLib → LuxLibReverseDiffExt
      +  1 dependency successfully precompiled in 4 seconds. 99 already precompiled.
      +Precompiling LuxReverseDiffExt...
      +   4350.1 ms  ✓ Lux → LuxReverseDiffExt
      +  1 dependency successfully precompiled in 5 seconds. 115 already precompiled.
      +Precompiling MLDataDevicesChainRulesExt...
      +    824.9 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      +  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      +Precompiling MLDataDevicesZygoteExt...
      +   1592.5 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
      +  1 dependency successfully precompiled in 2 seconds. 109 already precompiled.
      +Precompiling LuxZygoteExt...
      +   1749.0 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      +   3082.2 ms  ✓ Lux → LuxZygoteExt
      +  2 dependencies successfully precompiled in 3 seconds. 165 already precompiled.
      +Precompiling ComponentArraysZygoteExt...
      +   1576.4 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      +  1 dependency successfully precompiled in 2 seconds. 117 already precompiled.
       Precompiling LuxCUDA...
      -   5348.9 ms  ✓ LuxCUDA
      -  1 dependency successfully precompiled in 6 seconds. 101 already precompiled.
      +  45492.8 ms  ✓ CUDA
      +   5210.1 ms  ✓ Atomix → AtomixCUDAExt
      +   7985.1 ms  ✓ cuDNN
      +   5372.0 ms  ✓ LuxCUDA
      +  4 dependencies successfully precompiled in 64 seconds. 98 already precompiled.
      +Precompiling EnzymeBFloat16sExt...
      +   6039.1 ms  ✓ Enzyme → EnzymeBFloat16sExt
      +  1 dependency successfully precompiled in 6 seconds. 47 already precompiled.
      +Precompiling ZygoteColorsExt...
      +   1863.8 ms  ✓ Zygote → ZygoteColorsExt
      +  1 dependency successfully precompiled in 2 seconds. 105 already precompiled.
      +Precompiling ArrayInterfaceCUDAExt...
      +   4981.7 ms  ✓ ArrayInterface → ArrayInterfaceCUDAExt
      +  1 dependency successfully precompiled in 5 seconds. 103 already precompiled.
      +Precompiling NNlibCUDAExt...
      +   4965.1 ms  ✓ CUDA → ChainRulesCoreExt
      +   5450.7 ms  ✓ NNlib → NNlibCUDAExt
      +  2 dependencies successfully precompiled in 6 seconds. 104 already precompiled.
      +Precompiling MLDataDevicesCUDAExt...
      +   5160.7 ms  ✓ MLDataDevices → MLDataDevicesCUDAExt
      +  1 dependency successfully precompiled in 6 seconds. 106 already precompiled.
      +Precompiling LuxLibCUDAExt...
      +   5047.4 ms  ✓ CUDA → SpecialFunctionsExt
      +   5202.7 ms  ✓ CUDA → EnzymeCoreExt
      +   5519.2 ms  ✓ LuxLib → LuxLibCUDAExt
      +  3 dependencies successfully precompiled in 6 seconds. 169 already precompiled.
       Precompiling DiffEqBaseCUDAExt...
      -   5925.6 ms  ✓ DiffEqBase → DiffEqBaseCUDAExt
      +   5613.2 ms  ✓ DiffEqBase → DiffEqBaseCUDAExt
         1 dependency successfully precompiled in 6 seconds. 186 already precompiled.
       Precompiling LinearSolveCUDAExt...
      -   7171.0 ms  ✓ LinearSolve → LinearSolveCUDAExt
      -  1 dependency successfully precompiled in 8 seconds. 189 already precompiled.
      +   6913.8 ms  ✓ LinearSolve → LinearSolveCUDAExt
      +  1 dependency successfully precompiled in 7 seconds. 189 already precompiled.
      +Precompiling WeightInitializersCUDAExt...
      +   4884.7 ms  ✓ WeightInitializers → WeightInitializersCUDAExt
      +  1 dependency successfully precompiled in 5 seconds. 111 already precompiled.
      +Precompiling NNlibCUDACUDNNExt...
      +   5201.0 ms  ✓ NNlib → NNlibCUDACUDNNExt
      +  1 dependency successfully precompiled in 6 seconds. 108 already precompiled.
      +Precompiling MLDataDevicescuDNNExt...
      +   4865.1 ms  ✓ MLDataDevices → MLDataDevicescuDNNExt
      +  1 dependency successfully precompiled in 5 seconds. 109 already precompiled.
      +Precompiling LuxLibcuDNNExt...
      +   5822.7 ms  ✓ LuxLib → LuxLibcuDNNExt
      +  1 dependency successfully precompiled in 6 seconds. 176 already precompiled.
       Precompiling OrdinaryDiffEqTsit5...
      -   4612.7 ms  ✓ OrdinaryDiffEqCore
      -   1516.0 ms  ✓ OrdinaryDiffEqCore → OrdinaryDiffEqCoreEnzymeCoreExt
      -   7501.0 ms  ✓ OrdinaryDiffEqTsit5
      -  3 dependencies successfully precompiled in 14 seconds. 122 already precompiled.
      +    357.0 ms  ✓ SimpleUnPack
      +   4989.3 ms  ✓ OrdinaryDiffEqCore
      +   1538.3 ms  ✓ OrdinaryDiffEqCore → OrdinaryDiffEqCoreEnzymeCoreExt
      +   7349.3 ms  ✓ OrdinaryDiffEqTsit5
      +  4 dependencies successfully precompiled in 14 seconds. 121 already precompiled.
      +Precompiling OneHotArrays...
      +    975.6 ms  ✓ OneHotArrays
      +  1 dependency successfully precompiled in 1 seconds. 28 already precompiled.
      +Precompiling MLDataDevicesOneHotArraysExt...
      +    730.6 ms  ✓ MLDataDevices → MLDataDevicesOneHotArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 35 already precompiled.
       Precompiling MLDatasets...
      -    359.3 ms  ✓ Glob
      -    400.3 ms  ✓ WorkerUtilities
      -    428.1 ms  ✓ BufferedStreams
      -    324.8 ms  ✓ SimpleBufferStream
      -    302.1 ms  ✓ PackageExtensionCompat
      -    563.8 ms  ✓ URIs
      -    340.6 ms  ✓ BitFlags
      -    622.7 ms  ✓ GZip
      -    686.1 ms  ✓ ConcurrentUtilities
      -    611.1 ms  ✓ ZipFile
      -    589.6 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      -    634.9 ms  ✓ Accessors → UnitfulExt
      -    335.9 ms  ✓ InternedStrings
      -    478.0 ms  ✓ ExceptionUnwrapping
      -   2085.3 ms  ✓ ColorVectorSpace
      -   1449.8 ms  ✓ MPICH_jll
      -    858.5 ms  ✓ WeakRefStrings
      -    576.8 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      -   2333.7 ms  ✓ AtomsBase
      -   1187.5 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -    458.7 ms  ✓ StridedViews
      -   2056.4 ms  ✓ OpenSSL
      -   1493.2 ms  ✓ NPZ
      -  11070.4 ms  ✓ JSON3
      -   3396.8 ms  ✓ ColorSchemes
      -   1577.2 ms  ✓ HDF5_jll
      -  19765.7 ms  ✓ ImageCore
      -   2412.2 ms  ✓ Chemfiles
      -   2510.7 ms  ✓ Pickle
      -  19761.2 ms  ✓ CSV
      -  34760.8 ms  ✓ JLD2
      -   2154.8 ms  ✓ ImageBase
      -   2026.5 ms  ✓ ImageShow
      -   7672.4 ms  ✓ HDF5
      -   2433.0 ms  ✓ MAT
      -  19388.6 ms  ✓ HTTP
      -   1917.2 ms  ✓ FileIO → HTTPExt
      -   3092.3 ms  ✓ DataDeps
      -   9295.4 ms  ✓ MLDatasets
      -  39 dependencies successfully precompiled in 67 seconds. 160 already precompiled.
      +    381.4 ms  ✓ Glob
      +    427.4 ms  ✓ WorkerUtilities
      +    486.3 ms  ✓ BufferedStreams
      +    372.1 ms  ✓ SimpleBufferStream
      +    621.4 ms  ✓ URIs
      +    504.6 ms  ✓ CodecZlib
      +    366.5 ms  ✓ PackageExtensionCompat
      +    386.4 ms  ✓ BitFlags
      +    698.9 ms  ✓ GZip
      +    733.9 ms  ✓ ConcurrentUtilities
      +    630.0 ms  ✓ ZipFile
      +    827.1 ms  ✓ StructTypes
      +   1055.6 ms  ✓ MbedTLS
      +    590.9 ms  ✓ MPIPreferences
      +    369.7 ms  ✓ InternedStrings
      +    516.6 ms  ✓ ExceptionUnwrapping
      +   2347.6 ms  ✓ PeriodicTable
      +   2960.5 ms  ✓ UnitfulAtomic
      +    628.9 ms  ✓ Chemfiles_jll
      +    511.7 ms  ✓ MicrosoftMPI_jll
      +    664.1 ms  ✓ libaec_jll
      +    573.5 ms  ✓ StringEncodings
      +    781.0 ms  ✓ WeakRefStrings
      +   1457.6 ms  ✓ Transducers → TransducersDataFramesExt
      +   1964.6 ms  ✓ ImageShow
      +   1669.9 ms  ✓ BangBang → BangBangDataFramesExt
      +    470.1 ms  ✓ StridedViews
      +   1537.2 ms  ✓ NPZ
      +   1928.1 ms  ✓ OpenSSL
      +   1157.1 ms  ✓ OpenMPI_jll
      +   1512.1 ms  ✓ MPICH_jll
      +   6401.7 ms  ✓ MLUtils
      +   1176.0 ms  ✓ MPItrampoline_jll
      +   2302.3 ms  ✓ AtomsBase
      +   2416.3 ms  ✓ Pickle
      +   9985.4 ms  ✓ JSON3
      +   1541.5 ms  ✓ HDF5_jll
      +   2277.5 ms  ✓ Chemfiles
      +   7572.7 ms  ✓ HDF5
      +  17252.6 ms  ✓ CSV
      +   2666.7 ms  ✓ MAT
      +  19080.0 ms  ✓ HTTP
      +   2096.1 ms  ✓ FileIO → HTTPExt
      +   3299.8 ms  ✓ DataDeps
      +   9534.9 ms  ✓ MLDatasets
      +  45 dependencies successfully precompiled in 51 seconds. 154 already precompiled.
       Precompiling TransducersLazyArraysExt...
      -   1239.6 ms  ✓ Transducers → TransducersLazyArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.

      Loading MNIST

      julia
      function loadmnist(batchsize, train_split)
      +   1227.9 ms  ✓ Transducers → TransducersLazyArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.
      +Precompiling MLDataDevicesMLUtilsExt...
      +   1745.8 ms  ✓ MLDataDevices → MLDataDevicesMLUtilsExt
      +  1 dependency successfully precompiled in 2 seconds. 102 already precompiled.
      +Precompiling LuxMLUtilsExt...
      +   2170.9 ms  ✓ Lux → LuxMLUtilsExt
      +  1 dependency successfully precompiled in 3 seconds. 167 already precompiled.

      Loading MNIST

      julia
      function loadmnist(batchsize, train_split)
           # Load MNIST: Only 1500 for demonstration purposes
           N = parse(Bool, get(ENV, "CI", "false")) ? 1500 : nothing
           dataset = MNIST(; split=:train)
      @@ -259,47 +361,47 @@ import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const y
           end
       end
       
      -train(NeuralODECompact)
      [1/9]	Time 141.6985s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.6474s	Training Accuracy: 58.22222%	Test Accuracy: 57.33333%
      -[3/9]	Time 0.7605s	Training Accuracy: 67.85185%	Test Accuracy: 70.66667%
      -[4/9]	Time 0.5718s	Training Accuracy: 74.29630%	Test Accuracy: 74.66667%
      -[5/9]	Time 0.6067s	Training Accuracy: 76.29630%	Test Accuracy: 76.00000%
      -[6/9]	Time 0.5708s	Training Accuracy: 78.74074%	Test Accuracy: 80.00000%
      -[7/9]	Time 0.5745s	Training Accuracy: 82.22222%	Test Accuracy: 81.33333%
      -[8/9]	Time 0.5886s	Training Accuracy: 83.62963%	Test Accuracy: 83.33333%
      -[9/9]	Time 0.5742s	Training Accuracy: 85.18519%	Test Accuracy: 82.66667%
      julia
      train(NeuralODE)
      [1/9]	Time 35.3338s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.5750s	Training Accuracy: 57.18519%	Test Accuracy: 57.33333%
      -[3/9]	Time 0.7617s	Training Accuracy: 68.37037%	Test Accuracy: 68.00000%
      -[4/9]	Time 0.5737s	Training Accuracy: 73.77778%	Test Accuracy: 75.33333%
      -[5/9]	Time 0.5843s	Training Accuracy: 76.14815%	Test Accuracy: 77.33333%
      -[6/9]	Time 0.8359s	Training Accuracy: 79.48148%	Test Accuracy: 80.66667%
      -[7/9]	Time 0.5807s	Training Accuracy: 81.25926%	Test Accuracy: 80.66667%
      -[8/9]	Time 0.5803s	Training Accuracy: 83.40741%	Test Accuracy: 82.66667%
      -[9/9]	Time 0.5896s	Training Accuracy: 84.81481%	Test Accuracy: 82.00000%

We can also change the sensealg and train the model! GaussAdjoint allows you to use an arbitrary parameter structure, not just a flat vector (ComponentArray).

      julia
      train(NeuralODE; sensealg=GaussAdjoint(; autojacvec=ZygoteVJP()), use_named_tuple=true)
      [1/9]	Time 42.9532s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.5744s	Training Accuracy: 58.44444%	Test Accuracy: 58.00000%
      -[3/9]	Time 0.5544s	Training Accuracy: 66.96296%	Test Accuracy: 68.00000%
      -[4/9]	Time 0.5532s	Training Accuracy: 72.44444%	Test Accuracy: 73.33333%
      -[5/9]	Time 0.7753s	Training Accuracy: 76.37037%	Test Accuracy: 76.00000%
      -[6/9]	Time 0.5458s	Training Accuracy: 78.81481%	Test Accuracy: 79.33333%
      -[7/9]	Time 0.5535s	Training Accuracy: 80.51852%	Test Accuracy: 81.33333%
      -[8/9]	Time 0.5598s	Training Accuracy: 82.74074%	Test Accuracy: 83.33333%
      -[9/9]	Time 0.5679s	Training Accuracy: 85.25926%	Test Accuracy: 82.66667%

But remember, some AD backends like ReverseDiff are not GPU compatible. For a model of this size, you will notice that training time is significantly lower when training on the CPU than on the GPU.

      julia
      train(NeuralODE; sensealg=InterpolatingAdjoint(; autojacvec=ReverseDiffVJP()), cpu=true)
      [1/9]	Time 109.8302s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 17.2264s	Training Accuracy: 58.74074%	Test Accuracy: 56.66667%
      -[3/9]	Time 17.9930s	Training Accuracy: 69.92593%	Test Accuracy: 71.33333%
      -[4/9]	Time 15.8127s	Training Accuracy: 72.81481%	Test Accuracy: 74.00000%
      -[5/9]	Time 14.3418s	Training Accuracy: 76.37037%	Test Accuracy: 78.66667%
      -[6/9]	Time 17.5343s	Training Accuracy: 79.03704%	Test Accuracy: 80.66667%
      -[7/9]	Time 16.2169s	Training Accuracy: 81.62963%	Test Accuracy: 80.66667%
      -[8/9]	Time 17.0311s	Training Accuracy: 83.33333%	Test Accuracy: 80.00000%
      -[9/9]	Time 15.2291s	Training Accuracy: 85.40741%	Test Accuracy: 82.00000%

      For completeness, let's also test out discrete sensitivities!

      julia
      train(NeuralODE; sensealg=ReverseDiffAdjoint(), cpu=true)
      [1/9]	Time 54.7833s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 28.2044s	Training Accuracy: 58.66667%	Test Accuracy: 57.33333%
      -[3/9]	Time 28.1614s	Training Accuracy: 69.70370%	Test Accuracy: 71.33333%
      -[4/9]	Time 28.1398s	Training Accuracy: 72.74074%	Test Accuracy: 74.00000%
      -[5/9]	Time 24.6311s	Training Accuracy: 76.14815%	Test Accuracy: 78.66667%
      -[6/9]	Time 26.7294s	Training Accuracy: 79.03704%	Test Accuracy: 80.66667%
      -[7/9]	Time 23.7177s	Training Accuracy: 81.55556%	Test Accuracy: 80.66667%
      -[8/9]	Time 23.6944s	Training Accuracy: 83.40741%	Test Accuracy: 80.00000%
      -[9/9]	Time 23.0376s	Training Accuracy: 85.25926%	Test Accuracy: 81.33333%

      Alternate Implementation using Stateful Layer

Starting with v0.5.5, Lux provides a StatefulLuxLayer, which can be used to avoid the boxing of st. Using the @compact API avoids this problem entirely.

      julia
      struct StatefulNeuralODE{M <: Lux.AbstractLuxLayer, So, T, K} <:
      +train(NeuralODECompact)
      [1/9]	Time 145.6055s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 0.6923s	Training Accuracy: 58.22222%	Test Accuracy: 57.33333%
      +[3/9]	Time 0.5880s	Training Accuracy: 67.85185%	Test Accuracy: 70.66667%
      +[4/9]	Time 0.7365s	Training Accuracy: 74.29630%	Test Accuracy: 74.66667%
      +[5/9]	Time 0.5768s	Training Accuracy: 76.29630%	Test Accuracy: 76.00000%
      +[6/9]	Time 0.7812s	Training Accuracy: 78.74074%	Test Accuracy: 80.00000%
      +[7/9]	Time 0.5771s	Training Accuracy: 82.22222%	Test Accuracy: 81.33333%
      +[8/9]	Time 0.5847s	Training Accuracy: 83.62963%	Test Accuracy: 83.33333%
      +[9/9]	Time 0.5776s	Training Accuracy: 85.18519%	Test Accuracy: 82.66667%
      julia
      train(NeuralODE)
      [1/9]	Time 34.0779s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 0.6949s	Training Accuracy: 57.18519%	Test Accuracy: 57.33333%
      +[3/9]	Time 0.5889s	Training Accuracy: 68.37037%	Test Accuracy: 68.00000%
      +[4/9]	Time 0.7887s	Training Accuracy: 73.77778%	Test Accuracy: 75.33333%
      +[5/9]	Time 0.5735s	Training Accuracy: 76.14815%	Test Accuracy: 77.33333%
      +[6/9]	Time 0.5772s	Training Accuracy: 79.48148%	Test Accuracy: 80.66667%
      +[7/9]	Time 0.8112s	Training Accuracy: 81.25926%	Test Accuracy: 80.66667%
      +[8/9]	Time 0.5766s	Training Accuracy: 83.40741%	Test Accuracy: 82.66667%
      +[9/9]	Time 0.5746s	Training Accuracy: 84.81481%	Test Accuracy: 82.00000%

We can also change the sensealg and train the model! GaussAdjoint allows you to use an arbitrary parameter structure, not just a flat vector (ComponentArray).

      julia
      train(NeuralODE; sensealg=GaussAdjoint(; autojacvec=ZygoteVJP()), use_named_tuple=true)
      [1/9]	Time 42.2203s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 0.5489s	Training Accuracy: 58.44444%	Test Accuracy: 58.00000%
      +[3/9]	Time 0.7084s	Training Accuracy: 66.96296%	Test Accuracy: 68.00000%
      +[4/9]	Time 0.5476s	Training Accuracy: 72.44444%	Test Accuracy: 73.33333%
      +[5/9]	Time 0.7615s	Training Accuracy: 76.37037%	Test Accuracy: 76.00000%
      +[6/9]	Time 0.5601s	Training Accuracy: 78.81481%	Test Accuracy: 79.33333%
      +[7/9]	Time 0.5529s	Training Accuracy: 80.51852%	Test Accuracy: 81.33333%
      +[8/9]	Time 0.7865s	Training Accuracy: 82.74074%	Test Accuracy: 83.33333%
      +[9/9]	Time 0.5436s	Training Accuracy: 85.25926%	Test Accuracy: 82.66667%

But remember, some AD backends like ReverseDiff are not GPU compatible. For a model of this size, you will notice that training time is significantly lower when training on the CPU than on the GPU.

      julia
      train(NeuralODE; sensealg=InterpolatingAdjoint(; autojacvec=ReverseDiffVJP()), cpu=true)
      [1/9]	Time 98.5688s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 15.5218s	Training Accuracy: 58.74074%	Test Accuracy: 56.66667%
      +[3/9]	Time 17.8079s	Training Accuracy: 69.92593%	Test Accuracy: 71.33333%
      +[4/9]	Time 4.9542s	Training Accuracy: 72.81481%	Test Accuracy: 74.00000%
      +[5/9]	Time 4.8191s	Training Accuracy: 76.37037%	Test Accuracy: 78.66667%
      +[6/9]	Time 5.0913s	Training Accuracy: 79.03704%	Test Accuracy: 80.66667%
      +[7/9]	Time 6.6715s	Training Accuracy: 81.62963%	Test Accuracy: 80.66667%
      +[8/9]	Time 15.3470s	Training Accuracy: 83.33333%	Test Accuracy: 80.00000%
      +[9/9]	Time 14.9399s	Training Accuracy: 85.40741%	Test Accuracy: 82.00000%

      For completeness, let's also test out discrete sensitivities!

      julia
      train(NeuralODE; sensealg=ReverseDiffAdjoint(), cpu=true)
      [1/9]	Time 53.4534s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 28.3519s	Training Accuracy: 58.66667%	Test Accuracy: 57.33333%
      +[3/9]	Time 27.1995s	Training Accuracy: 69.70370%	Test Accuracy: 71.33333%
      +[4/9]	Time 27.9635s	Training Accuracy: 72.74074%	Test Accuracy: 74.00000%
      +[5/9]	Time 28.7721s	Training Accuracy: 76.14815%	Test Accuracy: 78.66667%
      +[6/9]	Time 28.9540s	Training Accuracy: 79.03704%	Test Accuracy: 80.66667%
      +[7/9]	Time 29.1487s	Training Accuracy: 81.55556%	Test Accuracy: 80.66667%
      +[8/9]	Time 27.9752s	Training Accuracy: 83.40741%	Test Accuracy: 80.00000%
      +[9/9]	Time 26.8059s	Training Accuracy: 85.25926%	Test Accuracy: 81.33333%

      Alternate Implementation using Stateful Layer

Starting with v0.5.5, Lux provides a StatefulLuxLayer, which can be used to avoid the boxing of st. Using the @compact API avoids this problem entirely.

      julia
      struct StatefulNeuralODE{M <: Lux.AbstractLuxLayer, So, T, K} <:
              Lux.AbstractLuxWrapperLayer{:model}
           model::M
           solver::So
      @@ -317,20 +419,20 @@ import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const y
           dudt(u, p, t) = st_model(u, p)
           prob = ODEProblem{false}(ODEFunction{false}(dudt), x, n.tspan, ps)
           return solve(prob, n.solver; n.kwargs...), st_model.st
      -end

      Train the new Stateful Neural ODE

      julia
      train(StatefulNeuralODE)
      [1/9]	Time 38.7067s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.5533s	Training Accuracy: 58.22222%	Test Accuracy: 55.33333%
      -[3/9]	Time 0.8202s	Training Accuracy: 68.29630%	Test Accuracy: 68.66667%
      -[4/9]	Time 0.5436s	Training Accuracy: 73.11111%	Test Accuracy: 76.00000%
      -[5/9]	Time 0.5437s	Training Accuracy: 75.92593%	Test Accuracy: 76.66667%
      -[6/9]	Time 0.5719s	Training Accuracy: 78.96296%	Test Accuracy: 80.66667%
      -[7/9]	Time 0.8741s	Training Accuracy: 80.81481%	Test Accuracy: 81.33333%
      -[8/9]	Time 0.5530s	Training Accuracy: 83.25926%	Test Accuracy: 82.66667%
      -[9/9]	Time 0.5426s	Training Accuracy: 84.59259%	Test Accuracy: 82.00000%

We might not see a significant difference in the training time, but let us investigate the type stability of the layers.

      Type Stability

      julia
      model, ps, st = create_model(NeuralODE)
      +end

      Train the new Stateful Neural ODE

      julia
      train(StatefulNeuralODE)
      [1/9]	Time 37.5871s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 0.8049s	Training Accuracy: 58.22222%	Test Accuracy: 55.33333%
      +[3/9]	Time 0.5681s	Training Accuracy: 68.29630%	Test Accuracy: 68.66667%
      +[4/9]	Time 0.5527s	Training Accuracy: 73.11111%	Test Accuracy: 76.00000%
      +[5/9]	Time 0.8499s	Training Accuracy: 75.92593%	Test Accuracy: 76.66667%
      +[6/9]	Time 0.5449s	Training Accuracy: 78.96296%	Test Accuracy: 80.66667%
      +[7/9]	Time 0.5404s	Training Accuracy: 80.81481%	Test Accuracy: 81.33333%
      +[8/9]	Time 0.5494s	Training Accuracy: 83.25926%	Test Accuracy: 82.66667%
      +[9/9]	Time 0.8664s	Training Accuracy: 84.59259%	Test Accuracy: 82.00000%

We might not see a significant difference in the training time, but let us investigate the type stability of the layers.

      Type Stability

      julia
      model, ps, st = create_model(NeuralODE)
       
       model_stateful, ps_stateful, st_stateful = create_model(StatefulNeuralODE)
       
       x = gpu_device()(ones(Float32, 28, 28, 1, 3));

      NeuralODE is not type stable due to the boxing of st

      julia
      @code_warntype model(x, ps, st)
      MethodInstance for (::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".NeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing})(::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}, ::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)))}}}, ::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, layer_4::@NamedTuple{}, layer_5::@NamedTuple{}})
      -  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/src/layers/containers.jl:480
      +  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/src/layers/containers.jl:480
       Arguments
         c::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".NeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}
         x::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}
      @@ -341,7 +443,7 @@ import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const y
       │   %2 = Base.getproperty(c, :layers)::@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".NeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}
       │   %3 = (%1)(%2, x, ps, st)::TUPLE{CUDA.CUARRAY{FLOAT32, 2, CUDA.DEVICEMEMORY}, NAMEDTUPLE{(:LAYER_1, :LAYER_2, :LAYER_3, :LAYER_4, :LAYER_5), <:TUPLE{@NAMEDTUPLE{}, @NAMEDTUPLE{}, ANY, @NAMEDTUPLE{}, @NAMEDTUPLE{}}}}
       └──      return %3

We avoid the problem entirely by using StatefulNeuralODE.

      julia
      @code_warntype model_stateful(x, ps_stateful, st_stateful)
      MethodInstance for (::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".StatefulNeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing})(::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}, ::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)))}}}, ::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, layer_4::@NamedTuple{}, 
layer_5::@NamedTuple{}})
      -  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/src/layers/containers.jl:480
      +  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/src/layers/containers.jl:480
       Arguments
         c::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".StatefulNeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}
         x::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}
      @@ -354,7 +456,7 @@ import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const y
       └──      return %3

Note that we still recommend using this layer internally and not exposing it as the default API to users.

      Finally checking the compact model

      julia
      model_compact, ps_compact, st_compact = create_model(NeuralODECompact)
       
       @code_warntype model_compact(x, ps_compact, st_compact)
      MethodInstance for (::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.CompactLuxLayer{:₋₋₋no_special_dispatch₋₋₋, Main.var"##230".var"#2#3", Nothing, @NamedTuple{model::Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}}, Lux.CompactMacroImpl.ValueStorage{@NamedTuple{}, @NamedTuple{solver::Returns{OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}}, tspan::Returns{Tuple{Float32, Float32}}}}, Tuple{Tuple{Symbol}, Tuple{Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing})(::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}, ::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(model = ViewAxis(1:540, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))),)), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 
20))), bias = 201:210)))}}}, ::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{model::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, solver::OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, tspan::Tuple{Float32, Float32}, ₋₋₋kwargs₋₋₋::Lux.CompactMacroImpl.KwargsStorage{@NamedTuple{kwargs::Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::@NamedTuple{}, layer_5::@NamedTuple{}})
      -  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/src/layers/containers.jl:480
      +  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/src/layers/containers.jl:480
       Arguments
         c::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.CompactLuxLayer{:₋₋₋no_special_dispatch₋₋₋, Main.var"##230".var"#2#3", Nothing, @NamedTuple{model::Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}}, Lux.CompactMacroImpl.ValueStorage{@NamedTuple{}, @NamedTuple{solver::Returns{OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}}, tspan::Returns{Tuple{Float32, Float32}}}}, Tuple{Tuple{Symbol}, Tuple{Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}
         x::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}
      @@ -377,8 +479,8 @@ import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const y
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -416,11 +518,11 @@ import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const y
       - CUDA_Runtime_jll: 0.15.5+0
       
       Toolchain:
      -- Julia: 1.11.2
      +- Julia: 1.11.3
       - LLVM: 16.0.6
       
       Environment:
       - JULIA_CUDA_HARD_MEMORY_LIMIT: 100%
       
       1 device:
      -  0: NVIDIA A100-PCIE-40GB MIG 1g.5gb (sm_80, 3.920 GiB / 4.750 GiB available)

      This page was generated using Literate.jl.

`,63)]))}const o=a(t,[["render",l]]);export{y as __pageData,o as default};
+  0: NVIDIA A100-PCIE-40GB MIG 1g.5gb (sm_80, 3.982 GiB / 4.750 GiB available)

      This page was generated using Literate.jl.

`,63)]))}const y=a(p,[["render",l]]);export{o as __pageData,y as default};
diff --git a/dev/assets/tutorials_intermediate_1_NeuralODE.md.BenIrmiX.lean.js b/dev/assets/tutorials_intermediate_1_NeuralODE.md.BenIrmiX.lean.js
new file mode 100644
index 0000000000..c4227731e0
--- /dev/null
+++ b/dev/assets/tutorials_intermediate_1_NeuralODE.md.BenIrmiX.lean.js
@@ -0,0 +1 @@
+import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.BetCMmtc.js";const o=JSON.parse('{"title":"MNIST Classification using Neural ODEs","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/1_NeuralODE.md","filePath":"tutorials/intermediate/1_NeuralODE.md","lastUpdated":null}'),p={name:"tutorials/intermediate/1_NeuralODE.md"};function l(t,s,h,r,k,c){return e(),i("div",null,s[0]||(s[0]=[n("",63)]))}const y=a(p,[["render",l]]);export{o as __pageData,y as default};
diff --git a/dev/assets/tutorials_intermediate_1_NeuralODE.md.Cq8Z3u1S.lean.js b/dev/assets/tutorials_intermediate_1_NeuralODE.md.Cq8Z3u1S.lean.js
deleted file mode 100644
index 2513ae2814..0000000000
--- a/dev/assets/tutorials_intermediate_1_NeuralODE.md.Cq8Z3u1S.lean.js
+++ /dev/null
@@ -1,426 +0,0 @@
-import{_ as a,c as i,a2 as n,o as e}from"./chunks/framework.I-x9Gl6h.js";const y=JSON.parse('{"title":"MNIST Classification using Neural ODEs","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/1_NeuralODE.md","filePath":"tutorials/intermediate/1_NeuralODE.md","lastUpdated":null}'),t={name:"tutorials/intermediate/1_NeuralODE.md"};function l(p,s,h,r,k,d){return e(),i("div",null,s[0]||(s[0]=[n(`

      MNIST Classification using Neural ODEs

To understand Neural ODEs, users should look up these lecture notes. We recommend using DiffEqFlux.jl directly instead of implementing Neural ODEs from scratch.

      Package Imports

      julia
      using Lux, ComponentArrays, SciMLSensitivity, LuxCUDA, Optimisers, OrdinaryDiffEqTsit5,
      -      Random, Statistics, Zygote, OneHotArrays, InteractiveUtils, Printf
      -using MLDatasets: MNIST
      -using MLUtils: DataLoader, splitobs
      -
      -CUDA.allowscalar(false)
      Precompiling SciMLSensitivity...
      -    459.9 ms  ✓ EnumX
      -    405.0 ms  ✓ Parameters
      -    416.4 ms  ✓ RuntimeGeneratedFunctions
      -    856.1 ms  ✓ DifferentiationInterface
      -    950.2 ms  ✓ KLU
      -    901.9 ms  ✓ PDMats
      -    373.5 ms  ✓ SciMLStructures
      -    496.2 ms  ✓ TruncatedStacktraces
      -   1156.2 ms  ✓ Sparspak
      -   6366.1 ms  ✓ Krylov
      -   1855.0 ms  ✓ SciMLOperators
      -    595.0 ms  ✓ ResettableStacks
      -   1013.6 ms  ✓ QuadGK
      -    493.1 ms  ✓ FunctionProperties
      -   1255.9 ms  ✓ HypergeometricFunctions
      -    805.0 ms  ✓ FastPower → FastPowerForwardDiffExt
      -    759.7 ms  ✓ PreallocationTools
      -   1064.1 ms  ✓ NLSolversBase
      -   1103.1 ms  ✓ FastPower → FastPowerTrackerExt
      -    773.9 ms  ✓ FastBroadcast
      -  11893.3 ms  ✓ ArrayLayouts
      -   3497.7 ms  ✓ FastPower → FastPowerReverseDiffExt
      -   1467.7 ms  ✓ SymbolicIndexingInterface
      -   3983.2 ms  ✓ TriangularSolve
      -    606.4 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      -    423.7 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      -   3680.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceReverseDiffExt
      -    423.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      -   1163.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      -    836.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      -   7196.5 ms  ✓ FastPower → FastPowerEnzymeExt
      -    660.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      -   1736.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceZygoteExt
      -    674.2 ms  ✓ FillArrays → FillArraysPDMatsExt
      -    578.1 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      -   1437.5 ms  ✓ Tracker → TrackerPDMatsExt
      -    828.0 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      -   1865.3 ms  ✓ StatsFuns
      -   7263.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceEnzymeExt
      -   3452.8 ms  ✓ PreallocationTools → PreallocationToolsReverseDiffExt
      -    778.5 ms  ✓ ArrayLayouts → ArrayLayoutsSparseArraysExt
      -   6674.2 ms  ✓ QuadGK → QuadGKEnzymeExt
      -   1848.5 ms  ✓ LineSearches
      -    684.6 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      -   2217.5 ms  ✓ RecursiveArrayTools
      -   1593.9 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -   2524.6 ms  ✓ LazyArrays
      -   5109.7 ms  ✓ Distributions
      -    924.7 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsFastBroadcastExt
      -   3276.1 ms  ✓ Optim
      -    661.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      -    934.5 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      -   1268.1 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      -    789.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      -   3277.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsZygoteExt
      -   1295.5 ms  ✓ LazyArrays → LazyArraysStaticArraysExt
      -   1439.2 ms  ✓ Distributions → DistributionsTestExt
      -  16123.6 ms  ✓ RecursiveFactorization
      -   1410.3 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   5446.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsReverseDiffExt
      -  12207.4 ms  ✓ SciMLBase
      -   1132.2 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      -   2826.2 ms  ✓ SciMLJacobianOperators
      -   3556.3 ms  ✓ SciMLBase → SciMLBaseZygoteExt
      -   6279.1 ms  ✓ DiffEqBase
      -   1580.8 ms  ✓ DiffEqBase → DiffEqBaseChainRulesCoreExt
      -   2457.9 ms  ✓ DiffEqBase → DiffEqBaseTrackerExt
      -   1993.1 ms  ✓ DiffEqBase → DiffEqBaseDistributionsExt
      -   1730.4 ms  ✓ DiffEqBase → DiffEqBaseSparseArraysExt
      -   4864.7 ms  ✓ DiffEqBase → DiffEqBaseReverseDiffExt
      -   4526.5 ms  ✓ DiffEqCallbacks
      -   3928.1 ms  ✓ DiffEqNoiseProcess
      -   5087.3 ms  ✓ DiffEqNoiseProcess → DiffEqNoiseProcessReverseDiffExt
      -  18101.4 ms  ✓ DiffEqBase → DiffEqBaseEnzymeExt
      -  35745.3 ms  ✓ LinearSolve
      -   2708.0 ms  ✓ LinearSolve → LinearSolveRecursiveArrayToolsExt
      -   2747.1 ms  ✓ LinearSolve → LinearSolveEnzymeExt
      -   4383.7 ms  ✓ LinearSolve → LinearSolveKernelAbstractionsExt
      -  30389.4 ms  ✓ SciMLSensitivity
      -  79 dependencies successfully precompiled in 125 seconds. 212 already precompiled.
      -Precompiling MLDataDevicesRecursiveArrayToolsExt...
      -    637.4 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
      -  1 dependency successfully precompiled in 1 seconds. 47 already precompiled.
      -Precompiling ComponentArraysRecursiveArrayToolsExt...
      -    726.9 ms  ✓ ComponentArrays → ComponentArraysRecursiveArrayToolsExt
      -  1 dependency successfully precompiled in 1 seconds. 69 already precompiled.
      -Precompiling ComponentArraysSciMLBaseExt...
      -   1158.7 ms  ✓ ComponentArrays → ComponentArraysSciMLBaseExt
      -  1 dependency successfully precompiled in 1 seconds. 98 already precompiled.
      -Precompiling LuxCUDA...
      -   5348.9 ms  ✓ LuxCUDA
      -  1 dependency successfully precompiled in 6 seconds. 101 already precompiled.
      -Precompiling DiffEqBaseCUDAExt...
      -   5925.6 ms  ✓ DiffEqBase → DiffEqBaseCUDAExt
      -  1 dependency successfully precompiled in 6 seconds. 186 already precompiled.
      -Precompiling LinearSolveCUDAExt...
      -   7171.0 ms  ✓ LinearSolve → LinearSolveCUDAExt
      -  1 dependency successfully precompiled in 8 seconds. 189 already precompiled.
      -Precompiling OrdinaryDiffEqTsit5...
      -   4612.7 ms  ✓ OrdinaryDiffEqCore
      -   1516.0 ms  ✓ OrdinaryDiffEqCore → OrdinaryDiffEqCoreEnzymeCoreExt
      -   7501.0 ms  ✓ OrdinaryDiffEqTsit5
      -  3 dependencies successfully precompiled in 14 seconds. 122 already precompiled.
      -Precompiling MLDatasets...
      -    359.3 ms  ✓ Glob
      -    400.3 ms  ✓ WorkerUtilities
      -    428.1 ms  ✓ BufferedStreams
      -    324.8 ms  ✓ SimpleBufferStream
      -    302.1 ms  ✓ PackageExtensionCompat
      -    563.8 ms  ✓ URIs
      -    340.6 ms  ✓ BitFlags
      -    622.7 ms  ✓ GZip
      -    686.1 ms  ✓ ConcurrentUtilities
      -    611.1 ms  ✓ ZipFile
      -    589.6 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      -    634.9 ms  ✓ Accessors → UnitfulExt
      -    335.9 ms  ✓ InternedStrings
      -    478.0 ms  ✓ ExceptionUnwrapping
      -   2085.3 ms  ✓ ColorVectorSpace
      -   1449.8 ms  ✓ MPICH_jll
      -    858.5 ms  ✓ WeakRefStrings
      -    576.8 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      -   2333.7 ms  ✓ AtomsBase
      -   1187.5 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -    458.7 ms  ✓ StridedViews
      -   2056.4 ms  ✓ OpenSSL
      -   1493.2 ms  ✓ NPZ
      -  11070.4 ms  ✓ JSON3
      -   3396.8 ms  ✓ ColorSchemes
      -   1577.2 ms  ✓ HDF5_jll
      -  19765.7 ms  ✓ ImageCore
      -   2412.2 ms  ✓ Chemfiles
      -   2510.7 ms  ✓ Pickle
      -  19761.2 ms  ✓ CSV
      -  34760.8 ms  ✓ JLD2
      -   2154.8 ms  ✓ ImageBase
      -   2026.5 ms  ✓ ImageShow
      -   7672.4 ms  ✓ HDF5
      -   2433.0 ms  ✓ MAT
      -  19388.6 ms  ✓ HTTP
      -   1917.2 ms  ✓ FileIO → HTTPExt
      -   3092.3 ms  ✓ DataDeps
      -   9295.4 ms  ✓ MLDatasets
      -  39 dependencies successfully precompiled in 67 seconds. 160 already precompiled.
      -Precompiling TransducersLazyArraysExt...
      -   1239.6 ms  ✓ Transducers → TransducersLazyArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.

      Loading MNIST

      julia
      function loadmnist(batchsize, train_split)
      -    # Load MNIST: Only 1500 for demonstration purposes
      -    N = parse(Bool, get(ENV, "CI", "false")) ? 1500 : nothing
      -    dataset = MNIST(; split=:train)
      -    if N !== nothing
      -        imgs = dataset.features[:, :, 1:N]
      -        labels_raw = dataset.targets[1:N]
      -    else
      -        imgs = dataset.features
      -        labels_raw = dataset.targets
      -    end
      -
      -    # Process images into (H,W,C,BS) batches
      -    x_data = Float32.(reshape(imgs, size(imgs, 1), size(imgs, 2), 1, size(imgs, 3)))
      -    y_data = onehotbatch(labels_raw, 0:9)
      -    (x_train, y_train), (x_test, y_test) = splitobs((x_data, y_data); at=train_split)
      -
      -    return (
      -        # Use DataLoader to automatically minibatch and shuffle the data
      -        DataLoader(collect.((x_train, y_train)); batchsize, shuffle=true),
      -        # Don't shuffle the test data
      -        DataLoader(collect.((x_test, y_test)); batchsize, shuffle=false)
      -    )
      -end
      loadmnist (generic function with 1 method)

      Define the Neural ODE Layer

      First we will use the @compact macro to define the Neural ODE Layer.

      julia
      function NeuralODECompact(
      -        model::Lux.AbstractLuxLayer; solver=Tsit5(), tspan=(0.0f0, 1.0f0), kwargs...)
      -    return @compact(; model, solver, tspan, kwargs...) do x, p
      -        dudt(u, p, t) = vec(model(reshape(u, size(x)), p))
      -        # Note the \`p.model\` here
      -        prob = ODEProblem(ODEFunction{false}(dudt), vec(x), tspan, p.model)
      -        @return solve(prob, solver; kwargs...)
      -    end
      -end
      NeuralODECompact (generic function with 1 method)

We recommend using the @compact macro for creating custom layers. The implementation below exists mostly for historical reasons, from when @compact was not part of the stable API. It also helps users understand how the layer interface of Lux works.

The NeuralODE is a ContainerLayer, which stores a model. The parameters and states of the NeuralODE are the same as those of the underlying model.

      julia
      struct NeuralODE{M <: Lux.AbstractLuxLayer, So, T, K} <: Lux.AbstractLuxWrapperLayer{:model}
      -    model::M
      -    solver::So
      -    tspan::T
      -    kwargs::K
      -end
      -
      -function NeuralODE(
      -        model::Lux.AbstractLuxLayer; solver=Tsit5(), tspan=(0.0f0, 1.0f0), kwargs...)
      -    return NeuralODE(model, solver, tspan, kwargs)
      -end
      Main.var"##230".NeuralODE

OrdinaryDiffEq.jl can deal with non-Vector inputs! However, certain discrete sensitivities like ReverseDiffAdjoint can't handle non-Vector inputs. Hence, we need to convert the input and output of the ODE solver to a Vector.

      julia
      function (n::NeuralODE)(x, ps, st)
      -    function dudt(u, p, t)
      -        u_, st = n.model(reshape(u, size(x)), p, st)
      -        return vec(u_)
      -    end
      -    prob = ODEProblem{false}(ODEFunction{false}(dudt), vec(x), n.tspan, ps)
      -    return solve(prob, n.solver; n.kwargs...), st
      -end
      -
      -@views diffeqsol_to_array(l::Int, x::ODESolution) = reshape(last(x.u), (l, :))
      -@views diffeqsol_to_array(l::Int, x::AbstractMatrix) = reshape(x[:, end], (l, :))
      diffeqsol_to_array (generic function with 2 methods)
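The flattening trick above relies on `vec` and `reshape` being exact inverses for a fixed shape. A minimal sketch (not part of the tutorial, with made-up values) of that round-trip:

```julia
# vec flattens column-major; reshape with the original size restores it,
# so no information is lost when handing a flattened state to the solver.
x = reshape(Float32.(1:12), 3, 2, 2)
u = vec(x)                       # 12-element Vector{Float32}
@assert reshape(u, size(x)) == x
```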

      Create and Initialize the Neural ODE Layer

      julia
      function create_model(model_fn=NeuralODE; dev=gpu_device(), use_named_tuple::Bool=false,
      -        sensealg=InterpolatingAdjoint(; autojacvec=ZygoteVJP()))
      -    # Construct the Neural ODE Model
      -    model = Chain(FlattenLayer(),
      -        Dense(784 => 20, tanh),
      -        model_fn(
      -            Chain(Dense(20 => 10, tanh), Dense(10 => 10, tanh), Dense(10 => 20, tanh));
      -            save_everystep=false, reltol=1.0f-3,
      -            abstol=1.0f-3, save_start=false, sensealg),
      -        Base.Fix1(diffeqsol_to_array, 20),
      -        Dense(20 => 10))
      -
      -    rng = Random.default_rng()
      -    Random.seed!(rng, 0)
      -
      -    ps, st = Lux.setup(rng, model)
      -    ps = (use_named_tuple ? ps : ComponentArray(ps)) |> dev
      -    st = st |> dev
      -
      -    return model, ps, st
      -end
      create_model (generic function with 2 methods)
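For intuition on the `ComponentArray(ps)` step above: a ComponentArray stores the nested parameter NamedTuple as one flat vector while preserving named access, which is what flat-vector adjoints expect. A small standalone sketch (hypothetical sizes, not from the tutorial):

```julia
using ComponentArrays

# Flatten a (weight, bias) NamedTuple into a single vector; the named
# views are still available for layer-wise access.
ps = ComponentArray((; weight = ones(Float32, 2, 2), bias = zeros(Float32, 2)))
@assert length(ps) == 6
@assert size(ps.weight) == (2, 2)
```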

      Define Utility Functions

      julia
      const logitcrossentropy = CrossEntropyLoss(; logits=Val(true))
      -
      -function accuracy(model, ps, st, dataloader)
      -    total_correct, total = 0, 0
      -    st = Lux.testmode(st)
      -    for (x, y) in dataloader
      -        target_class = onecold(y)
      -        predicted_class = onecold(first(model(x, ps, st)))
      -        total_correct += sum(target_class .== predicted_class)
      -        total += length(target_class)
      -    end
      -    return total_correct / total
      -end
      accuracy (generic function with 1 method)
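The accuracy loop relies on onecold to invert the one-hot encoding produced by onehotbatch in loadmnist. A small sketch (with made-up labels, not part of the tutorial) of that round-trip:

```julia
using OneHotArrays

# onehotbatch builds a 10×3 one-hot matrix over the label range 0:9;
# onecold with the same range recovers the original labels.
y = onehotbatch([0, 3, 9], 0:9)
@assert onecold(y, 0:9) == [0, 3, 9]
```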

      Training

      julia
      function train(model_function; cpu::Bool=false, kwargs...)
      -    dev = cpu ? cpu_device() : gpu_device()
      -    model, ps, st = create_model(model_function; dev, kwargs...)
      -
      -    # Training
      -    train_dataloader, test_dataloader = loadmnist(128, 0.9) |> dev
      -
      -    tstate = Training.TrainState(model, ps, st, Adam(0.001f0))
      -
      -    ### Lets train the model
      -    nepochs = 9
      -    for epoch in 1:nepochs
      -        stime = time()
      -        for (x, y) in train_dataloader
      -            _, _, _, tstate = Training.single_train_step!(
      -                AutoZygote(), logitcrossentropy, (x, y), tstate)
      -        end
      -        ttime = time() - stime
      -
      -        tr_acc = accuracy(model, tstate.parameters, tstate.states, train_dataloader) * 100
      -        te_acc = accuracy(model, tstate.parameters, tstate.states, test_dataloader) * 100
      -        @printf "[%d/%d]\\tTime %.4fs\\tTraining Accuracy: %.5f%%\\tTest \\
      -                 Accuracy: %.5f%%\\n" epoch nepochs ttime tr_acc te_acc
      -    end
      -end
      -
      -train(NeuralODECompact)
      [1/9]	Time 141.6985s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.6474s	Training Accuracy: 58.22222%	Test Accuracy: 57.33333%
      -[3/9]	Time 0.7605s	Training Accuracy: 67.85185%	Test Accuracy: 70.66667%
      -[4/9]	Time 0.5718s	Training Accuracy: 74.29630%	Test Accuracy: 74.66667%
      -[5/9]	Time 0.6067s	Training Accuracy: 76.29630%	Test Accuracy: 76.00000%
      -[6/9]	Time 0.5708s	Training Accuracy: 78.74074%	Test Accuracy: 80.00000%
      -[7/9]	Time 0.5745s	Training Accuracy: 82.22222%	Test Accuracy: 81.33333%
      -[8/9]	Time 0.5886s	Training Accuracy: 83.62963%	Test Accuracy: 83.33333%
      -[9/9]	Time 0.5742s	Training Accuracy: 85.18519%	Test Accuracy: 82.66667%
      julia
      train(NeuralODE)
      [1/9]	Time 35.3338s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.5750s	Training Accuracy: 57.18519%	Test Accuracy: 57.33333%
      -[3/9]	Time 0.7617s	Training Accuracy: 68.37037%	Test Accuracy: 68.00000%
      -[4/9]	Time 0.5737s	Training Accuracy: 73.77778%	Test Accuracy: 75.33333%
      -[5/9]	Time 0.5843s	Training Accuracy: 76.14815%	Test Accuracy: 77.33333%
      -[6/9]	Time 0.8359s	Training Accuracy: 79.48148%	Test Accuracy: 80.66667%
      -[7/9]	Time 0.5807s	Training Accuracy: 81.25926%	Test Accuracy: 80.66667%
      -[8/9]	Time 0.5803s	Training Accuracy: 83.40741%	Test Accuracy: 82.66667%
      -[9/9]	Time 0.5896s	Training Accuracy: 84.81481%	Test Accuracy: 82.00000%

We can also change the sensealg and train the model! GaussAdjoint allows you to use an arbitrary parameter structure, not just a flat vector (ComponentArray).

      julia
      train(NeuralODE; sensealg=GaussAdjoint(; autojacvec=ZygoteVJP()), use_named_tuple=true)
      [1/9]	Time 42.9532s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.5744s	Training Accuracy: 58.44444%	Test Accuracy: 58.00000%
      -[3/9]	Time 0.5544s	Training Accuracy: 66.96296%	Test Accuracy: 68.00000%
      -[4/9]	Time 0.5532s	Training Accuracy: 72.44444%	Test Accuracy: 73.33333%
      -[5/9]	Time 0.7753s	Training Accuracy: 76.37037%	Test Accuracy: 76.00000%
      -[6/9]	Time 0.5458s	Training Accuracy: 78.81481%	Test Accuracy: 79.33333%
      -[7/9]	Time 0.5535s	Training Accuracy: 80.51852%	Test Accuracy: 81.33333%
      -[8/9]	Time 0.5598s	Training Accuracy: 82.74074%	Test Accuracy: 83.33333%
      -[9/9]	Time 0.5679s	Training Accuracy: 85.25926%	Test Accuracy: 82.66667%

But remember, some AD backends like ReverseDiff are not GPU compatible. For a model of this size, you will notice that training is significantly faster on the CPU than on the GPU.

      julia
      train(NeuralODE; sensealg=InterpolatingAdjoint(; autojacvec=ReverseDiffVJP()), cpu=true)
      [1/9]	Time 109.8302s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 17.2264s	Training Accuracy: 58.74074%	Test Accuracy: 56.66667%
      -[3/9]	Time 17.9930s	Training Accuracy: 69.92593%	Test Accuracy: 71.33333%
      -[4/9]	Time 15.8127s	Training Accuracy: 72.81481%	Test Accuracy: 74.00000%
      -[5/9]	Time 14.3418s	Training Accuracy: 76.37037%	Test Accuracy: 78.66667%
      -[6/9]	Time 17.5343s	Training Accuracy: 79.03704%	Test Accuracy: 80.66667%
      -[7/9]	Time 16.2169s	Training Accuracy: 81.62963%	Test Accuracy: 80.66667%
      -[8/9]	Time 17.0311s	Training Accuracy: 83.33333%	Test Accuracy: 80.00000%
      -[9/9]	Time 15.2291s	Training Accuracy: 85.40741%	Test Accuracy: 82.00000%

      For completeness, let's also test out discrete sensitivities!

      julia
      train(NeuralODE; sensealg=ReverseDiffAdjoint(), cpu=true)
      [1/9]	Time 54.7833s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 28.2044s	Training Accuracy: 58.66667%	Test Accuracy: 57.33333%
      -[3/9]	Time 28.1614s	Training Accuracy: 69.70370%	Test Accuracy: 71.33333%
      -[4/9]	Time 28.1398s	Training Accuracy: 72.74074%	Test Accuracy: 74.00000%
      -[5/9]	Time 24.6311s	Training Accuracy: 76.14815%	Test Accuracy: 78.66667%
      -[6/9]	Time 26.7294s	Training Accuracy: 79.03704%	Test Accuracy: 80.66667%
      -[7/9]	Time 23.7177s	Training Accuracy: 81.55556%	Test Accuracy: 80.66667%
      -[8/9]	Time 23.6944s	Training Accuracy: 83.40741%	Test Accuracy: 80.00000%
      -[9/9]	Time 23.0376s	Training Accuracy: 85.25926%	Test Accuracy: 81.33333%

      Alternate Implementation using Stateful Layer

Starting with v0.5.5, Lux provides StatefulLuxLayer, which can be used to avoid the boxing of st. Using the @compact API avoids this problem entirely.

      julia
      struct StatefulNeuralODE{M <: Lux.AbstractLuxLayer, So, T, K} <:
      -       Lux.AbstractLuxWrapperLayer{:model}
      -    model::M
      -    solver::So
      -    tspan::T
      -    kwargs::K
      -end
      -
      -function StatefulNeuralODE(
      -        model::Lux.AbstractLuxLayer; solver=Tsit5(), tspan=(0.0f0, 1.0f0), kwargs...)
      -    return StatefulNeuralODE(model, solver, tspan, kwargs)
      -end
      -
      -function (n::StatefulNeuralODE)(x, ps, st)
      -    st_model = StatefulLuxLayer{true}(n.model, ps, st)
      -    dudt(u, p, t) = st_model(u, p)
      -    prob = ODEProblem{false}(ODEFunction{false}(dudt), x, n.tspan, ps)
      -    return solve(prob, n.solver; n.kwargs...), st_model.st
      -end
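For intuition on the boxing that StatefulNeuralODE sidesteps: in the earlier (n::NeuralODE)(x, ps, st), the closure dudt reassigns the captured st, which forces Julia to store it in a Core.Box and breaks type inference. A standalone sketch of the two patterns (hypothetical function names, not from the tutorial):

```julia
# `st` is reassigned inside the closure, so Julia boxes it and the
# closure becomes type-unstable.
function boxed_version(u0)
    st = 0
    boxed_dudt(u) = (st = st + 1; u .+ st)
    return boxed_dudt(u0)
end

# Mutating a field instead of rebinding a local avoids the box; this is
# the idea behind carrying state in a dedicated (stateful) object.
mutable struct Counter
    st::Int
end
function stateful_version(u0)
    c = Counter(0)
    stable_dudt(u) = (c.st += 1; u .+ c.st)
    return stable_dudt(u0)
end
```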

      Train the new Stateful Neural ODE

      julia
      train(StatefulNeuralODE)
      [1/9]	Time 38.7067s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.5533s	Training Accuracy: 58.22222%	Test Accuracy: 55.33333%
      -[3/9]	Time 0.8202s	Training Accuracy: 68.29630%	Test Accuracy: 68.66667%
      -[4/9]	Time 0.5436s	Training Accuracy: 73.11111%	Test Accuracy: 76.00000%
      -[5/9]	Time 0.5437s	Training Accuracy: 75.92593%	Test Accuracy: 76.66667%
      -[6/9]	Time 0.5719s	Training Accuracy: 78.96296%	Test Accuracy: 80.66667%
      -[7/9]	Time 0.8741s	Training Accuracy: 80.81481%	Test Accuracy: 81.33333%
      -[8/9]	Time 0.5530s	Training Accuracy: 83.25926%	Test Accuracy: 82.66667%
      -[9/9]	Time 0.5426s	Training Accuracy: 84.59259%	Test Accuracy: 82.00000%

We might not see a significant difference in training time, but let us investigate the type stability of the layers.

      Type Stability

      julia
      model, ps, st = create_model(NeuralODE)
      -
      -model_stateful, ps_stateful, st_stateful = create_model(StatefulNeuralODE)
      -
      -x = gpu_device()(ones(Float32, 28, 28, 1, 3));

NeuralODE is not type stable due to the boxing of st.

      julia
      @code_warntype model(x, ps, st)
      MethodInstance for (::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".NeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing})(::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}, ::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)))}}}, ::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, layer_4::@NamedTuple{}, layer_5::@NamedTuple{}})
      -  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/src/layers/containers.jl:480
      -Arguments
      -  c::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".NeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}
      -  x::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}
      -  ps::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)))}}}
      -  st::Core.Const((layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()), layer_4 = NamedTuple(), layer_5 = NamedTuple()))
      -Body::TUPLE{CUDA.CUARRAY{FLOAT32, 2, CUDA.DEVICEMEMORY}, NAMEDTUPLE{(:LAYER_1, :LAYER_2, :LAYER_3, :LAYER_4, :LAYER_5), <:TUPLE{@NAMEDTUPLE{}, @NAMEDTUPLE{}, ANY, @NAMEDTUPLE{}, @NAMEDTUPLE{}}}}
      -1 ─ %1 = Lux.applychain::Core.Const(Lux.applychain)
      -│   %2 = Base.getproperty(c, :layers)::@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".NeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}
      -│   %3 = (%1)(%2, x, ps, st)::TUPLE{CUDA.CUARRAY{FLOAT32, 2, CUDA.DEVICEMEMORY}, NAMEDTUPLE{(:LAYER_1, :LAYER_2, :LAYER_3, :LAYER_4, :LAYER_5), <:TUPLE{@NAMEDTUPLE{}, @NAMEDTUPLE{}, ANY, @NAMEDTUPLE{}, @NAMEDTUPLE{}}}}
      -└──      return %3

We avoid the problem entirely by using StatefulNeuralODE.

      julia
      @code_warntype model_stateful(x, ps_stateful, st_stateful)
      MethodInstance for (::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".StatefulNeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing})(::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}, ::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)))}}}, ::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, layer_4::@NamedTuple{}, 
layer_5::@NamedTuple{}})
      -  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/src/layers/containers.jl:480
      -Arguments
      -  c::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".StatefulNeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}
      -  x::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}
      -  ps::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)))}}}
      -  st::Core.Const((layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()), layer_4 = NamedTuple(), layer_5 = NamedTuple()))
      -Body::Tuple{CUDA.CuArray{Float32, 2, CUDA.DeviceMemory}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, layer_4::@NamedTuple{}, layer_5::@NamedTuple{}}}
      -1 ─ %1 = Lux.applychain::Core.Const(Lux.applychain)
      -│   %2 = Base.getproperty(c, :layers)::@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".StatefulNeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}
      -│   %3 = (%1)(%2, x, ps, st)::Tuple{CUDA.CuArray{Float32, 2, CUDA.DeviceMemory}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, layer_4::@NamedTuple{}, layer_5::@NamedTuple{}}}
      -└──      return %3

      Note that we still recommend using this layer internally and not exposing it as the default API to users.
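
      Beyond inspecting output with @code_warntype, type stability can also be guarded in a test suite. Below is a minimal sketch (not part of the tutorial) using Test.@inferred, which errors when the return type of a call is not concretely inferred:

```julia
using Test

unstable(x) = x > 0 ? x : 0        # may return Int or typeof(x)
stable(x)   = x > 0 ? x : zero(x)  # always returns typeof(x)

@inferred stable(1.5)   # passes: inferred return type is Float64
# @inferred unstable(1.5) would throw, since inference yields a Union
```

      In a test suite the same pattern applies directly to model calls, e.g. @inferred model_stateful(x, ps_stateful, st_stateful).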

      Finally, let's check the compact model:

      julia
      model_compact, ps_compact, st_compact = create_model(NeuralODECompact)
      -
      -@code_warntype model_compact(x, ps_compact, st_compact)
      MethodInstance for (::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.CompactLuxLayer{:₋₋₋no_special_dispatch₋₋₋, Main.var"##230".var"#2#3", Nothing, @NamedTuple{model::Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}}, Lux.CompactMacroImpl.ValueStorage{@NamedTuple{}, @NamedTuple{solver::Returns{OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}}, tspan::Returns{Tuple{Float32, Float32}}}}, Tuple{Tuple{Symbol}, Tuple{Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing})(::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}, ::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(model = ViewAxis(1:540, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))),)), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 
20))), bias = 201:210)))}}}, ::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{model::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, solver::OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, tspan::Tuple{Float32, Float32}, ₋₋₋kwargs₋₋₋::Lux.CompactMacroImpl.KwargsStorage{@NamedTuple{kwargs::Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::@NamedTuple{}, layer_5::@NamedTuple{}})
      -  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/src/layers/containers.jl:480
      -Arguments
      -  c::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.CompactLuxLayer{:₋₋₋no_special_dispatch₋₋₋, Main.var"##230".var"#2#3", Nothing, @NamedTuple{model::Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}}, Lux.CompactMacroImpl.ValueStorage{@NamedTuple{}, @NamedTuple{solver::Returns{OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}}, tspan::Returns{Tuple{Float32, Float32}}}}, Tuple{Tuple{Symbol}, Tuple{Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}
      -  x::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}
      -  ps::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(model = ViewAxis(1:540, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))),)), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)))}}}
      -  st::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{model::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, solver::OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, tspan::Tuple{Float32, Float32}, ₋₋₋kwargs₋₋₋::Lux.CompactMacroImpl.KwargsStorage{@NamedTuple{kwargs::Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::@NamedTuple{}, layer_5::@NamedTuple{}}
      -Body::Tuple{CUDA.CuArray{Float32, 2, CUDA.DeviceMemory}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{model::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, solver::OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, tspan::Tuple{Float32, Float32}, ₋₋₋kwargs₋₋₋::Lux.CompactMacroImpl.KwargsStorage{@NamedTuple{kwargs::Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::@NamedTuple{}, layer_5::@NamedTuple{}}}
      -1 ─ %1 = Lux.applychain::Core.Const(Lux.applychain)
      -│   %2 = Base.getproperty(c, :layers)::@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.CompactLuxLayer{:₋₋₋no_special_dispatch₋₋₋, Main.var"##230".var"#2#3", Nothing, @NamedTuple{model::Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}}, Lux.CompactMacroImpl.ValueStorage{@NamedTuple{}, @NamedTuple{solver::Returns{OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}}, tspan::Returns{Tuple{Float32, Float32}}}}, Tuple{Tuple{Symbol}, Tuple{Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}
      -│   %3 = (%1)(%2, x, ps, st)::Tuple{CUDA.CuArray{Float32, 2, CUDA.DeviceMemory}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{model::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, solver::OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, tspan::Tuple{Float32, Float32}, ₋₋₋kwargs₋₋₋::Lux.CompactMacroImpl.KwargsStorage{@NamedTuple{kwargs::Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::@NamedTuple{}, layer_5::@NamedTuple{}}}
      -└──      return %3

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 48 × AMD EPYC 7402 24-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 2
      -  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 48
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate
      -
      -CUDA runtime 12.6, artifact installation
      -CUDA driver 12.6
      -NVIDIA driver 560.35.3
      -
      -CUDA libraries: 
      -- CUBLAS: 12.6.4
      -- CURAND: 10.3.7
      -- CUFFT: 11.3.0
      -- CUSOLVER: 11.7.1
      -- CUSPARSE: 12.5.4
      -- CUPTI: 2024.3.2 (API 24.0.0)
      -- NVML: 12.0.0+560.35.3
      -
      -Julia packages: 
      -- CUDA: 5.6.1
      -- CUDA_Driver_jll: 0.10.4+0
      -- CUDA_Runtime_jll: 0.15.5+0
      -
      -Toolchain:
      -- Julia: 1.11.2
      -- LLVM: 16.0.6
      -
      -Environment:
      -- JULIA_CUDA_HARD_MEMORY_LIMIT: 100%
      -
      -1 device:
      -  0: NVIDIA A100-PCIE-40GB MIG 1g.5gb (sm_80, 3.920 GiB / 4.750 GiB available)

      This page was generated using Literate.jl.

      `,63)]))}const o=a(t,[["render",l]]);export{y as __pageData,o as default}; diff --git a/dev/assets/tutorials_intermediate_2_BayesianNN.md.CrjRlGMZ.js b/dev/assets/tutorials_intermediate_2_BayesianNN.md.CrjRlGMZ.js deleted file mode 100644 index 1bb3358a71..0000000000 --- a/dev/assets/tutorials_intermediate_2_BayesianNN.md.CrjRlGMZ.js +++ /dev/null @@ -1,693 +0,0 @@ -import{_ as p,c as i,a2 as n,j as s,o as A}from"./chunks/framework.I-x9Gl6h.js";const e="/dev/assets/results.CFQJNj5o.gif",Q=JSON.parse('{"title":"Bayesian Neural Network","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/2_BayesianNN.md","filePath":"tutorials/intermediate/2_BayesianNN.md","lastUpdated":null}'),l={name:"tutorials/intermediate/2_BayesianNN.md"},t={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-3.222ex"},xmlns:"http://www.w3.org/2000/svg",width:"46.264ex",height:"6.301ex",role:"img",focusable:"false",viewBox:"0 -1361 20448.8 2785.1","aria-hidden":"true"};function h(r,a,g,k,d,C){return A(),i("div",null,[a[2]||(a[2]=n(`

      Bayesian Neural Network

      We borrow this tutorial from the official Turing docs. We will show how the explicit parameterization of Lux enables first-class composability with packages that expect flattened-out parameter vectors.

      Note: The tutorial in the official Turing docs is now using Lux instead of Flux.

      We will use Turing.jl with Lux.jl to implement a classification algorithm. Let's start by importing the relevant libraries.
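
      To make the "flattened parameter vector" idea concrete before diving in: the nested NamedTuple of Lux parameters can be converted to one flat vector and back. A minimal sketch with a hypothetical toy model, using Optimisers.destructure rather than the tutorial's own helper:

```julia
using Lux, Optimisers, Random

rng = Random.default_rng()
model = Chain(Dense(2 => 3, tanh), Dense(3 => 1))
ps, st = Lux.setup(rng, model)

# Flatten the nested NamedTuple of parameters into a single vector,
# keeping a closure `re` that rebuilds the original structure.
ps_flat, re = Optimisers.destructure(ps)

# A sampler can now propose over `ps_flat`; the model is evaluated
# by rebuilding the structured parameters on the fly.
y, _ = model(randn(rng, Float32, 2, 4), re(ps_flat), st)
```

      This round trip is exactly the hook that lets a package like Turing treat all network parameters as one vector of random variables.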

      julia
      # Import libraries
      -
      -using Lux, Turing, CairoMakie, Random, Tracker, Functors, LinearAlgebra
      -
      -# Sampling progress
      -Turing.setprogress!(true);
      Precompiling Lux...
      -    303.5 ms  ✓ Reexport
      -    395.0 ms  ✓ ConcreteStructs
      -    305.7 ms  ✓ SIMDTypes
      -    328.3 ms  ✓ IfElse
      -    340.6 ms  ✓ Future
      -    362.3 ms  ✓ CEnum
      -    366.1 ms  ✓ OpenLibm_jll
      -    366.1 ms  ✓ ArgCheck
      -    374.4 ms  ✓ ManualMemory
      -    448.0 ms  ✓ CompilerSupportLibraries_jll
      -    500.9 ms  ✓ Statistics
      -    538.3 ms  ✓ EnzymeCore
      -    558.0 ms  ✓ ADTypes
      -    444.0 ms  ✓ FastClosures
      -    441.7 ms  ✓ StaticArraysCore
      -    416.6 ms  ✓ NaNMath
      -    473.7 ms  ✓ ConstructionBase
      -    469.3 ms  ✓ CommonWorldInvalidations
      -    470.6 ms  ✓ Adapt
      -    858.7 ms  ✓ IrrationalConstants
      -    592.6 ms  ✓ OpenSpecFun_jll
      -    354.8 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      -    766.0 ms  ✓ ThreadingUtilities
      -    549.5 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      -    357.2 ms  ✓ ADTypes → ADTypesConstructionBaseExt
      -    359.7 ms  ✓ ConstructionBase → ConstructionBaseLinearAlgebraExt
      -    358.6 ms  ✓ EnzymeCore → AdaptExt
      -    376.0 ms  ✓ DiffResults
      -    436.2 ms  ✓ GPUArraysCore
      -    500.7 ms  ✓ ArrayInterface
      -    559.6 ms  ✓ LogExpFunctions
      -    750.7 ms  ✓ Static
      -    356.5 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      -    343.5 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      -   1718.2 ms  ✓ UnsafeAtomics
      -    384.4 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    583.8 ms  ✓ Functors
      -    381.4 ms  ✓ BitTwiddlingConvenienceFunctions
      -   1978.8 ms  ✓ MacroTools
      -    452.3 ms  ✓ Atomix
      -   2017.7 ms  ✓ Hwloc
      -    943.0 ms  ✓ CPUSummary
      -    766.9 ms  ✓ MLDataDevices
      -    597.3 ms  ✓ CommonSubexpressions
      -   1285.2 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      -   1066.4 ms  ✓ Optimisers
      -   1492.6 ms  ✓ StaticArrayInterface
      -    609.5 ms  ✓ PolyesterWeave
      -    594.2 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      -    397.3 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    395.3 ms  ✓ Optimisers → OptimisersAdaptExt
      -   1318.7 ms  ✓ Setfield
      -   1465.1 ms  ✓ DispatchDoctor
      -    445.8 ms  ✓ CloseOpenIntervals
      -    568.8 ms  ✓ LayoutPointers
      -    387.2 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      -   2415.8 ms  ✓ SpecialFunctions
      -    582.8 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      -    573.6 ms  ✓ DiffRules
      -    885.0 ms  ✓ StrideArraysCore
      -   1111.1 ms  ✓ LuxCore
      -    416.3 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    417.8 ms  ✓ LuxCore → LuxCoreFunctorsExt
      -    422.7 ms  ✓ LuxCore → LuxCoreSetfieldExt
      -    441.5 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      -    545.9 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      -    711.4 ms  ✓ Polyester
      -   1776.5 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      -   2539.2 ms  ✓ WeightInitializers
      -   5980.7 ms  ✓ StaticArrays
      -    877.2 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      -    572.5 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      -    575.4 ms  ✓ Adapt → AdaptStaticArraysExt
      -    587.3 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      -    597.1 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      -    644.7 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -   3333.4 ms  ✓ ForwardDiff
      -    838.1 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      -   3101.5 ms  ✓ KernelAbstractions
      -    644.4 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    709.2 ms  ✓ KernelAbstractions → EnzymeExt
      -   5063.5 ms  ✓ NNlib
      -    803.9 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    907.5 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5642.3 ms  ✓ LuxLib
      -   8885.5 ms  ✓ Lux
      -  86 dependencies successfully precompiled in 32 seconds. 23 already precompiled.
      -Precompiling Turing...
      -    298.3 ms  ✓ IteratorInterfaceExtensions
      -    308.6 ms  ✓ NaturalSort
      -    310.6 ms  ✓ UnPack
      -    323.4 ms  ✓ SimpleUnPack
      -    344.6 ms  ✓ RangeArrays
      -    360.8 ms  ✓ LaTeXStrings
      -    369.6 ms  ✓ ScientificTypesBase
      -    368.0 ms  ✓ ExprTools
      -    366.3 ms  ✓ StatsAPI
      -    416.3 ms  ✓ ChangesOfVariables
      -    444.3 ms  ✓ PositiveFactorizations
      -    544.6 ms  ✓ AbstractFFTs
      -    300.6 ms  ✓ CommonSolve
      -    287.3 ms  ✓ DataValueInterfaces
      -    669.0 ms  ✓ FunctionWrappers
      -    400.3 ms  ✓ InverseFunctions
      -    327.2 ms  ✓ EnumX
      -    410.4 ms  ✓ SuiteSparse_jll
      -    334.9 ms  ✓ RealDot
      -    829.8 ms  ✓ Combinatorics
      -    809.3 ms  ✓ InitialValues
      -    502.5 ms  ✓ IterTools
      -    480.4 ms  ✓ OrderedCollections
      -    553.1 ms  ✓ Serialization
      -    363.5 ms  ✓ Zlib_jll
      -    951.8 ms  ✓ OffsetArrays
      -    337.4 ms  ✓ CompositionsBase
      -    312.6 ms  ✓ PtrArrays
      -    331.7 ms  ✓ DefineSingletons
      -    452.6 ms  ✓ IntervalSets
      -    347.9 ms  ✓ Ratios
      -    511.0 ms  ✓ AbstractTrees
      -    343.3 ms  ✓ InvertedIndices
      -    343.8 ms  ✓ DataAPI
      -    434.8 ms  ✓ DelimitedFiles
      -    949.3 ms  ✓ FillArrays
      -    907.8 ms  ✓ RandomNumbers
      -    447.0 ms  ✓ LRUCache
      -    425.8 ms  ✓ ProgressLogging
      -    390.4 ms  ✓ MappedArrays
      -    376.3 ms  ✓ SciMLStructures
      -    490.4 ms  ✓ LoggingExtras
      -    546.5 ms  ✓ Rmath_jll
      -    324.1 ms  ✓ TableTraits
      -    584.1 ms  ✓ FiniteDiff
      -    597.0 ms  ✓ oneTBB_jll
      -    838.2 ms  ✓ DifferentiationInterface
      -    741.9 ms  ✓ LogDensityProblems
      -    386.1 ms  ✓ StatisticalTraits
      -   1060.4 ms  ✓ Crayons
      -    455.3 ms  ✓ LogExpFunctions → LogExpFunctionsChangesOfVariablesExt
      -   1024.5 ms  ✓ Baselet
      -    345.4 ms  ✓ FunctionWrappersWrappers
      -    396.5 ms  ✓ InverseFunctions → InverseFunctionsDatesExt
      -    433.1 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      -    445.7 ms  ✓ LogExpFunctions → LogExpFunctionsInverseFunctionsExt
      -    394.0 ms  ✓ ChangesOfVariables → ChangesOfVariablesInverseFunctionsExt
      -    412.2 ms  ✓ Parameters
      -   1048.7 ms  ✓ ZygoteRules
      -   1122.5 ms  ✓ HypergeometricFunctions
      -    381.4 ms  ✓ RuntimeGeneratedFunctions
      -   1441.9 ms  ✓ RecipesBase
      -    396.5 ms  ✓ OffsetArrays → OffsetArraysAdaptExt
      -    379.8 ms  ✓ CompositionsBase → CompositionsBaseInverseFunctionsExt
      -    569.5 ms  ✓ FFTW_jll
      -    595.5 ms  ✓ L_BFGS_B_jll
      -    443.5 ms  ✓ AliasTables
      -    356.3 ms  ✓ IntervalSets → IntervalSetsRandomExt
      -    370.3 ms  ✓ IntervalSets → IntervalSetsStatisticsExt
      -   1735.1 ms  ✓ StringManipulation
      -    352.6 ms  ✓ ConstructionBase → ConstructionBaseIntervalSetsExt
      -    377.2 ms  ✓ LeftChildRightSiblingTrees
      -    389.3 ms  ✓ FillArrays → FillArraysStatisticsExt
      -    421.2 ms  ✓ Missings
      -    360.9 ms  ✓ LRUCache → SerializationExt
      -    640.7 ms  ✓ Libtask
      -    412.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      -    400.4 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      -   1255.9 ms  ✓ IntelOpenMP_jll
      -    813.0 ms  ✓ Random123
      -   1668.3 ms  ✓ DataStructures
      -    788.9 ms  ✓ Rmath
      -    589.3 ms  ✓ FiniteDiff → FiniteDiffStaticArraysExt
      -    602.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      -   1700.0 ms  ✓ Distributed
      -    776.0 ms  ✓ Tables
      -    504.2 ms  ✓ LogDensityProblemsAD
      -    807.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      -    499.3 ms  ✓ LBFGSB
      -    553.2 ms  ✓ IntervalSets → IntervalSetsRecipesBaseExt
      -    486.9 ms  ✓ SortingAlgorithms
      -    728.6 ms  ✓ MLJModelInterface
      -    648.4 ms  ✓ TerminalLoggers
      -    728.5 ms  ✓ AxisArrays
      -    641.1 ms  ✓ SharedArrays
      -    968.9 ms  ✓ QuadGK
      -    504.4 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADADTypesExt
      -    770.8 ms  ✓ StructArrays
      -    881.7 ms  ✓ ProgressMeter
      -   1283.9 ms  ✓ MKL_jll
      -   2882.4 ms  ✓ Test
      -    703.4 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADForwardDiffExt
      -   1045.4 ms  ✓ NLSolversBase
      -    483.0 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADDifferentiationInterfaceExt
      -    390.7 ms  ✓ StructArrays → StructArraysAdaptExt
      -    419.5 ms  ✓ StructArrays → StructArraysLinearAlgebraExt
      -    327.9 ms  ✓ InplaceOps
      -    455.6 ms  ✓ ConsoleProgressMonitor
      -   1712.3 ms  ✓ StatsFuns
      -    672.5 ms  ✓ StructArrays → StructArraysStaticArraysExt
      -    713.4 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      -   2139.0 ms  ✓ Accessors
      -    575.4 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      -   3812.8 ms  ✓ SparseArrays
      -    934.4 ms  ✓ ChangesOfVariables → ChangesOfVariablesTestExt
      -   1121.7 ms  ✓ SplittablesBase
      -    461.9 ms  ✓ Accessors → StructArraysExt
      -    660.4 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      -    616.3 ms  ✓ Accessors → TestExt
      -   5051.1 ms  ✓ Tracker
      -    653.3 ms  ✓ Accessors → StaticArraysExt
      -    778.3 ms  ✓ Accessors → LinearAlgebraExt
      -    631.3 ms  ✓ DensityInterface
      -    858.3 ms  ✓ Accessors → IntervalSetsExt
      -   1436.4 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      -    628.2 ms  ✓ Statistics → SparseArraysExt
      -    692.8 ms  ✓ WoodburyMatrices
      -    608.9 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      -    628.3 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      -    583.8 ms  ✓ SuiteSparse
      -   1557.7 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -    615.5 ms  ✓ FiniteDiff → FiniteDiffSparseArraysExt
      -   1840.5 ms  ✓ LineSearches
      -    620.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      -    675.1 ms  ✓ FillArrays → FillArraysSparseArraysExt
      -    958.8 ms  ✓ KernelAbstractions → SparseArraysExt
      -    644.1 ms  ✓ StructArrays → StructArraysSparseArraysExt
      -    743.7 ms  ✓ BangBang
      -   1216.1 ms  ✓ SparseMatrixColorings
      -    649.1 ms  ✓ AxisAlgorithms
      -   1122.4 ms  ✓ ArrayInterface → ArrayInterfaceTrackerExt
      -    588.2 ms  ✓ SparseInverseSubset
      -   1185.1 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADTrackerExt
      -   1205.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      -    853.1 ms  ✓ PDMats
      -   1108.3 ms  ✓ NamedArrays
      -    497.9 ms  ✓ BangBang → BangBangChainRulesCoreExt
      -    483.8 ms  ✓ BangBang → BangBangStructArraysExt
      -    471.7 ms  ✓ BangBang → BangBangTablesExt
      -    674.3 ms  ✓ BangBang → BangBangStaticArraysExt
      -   1377.9 ms  ✓ SymbolicIndexingInterface
      -    845.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseMatrixColoringsExt
      -    897.5 ms  ✓ MicroCollections
      -   1776.8 ms  ✓ SciMLOperators
      -    647.4 ms  ✓ FillArrays → FillArraysPDMatsExt
      -    571.1 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      -   2401.9 ms  ✓ StatsBase
      -   4735.1 ms  ✓ FFTW
      -    954.6 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      -   1528.7 ms  ✓ Tracker → TrackerPDMatsExt
      -   3196.9 ms  ✓ Roots
      -   2120.6 ms  ✓ Interpolations
      -    906.3 ms  ✓ NNlib → NNlibFFTWExt
      -    567.6 ms  ✓ Roots → RootsChainRulesCoreExt
      -   2205.4 ms  ✓ RecursiveArrayTools
      -    680.0 ms  ✓ Roots → RootsForwardDiffExt
      -    585.2 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      -   3676.4 ms  ✓ SparseConnectivityTracer
      -    727.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      -    932.7 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      -   2782.9 ms  ✓ Transducers
      -   1341.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      -    673.8 ms  ✓ Transducers → TransducersAdaptExt
      -   1343.1 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerLogExpFunctionsExt
      -   3231.5 ms  ✓ Optim
      -  11966.7 ms  ✓ MLStyle
      -   1474.5 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNaNMathExt
      -   1765.0 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerSpecialFunctionsExt
      -   1829.5 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNNlibExt
      -   1928.3 ms  ✓ AbstractMCMC
      -   5670.3 ms  ✓ ChainRules
      -   4938.6 ms  ✓ Distributions
      -    774.8 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      -   1201.7 ms  ✓ SSMProblems
      -   1409.6 ms  ✓ AbstractPPL
      -   1221.7 ms  ✓ Distributions → DistributionsDensityInterfaceExt
      -   1357.6 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   1363.2 ms  ✓ Distributions → DistributionsTestExt
      -   1479.1 ms  ✓ MCMCDiagnosticTools
      -   2732.1 ms  ✓ AdvancedHMC
      -   1771.3 ms  ✓ EllipticalSliceSampling
      -   1915.6 ms  ✓ KernelDensity
      -   1931.6 ms  ✓ AdvancedPS
      -   1937.9 ms  ✓ AdvancedMH
      -  13717.5 ms  ✓ PrettyTables
      -   3259.4 ms  ✓ Bijectors
      -   1550.6 ms  ✓ AdvancedMH → AdvancedMHStructArraysExt
      -   1635.4 ms  ✓ AdvancedPS → AdvancedPSLibtaskExt
      -   1704.5 ms  ✓ AdvancedMH → AdvancedMHForwardDiffExt
      -   3759.9 ms  ✓ DistributionsAD
      -   7528.4 ms  ✓ Expronicon
      -   1318.1 ms  ✓ Bijectors → BijectorsForwardDiffExt
      -   2942.8 ms  ✓ MCMCChains
      -   1396.7 ms  ✓ DistributionsAD → DistributionsADForwardDiffExt
      -   1492.0 ms  ✓ Bijectors → BijectorsDistributionsADExt
      -   2460.9 ms  ✓ Bijectors → BijectorsTrackerExt
      -   2743.4 ms  ✓ DistributionsAD → DistributionsADTrackerExt
      -   2173.5 ms  ✓ AdvancedHMC → AdvancedHMCMCMCChainsExt
      -   2180.1 ms  ✓ AdvancedMH → AdvancedMHMCMCChainsExt
      -   1979.3 ms  ✓ AdvancedVI
      -   8087.6 ms  ✓ DynamicPPL
      -   1818.1 ms  ✓ DynamicPPL → DynamicPPLForwardDiffExt
      -   1908.3 ms  ✓ DynamicPPL → DynamicPPLChainRulesCoreExt
      -   2408.4 ms  ✓ DynamicPPL → DynamicPPLMCMCChainsExt
      -   2658.7 ms  ✓ DynamicPPL → DynamicPPLZygoteRulesExt
      -  11009.7 ms  ✓ SciMLBase
      -    958.1 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      -   2133.2 ms  ✓ OptimizationBase
      -    351.6 ms  ✓ OptimizationBase → OptimizationFiniteDiffExt
      -    617.7 ms  ✓ OptimizationBase → OptimizationForwardDiffExt
      -   1968.5 ms  ✓ Optimization
      -  11867.2 ms  ✓ OptimizationOptimJL
      -   5158.5 ms  ✓ Turing
      -   4136.3 ms  ✓ Turing → TuringOptimExt
      -  224 dependencies successfully precompiled in 57 seconds. 82 already precompiled.
      -Precompiling MLDataDevicesSparseArraysExt...
      -    650.3 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      -Precompiling MLDataDevicesFillArraysExt...
      -    428.0 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling MLDataDevicesChainRulesExt...
      -    806.1 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      -  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      -Precompiling BijectorsEnzymeCoreExt...
      -   1281.2 ms  ✓ Bijectors → BijectorsEnzymeCoreExt
      -  1 dependency successfully precompiled in 1 seconds. 79 already precompiled.
      -Precompiling HwlocTrees...
      -    500.2 ms  ✓ Hwloc → HwlocTrees
      -  1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
      -Precompiling StaticArrayInterfaceOffsetArraysExt...
      -    437.5 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      -Precompiling MLDataDevicesTrackerExt...
      -   1157.5 ms  ✓ MLDataDevices → MLDataDevicesTrackerExt
      -  1 dependency successfully precompiled in 1 seconds. 59 already precompiled.
      -Precompiling LuxLibTrackerExt...
      -   1079.1 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      -   3236.3 ms  ✓ LuxLib → LuxLibTrackerExt
      -  2 dependencies successfully precompiled in 3 seconds. 100 already precompiled.
      -Precompiling LuxTrackerExt...
      -   2027.4 ms  ✓ Lux → LuxTrackerExt
      -  1 dependency successfully precompiled in 2 seconds. 114 already precompiled.
      -Precompiling DynamicPPLEnzymeCoreExt...
      -   1751.3 ms  ✓ DynamicPPL → DynamicPPLEnzymeCoreExt
      -  1 dependency successfully precompiled in 2 seconds. 128 already precompiled.
      -Precompiling MLDataDevicesRecursiveArrayToolsExt...
      -    580.6 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
      -  1 dependency successfully precompiled in 1 seconds. 47 already precompiled.
      -Precompiling OptimizationMLDataDevicesExt...
      -   1366.7 ms  ✓ OptimizationBase → OptimizationMLDataDevicesExt
      -  1 dependency successfully precompiled in 2 seconds. 97 already precompiled.
      -Precompiling CairoMakie...
      -    346.1 ms  ✓ SignedDistanceFields
      -    424.0 ms  ✓ PaddedViews
      -    389.4 ms  ✓ StackViews
      -    394.2 ms  ✓ Scratch
      -    401.6 ms  ✓ Showoff
      -    411.6 ms  ✓ Extents
      -    561.8 ms  ✓ Xorg_libXau_jll
      -    568.1 ms  ✓ Graphite2_jll
      -    569.3 ms  ✓ LLVMOpenMP_jll
      -    574.2 ms  ✓ Libmount_jll
      -    597.0 ms  ✓ OpenSSL_jll
      -    597.2 ms  ✓ Bzip2_jll
      -    568.4 ms  ✓ libpng_jll
      -    548.2 ms  ✓ libfdk_aac_jll
      -    549.5 ms  ✓ Imath_jll
      -    554.2 ms  ✓ Giflib_jll
      -    567.1 ms  ✓ LERC_jll
      -    590.7 ms  ✓ LAME_jll
      -   1043.7 ms  ✓ SimpleTraits
      -    567.2 ms  ✓ EarCut_jll
      -    576.2 ms  ✓ CRlibm_jll
      -    574.5 ms  ✓ Ogg_jll
      -    576.5 ms  ✓ x265_jll
      -    606.5 ms  ✓ XZ_jll
      -    629.7 ms  ✓ JpegTurbo_jll
      -    565.6 ms  ✓ Xorg_libXdmcp_jll
      -    578.7 ms  ✓ x264_jll
      -    593.2 ms  ✓ libaom_jll
      -    561.6 ms  ✓ LZO_jll
      -    583.0 ms  ✓ Expat_jll
      -    587.3 ms  ✓ Zstd_jll
      -    489.3 ms  ✓ Xorg_xtrans_jll
      -   1577.5 ms  ✓ UnicodeFun
      -    561.9 ms  ✓ Opus_jll
      -    493.8 ms  ✓ Xorg_libpthread_stubs_jll
      -    550.5 ms  ✓ Libffi_jll
      -    555.9 ms  ✓ Libgpg_error_jll
      -    600.9 ms  ✓ Libiconv_jll
      -    575.0 ms  ✓ isoband_jll
      -   1892.0 ms  ✓ FixedPointNumbers
      -    376.6 ms  ✓ RelocatableFolders
      -    443.9 ms  ✓ MosaicViews
      -    568.0 ms  ✓ Libuuid_jll
      -    586.2 ms  ✓ FriBidi_jll
      -    612.7 ms  ✓ Pixman_jll
      -    607.5 ms  ✓ FreeType2_jll
      -    643.2 ms  ✓ libvorbis_jll
      -    707.4 ms  ✓ OpenEXR_jll
      -    419.2 ms  ✓ Isoband
      -    610.5 ms  ✓ libsixel_jll
      -   1008.9 ms  ✓ FilePathsBase
      -    646.3 ms  ✓ Libtiff_jll
      -    611.6 ms  ✓ Libgcrypt_jll
      -    386.5 ms  ✓ Ratios → RatiosFixedPointNumbersExt
      -   1102.2 ms  ✓ GeoInterface
      -    706.5 ms  ✓ XML2_jll
      -    496.2 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      -    751.5 ms  ✓ Fontconfig_jll
      -    722.6 ms  ✓ FilePaths
      -    974.0 ms  ✓ FreeType
      -    648.4 ms  ✓ XSLT_jll
      -    737.1 ms  ✓ Gettext_jll
      -   1178.2 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -   2515.9 ms  ✓ IntervalArithmetic
      -    756.7 ms  ✓ Glib_jll
      -   2178.2 ms  ✓ ColorTypes
      -   1105.7 ms  ✓ Xorg_libxcb_jll
      -    485.8 ms  ✓ IntervalArithmetic → IntervalArithmeticIntervalSetsExt
      -   3187.4 ms  ✓ PkgVersion
      -   3356.3 ms  ✓ FileIO
      -    619.0 ms  ✓ Xorg_libX11_jll
      -    582.1 ms  ✓ Xorg_libXrender_jll
      -    583.6 ms  ✓ Xorg_libXext_jll
      -   1733.1 ms  ✓ ColorVectorSpace
      -   1460.0 ms  ✓ QOI
      -    713.2 ms  ✓ Libglvnd_jll
      -    746.8 ms  ✓ Cairo_jll
      -    694.6 ms  ✓ ColorVectorSpace → SpecialFunctionsExt
      -    726.9 ms  ✓ HarfBuzz_jll
      -    764.7 ms  ✓ libwebp_jll
      -   4594.7 ms  ✓ GeometryBasics
      -   3577.6 ms  ✓ ExactPredicates
      -    701.1 ms  ✓ libass_jll
      -    739.5 ms  ✓ Pango_jll
      -   6456.8 ms  ✓ SIMD
      -   1037.6 ms  ✓ Packing
      -   3970.8 ms  ✓ Colors
      -   1281.3 ms  ✓ ShaderAbstractions
      -    912.6 ms  ✓ FFMPEG_jll
      -    548.3 ms  ✓ Graphics
      -    568.5 ms  ✓ Animations
      -   1174.1 ms  ✓ ColorBrewer
      -   1529.2 ms  ✓ OpenEXR
      -   1819.0 ms  ✓ FreeTypeAbstraction
      -   1418.2 ms  ✓ Cairo
      -   3473.9 ms  ✓ MakieCore
      -   3316.1 ms  ✓ ColorSchemes
      -   4876.0 ms  ✓ GridLayoutBase
      -   5146.7 ms  ✓ DelaunayTriangulation
      -  15078.1 ms  ✓ Unitful
      -    543.7 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      -    547.9 ms  ✓ Unitful → InverseFunctionsUnitfulExt
      -   8162.9 ms  ✓ Automa
      -   1205.2 ms  ✓ Interpolations → InterpolationsUnitfulExt
      -   7868.3 ms  ✓ PlotUtils
      -  13993.4 ms  ✓ ImageCore
      -   1903.7 ms  ✓ ImageBase
      -   2553.6 ms  ✓ WebP
      -   3189.4 ms  ✓ PNGFiles
      -   3444.6 ms  ✓ JpegTurbo
      -   1909.7 ms  ✓ ImageAxes
      -   4182.7 ms  ✓ Sixel
      -  10561.3 ms  ✓ MathTeXEngine
      -   1120.1 ms  ✓ ImageMetadata
      -   1885.7 ms  ✓ Netpbm
      -  43255.8 ms  ✓ TiffImages
      -   1183.9 ms  ✓ ImageIO
      - 106062.6 ms  ✓ Makie
      -  72508.2 ms  ✓ CairoMakie
      -  119 dependencies successfully precompiled in 231 seconds. 151 already precompiled.
      -Precompiling SparseMatrixColoringsColorsExt...
      -    874.9 ms  ✓ SparseMatrixColorings → SparseMatrixColoringsColorsExt
      -  1 dependency successfully precompiled in 1 seconds. 29 already precompiled.
      -Precompiling UnitfulExt...
      -    582.9 ms  ✓ Accessors → UnitfulExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling IntervalArithmeticForwardDiffExt...
      -    455.6 ms  ✓ IntervalArithmetic → IntervalArithmeticDiffRulesExt
      -    640.7 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
      -  2 dependencies successfully precompiled in 1 seconds. 42 already precompiled.
      -Precompiling IntervalArithmeticRecipesBaseExt...
      -    763.1 ms  ✓ IntervalArithmetic → IntervalArithmeticRecipesBaseExt
      -  1 dependency successfully precompiled in 1 seconds. 31 already precompiled.
      -Precompiling SciMLBaseMakieExt...
      -   9177.8 ms  ✓ SciMLBase → SciMLBaseMakieExt
      -  1 dependency successfully precompiled in 10 seconds. 303 already precompiled.
      -[ Info: [Turing]: progress logging is enabled globally
      -[ Info: [AdvancedVI]: global PROGRESS is set as true

      Generating data

      Our goal here is to use a Bayesian neural network to classify points in an artificial dataset. The code below generates data points arranged in a box-like pattern and displays a graph of the dataset we'll be working with.

      julia
      # Number of points to generate
      -N = 80
      -M = round(Int, N / 4)
      -rng = Random.default_rng()
      -Random.seed!(rng, 1234)
      -
      -# Generate artificial data
      -x1s = rand(rng, Float32, M) * 4.5f0;
      -x2s = rand(rng, Float32, M) * 4.5f0;
      -xt1s = Array([[x1s[i] + 0.5f0; x2s[i] + 0.5f0] for i in 1:M])
      -x1s = rand(rng, Float32, M) * 4.5f0;
      -x2s = rand(rng, Float32, M) * 4.5f0;
      -append!(xt1s, Array([[x1s[i] - 5.0f0; x2s[i] - 5.0f0] for i in 1:M]))
      -
      -x1s = rand(rng, Float32, M) * 4.5f0;
      -x2s = rand(rng, Float32, M) * 4.5f0;
      -xt0s = Array([[x1s[i] + 0.5f0; x2s[i] - 5.0f0] for i in 1:M])
      -x1s = rand(rng, Float32, M) * 4.5f0;
      -x2s = rand(rng, Float32, M) * 4.5f0;
      -append!(xt0s, Array([[x1s[i] - 5.0f0; x2s[i] + 0.5f0] for i in 1:M]))
      -
      -# Store all the data for later
      -xs = [xt1s; xt0s]
      -ts = [ones(2 * M); zeros(2 * M)]
      -
      -# Plot data points
      -
      -function plot_data()
      -    x1 = first.(xt1s)
      -    y1 = last.(xt1s)
      -    x2 = first.(xt0s)
      -    y2 = last.(xt0s)
      -
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="x", ylabel="y")
      -
      -    scatter!(ax, x1, y1; markersize=16, color=:red, strokecolor=:black, strokewidth=2)
      -    scatter!(ax, x2, y2; markersize=16, color=:blue, strokecolor=:black, strokewidth=2)
      -
      -    return fig
      -end
      -
      -plot_data()

      Building the Neural Network

The next step is to define a feedforward neural network where we express our parameters as distributions, rather than as single points as in traditional neural networks. For this we will use Dense to define linear layers and compose them via Chain, both of which are neural network primitives from Lux. The network nn we will create will have two hidden layers with tanh activations and one output layer with sigmoid activation, as shown below.

The nn instance acts as a function: it takes data, parameters, and the current state as inputs and outputs predictions. We will define distributions on the neural network parameters.

      julia
      # Construct a neural network using Lux
      -nn = Chain(Dense(2 => 3, tanh), Dense(3 => 2, tanh), Dense(2 => 1, sigmoid))
      -
      -# Initialize the model weights and state
      -ps, st = Lux.setup(rng, nn)
      -
      -Lux.parameterlength(nn) # number of parameters in NN
      20

The probabilistic model specification below creates a parameters variable consisting of IID normal variables. The parameters represent all parameters of our neural network (weights and biases).

      julia
      # Create a regularization term and a Gaussian prior variance term.
      -alpha = 0.09
      -sig = sqrt(1.0 / alpha)
      3.3333333333333335

Construct a named tuple from a sampled parameter vector. We could also use ComponentArrays here and simply broadcast to avoid this step, but let's do it this way to avoid extra dependencies.

      julia
      function vector_to_parameters(ps_new::AbstractVector, ps::NamedTuple)
      -    @assert length(ps_new) == Lux.parameterlength(ps)
      -    i = 1
      -    function get_ps(x)
      -        z = reshape(view(ps_new, i:(i + length(x) - 1)), size(x))
      -        i += length(x)
      -        return z
      -    end
      -    return fmap(get_ps, ps)
      -end
      vector_to_parameters (generic function with 1 method)
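
To see the mapping in action, here is a small sketch using the vector_to_parameters function just defined. The toy parameters ps_demo below are hypothetical and merely stand in for the ps returned by Lux.setup above:

julia
# Hypothetical toy parameters: one dense layer with a 3×2 weight and a length-3 bias
ps_demo = (layer_1=(weight=zeros(Float32, 3, 2), bias=zeros(Float32, 3)),)
flat = Float32.(1:9)                       # 9 == Lux.parameterlength(ps_demo)
ps_new = vector_to_parameters(flat, ps_demo)
size(ps_new.layer_1.weight)                # (3, 2), filled column-major from flat[1:6]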

When interfacing with external libraries, it is often desirable to use StatefulLuxLayer to handle the neural network state automatically.

      julia
      const model = StatefulLuxLayer{true}(nn, nothing, st)
      -
      -# Specify the probabilistic model.
      -@model function bayes_nn(xs, ts)
      -    # Sample the parameters
      -    nparameters = Lux.parameterlength(nn)
      -    parameters ~ MvNormal(zeros(nparameters), Diagonal(abs2.(sig .* ones(nparameters))))
      -
      -    # Forward NN to make predictions
      -    preds = Lux.apply(model, xs, vector_to_parameters(parameters, ps))
      -
      -    # Observe each prediction.
      -    for i in eachindex(ts)
      -        ts[i] ~ Bernoulli(preds[i])
      -    end
      -end
      bayes_nn (generic function with 2 methods)

      Inference can now be performed by calling sample. We use the HMC sampler here.

      julia
      # Perform inference.
      -N = 5000
      -ch = sample(bayes_nn(reduce(hcat, xs), ts), HMC(0.05, 4; adtype=AutoTracker()), N)
      Chains MCMC chain (5000×30×1 Array{Float64, 3}):
      -
      -Iterations        = 1:1:5000
      -Number of chains  = 1
      -Samples per chain = 5000
      -Wall duration     = 30.89 seconds
      -Compute duration  = 30.89 seconds
      -parameters        = parameters[1], parameters[2], parameters[3], parameters[4], parameters[5], parameters[6], parameters[7], parameters[8], parameters[9], parameters[10], parameters[11], parameters[12], parameters[13], parameters[14], parameters[15], parameters[16], parameters[17], parameters[18], parameters[19], parameters[20]
      -internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, numerical_error, step_size, nom_step_size
      -
      -Summary Statistics
      -      parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   ess_per_sec
      -          Symbol   Float64   Float64   Float64    Float64    Float64   Float64       Float64
      -
      -   parameters[1]    5.8536    2.5579    0.6449    16.6210    21.2567    1.2169        0.5381
      -   parameters[2]    0.1106    0.3642    0.0467    76.1493    35.8893    1.0235        2.4651
      -   parameters[3]    4.1685    2.2970    0.6025    15.9725    62.4537    1.0480        0.5171
      -   parameters[4]    1.0580    1.9179    0.4441    22.3066    51.3818    1.0513        0.7221
      -   parameters[5]    4.7925    2.0622    0.5484    15.4001    28.2539    1.1175        0.4985
      -   parameters[6]    0.7155    1.3734    0.2603    28.7492    59.2257    1.0269        0.9307
      -   parameters[7]    0.4981    2.7530    0.7495    14.5593    22.0260    1.2506        0.4713
      -   parameters[8]    0.4568    1.1324    0.2031    31.9424    38.7102    1.0447        1.0340
      -   parameters[9]   -1.0215    2.6186    0.7268    14.2896    22.8493    1.2278        0.4626
      -  parameters[10]    2.1324    1.6319    0.4231    15.0454    43.2111    1.3708        0.4870
      -  parameters[11]   -2.0262    1.8130    0.4727    15.0003    23.5212    1.2630        0.4856
      -  parameters[12]   -4.5525    1.9168    0.4399    18.6812    29.9668    1.0581        0.6047
      -  parameters[13]    3.7207    1.3736    0.2889    22.9673    55.7445    1.0128        0.7435
      -  parameters[14]    2.5799    1.7626    0.4405    17.7089    38.8364    1.1358        0.5733
      -  parameters[15]   -1.3181    1.9554    0.5213    14.6312    22.0160    1.1793        0.4736
      -  parameters[16]   -2.9322    1.2308    0.2334    28.3970   130.8667    1.0216        0.9193
      -  parameters[17]   -2.4957    2.7976    0.7745    16.2068    20.1562    1.0692        0.5246
      -  parameters[18]   -5.0880    1.1401    0.1828    39.8971    52.4786    1.1085        1.2915
      -  parameters[19]   -4.7674    2.0627    0.5354    21.4562    18.3886    1.0764        0.6946
      -  parameters[20]   -4.7466    1.2214    0.2043    38.5170    32.7162    1.0004        1.2469
      -
      -Quantiles
      -      parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      -          Symbol   Float64   Float64   Float64   Float64   Float64
      -
      -   parameters[1]    0.9164    4.2536    5.9940    7.2512   12.0283
      -   parameters[2]   -0.5080   -0.1044    0.0855    0.2984    1.0043
      -   parameters[3]    0.3276    2.1438    4.2390    6.1737    7.8532
      -   parameters[4]   -1.4579   -0.1269    0.4550    1.6893    5.8331
      -   parameters[5]    1.4611    3.3711    4.4965    5.6720    9.3282
      -   parameters[6]   -1.2114   -0.1218    0.4172    1.2724    4.1938
      -   parameters[7]   -6.0297   -0.5712    0.5929    2.1686    5.8786
      -   parameters[8]   -1.8791   -0.2492    0.4862    1.1814    2.9032
      -   parameters[9]   -6.7656   -2.6609   -0.4230    0.9269    2.8021
      -  parameters[10]   -1.2108    1.0782    2.0899    3.3048    5.0428
      -  parameters[11]   -6.1454   -3.0731   -2.0592   -1.0526    1.8166
      -  parameters[12]   -8.8873   -5.8079   -4.2395   -3.2409   -1.2353
      -  parameters[13]    1.2909    2.6693    3.7502    4.6268    6.7316
      -  parameters[14]   -0.2741    1.2807    2.2801    3.5679    6.4876
      -  parameters[15]   -4.7115   -2.6584   -1.4956   -0.2644    3.3498
      -  parameters[16]   -5.4427   -3.7860   -2.8946   -1.9382   -0.8417
      -  parameters[17]   -6.4221   -4.0549   -2.9178   -1.7934    5.5835
      -  parameters[18]   -7.5413   -5.8069   -5.0388   -4.3025   -3.0121
      -  parameters[19]   -7.2611   -5.9449   -5.2768   -4.3663    2.1958
      -  parameters[20]   -7.0130   -5.5204   -4.8727   -3.9813   -1.9280

      Now we extract the parameter samples from the sampled chain as θ (this is of size 5000 x 20 where 5000 is the number of iterations and 20 is the number of parameters). We'll use these primarily to determine how good our model's classifier is.

      julia
      # Extract all weight and bias parameters.
      -θ = MCMCChains.group(ch, :parameters).value;

      Prediction Visualization

      julia
      # A helper to run the nn through data \`x\` using parameters \`θ\`
      -nn_forward(x, θ) = model(x, vector_to_parameters(θ, ps))
      -
      -# Plot the data we have.
      -fig = plot_data()
      -
      -# Find the index that provided the highest log posterior in the chain.
      -_, i = findmax(ch[:lp])
      -
      -# Extract the max row value from i.
      -i = i.I[1]
      -
      -# Plot the posterior distribution with a contour plot
      -x1_range = collect(range(-6; stop=6, length=25))
      -x2_range = collect(range(-6; stop=6, length=25))
      -Z = [nn_forward([x1, x2], θ[i, :])[1] for x1 in x1_range, x2 in x2_range]
      -contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
      -fig

      The contour plot above shows that the MAP method is not too bad at classifying our data. Now we can visualize our predictions.

      `,33)),s("mjx-container",t,[(A(),i("svg",E,a[0]||(a[0]=[n('',1)]))),a[1]||(a[1]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"},"~")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("msub",null,[s("mo",{"data-mjx-texclass":"OP"},"∫"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ")])]),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"},"~")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"θ"),s("mo",{stretchy:"false"},")"),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mi",null,"θ"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"≈"),s("munder",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ"),s("mo",null,"∼"),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mi",null,"θ"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")")])]),s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ")])]),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"}
,"~")])]),s("mo",{stretchy:"false"},")")])],-1))]),a[3]||(a[3]=n(`

      The nn_predict function takes the average predicted value from a network parameterized by weights drawn from the MCMC chain.

      julia
      # Return the average predicted value across multiple weights.
      -nn_predict(x, θ, num) = mean([first(nn_forward(x, view(θ, i, :))) for i in 1:10:num])
      nn_predict (generic function with 1 method)

Next, we use the nn_predict function to predict the value at a sample of points where the x1 and x2 coordinates range between -6 and 6. As we can see below, we still have a satisfactory fit to our data, and more importantly, we can see much more clearly where the neural network is uncertain about its predictions: the regions between cluster boundaries.

      Plot the average prediction.

      julia
      fig = plot_data()
      -
      -n_end = 1500
      -x1_range = collect(range(-6; stop=6, length=25))
      -x2_range = collect(range(-6; stop=6, length=25))
      -Z = [nn_predict([x1, x2], θ, n_end)[1] for x1 in x1_range, x2 in x2_range]
      -contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
      -fig

If we are interested in how the predictive power of our Bayesian neural network evolved over the course of sampling, the following animation shows the contour plot generated from the network weights in samples 1 to 5,000.

      julia
      fig = plot_data()
      -Z = [first(nn_forward([x1, x2], θ[1, :])) for x1 in x1_range, x2 in x2_range]
      -c = contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
      -record(fig, "results.gif", 1:250:size(θ, 1)) do i
      -    fig.current_axis[].title = "Iteration: $i"
      -    Z = [first(nn_forward([x1, x2], θ[i, :])) for x1 in x1_range, x2 in x2_range]
      -    c[3] = Z
      -    return fig
      -end
      "results.gif"

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 128 × AMD EPYC 7502 32-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 128 default, 0 interactive, 64 GC (on 128 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 128
      -  JULIA_DEPOT_PATH = /cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 128
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,16))])}const o=p(l,[["render",h]]);export{Q as __pageData,o as default}; diff --git a/dev/assets/tutorials_intermediate_2_BayesianNN.md.CrjRlGMZ.lean.js b/dev/assets/tutorials_intermediate_2_BayesianNN.md.CrjRlGMZ.lean.js deleted file mode 100644 index 1bb3358a71..0000000000 --- a/dev/assets/tutorials_intermediate_2_BayesianNN.md.CrjRlGMZ.lean.js +++ /dev/null @@ -1,693 +0,0 @@ -import{_ as p,c as i,a2 as n,j as s,o as A}from"./chunks/framework.I-x9Gl6h.js";const e="/dev/assets/results.CFQJNj5o.gif",Q=JSON.parse('{"title":"Bayesian Neural Network","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/2_BayesianNN.md","filePath":"tutorials/intermediate/2_BayesianNN.md","lastUpdated":null}'),l={name:"tutorials/intermediate/2_BayesianNN.md"},t={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-3.222ex"},xmlns:"http://www.w3.org/2000/svg",width:"46.264ex",height:"6.301ex",role:"img",focusable:"false",viewBox:"0 -1361 20448.8 2785.1","aria-hidden":"true"};function h(r,a,g,k,d,C){return A(),i("div",null,[a[2]||(a[2]=n(`

      Bayesian Neural Network

We borrow this tutorial from the official Turing Docs. We will show how the explicit parameterization of Lux enables first-class composability with packages that expect flattened parameter vectors.

      Note: The tutorial in the official Turing docs is now using Lux instead of Flux.

We will use Turing.jl with Lux.jl to implement a classification algorithm. Let's start by importing the relevant libraries.

      julia
      # Import libraries
      -
      -using Lux, Turing, CairoMakie, Random, Tracker, Functors, LinearAlgebra
      -
      -# Sampling progress
      -Turing.setprogress!(true);
      Precompiling Lux...
      -    303.5 ms  ✓ Reexport
      -    395.0 ms  ✓ ConcreteStructs
      -    305.7 ms  ✓ SIMDTypes
      -    328.3 ms  ✓ IfElse
      -    340.6 ms  ✓ Future
      -    362.3 ms  ✓ CEnum
      -    366.1 ms  ✓ OpenLibm_jll
      -    366.1 ms  ✓ ArgCheck
      -    374.4 ms  ✓ ManualMemory
      -    448.0 ms  ✓ CompilerSupportLibraries_jll
      -    500.9 ms  ✓ Statistics
      -    538.3 ms  ✓ EnzymeCore
      -    558.0 ms  ✓ ADTypes
      -    444.0 ms  ✓ FastClosures
      -    441.7 ms  ✓ StaticArraysCore
      -    416.6 ms  ✓ NaNMath
      -    473.7 ms  ✓ ConstructionBase
      -    469.3 ms  ✓ CommonWorldInvalidations
      -    470.6 ms  ✓ Adapt
      -    858.7 ms  ✓ IrrationalConstants
      -    592.6 ms  ✓ OpenSpecFun_jll
      -    354.8 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      -    766.0 ms  ✓ ThreadingUtilities
      -    549.5 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      -    357.2 ms  ✓ ADTypes → ADTypesConstructionBaseExt
      -    359.7 ms  ✓ ConstructionBase → ConstructionBaseLinearAlgebraExt
      -    358.6 ms  ✓ EnzymeCore → AdaptExt
      -    376.0 ms  ✓ DiffResults
      -    436.2 ms  ✓ GPUArraysCore
      -    500.7 ms  ✓ ArrayInterface
      -    559.6 ms  ✓ LogExpFunctions
      -    750.7 ms  ✓ Static
      -    356.5 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      -    343.5 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      -   1718.2 ms  ✓ UnsafeAtomics
      -    384.4 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    583.8 ms  ✓ Functors
      -    381.4 ms  ✓ BitTwiddlingConvenienceFunctions
      -   1978.8 ms  ✓ MacroTools
      -    452.3 ms  ✓ Atomix
      -   2017.7 ms  ✓ Hwloc
      -    943.0 ms  ✓ CPUSummary
      -    766.9 ms  ✓ MLDataDevices
      -    597.3 ms  ✓ CommonSubexpressions
      -   1285.2 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      -   1066.4 ms  ✓ Optimisers
      -   1492.6 ms  ✓ StaticArrayInterface
      -    609.5 ms  ✓ PolyesterWeave
      -    594.2 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      -    397.3 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    395.3 ms  ✓ Optimisers → OptimisersAdaptExt
      -   1318.7 ms  ✓ Setfield
      -   1465.1 ms  ✓ DispatchDoctor
      -    445.8 ms  ✓ CloseOpenIntervals
      -    568.8 ms  ✓ LayoutPointers
      -    387.2 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      -   2415.8 ms  ✓ SpecialFunctions
      -    582.8 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      -    573.6 ms  ✓ DiffRules
      -    885.0 ms  ✓ StrideArraysCore
      -   1111.1 ms  ✓ LuxCore
      -    416.3 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    417.8 ms  ✓ LuxCore → LuxCoreFunctorsExt
      -    422.7 ms  ✓ LuxCore → LuxCoreSetfieldExt
      -    441.5 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      -    545.9 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      -    711.4 ms  ✓ Polyester
      -   1776.5 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      -   2539.2 ms  ✓ WeightInitializers
      -   5980.7 ms  ✓ StaticArrays
      -    877.2 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      -    572.5 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      -    575.4 ms  ✓ Adapt → AdaptStaticArraysExt
      -    587.3 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      -    597.1 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      -    644.7 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -   3333.4 ms  ✓ ForwardDiff
      -    838.1 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      -   3101.5 ms  ✓ KernelAbstractions
      -    644.4 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    709.2 ms  ✓ KernelAbstractions → EnzymeExt
      -   5063.5 ms  ✓ NNlib
      -    803.9 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    907.5 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5642.3 ms  ✓ LuxLib
      -   8885.5 ms  ✓ Lux
      -  86 dependencies successfully precompiled in 32 seconds. 23 already precompiled.
      -Precompiling Turing...
      -    298.3 ms  ✓ IteratorInterfaceExtensions
      -    308.6 ms  ✓ NaturalSort
      -    310.6 ms  ✓ UnPack
      -    323.4 ms  ✓ SimpleUnPack
      -    344.6 ms  ✓ RangeArrays
      -    360.8 ms  ✓ LaTeXStrings
      -    369.6 ms  ✓ ScientificTypesBase
      -    368.0 ms  ✓ ExprTools
      -    366.3 ms  ✓ StatsAPI
      -    416.3 ms  ✓ ChangesOfVariables
      -    444.3 ms  ✓ PositiveFactorizations
      -    544.6 ms  ✓ AbstractFFTs
      -    300.6 ms  ✓ CommonSolve
      -    287.3 ms  ✓ DataValueInterfaces
      -    669.0 ms  ✓ FunctionWrappers
      -    400.3 ms  ✓ InverseFunctions
      -    327.2 ms  ✓ EnumX
      -    410.4 ms  ✓ SuiteSparse_jll
      -    334.9 ms  ✓ RealDot
      -    829.8 ms  ✓ Combinatorics
      -    809.3 ms  ✓ InitialValues
      -    502.5 ms  ✓ IterTools
      -    480.4 ms  ✓ OrderedCollections
      -    553.1 ms  ✓ Serialization
      -    363.5 ms  ✓ Zlib_jll
      -    951.8 ms  ✓ OffsetArrays
      -    337.4 ms  ✓ CompositionsBase
      -    312.6 ms  ✓ PtrArrays
      -    331.7 ms  ✓ DefineSingletons
      -    452.6 ms  ✓ IntervalSets
      -    347.9 ms  ✓ Ratios
      -    511.0 ms  ✓ AbstractTrees
      -    343.3 ms  ✓ InvertedIndices
      -    343.8 ms  ✓ DataAPI
      -    434.8 ms  ✓ DelimitedFiles
      -    949.3 ms  ✓ FillArrays
      -    907.8 ms  ✓ RandomNumbers
      -    447.0 ms  ✓ LRUCache
      -    425.8 ms  ✓ ProgressLogging
      -    390.4 ms  ✓ MappedArrays
      -    376.3 ms  ✓ SciMLStructures
      -    490.4 ms  ✓ LoggingExtras
      -    546.5 ms  ✓ Rmath_jll
      -    324.1 ms  ✓ TableTraits
      -    584.1 ms  ✓ FiniteDiff
      -    597.0 ms  ✓ oneTBB_jll
      -    838.2 ms  ✓ DifferentiationInterface
      -    741.9 ms  ✓ LogDensityProblems
      -    386.1 ms  ✓ StatisticalTraits
      -   1060.4 ms  ✓ Crayons
      -    455.3 ms  ✓ LogExpFunctions → LogExpFunctionsChangesOfVariablesExt
      -   1024.5 ms  ✓ Baselet
      -    345.4 ms  ✓ FunctionWrappersWrappers
      -    396.5 ms  ✓ InverseFunctions → InverseFunctionsDatesExt
      -    433.1 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      -    445.7 ms  ✓ LogExpFunctions → LogExpFunctionsInverseFunctionsExt
      -    394.0 ms  ✓ ChangesOfVariables → ChangesOfVariablesInverseFunctionsExt
      -    412.2 ms  ✓ Parameters
      -   1048.7 ms  ✓ ZygoteRules
      -   1122.5 ms  ✓ HypergeometricFunctions
      -    381.4 ms  ✓ RuntimeGeneratedFunctions
      -   1441.9 ms  ✓ RecipesBase
      -    396.5 ms  ✓ OffsetArrays → OffsetArraysAdaptExt
      -    379.8 ms  ✓ CompositionsBase → CompositionsBaseInverseFunctionsExt
      -    569.5 ms  ✓ FFTW_jll
      -    595.5 ms  ✓ L_BFGS_B_jll
      -    443.5 ms  ✓ AliasTables
      -    356.3 ms  ✓ IntervalSets → IntervalSetsRandomExt
      -    370.3 ms  ✓ IntervalSets → IntervalSetsStatisticsExt
      -   1735.1 ms  ✓ StringManipulation
      -    352.6 ms  ✓ ConstructionBase → ConstructionBaseIntervalSetsExt
      -    377.2 ms  ✓ LeftChildRightSiblingTrees
      -    389.3 ms  ✓ FillArrays → FillArraysStatisticsExt
      -    421.2 ms  ✓ Missings
      -    360.9 ms  ✓ LRUCache → SerializationExt
      -    640.7 ms  ✓ Libtask
      -    412.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      -    400.4 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      -   1255.9 ms  ✓ IntelOpenMP_jll
      -    813.0 ms  ✓ Random123
      -   1668.3 ms  ✓ DataStructures
      -    788.9 ms  ✓ Rmath
      -    589.3 ms  ✓ FiniteDiff → FiniteDiffStaticArraysExt
      -    602.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      -   1700.0 ms  ✓ Distributed
      -    776.0 ms  ✓ Tables
      -    504.2 ms  ✓ LogDensityProblemsAD
      -    807.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      -    499.3 ms  ✓ LBFGSB
      -    553.2 ms  ✓ IntervalSets → IntervalSetsRecipesBaseExt
      -    486.9 ms  ✓ SortingAlgorithms
      -    728.6 ms  ✓ MLJModelInterface
      -    648.4 ms  ✓ TerminalLoggers
      -    728.5 ms  ✓ AxisArrays
      -    641.1 ms  ✓ SharedArrays
      -    968.9 ms  ✓ QuadGK
      -    504.4 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADADTypesExt
      -    770.8 ms  ✓ StructArrays
      -    881.7 ms  ✓ ProgressMeter
      -   1283.9 ms  ✓ MKL_jll
      -   2882.4 ms  ✓ Test
      -    703.4 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADForwardDiffExt
      -   1045.4 ms  ✓ NLSolversBase
      -    483.0 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADDifferentiationInterfaceExt
      -    390.7 ms  ✓ StructArrays → StructArraysAdaptExt
      -    419.5 ms  ✓ StructArrays → StructArraysLinearAlgebraExt
      -    327.9 ms  ✓ InplaceOps
      -    455.6 ms  ✓ ConsoleProgressMonitor
      -   1712.3 ms  ✓ StatsFuns
      -    672.5 ms  ✓ StructArrays → StructArraysStaticArraysExt
      -    713.4 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      -   2139.0 ms  ✓ Accessors
      -    575.4 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      -   3812.8 ms  ✓ SparseArrays
      -    934.4 ms  ✓ ChangesOfVariables → ChangesOfVariablesTestExt
      -   1121.7 ms  ✓ SplittablesBase
      -    461.9 ms  ✓ Accessors → StructArraysExt
      -    660.4 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      -    616.3 ms  ✓ Accessors → TestExt
      -   5051.1 ms  ✓ Tracker
      -    653.3 ms  ✓ Accessors → StaticArraysExt
      -    778.3 ms  ✓ Accessors → LinearAlgebraExt
      -    631.3 ms  ✓ DensityInterface
      -    858.3 ms  ✓ Accessors → IntervalSetsExt
      -   1436.4 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      -    628.2 ms  ✓ Statistics → SparseArraysExt
      -    692.8 ms  ✓ WoodburyMatrices
      -    608.9 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      -    628.3 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      -    583.8 ms  ✓ SuiteSparse
      -   1557.7 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -    615.5 ms  ✓ FiniteDiff → FiniteDiffSparseArraysExt
      -   1840.5 ms  ✓ LineSearches
      -    620.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      -    675.1 ms  ✓ FillArrays → FillArraysSparseArraysExt
      -    958.8 ms  ✓ KernelAbstractions → SparseArraysExt
      -    644.1 ms  ✓ StructArrays → StructArraysSparseArraysExt
      -    743.7 ms  ✓ BangBang
      -   1216.1 ms  ✓ SparseMatrixColorings
      -    649.1 ms  ✓ AxisAlgorithms
      -   1122.4 ms  ✓ ArrayInterface → ArrayInterfaceTrackerExt
      -    588.2 ms  ✓ SparseInverseSubset
      -   1185.1 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADTrackerExt
      -   1205.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      -    853.1 ms  ✓ PDMats
      -   1108.3 ms  ✓ NamedArrays
      -    497.9 ms  ✓ BangBang → BangBangChainRulesCoreExt
      -    483.8 ms  ✓ BangBang → BangBangStructArraysExt
      -    471.7 ms  ✓ BangBang → BangBangTablesExt
      -    674.3 ms  ✓ BangBang → BangBangStaticArraysExt
      -   1377.9 ms  ✓ SymbolicIndexingInterface
      -    845.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseMatrixColoringsExt
      -    897.5 ms  ✓ MicroCollections
      -   1776.8 ms  ✓ SciMLOperators
      -    647.4 ms  ✓ FillArrays → FillArraysPDMatsExt
      -    571.1 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      -   2401.9 ms  ✓ StatsBase
      -   4735.1 ms  ✓ FFTW
      -    954.6 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      -   1528.7 ms  ✓ Tracker → TrackerPDMatsExt
      -   3196.9 ms  ✓ Roots
      -   2120.6 ms  ✓ Interpolations
      -    906.3 ms  ✓ NNlib → NNlibFFTWExt
      -    567.6 ms  ✓ Roots → RootsChainRulesCoreExt
      -   2205.4 ms  ✓ RecursiveArrayTools
      -    680.0 ms  ✓ Roots → RootsForwardDiffExt
      -    585.2 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      -   3676.4 ms  ✓ SparseConnectivityTracer
      -    727.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      -    932.7 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      -   2782.9 ms  ✓ Transducers
      -   1341.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      -    673.8 ms  ✓ Transducers → TransducersAdaptExt
      -   1343.1 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerLogExpFunctionsExt
      -   3231.5 ms  ✓ Optim
      -  11966.7 ms  ✓ MLStyle
      -   1474.5 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNaNMathExt
      -   1765.0 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerSpecialFunctionsExt
      -   1829.5 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNNlibExt
      -   1928.3 ms  ✓ AbstractMCMC
      -   5670.3 ms  ✓ ChainRules
      -   4938.6 ms  ✓ Distributions
      -    774.8 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      -   1201.7 ms  ✓ SSMProblems
      -   1409.6 ms  ✓ AbstractPPL
      -   1221.7 ms  ✓ Distributions → DistributionsDensityInterfaceExt
      -   1357.6 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   1363.2 ms  ✓ Distributions → DistributionsTestExt
      -   1479.1 ms  ✓ MCMCDiagnosticTools
      -   2732.1 ms  ✓ AdvancedHMC
      -   1771.3 ms  ✓ EllipticalSliceSampling
      -   1915.6 ms  ✓ KernelDensity
      -   1931.6 ms  ✓ AdvancedPS
      -   1937.9 ms  ✓ AdvancedMH
      -  13717.5 ms  ✓ PrettyTables
      -   3259.4 ms  ✓ Bijectors
      -   1550.6 ms  ✓ AdvancedMH → AdvancedMHStructArraysExt
      -   1635.4 ms  ✓ AdvancedPS → AdvancedPSLibtaskExt
      -   1704.5 ms  ✓ AdvancedMH → AdvancedMHForwardDiffExt
      -   3759.9 ms  ✓ DistributionsAD
      -   7528.4 ms  ✓ Expronicon
      -   1318.1 ms  ✓ Bijectors → BijectorsForwardDiffExt
      -   2942.8 ms  ✓ MCMCChains
      -   1396.7 ms  ✓ DistributionsAD → DistributionsADForwardDiffExt
      -   1492.0 ms  ✓ Bijectors → BijectorsDistributionsADExt
      -   2460.9 ms  ✓ Bijectors → BijectorsTrackerExt
      -   2743.4 ms  ✓ DistributionsAD → DistributionsADTrackerExt
      -   2173.5 ms  ✓ AdvancedHMC → AdvancedHMCMCMCChainsExt
      -   2180.1 ms  ✓ AdvancedMH → AdvancedMHMCMCChainsExt
      -   1979.3 ms  ✓ AdvancedVI
      -   8087.6 ms  ✓ DynamicPPL
      -   1818.1 ms  ✓ DynamicPPL → DynamicPPLForwardDiffExt
      -   1908.3 ms  ✓ DynamicPPL → DynamicPPLChainRulesCoreExt
      -   2408.4 ms  ✓ DynamicPPL → DynamicPPLMCMCChainsExt
      -   2658.7 ms  ✓ DynamicPPL → DynamicPPLZygoteRulesExt
      -  11009.7 ms  ✓ SciMLBase
      -    958.1 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      -   2133.2 ms  ✓ OptimizationBase
      -    351.6 ms  ✓ OptimizationBase → OptimizationFiniteDiffExt
      -    617.7 ms  ✓ OptimizationBase → OptimizationForwardDiffExt
      -   1968.5 ms  ✓ Optimization
      -  11867.2 ms  ✓ OptimizationOptimJL
      -   5158.5 ms  ✓ Turing
      -   4136.3 ms  ✓ Turing → TuringOptimExt
      -  224 dependencies successfully precompiled in 57 seconds. 82 already precompiled.
      -Precompiling MLDataDevicesSparseArraysExt...
      -    650.3 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      -Precompiling MLDataDevicesFillArraysExt...
      -    428.0 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling MLDataDevicesChainRulesExt...
      -    806.1 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      -  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      -Precompiling BijectorsEnzymeCoreExt...
      -   1281.2 ms  ✓ Bijectors → BijectorsEnzymeCoreExt
      -  1 dependency successfully precompiled in 1 seconds. 79 already precompiled.
      -Precompiling HwlocTrees...
      -    500.2 ms  ✓ Hwloc → HwlocTrees
      -  1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
      -Precompiling StaticArrayInterfaceOffsetArraysExt...
      -    437.5 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      -Precompiling MLDataDevicesTrackerExt...
      -   1157.5 ms  ✓ MLDataDevices → MLDataDevicesTrackerExt
      -  1 dependency successfully precompiled in 1 seconds. 59 already precompiled.
      -Precompiling LuxLibTrackerExt...
      -   1079.1 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      -   3236.3 ms  ✓ LuxLib → LuxLibTrackerExt
      -  2 dependencies successfully precompiled in 3 seconds. 100 already precompiled.
      -Precompiling LuxTrackerExt...
      -   2027.4 ms  ✓ Lux → LuxTrackerExt
      -  1 dependency successfully precompiled in 2 seconds. 114 already precompiled.
      -Precompiling DynamicPPLEnzymeCoreExt...
      -   1751.3 ms  ✓ DynamicPPL → DynamicPPLEnzymeCoreExt
      -  1 dependency successfully precompiled in 2 seconds. 128 already precompiled.
      -Precompiling MLDataDevicesRecursiveArrayToolsExt...
      -    580.6 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
      -  1 dependency successfully precompiled in 1 seconds. 47 already precompiled.
      -Precompiling OptimizationMLDataDevicesExt...
      -   1366.7 ms  ✓ OptimizationBase → OptimizationMLDataDevicesExt
      -  1 dependency successfully precompiled in 2 seconds. 97 already precompiled.
      -Precompiling CairoMakie...
      -    346.1 ms  ✓ SignedDistanceFields
      -    424.0 ms  ✓ PaddedViews
      -    389.4 ms  ✓ StackViews
      -    394.2 ms  ✓ Scratch
      -    401.6 ms  ✓ Showoff
      -    411.6 ms  ✓ Extents
      -    561.8 ms  ✓ Xorg_libXau_jll
      -    568.1 ms  ✓ Graphite2_jll
      -    569.3 ms  ✓ LLVMOpenMP_jll
      -    574.2 ms  ✓ Libmount_jll
      -    597.0 ms  ✓ OpenSSL_jll
      -    597.2 ms  ✓ Bzip2_jll
      -    568.4 ms  ✓ libpng_jll
      -    548.2 ms  ✓ libfdk_aac_jll
      -    549.5 ms  ✓ Imath_jll
      -    554.2 ms  ✓ Giflib_jll
      -    567.1 ms  ✓ LERC_jll
      -    590.7 ms  ✓ LAME_jll
      -   1043.7 ms  ✓ SimpleTraits
      -    567.2 ms  ✓ EarCut_jll
      -    576.2 ms  ✓ CRlibm_jll
      -    574.5 ms  ✓ Ogg_jll
      -    576.5 ms  ✓ x265_jll
      -    606.5 ms  ✓ XZ_jll
      -    629.7 ms  ✓ JpegTurbo_jll
      -    565.6 ms  ✓ Xorg_libXdmcp_jll
      -    578.7 ms  ✓ x264_jll
      -    593.2 ms  ✓ libaom_jll
      -    561.6 ms  ✓ LZO_jll
      -    583.0 ms  ✓ Expat_jll
      -    587.3 ms  ✓ Zstd_jll
      -    489.3 ms  ✓ Xorg_xtrans_jll
      -   1577.5 ms  ✓ UnicodeFun
      -    561.9 ms  ✓ Opus_jll
      -    493.8 ms  ✓ Xorg_libpthread_stubs_jll
      -    550.5 ms  ✓ Libffi_jll
      -    555.9 ms  ✓ Libgpg_error_jll
      -    600.9 ms  ✓ Libiconv_jll
      -    575.0 ms  ✓ isoband_jll
      -   1892.0 ms  ✓ FixedPointNumbers
      -    376.6 ms  ✓ RelocatableFolders
      -    443.9 ms  ✓ MosaicViews
      -    568.0 ms  ✓ Libuuid_jll
      -    586.2 ms  ✓ FriBidi_jll
      -    612.7 ms  ✓ Pixman_jll
      -    607.5 ms  ✓ FreeType2_jll
      -    643.2 ms  ✓ libvorbis_jll
      -    707.4 ms  ✓ OpenEXR_jll
      -    419.2 ms  ✓ Isoband
      -    610.5 ms  ✓ libsixel_jll
      -   1008.9 ms  ✓ FilePathsBase
      -    646.3 ms  ✓ Libtiff_jll
      -    611.6 ms  ✓ Libgcrypt_jll
      -    386.5 ms  ✓ Ratios → RatiosFixedPointNumbersExt
      -   1102.2 ms  ✓ GeoInterface
      -    706.5 ms  ✓ XML2_jll
      -    496.2 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      -    751.5 ms  ✓ Fontconfig_jll
      -    722.6 ms  ✓ FilePaths
      -    974.0 ms  ✓ FreeType
      -    648.4 ms  ✓ XSLT_jll
      -    737.1 ms  ✓ Gettext_jll
      -   1178.2 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -   2515.9 ms  ✓ IntervalArithmetic
      -    756.7 ms  ✓ Glib_jll
      -   2178.2 ms  ✓ ColorTypes
      -   1105.7 ms  ✓ Xorg_libxcb_jll
      -    485.8 ms  ✓ IntervalArithmetic → IntervalArithmeticIntervalSetsExt
      -   3187.4 ms  ✓ PkgVersion
      -   3356.3 ms  ✓ FileIO
      -    619.0 ms  ✓ Xorg_libX11_jll
      -    582.1 ms  ✓ Xorg_libXrender_jll
      -    583.6 ms  ✓ Xorg_libXext_jll
      -   1733.1 ms  ✓ ColorVectorSpace
      -   1460.0 ms  ✓ QOI
      -    713.2 ms  ✓ Libglvnd_jll
      -    746.8 ms  ✓ Cairo_jll
      -    694.6 ms  ✓ ColorVectorSpace → SpecialFunctionsExt
      -    726.9 ms  ✓ HarfBuzz_jll
      -    764.7 ms  ✓ libwebp_jll
      -   4594.7 ms  ✓ GeometryBasics
      -   3577.6 ms  ✓ ExactPredicates
      -    701.1 ms  ✓ libass_jll
      -    739.5 ms  ✓ Pango_jll
      -   6456.8 ms  ✓ SIMD
      -   1037.6 ms  ✓ Packing
      -   3970.8 ms  ✓ Colors
      -   1281.3 ms  ✓ ShaderAbstractions
      -    912.6 ms  ✓ FFMPEG_jll
      -    548.3 ms  ✓ Graphics
      -    568.5 ms  ✓ Animations
      -   1174.1 ms  ✓ ColorBrewer
      -   1529.2 ms  ✓ OpenEXR
      -   1819.0 ms  ✓ FreeTypeAbstraction
      -   1418.2 ms  ✓ Cairo
      -   3473.9 ms  ✓ MakieCore
      -   3316.1 ms  ✓ ColorSchemes
      -   4876.0 ms  ✓ GridLayoutBase
      -   5146.7 ms  ✓ DelaunayTriangulation
      -  15078.1 ms  ✓ Unitful
      -    543.7 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      -    547.9 ms  ✓ Unitful → InverseFunctionsUnitfulExt
      -   8162.9 ms  ✓ Automa
      -   1205.2 ms  ✓ Interpolations → InterpolationsUnitfulExt
      -   7868.3 ms  ✓ PlotUtils
      -  13993.4 ms  ✓ ImageCore
      -   1903.7 ms  ✓ ImageBase
      -   2553.6 ms  ✓ WebP
      -   3189.4 ms  ✓ PNGFiles
      -   3444.6 ms  ✓ JpegTurbo
      -   1909.7 ms  ✓ ImageAxes
      -   4182.7 ms  ✓ Sixel
      -  10561.3 ms  ✓ MathTeXEngine
      -   1120.1 ms  ✓ ImageMetadata
      -   1885.7 ms  ✓ Netpbm
      -  43255.8 ms  ✓ TiffImages
      -   1183.9 ms  ✓ ImageIO
      - 106062.6 ms  ✓ Makie
      -  72508.2 ms  ✓ CairoMakie
      -  119 dependencies successfully precompiled in 231 seconds. 151 already precompiled.
      -Precompiling SparseMatrixColoringsColorsExt...
      -    874.9 ms  ✓ SparseMatrixColorings → SparseMatrixColoringsColorsExt
      -  1 dependency successfully precompiled in 1 seconds. 29 already precompiled.
      -Precompiling UnitfulExt...
      -    582.9 ms  ✓ Accessors → UnitfulExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling IntervalArithmeticForwardDiffExt...
      -    455.6 ms  ✓ IntervalArithmetic → IntervalArithmeticDiffRulesExt
      -    640.7 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
      -  2 dependencies successfully precompiled in 1 seconds. 42 already precompiled.
      -Precompiling IntervalArithmeticRecipesBaseExt...
      -    763.1 ms  ✓ IntervalArithmetic → IntervalArithmeticRecipesBaseExt
      -  1 dependency successfully precompiled in 1 seconds. 31 already precompiled.
      -Precompiling SciMLBaseMakieExt...
      -   9177.8 ms  ✓ SciMLBase → SciMLBaseMakieExt
      -  1 dependency successfully precompiled in 10 seconds. 303 already precompiled.
      -[ Info: [Turing]: progress logging is enabled globally
      -[ Info: [AdvancedVI]: global PROGRESS is set as true

      Generating data

      Our goal here is to use a Bayesian neural network to classify points in an artificial dataset. The code below generates data points arranged in a box-like pattern and displays a graph of the dataset we'll be working with.

      julia
      # Number of points to generate
      -N = 80
      -M = round(Int, N / 4)
      -rng = Random.default_rng()
      -Random.seed!(rng, 1234)
      -
      -# Generate artificial data
      -x1s = rand(rng, Float32, M) * 4.5f0;
      -x2s = rand(rng, Float32, M) * 4.5f0;
      -xt1s = Array([[x1s[i] + 0.5f0; x2s[i] + 0.5f0] for i in 1:M])
      -x1s = rand(rng, Float32, M) * 4.5f0;
      -x2s = rand(rng, Float32, M) * 4.5f0;
      -append!(xt1s, Array([[x1s[i] - 5.0f0; x2s[i] - 5.0f0] for i in 1:M]))
      -
      -x1s = rand(rng, Float32, M) * 4.5f0;
      -x2s = rand(rng, Float32, M) * 4.5f0;
      -xt0s = Array([[x1s[i] + 0.5f0; x2s[i] - 5.0f0] for i in 1:M])
      -x1s = rand(rng, Float32, M) * 4.5f0;
      -x2s = rand(rng, Float32, M) * 4.5f0;
      -append!(xt0s, Array([[x1s[i] - 5.0f0; x2s[i] + 0.5f0] for i in 1:M]))
      -
      -# Store all the data for later
      -xs = [xt1s; xt0s]
      -ts = [ones(2 * M); zeros(2 * M)]
      -
      -# Plot data points
      -
      -function plot_data()
      -    x1 = first.(xt1s)
      -    y1 = last.(xt1s)
      -    x2 = first.(xt0s)
      -    y2 = last.(xt0s)
      -
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="x", ylabel="y")
      -
      -    scatter!(ax, x1, y1; markersize=16, color=:red, strokecolor=:black, strokewidth=2)
      -    scatter!(ax, x2, y2; markersize=16, color=:blue, strokecolor=:black, strokewidth=2)
      -
      -    return fig
      -end
      -
      -plot_data()

      Building the Neural Network

      The next step is to define a feedforward neural network in which we express our parameters as distributions rather than as single points, as in traditional neural networks. For this we will use Dense to define linear layers and compose them via Chain; both are neural network primitives from Lux. The network nn we create will have two hidden layers with tanh activations and one output layer with sigmoid activation, as shown below.

      The nn instance acts as a function: it takes data, parameters, and the current state as inputs and outputs predictions. We will define distributions over the neural network parameters.

      julia
      # Construct a neural network using Lux
      -nn = Chain(Dense(2 => 3, tanh), Dense(3 => 2, tanh), Dense(2 => 1, sigmoid))
      -
      -# Initialize the model weights and state
      -ps, st = Lux.setup(rng, nn)
      -
      -Lux.parameterlength(nn) # number of parameters in NN
      20

      The probabilistic model specification below creates a parameters variable consisting of IID normal variables. parameters represents all parameters of our neural net (weights and biases).

      julia
      # Create a regularization term and a Gaussian prior variance term.
      -alpha = 0.09
      -sig = sqrt(1.0 / alpha)
      3.3333333333333335

      Construct a named tuple from a sampled parameter vector. We could also use ComponentArrays here and simply broadcast to avoid doing this, but let's do it this way to avoid extra dependencies.

      julia
      function vector_to_parameters(ps_new::AbstractVector, ps::NamedTuple)
      -    @assert length(ps_new) == Lux.parameterlength(ps)
      -    i = 1
      -    function get_ps(x)
      -        z = reshape(view(ps_new, i:(i + length(x) - 1)), size(x))
      -        i += length(x)
      -        return z
      -    end
      -    return fmap(get_ps, ps)
      -end
      vector_to_parameters (generic function with 1 method)
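      The reconstruction logic above can be illustrated with a minimal, dependency-free sketch on a plain NamedTuple; the names ps, flat, and rebuild here are illustrative and not part of the tutorial code:

```julia
# A "sampled" flat vector and a NamedTuple with the target shapes.
ps = (weight = zeros(2, 2), bias = zeros(2))
flat = Float64.(1:6)

# Walk the flat vector, carving out one reshaped view per leaf array.
pos = Ref(1)
function rebuild(x)
    z = reshape(view(flat, pos[]:(pos[] + length(x) - 1)), size(x))
    pos[] += length(x)
    return z
end

ps_new = map(rebuild, ps)  # map over a NamedTuple plays the role of fmap here
ps_new.weight              # 2×2, filled column-major: [1.0 3.0; 2.0 4.0]
```

      Like fmap in the tutorial code, this visits each leaf array in field order and consumes exactly length(x) entries of the flat vector per leaf.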

      To interface with external libraries, it is often desirable to use StatefulLuxLayer to handle the neural network states automatically.

      julia
      const model = StatefulLuxLayer{true}(nn, nothing, st)
      -
      -# Specify the probabilistic model.
      -@model function bayes_nn(xs, ts)
      -    # Sample the parameters
      -    nparameters = Lux.parameterlength(nn)
      -    parameters ~ MvNormal(zeros(nparameters), Diagonal(abs2.(sig .* ones(nparameters))))
      -
      -    # Forward NN to make predictions
      -    preds = Lux.apply(model, xs, vector_to_parameters(parameters, ps))
      -
      -    # Observe each prediction.
      -    for i in eachindex(ts)
      -        ts[i] ~ Bernoulli(preds[i])
      -    end
      -end
      bayes_nn (generic function with 2 methods)

      Inference can now be performed by calling sample. We use the HMC sampler here.

      julia
      # Perform inference.
      -N = 5000
      -ch = sample(bayes_nn(reduce(hcat, xs), ts), HMC(0.05, 4; adtype=AutoTracker()), N)
      Chains MCMC chain (5000×30×1 Array{Float64, 3}):
      -
      -Iterations        = 1:1:5000
      -Number of chains  = 1
      -Samples per chain = 5000
      -Wall duration     = 30.89 seconds
      -Compute duration  = 30.89 seconds
      -parameters        = parameters[1], parameters[2], parameters[3], parameters[4], parameters[5], parameters[6], parameters[7], parameters[8], parameters[9], parameters[10], parameters[11], parameters[12], parameters[13], parameters[14], parameters[15], parameters[16], parameters[17], parameters[18], parameters[19], parameters[20]
      -internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, numerical_error, step_size, nom_step_size
      -
      -Summary Statistics
      -      parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   ess_per_sec
      -          Symbol   Float64   Float64   Float64    Float64    Float64   Float64       Float64
      -
      -   parameters[1]    5.8536    2.5579    0.6449    16.6210    21.2567    1.2169        0.5381
      -   parameters[2]    0.1106    0.3642    0.0467    76.1493    35.8893    1.0235        2.4651
      -   parameters[3]    4.1685    2.2970    0.6025    15.9725    62.4537    1.0480        0.5171
      -   parameters[4]    1.0580    1.9179    0.4441    22.3066    51.3818    1.0513        0.7221
      -   parameters[5]    4.7925    2.0622    0.5484    15.4001    28.2539    1.1175        0.4985
      -   parameters[6]    0.7155    1.3734    0.2603    28.7492    59.2257    1.0269        0.9307
      -   parameters[7]    0.4981    2.7530    0.7495    14.5593    22.0260    1.2506        0.4713
      -   parameters[8]    0.4568    1.1324    0.2031    31.9424    38.7102    1.0447        1.0340
      -   parameters[9]   -1.0215    2.6186    0.7268    14.2896    22.8493    1.2278        0.4626
      -  parameters[10]    2.1324    1.6319    0.4231    15.0454    43.2111    1.3708        0.4870
      -  parameters[11]   -2.0262    1.8130    0.4727    15.0003    23.5212    1.2630        0.4856
      -  parameters[12]   -4.5525    1.9168    0.4399    18.6812    29.9668    1.0581        0.6047
      -  parameters[13]    3.7207    1.3736    0.2889    22.9673    55.7445    1.0128        0.7435
      -  parameters[14]    2.5799    1.7626    0.4405    17.7089    38.8364    1.1358        0.5733
      -  parameters[15]   -1.3181    1.9554    0.5213    14.6312    22.0160    1.1793        0.4736
      -  parameters[16]   -2.9322    1.2308    0.2334    28.3970   130.8667    1.0216        0.9193
      -  parameters[17]   -2.4957    2.7976    0.7745    16.2068    20.1562    1.0692        0.5246
      -  parameters[18]   -5.0880    1.1401    0.1828    39.8971    52.4786    1.1085        1.2915
      -  parameters[19]   -4.7674    2.0627    0.5354    21.4562    18.3886    1.0764        0.6946
      -  parameters[20]   -4.7466    1.2214    0.2043    38.5170    32.7162    1.0004        1.2469
      -
      -Quantiles
      -      parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      -          Symbol   Float64   Float64   Float64   Float64   Float64
      -
      -   parameters[1]    0.9164    4.2536    5.9940    7.2512   12.0283
      -   parameters[2]   -0.5080   -0.1044    0.0855    0.2984    1.0043
      -   parameters[3]    0.3276    2.1438    4.2390    6.1737    7.8532
      -   parameters[4]   -1.4579   -0.1269    0.4550    1.6893    5.8331
      -   parameters[5]    1.4611    3.3711    4.4965    5.6720    9.3282
      -   parameters[6]   -1.2114   -0.1218    0.4172    1.2724    4.1938
      -   parameters[7]   -6.0297   -0.5712    0.5929    2.1686    5.8786
      -   parameters[8]   -1.8791   -0.2492    0.4862    1.1814    2.9032
      -   parameters[9]   -6.7656   -2.6609   -0.4230    0.9269    2.8021
      -  parameters[10]   -1.2108    1.0782    2.0899    3.3048    5.0428
      -  parameters[11]   -6.1454   -3.0731   -2.0592   -1.0526    1.8166
      -  parameters[12]   -8.8873   -5.8079   -4.2395   -3.2409   -1.2353
      -  parameters[13]    1.2909    2.6693    3.7502    4.6268    6.7316
      -  parameters[14]   -0.2741    1.2807    2.2801    3.5679    6.4876
      -  parameters[15]   -4.7115   -2.6584   -1.4956   -0.2644    3.3498
      -  parameters[16]   -5.4427   -3.7860   -2.8946   -1.9382   -0.8417
      -  parameters[17]   -6.4221   -4.0549   -2.9178   -1.7934    5.5835
      -  parameters[18]   -7.5413   -5.8069   -5.0388   -4.3025   -3.0121
      -  parameters[19]   -7.2611   -5.9449   -5.2768   -4.3663    2.1958
      -  parameters[20]   -7.0130   -5.5204   -4.8727   -3.9813   -1.9280

      Now we extract the parameter samples from the sampled chain as θ (this is of size 5000 × 20, where 5000 is the number of iterations and 20 is the number of parameters). We'll use these primarily to determine how good our model's classifier is.

      julia
      # Extract all weight and bias parameters.
      -θ = MCMCChains.group(ch, :parameters).value;

      Prediction Visualization

      julia
      # A helper to run the nn through data \`x\` using parameters \`θ\`
      -nn_forward(x, θ) = model(x, vector_to_parameters(θ, ps))
      -
      -# Plot the data we have.
      -fig = plot_data()
      -
      -# Find the index that provided the highest log posterior in the chain.
      -_, i = findmax(ch[:lp])
      -
      -# Extract the max row value from i.
      -i = i.I[1]
      -
      -# Plot the posterior distribution with a contour plot
      -x1_range = collect(range(-6; stop=6, length=25))
      -x2_range = collect(range(-6; stop=6, length=25))
      -Z = [nn_forward([x1, x2], θ[i, :])[1] for x1 in x1_range, x2 in x2_range]
      -contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
      -fig

      The contour plot above shows that the MAP method is not too bad at classifying our data. Now we can visualize our predictions.

      `,33)),s("mjx-container",t,[(A(),i("svg",E,a[0]||(a[0]=[n('',1)]))),a[1]||(a[1]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"},"~")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("msub",null,[s("mo",{"data-mjx-texclass":"OP"},"∫"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ")])]),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"},"~")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"θ"),s("mo",{stretchy:"false"},")"),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mi",null,"θ"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"≈"),s("munder",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ"),s("mo",null,"∼"),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mi",null,"θ"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")")])]),s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ")])]),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"}
,"~")])]),s("mo",{stretchy:"false"},")")])],-1))]),a[3]||(a[3]=n(`

      The nn_predict function returns the average predicted value across networks parameterized by weights drawn from the MCMC chain.

      julia
      # Return the average predicted value across multiple weights.
      -nn_predict(x, θ, num) = mean([first(nn_forward(x, view(θ, i, :))) for i in 1:10:num])
      nn_predict (generic function with 1 method)
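      The thinning-and-averaging pattern used by nn_predict (take every 10th posterior draw among the first num and average the resulting predictions) can be sketched with a toy stand-in for the forward pass; f here is hypothetical, not part of the tutorial code:

```julia
using Statistics

f(i) = float(i)  # hypothetical stand-in for one posterior forward pass

# Average over every 10th draw among the first `num`, as nn_predict does.
predict(num) = mean(f(i) for i in 1:10:num)

predict(50)  # averages f over i = 1, 11, 21, 31, 41
```

      Thinning the chain this way reduces the cost of the Monte Carlo average while keeping draws that are less autocorrelated.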

      Next, we use the nn_predict function to predict the value at a sample of points where the x1 and x2 coordinates range between -6 and 6. As we can see below, we still have a satisfactory fit to our data, and, more importantly, we can much more easily see where the neural network is uncertain about its predictions: the regions between cluster boundaries.

      Plot the average prediction.

      julia
      fig = plot_data()
      -
      -n_end = 1500
      -x1_range = collect(range(-6; stop=6, length=25))
      -x2_range = collect(range(-6; stop=6, length=25))
      -Z = [nn_predict([x1, x2], θ, n_end)[1] for x1 in x1_range, x2 in x2_range]
      -contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
      -fig

      If we are interested in how the predictive power of our Bayesian neural network evolved between samples, the following graph displays an animation of the contour plot generated from the network weights in samples 1 to 5,000.

      julia
      fig = plot_data()
      -Z = [first(nn_forward([x1, x2], θ[1, :])) for x1 in x1_range, x2 in x2_range]
      -c = contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
      -record(fig, "results.gif", 1:250:size(θ, 1)) do i
      -    fig.current_axis[].title = "Iteration: $i"
      -    Z = [first(nn_forward([x1, x2], θ[i, :])) for x1 in x1_range, x2 in x2_range]
      -    c[3] = Z
      -    return fig
      -end
      "results.gif"

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 128 × AMD EPYC 7502 32-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 128 default, 0 interactive, 64 GC (on 128 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 128
      -  JULIA_DEPOT_PATH = /cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 128
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,16))])}const o=p(l,[["render",h]]);export{Q as __pageData,o as default}; diff --git a/dev/assets/tutorials_intermediate_2_BayesianNN.md.DSd6_ExB.js b/dev/assets/tutorials_intermediate_2_BayesianNN.md.DSd6_ExB.js new file mode 100644 index 0000000000..f5b51d1827 --- /dev/null +++ b/dev/assets/tutorials_intermediate_2_BayesianNN.md.DSd6_ExB.js @@ -0,0 +1,723 @@ +import{_ as p,c as i,a2 as n,j as s,o as A}from"./chunks/framework.BetCMmtc.js";const e="/dev/assets/results.CkivesIs.gif",I=JSON.parse('{"title":"Bayesian Neural Network","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/2_BayesianNN.md","filePath":"tutorials/intermediate/2_BayesianNN.md","lastUpdated":null}'),l={name:"tutorials/intermediate/2_BayesianNN.md"},t={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-3.222ex"},xmlns:"http://www.w3.org/2000/svg",width:"46.264ex",height:"6.301ex",role:"img",focusable:"false",viewBox:"0 -1361 20448.8 2785.1","aria-hidden":"true"};function h(r,a,g,k,d,C){return A(),i("div",null,[a[2]||(a[2]=n(`

      Bayesian Neural Network

      We borrow this tutorial from the official Turing Docs. We will show how the explicit parameterization of Lux enables first-class composability with packages that expect flattened parameter vectors.

      Note: The tutorial in the official Turing docs now uses Lux instead of Flux.

      We will use Turing.jl with Lux.jl to implement a classification algorithm. Let's start by importing the relevant libraries.

      julia
      # Import libraries
      +
      +using Lux, Turing, CairoMakie, Random, Tracker, Functors, LinearAlgebra
      +
      +# Sampling progress
      +Turing.setprogress!(true);
      Precompiling Lux...
      +    322.6 ms  ✓ Reexport
      +    321.7 ms  ✓ SIMDTypes
      +    384.9 ms  ✓ ConcreteStructs
      +    349.2 ms  ✓ Future
      +    366.3 ms  ✓ CEnum
      +    373.7 ms  ✓ OpenLibm_jll
      +    373.7 ms  ✓ ManualMemory
      +    379.1 ms  ✓ ArgCheck
      +    452.5 ms  ✓ CompilerSupportLibraries_jll
      +    458.5 ms  ✓ Requires
      +    516.7 ms  ✓ Statistics
      +    537.8 ms  ✓ EnzymeCore
      +    598.6 ms  ✓ ADTypes
      +    324.3 ms  ✓ IfElse
      +    322.4 ms  ✓ CommonWorldInvalidations
      +    339.3 ms  ✓ FastClosures
      +    380.8 ms  ✓ StaticArraysCore
      +    430.2 ms  ✓ ConstructionBase
      +    856.6 ms  ✓ IrrationalConstants
      +    441.3 ms  ✓ NaNMath
      +    541.8 ms  ✓ Compat
      +    474.3 ms  ✓ JLLWrappers
      +    407.1 ms  ✓ Adapt
      +    368.9 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      +    616.7 ms  ✓ CpuId
      +    618.5 ms  ✓ DocStringExtensions
      +    368.6 ms  ✓ ConstructionBase → ConstructionBaseLinearAlgebraExt
      +    366.7 ms  ✓ ADTypes → ADTypesConstructionBaseExt
      +    388.7 ms  ✓ DiffResults
      +    791.0 ms  ✓ ThreadingUtilities
      +    385.5 ms  ✓ EnzymeCore → AdaptExt
      +    440.3 ms  ✓ Compat → CompatLinearAlgebraExt
      +    762.0 ms  ✓ Static
      +    460.6 ms  ✓ GPUArraysCore
      +    523.4 ms  ✓ ArrayInterface
      +    575.8 ms  ✓ Hwloc_jll
      +    606.6 ms  ✓ OpenSpecFun_jll
      +    577.7 ms  ✓ LogExpFunctions
      +   1705.6 ms  ✓ UnsafeAtomics
      +    351.9 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      +    400.8 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      +    483.0 ms  ✓ BitTwiddlingConvenienceFunctions
      +   1944.0 ms  ✓ MacroTools
      +    678.4 ms  ✓ Functors
      +    466.5 ms  ✓ Atomix
      +   1120.8 ms  ✓ CPUSummary
      +   1230.2 ms  ✓ ChainRulesCore
      +    648.4 ms  ✓ CommonSubexpressions
      +    778.8 ms  ✓ MLDataDevices
      +   1497.7 ms  ✓ StaticArrayInterface
      +    388.8 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      +    388.6 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      +    595.1 ms  ✓ PolyesterWeave
      +   1335.2 ms  ✓ Setfield
      +    629.1 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      +    473.3 ms  ✓ CloseOpenIntervals
      +   1579.7 ms  ✓ DispatchDoctor
      +    579.2 ms  ✓ LayoutPointers
      +   2021.7 ms  ✓ Hwloc
      +   1092.3 ms  ✓ Optimisers
      +   1282.9 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      +    409.8 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      +   2408.6 ms  ✓ SpecialFunctions
      +    402.8 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      +    402.7 ms  ✓ Optimisers → OptimisersAdaptExt
      +    606.5 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      +    966.4 ms  ✓ StrideArraysCore
      +    591.9 ms  ✓ DiffRules
      +   1152.0 ms  ✓ LuxCore
      +    427.9 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      +    430.9 ms  ✓ LuxCore → LuxCoreFunctorsExt
      +    441.5 ms  ✓ LuxCore → LuxCoreSetfieldExt
      +    444.7 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      +    682.8 ms  ✓ Polyester
      +    582.6 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      +   1634.6 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      +   2545.3 ms  ✓ WeightInitializers
      +   6198.5 ms  ✓ StaticArrays
      +    913.3 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      +    570.4 ms  ✓ Adapt → AdaptStaticArraysExt
      +    602.1 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      +    605.5 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      +    620.0 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      +    625.7 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      +   3320.3 ms  ✓ ForwardDiff
      +    845.8 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      +   3112.0 ms  ✓ KernelAbstractions
      +    641.7 ms  ✓ KernelAbstractions → LinearAlgebraExt
      +    693.2 ms  ✓ KernelAbstractions → EnzymeExt
      +   5189.7 ms  ✓ NNlib
      +    804.7 ms  ✓ NNlib → NNlibEnzymeCoreExt
      +    893.4 ms  ✓ NNlib → NNlibForwardDiffExt
      +   5788.3 ms  ✓ LuxLib
      +   9017.6 ms  ✓ Lux
      +  94 dependencies successfully precompiled in 32 seconds. 15 already precompiled.
      +Precompiling Turing...
      +    323.7 ms  ✓ NaturalSort
      +    332.3 ms  ✓ IteratorInterfaceExtensions
      +    349.1 ms  ✓ SimpleUnPack
      +    361.0 ms  ✓ ScientificTypesBase
      +    351.8 ms  ✓ UnPack
      +    368.4 ms  ✓ RangeArrays
      +    366.8 ms  ✓ LaTeXStrings
      +    384.8 ms  ✓ ExprTools
      +    389.6 ms  ✓ StatsAPI
      +    433.4 ms  ✓ ChangesOfVariables
      +    477.6 ms  ✓ PositiveFactorizations
      +    539.3 ms  ✓ AbstractFFTs
      +    693.6 ms  ✓ FunctionWrappers
      +    345.7 ms  ✓ CommonSolve
      +    325.9 ms  ✓ DataValueInterfaces
      +    428.7 ms  ✓ InverseFunctions
      +    358.5 ms  ✓ EnumX
      +    443.6 ms  ✓ SuiteSparse_jll
      +    845.5 ms  ✓ Combinatorics
      +    351.0 ms  ✓ RealDot
      +    828.6 ms  ✓ InitialValues
      +    491.3 ms  ✓ OrderedCollections
      +    521.2 ms  ✓ IterTools
      +    582.6 ms  ✓ Serialization
      +    953.8 ms  ✓ OffsetArrays
      +    346.8 ms  ✓ CompositionsBase
      +    516.2 ms  ✓ AbstractTrees
      +    360.1 ms  ✓ PtrArrays
      +    353.0 ms  ✓ DefineSingletons
      +    372.6 ms  ✓ Ratios
      +    473.7 ms  ✓ IntervalSets
      +    380.0 ms  ✓ InvertedIndices
      +    365.9 ms  ✓ DataAPI
      +    917.5 ms  ✓ FillArrays
      +    919.2 ms  ✓ RandomNumbers
      +    482.9 ms  ✓ DelimitedFiles
      +    464.6 ms  ✓ LRUCache
      +    457.1 ms  ✓ ProgressLogging
      +    410.9 ms  ✓ MappedArrays
      +    397.1 ms  ✓ SciMLStructures
      +    509.2 ms  ✓ LoggingExtras
      +    580.4 ms  ✓ Rmath_jll
      +    623.4 ms  ✓ FiniteDiff
      +    589.0 ms  ✓ FFTW_jll
      +    618.9 ms  ✓ oneTBB_jll
      +    612.6 ms  ✓ L_BFGS_B_jll
      +    856.8 ms  ✓ DifferentiationInterface
      +    815.2 ms  ✓ LogDensityProblems
      +   1069.2 ms  ✓ Crayons
      +   1044.1 ms  ✓ Baselet
      +    355.0 ms  ✓ TableTraits
      +    406.6 ms  ✓ StatisticalTraits
      +    381.0 ms  ✓ FunctionWrappersWrappers
      +    473.9 ms  ✓ LogExpFunctions → LogExpFunctionsChangesOfVariablesExt
      +    448.5 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      +    432.7 ms  ✓ InverseFunctions → InverseFunctionsDatesExt
      +   1014.3 ms  ✓ ZygoteRules
      +    400.3 ms  ✓ ChangesOfVariables → ChangesOfVariablesInverseFunctionsExt
      +   1033.4 ms  ✓ LazyArtifacts
      +    474.8 ms  ✓ LogExpFunctions → LogExpFunctionsInverseFunctionsExt
      +    441.1 ms  ✓ Parameters
      +   1137.2 ms  ✓ HypergeometricFunctions
      +   1429.7 ms  ✓ RecipesBase
      +    413.1 ms  ✓ RuntimeGeneratedFunctions
      +    392.3 ms  ✓ OffsetArrays → OffsetArraysAdaptExt
      +    383.1 ms  ✓ CompositionsBase → CompositionsBaseInverseFunctionsExt
      +    398.9 ms  ✓ LeftChildRightSiblingTrees
      +    390.7 ms  ✓ IntervalSets → IntervalSetsRandomExt
      +    477.4 ms  ✓ AliasTables
      +    392.4 ms  ✓ IntervalSets → IntervalSetsStatisticsExt
      +    424.6 ms  ✓ ConstructionBase → ConstructionBaseIntervalSetsExt
      +   1790.9 ms  ✓ StringManipulation
      +    413.3 ms  ✓ FillArrays → FillArraysStatisticsExt
      +    456.9 ms  ✓ Missings
      +    454.1 ms  ✓ LRUCache → SerializationExt
      +    646.0 ms  ✓ Libtask
      +    519.1 ms  ✓ LBFGSB
      +    582.8 ms  ✓ FiniteDiff → FiniteDiffStaticArraysExt
      +    447.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      +    807.8 ms  ✓ Random123
      +    413.4 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      +    800.3 ms  ✓ Rmath
      +    612.7 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      +    522.6 ms  ✓ LogDensityProblemsAD
      +   1673.0 ms  ✓ DataStructures
      +    815.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      +   1805.1 ms  ✓ Distributed
      +    602.7 ms  ✓ IntervalSets → IntervalSetsRecipesBaseExt
      +    743.7 ms  ✓ MLJModelInterface
      +    666.8 ms  ✓ TerminalLoggers
      +    799.2 ms  ✓ Tables
      +    520.4 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADADTypesExt
      +    754.2 ms  ✓ AxisArrays
      +    545.8 ms  ✓ SortingAlgorithms
      +    733.9 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADForwardDiffExt
      +   1235.0 ms  ✓ IntelOpenMP_jll
      +    691.2 ms  ✓ SharedArrays
      +    508.8 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADDifferentiationInterfaceExt
      +    961.8 ms  ✓ QuadGK
      +    759.4 ms  ✓ StructArrays
      +   2736.2 ms  ✓ Test
      +    857.1 ms  ✓ ProgressMeter
      +   1107.4 ms  ✓ NLSolversBase
      +    394.2 ms  ✓ StructArrays → StructArraysAdaptExt
      +    409.2 ms  ✓ StructArrays → StructArraysLinearAlgebraExt
      +    375.8 ms  ✓ InplaceOps
      +   1753.8 ms  ✓ StatsFuns
      +    655.5 ms  ✓ StructArrays → StructArraysStaticArraysExt
      +    685.7 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      +   2186.3 ms  ✓ Accessors
      +    498.5 ms  ✓ ConsoleProgressMonitor
      +    657.4 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      +   3741.7 ms  ✓ SparseArrays
      +    979.9 ms  ✓ ChangesOfVariables → ChangesOfVariablesTestExt
      +    669.2 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      +    487.7 ms  ✓ Accessors → StructArraysExt
      +   1288.5 ms  ✓ SplittablesBase
      +    648.8 ms  ✓ Accessors → TestExt
      +   1396.0 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      +    859.7 ms  ✓ Accessors → LinearAlgebraExt
      +    684.1 ms  ✓ Accessors → StaticArraysExt
      +    668.3 ms  ✓ DensityInterface
      +   5133.3 ms  ✓ Tracker
      +    693.4 ms  ✓ WoodburyMatrices
      +    647.7 ms  ✓ Statistics → SparseArraysExt
      +    622.9 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      +   1021.0 ms  ✓ Accessors → IntervalSetsExt
      +    652.3 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      +   1621.0 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      +    627.0 ms  ✓ SuiteSparse
      +   1834.5 ms  ✓ LineSearches
      +    700.0 ms  ✓ FillArrays → FillArraysSparseArraysExt
      +    928.2 ms  ✓ KernelAbstractions → SparseArraysExt
      +    642.8 ms  ✓ FiniteDiff → FiniteDiffSparseArraysExt
      +    638.6 ms  ✓ StructArrays → StructArraysSparseArraysExt
      +    684.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      +    767.6 ms  ✓ BangBang
      +    608.3 ms  ✓ SparseInverseSubset
      +    714.9 ms  ✓ AxisAlgorithms
      +   1336.5 ms  ✓ SparseMatrixColorings
      +   1115.3 ms  ✓ ArrayInterface → ArrayInterfaceTrackerExt
      +    850.3 ms  ✓ PDMats
      +   1138.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      +   1133.7 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADTrackerExt
      +    512.3 ms  ✓ BangBang → BangBangChainRulesCoreExt
      +   1218.7 ms  ✓ NamedArrays
      +    519.8 ms  ✓ BangBang → BangBangStructArraysExt
      +   1616.7 ms  ✓ SymbolicIndexingInterface
      +    683.3 ms  ✓ BangBang → BangBangStaticArraysExt
      +    500.4 ms  ✓ BangBang → BangBangTablesExt
      +   1973.8 ms  ✓ SciMLOperators
      +    667.6 ms  ✓ FillArrays → FillArraysPDMatsExt
      +   1088.1 ms  ✓ MicroCollections
      +    872.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseMatrixColoringsExt
      +    543.3 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      +   2217.7 ms  ✓ StatsBase
      +    886.2 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      +   3188.6 ms  ✓ Roots
      +   1377.8 ms  ✓ Tracker → TrackerPDMatsExt
      +   1992.7 ms  ✓ Interpolations
      +    499.7 ms  ✓ Roots → RootsChainRulesCoreExt
      +    690.9 ms  ✓ Roots → RootsForwardDiffExt
      +   2088.6 ms  ✓ RecursiveArrayTools
      +   3626.3 ms  ✓ SparseConnectivityTracer
      +    603.1 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      +    733.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      +    865.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      +   1198.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      +   2810.6 ms  ✓ Transducers
      +  11623.4 ms  ✓ MLStyle
      +   7835.0 ms  ✓ MKL_jll
      +   1383.8 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerLogExpFunctionsExt
      +   3155.6 ms  ✓ Optim
      +   1498.4 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNaNMathExt
      +    658.4 ms  ✓ Transducers → TransducersAdaptExt
      +   1759.5 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNNlibExt
      +   1808.7 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerSpecialFunctionsExt
      +   5606.1 ms  ✓ ChainRules
      +   1781.7 ms  ✓ AbstractMCMC
      +    781.4 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      +   5230.2 ms  ✓ Distributions
      +   1224.6 ms  ✓ SSMProblems
      +   1407.6 ms  ✓ AbstractPPL
      +   1375.9 ms  ✓ Distributions → DistributionsDensityInterfaceExt
      +   1392.6 ms  ✓ Distributions → DistributionsTestExt
      +   1395.5 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      +   1506.9 ms  ✓ MCMCDiagnosticTools
      +   2658.7 ms  ✓ AdvancedHMC
      +   4645.9 ms  ✓ FFTW
      +   1686.5 ms  ✓ EllipticalSliceSampling
      +    859.4 ms  ✓ NNlib → NNlibFFTWExt
      +   1955.9 ms  ✓ AdvancedPS
      +   2000.2 ms  ✓ AdvancedMH
      +  14142.0 ms  ✓ PrettyTables
      +   1706.5 ms  ✓ KernelDensity
      +   3358.5 ms  ✓ Bijectors
      +   1567.2 ms  ✓ AdvancedMH → AdvancedMHStructArraysExt
      +   1682.5 ms  ✓ AdvancedMH → AdvancedMHForwardDiffExt
      +   1742.1 ms  ✓ AdvancedPS → AdvancedPSLibtaskExt
      +   3811.4 ms  ✓ DistributionsAD
      +   7849.5 ms  ✓ Expronicon
      +   1330.0 ms  ✓ Bijectors → BijectorsForwardDiffExt
      +   1407.4 ms  ✓ DistributionsAD → DistributionsADForwardDiffExt
      +   1470.6 ms  ✓ Bijectors → BijectorsDistributionsADExt
      +   3068.4 ms  ✓ MCMCChains
      +   2687.8 ms  ✓ Bijectors → BijectorsTrackerExt
      +   3164.1 ms  ✓ DistributionsAD → DistributionsADTrackerExt
      +   2180.3 ms  ✓ AdvancedHMC → AdvancedHMCMCMCChainsExt
      +   2178.8 ms  ✓ AdvancedMH → AdvancedMHMCMCChainsExt
      +   1978.5 ms  ✓ AdvancedVI
      +   8158.9 ms  ✓ DynamicPPL
      +   1826.0 ms  ✓ DynamicPPL → DynamicPPLForwardDiffExt
      +   1944.0 ms  ✓ DynamicPPL → DynamicPPLChainRulesCoreExt
      +   2415.6 ms  ✓ DynamicPPL → DynamicPPLMCMCChainsExt
      +   2798.5 ms  ✓ DynamicPPL → DynamicPPLZygoteRulesExt
      +  11139.8 ms  ✓ SciMLBase
      +   1045.6 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      +   2173.9 ms  ✓ OptimizationBase
      +    373.1 ms  ✓ OptimizationBase → OptimizationFiniteDiffExt
      +    627.9 ms  ✓ OptimizationBase → OptimizationForwardDiffExt
      +   2020.0 ms  ✓ Optimization
      +  11893.1 ms  ✓ OptimizationOptimJL
      +   5188.7 ms  ✓ Turing
      +   4143.7 ms  ✓ Turing → TuringOptimExt
      +  224 dependencies successfully precompiled in 57 seconds. 82 already precompiled.
      +  1 dependency had output during precompilation:
      +┌ MKL_jll
      +│  \x1B[32m\x1B[1m Downloading\x1B[22m\x1B[39m artifact: IntelOpenMP
      +
      +Precompiling MLDataDevicesSparseArraysExt...
      +    664.4 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      +Precompiling MLDataDevicesFillArraysExt...
      +    445.6 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      +Precompiling MLDataDevicesChainRulesExt...
      +    833.7 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      +  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      +Precompiling BijectorsEnzymeCoreExt...
      +   1303.3 ms  ✓ Bijectors → BijectorsEnzymeCoreExt
      +  1 dependency successfully precompiled in 1 seconds. 79 already precompiled.
      +Precompiling HwlocTrees...
      +    553.5 ms  ✓ Hwloc → HwlocTrees
      +  1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
      +Precompiling StaticArrayInterfaceOffsetArraysExt...
      +    487.0 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      +Precompiling MLDataDevicesTrackerExt...
      +   1172.3 ms  ✓ MLDataDevices → MLDataDevicesTrackerExt
      +  1 dependency successfully precompiled in 1 seconds. 59 already precompiled.
      +Precompiling LuxLibTrackerExt...
      +   1078.5 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      +   3289.7 ms  ✓ LuxLib → LuxLibTrackerExt
      +  2 dependencies successfully precompiled in 4 seconds. 100 already precompiled.
      +Precompiling LuxTrackerExt...
      +   2082.8 ms  ✓ Lux → LuxTrackerExt
      +  1 dependency successfully precompiled in 2 seconds. 114 already precompiled.
      +Precompiling DynamicPPLEnzymeCoreExt...
      +   1749.6 ms  ✓ DynamicPPL → DynamicPPLEnzymeCoreExt
      +  1 dependency successfully precompiled in 2 seconds. 144 already precompiled.
      +Precompiling MLDataDevicesRecursiveArrayToolsExt...
      +    604.1 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
      +  1 dependency successfully precompiled in 1 seconds. 47 already precompiled.
      +Precompiling OptimizationMLDataDevicesExt...
      +   1386.8 ms  ✓ OptimizationBase → OptimizationMLDataDevicesExt
      +  1 dependency successfully precompiled in 2 seconds. 97 already precompiled.
      +Precompiling CairoMakie...
      +    407.2 ms  ✓ IndirectArrays
      +    404.9 ms  ✓ PolygonOps
      +    435.0 ms  ✓ GeoFormatTypes
      +    443.2 ms  ✓ Contour
      +    433.4 ms  ✓ PCRE2_jll
      +    450.0 ms  ✓ TriplotBase
      +    461.2 ms  ✓ TensorCore
      +    456.2 ms  ✓ StableRNGs
      +    479.2 ms  ✓ PaddedViews
      +    482.8 ms  ✓ Observables
      +    498.0 ms  ✓ RoundingEmulator
      +    504.4 ms  ✓ Extents
      +    559.1 ms  ✓ TranscodingStreams
      +    337.5 ms  ✓ CRC32c
      +    409.0 ms  ✓ LazyModules
      +    803.0 ms  ✓ Grisu
      +    480.6 ms  ✓ Inflate
      +    391.8 ms  ✓ SignedDistanceFields
      +    442.8 ms  ✓ StackViews
      +    428.5 ms  ✓ Scratch
      +   1082.8 ms  ✓ Format
      +    610.3 ms  ✓ Graphite2_jll
      +    651.3 ms  ✓ OpenSSL_jll
      +    610.9 ms  ✓ Libmount_jll
      +    610.2 ms  ✓ LLVMOpenMP_jll
      +    595.8 ms  ✓ Bzip2_jll
      +    643.6 ms  ✓ Xorg_libXau_jll
      +    596.9 ms  ✓ libfdk_aac_jll
      +    606.9 ms  ✓ Giflib_jll
      +    609.1 ms  ✓ Imath_jll
      +    621.3 ms  ✓ libpng_jll
      +   1524.2 ms  ✓ AdaptivePredicates
      +   1207.4 ms  ✓ SimpleTraits
      +    629.3 ms  ✓ LAME_jll
      +    617.5 ms  ✓ LERC_jll
      +    594.2 ms  ✓ EarCut_jll
      +    614.4 ms  ✓ CRlibm_jll
      +    651.0 ms  ✓ XZ_jll
      +    655.7 ms  ✓ JpegTurbo_jll
      +   1551.4 ms  ✓ UnicodeFun
      +    601.0 ms  ✓ Ogg_jll
      +    599.2 ms  ✓ Xorg_libXdmcp_jll
      +    608.4 ms  ✓ x264_jll
      +    633.7 ms  ✓ x265_jll
      +    624.4 ms  ✓ libaom_jll
      +    635.1 ms  ✓ Zstd_jll
      +    525.6 ms  ✓ Xorg_xtrans_jll
      +    634.8 ms  ✓ Expat_jll
      +    617.3 ms  ✓ LZO_jll
      +    619.8 ms  ✓ Opus_jll
      +   1947.9 ms  ✓ FixedPointNumbers
      +    682.9 ms  ✓ Libiconv_jll
      +    563.0 ms  ✓ Xorg_libpthread_stubs_jll
      +    621.4 ms  ✓ Libgpg_error_jll
      +    624.1 ms  ✓ Libffi_jll
      +    638.6 ms  ✓ isoband_jll
      +    628.1 ms  ✓ FriBidi_jll
      +    616.1 ms  ✓ Libuuid_jll
      +    430.8 ms  ✓ Showoff
      +    433.5 ms  ✓ RelocatableFolders
      +    475.4 ms  ✓ MosaicViews
      +   1033.6 ms  ✓ FilePathsBase
      +    679.7 ms  ✓ Pixman_jll
      +    430.4 ms  ✓ Ratios → RatiosFixedPointNumbersExt
      +    677.5 ms  ✓ FreeType2_jll
      +    656.0 ms  ✓ libsixel_jll
      +   1057.3 ms  ✓ GeoInterface
      +    687.2 ms  ✓ libvorbis_jll
      +    688.1 ms  ✓ Libtiff_jll
      +    755.9 ms  ✓ OpenEXR_jll
      +    698.1 ms  ✓ XML2_jll
      +    481.9 ms  ✓ Isoband
      +    535.5 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      +    654.2 ms  ✓ Libgcrypt_jll
      +    788.3 ms  ✓ FilePaths
      +    798.4 ms  ✓ Fontconfig_jll
      +   1445.8 ms  ✓ ColorTypes
      +    697.9 ms  ✓ Gettext_jll
      +    970.1 ms  ✓ FreeType
      +    683.1 ms  ✓ XSLT_jll
      +   1237.3 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      +   2382.1 ms  ✓ PkgVersion
      +    503.4 ms  ✓ ColorTypes → StyledStringsExt
      +    804.0 ms  ✓ Glib_jll
      +   2588.5 ms  ✓ IntervalArithmetic
      +   1088.3 ms  ✓ Xorg_libxcb_jll
      +   3343.9 ms  ✓ FileIO
      +    500.6 ms  ✓ IntervalArithmetic → IntervalArithmeticIntervalSetsExt
      +   1831.6 ms  ✓ ColorVectorSpace
      +    652.9 ms  ✓ Xorg_libX11_jll
      +    749.8 ms  ✓ ColorVectorSpace → SpecialFunctionsExt
      +    633.6 ms  ✓ Xorg_libXext_jll
      +    637.6 ms  ✓ Xorg_libXrender_jll
      +   1451.9 ms  ✓ QOI
      +    759.2 ms  ✓ Libglvnd_jll
      +    783.0 ms  ✓ Cairo_jll
      +   3878.0 ms  ✓ Colors
      +    800.0 ms  ✓ HarfBuzz_jll
      +    835.2 ms  ✓ libwebp_jll
      +   6475.9 ms  ✓ SIMD
      +    595.2 ms  ✓ Graphics
      +    612.5 ms  ✓ Animations
      +    777.9 ms  ✓ ColorBrewer
      +   3643.0 ms  ✓ ExactPredicates
      +    802.8 ms  ✓ libass_jll
      +    799.5 ms  ✓ Pango_jll
      +   1547.5 ms  ✓ OpenEXR
      +    940.9 ms  ✓ FFMPEG_jll
      +   1308.4 ms  ✓ Cairo
      +   3468.3 ms  ✓ ColorSchemes
      +   9182.5 ms  ✓ GeometryBasics
      +   1036.5 ms  ✓ Packing
      +   1303.3 ms  ✓ ShaderAbstractions
      +   5206.5 ms  ✓ DelaunayTriangulation
      +   1930.0 ms  ✓ FreeTypeAbstraction
      +  15163.9 ms  ✓ Unitful
      +    600.9 ms  ✓ Unitful → InverseFunctionsUnitfulExt
      +    633.4 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      +   3821.1 ms  ✓ MakieCore
      +   8219.0 ms  ✓ Automa
      +   1247.3 ms  ✓ Interpolations → InterpolationsUnitfulExt
      +   5098.3 ms  ✓ GridLayoutBase
      +   8204.5 ms  ✓ PlotUtils
      +  14493.9 ms  ✓ ImageCore
      +   1871.6 ms  ✓ ImageBase
      +   2308.9 ms  ✓ WebP
      +   8720.9 ms  ✓ MathTeXEngine
      +   3040.8 ms  ✓ PNGFiles
      +   3186.9 ms  ✓ JpegTurbo
      +   3478.8 ms  ✓ Sixel
      +   2074.0 ms  ✓ ImageAxes
      +   1102.7 ms  ✓ ImageMetadata
      +   1851.4 ms  ✓ Netpbm
      +  42547.7 ms  ✓ TiffImages
      +   1195.4 ms  ✓ ImageIO
      + 104912.3 ms  ✓ Makie
      +  81735.0 ms  ✓ CairoMakie
      +  137 dependencies successfully precompiled in 239 seconds. 134 already precompiled.
      +Precompiling SparseMatrixColoringsColorsExt...
      +    919.4 ms  ✓ SparseMatrixColorings → SparseMatrixColoringsColorsExt
      +  1 dependency successfully precompiled in 1 seconds. 29 already precompiled.
      +Precompiling UnitfulExt...
      +    621.7 ms  ✓ Accessors → UnitfulExt
      +  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      +Precompiling IntervalArithmeticForwardDiffExt...
      +    515.6 ms  ✓ IntervalArithmetic → IntervalArithmeticDiffRulesExt
      +    698.1 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
      +  2 dependencies successfully precompiled in 1 seconds. 42 already precompiled.
      +Precompiling IntervalArithmeticRecipesBaseExt...
      +    850.1 ms  ✓ IntervalArithmetic → IntervalArithmeticRecipesBaseExt
      +  1 dependency successfully precompiled in 1 seconds. 31 already precompiled.
      +Precompiling SciMLBaseMakieExt...
      +   7984.5 ms  ✓ SciMLBase → SciMLBaseMakieExt
      +  1 dependency successfully precompiled in 9 seconds. 304 already precompiled.
      +[ Info: [Turing]: progress logging is enabled globally
      +[ Info: [AdvancedVI]: global PROGRESS is set as true

      Generating data

      Our goal here is to use a Bayesian neural network to classify points in an artificial dataset. The code below generates data points arranged in a box-like pattern and displays a graph of the dataset we'll be working with.

      julia
      # Number of points to generate
      +N = 80
      +M = round(Int, N / 4)
      +rng = Random.default_rng()
      +Random.seed!(rng, 1234)
      +
      +# Generate artificial data
      +x1s = rand(rng, Float32, M) * 4.5f0;
      +x2s = rand(rng, Float32, M) * 4.5f0;
      +xt1s = Array([[x1s[i] + 0.5f0; x2s[i] + 0.5f0] for i in 1:M])
      +x1s = rand(rng, Float32, M) * 4.5f0;
      +x2s = rand(rng, Float32, M) * 4.5f0;
      +append!(xt1s, Array([[x1s[i] - 5.0f0; x2s[i] - 5.0f0] for i in 1:M]))
      +
      +x1s = rand(rng, Float32, M) * 4.5f0;
      +x2s = rand(rng, Float32, M) * 4.5f0;
      +xt0s = Array([[x1s[i] + 0.5f0; x2s[i] - 5.0f0] for i in 1:M])
      +x1s = rand(rng, Float32, M) * 4.5f0;
      +x2s = rand(rng, Float32, M) * 4.5f0;
      +append!(xt0s, Array([[x1s[i] - 5.0f0; x2s[i] + 0.5f0] for i in 1:M]))
      +
      +# Store all the data for later
      +xs = [xt1s; xt0s]
      +ts = [ones(2 * M); zeros(2 * M)]
      +
      +# Plot data points
      +
      +function plot_data()
      +    x1 = first.(xt1s)
      +    y1 = last.(xt1s)
      +    x2 = first.(xt0s)
      +    y2 = last.(xt0s)
      +
      +    fig = Figure()
      +    ax = CairoMakie.Axis(fig[1, 1]; xlabel="x", ylabel="y")
      +
      +    scatter!(ax, x1, y1; markersize=16, color=:red, strokecolor=:black, strokewidth=2)
      +    scatter!(ax, x2, y2; markersize=16, color=:blue, strokecolor=:black, strokewidth=2)
      +
      +    return fig
      +end
      +
      +plot_data()

      Building the Neural Network

      The next step is to define a feedforward neural network where we express our parameters as distributions, rather than as single points as in traditional neural networks. For this, we will use Dense to define linear layers and compose them via Chain; both are neural network primitives from Lux. The network nn we create will have two hidden layers with tanh activations and one output layer with sigmoid activation, as shown below.

      The nn is a callable instance: it takes data, parameters, and the current state as inputs and outputs predictions. We will define distributions on the neural network parameters.

      julia
      # Construct a neural network using Lux
      +nn = Chain(Dense(2 => 3, tanh), Dense(3 => 2, tanh), Dense(2 => 1, sigmoid))
      +
      +# Initialize the model weights and state
      +ps, st = Lux.setup(rng, nn)
      +
      +Lux.parameterlength(nn) # number of parameters in NN
      20
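
As a quick sanity check (not part of the original tutorial), the 20 parameters break down layer by layer: a Dense(in => out) layer has in * out weights plus out biases.

```julia
# Hypothetical helper: parameters in one Dense(in => out) layer.
layer_params(in_dims, out_dims) = in_dims * out_dims + out_dims

# Dense(2 => 3), Dense(3 => 2), Dense(2 => 1): 9 + 8 + 3 = 20,
# matching Lux.parameterlength(nn) above.
total = layer_params(2, 3) + layer_params(3, 2) + layer_params(2, 1)
```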

      The probabilistic model specification below creates a parameters variable holding IID normal variables. parameters represents all the parameters of our neural net (weights and biases).

      julia
      # Create a regularization term and a Gaussian prior variance term.
      +alpha = 0.09
      +sig = sqrt(1.0 / alpha)
      3.3333333333333335

      Construct a named tuple from a sampled parameter vector. We could also use ComponentArrays here and simply broadcast, avoiding this manual reconstruction, but let's do it this way to avoid extra dependencies.

      julia
      function vector_to_parameters(ps_new::AbstractVector, ps::NamedTuple)
      +    @assert length(ps_new) == Lux.parameterlength(ps)
      +    i = 1
      +    function get_ps(x)
      +        z = reshape(view(ps_new, i:(i + length(x) - 1)), size(x))
      +        i += length(x)
      +        return z
      +    end
      +    return fmap(get_ps, ps)
      +end
      vector_to_parameters (generic function with 1 method)
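As the note above mentions, ComponentArrays offers a shortcut for this flatten/unflatten round trip. A minimal sketch, assuming ps is the NamedTuple returned by Lux.setup and flat is a hypothetical vector of length Lux.parameterlength(nn):

```julia
using ComponentArrays

ps_ca = ComponentArray(ps)              # flatten the NamedTuple, remembering its axes
ps_new = ComponentArray(flat, getaxes(ps_ca))  # rebuild a structured view from the flat vector
```

This is the same pattern used later in the HyperNet tutorial; the hand-written vector_to_parameters above trades that convenience for one fewer dependency.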

      To interface with external libraries it is often desirable to use the StatefulLuxLayer to automatically handle the neural network states.

      julia
      const model = StatefulLuxLayer{true}(nn, nothing, st)
      +
      +# Specify the probabilistic model.
      +@model function bayes_nn(xs, ts)
      +    # Sample the parameters
      +    nparameters = Lux.parameterlength(nn)
      +    parameters ~ MvNormal(zeros(nparameters), Diagonal(abs2.(sig .* ones(nparameters))))
      +
      +    # Forward NN to make predictions
      +    preds = Lux.apply(model, xs, vector_to_parameters(parameters, ps))
      +
      +    # Observe each prediction.
      +    for i in eachindex(ts)
      +        ts[i] ~ Bernoulli(preds[i])
      +    end
      +end
      bayes_nn (generic function with 2 methods)

      Inference can now be performed by calling sample. We use the HMC sampler here.

      julia
      # Perform inference.
      +N = 5000
      +ch = sample(bayes_nn(reduce(hcat, xs), ts), HMC(0.05, 4; adtype=AutoTracker()), N)
      Chains MCMC chain (5000×30×1 Array{Float64, 3}):
      +
      +Iterations        = 1:1:5000
      +Number of chains  = 1
      +Samples per chain = 5000
      +Wall duration     = 33.37 seconds
      +Compute duration  = 33.37 seconds
      +parameters        = parameters[1], parameters[2], parameters[3], parameters[4], parameters[5], parameters[6], parameters[7], parameters[8], parameters[9], parameters[10], parameters[11], parameters[12], parameters[13], parameters[14], parameters[15], parameters[16], parameters[17], parameters[18], parameters[19], parameters[20]
      +internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, numerical_error, step_size, nom_step_size
      +
      +Summary Statistics
      +      parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   ess_per_sec
      +          Symbol   Float64   Float64   Float64    Float64    Float64   Float64       Float64
      +
      +   parameters[1]    5.8536    2.5579    0.6449    16.6210    21.2567    1.2169        0.4980
      +   parameters[2]    0.1106    0.3642    0.0467    76.1493    35.8893    1.0235        2.2817
      +   parameters[3]    4.1685    2.2970    0.6025    15.9725    62.4537    1.0480        0.4786
      +   parameters[4]    1.0580    1.9179    0.4441    22.3066    51.3818    1.0513        0.6684
      +   parameters[5]    4.7925    2.0622    0.5484    15.4001    28.2539    1.1175        0.4614
      +   parameters[6]    0.7155    1.3734    0.2603    28.7492    59.2257    1.0269        0.8614
      +   parameters[7]    0.4981    2.7530    0.7495    14.5593    22.0260    1.2506        0.4362
      +   parameters[8]    0.4568    1.1324    0.2031    31.9424    38.7102    1.0447        0.9571
      +   parameters[9]   -1.0215    2.6186    0.7268    14.2896    22.8493    1.2278        0.4282
      +  parameters[10]    2.1324    1.6319    0.4231    15.0454    43.2111    1.3708        0.4508
      +  parameters[11]   -2.0262    1.8130    0.4727    15.0003    23.5212    1.2630        0.4495
      +  parameters[12]   -4.5525    1.9168    0.4399    18.6812    29.9668    1.0581        0.5598
      +  parameters[13]    3.7207    1.3736    0.2889    22.9673    55.7445    1.0128        0.6882
      +  parameters[14]    2.5799    1.7626    0.4405    17.7089    38.8364    1.1358        0.5306
      +  parameters[15]   -1.3181    1.9554    0.5213    14.6312    22.0160    1.1793        0.4384
      +  parameters[16]   -2.9322    1.2308    0.2334    28.3970   130.8667    1.0216        0.8509
      +  parameters[17]   -2.4957    2.7976    0.7745    16.2068    20.1562    1.0692        0.4856
      +  parameters[18]   -5.0880    1.1401    0.1828    39.8971    52.4786    1.1085        1.1955
      +  parameters[19]   -4.7674    2.0627    0.5354    21.4562    18.3886    1.0764        0.6429
      +  parameters[20]   -4.7466    1.2214    0.2043    38.5170    32.7162    1.0004        1.1541
      +
      +Quantiles
      +      parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      +          Symbol   Float64   Float64   Float64   Float64   Float64
      +
      +   parameters[1]    0.9164    4.2536    5.9940    7.2512   12.0283
      +   parameters[2]   -0.5080   -0.1044    0.0855    0.2984    1.0043
      +   parameters[3]    0.3276    2.1438    4.2390    6.1737    7.8532
      +   parameters[4]   -1.4579   -0.1269    0.4550    1.6893    5.8331
      +   parameters[5]    1.4611    3.3711    4.4965    5.6720    9.3282
      +   parameters[6]   -1.2114   -0.1218    0.4172    1.2724    4.1938
      +   parameters[7]   -6.0297   -0.5712    0.5929    2.1686    5.8786
      +   parameters[8]   -1.8791   -0.2492    0.4862    1.1814    2.9032
      +   parameters[9]   -6.7656   -2.6609   -0.4230    0.9269    2.8021
      +  parameters[10]   -1.2108    1.0782    2.0899    3.3048    5.0428
      +  parameters[11]   -6.1454   -3.0731   -2.0592   -1.0526    1.8166
      +  parameters[12]   -8.8873   -5.8079   -4.2395   -3.2409   -1.2353
      +  parameters[13]    1.2909    2.6693    3.7502    4.6268    6.7316
      +  parameters[14]   -0.2741    1.2807    2.2801    3.5679    6.4876
      +  parameters[15]   -4.7115   -2.6584   -1.4956   -0.2644    3.3498
      +  parameters[16]   -5.4427   -3.7860   -2.8946   -1.9382   -0.8417
      +  parameters[17]   -6.4221   -4.0549   -2.9178   -1.7934    5.5835
      +  parameters[18]   -7.5413   -5.8069   -5.0388   -4.3025   -3.0121
      +  parameters[19]   -7.2611   -5.9449   -5.2768   -4.3663    2.1958
      +  parameters[20]   -7.0130   -5.5204   -4.8727   -3.9813   -1.9280

      Now we extract the parameter samples from the sampled chain as θ (this is of size 5000 x 20 where 5000 is the number of iterations and 20 is the number of parameters). We'll use these primarily to determine how good our model's classifier is.

      julia
      # Extract all weight and bias parameters.
      +θ = MCMCChains.group(ch, :parameters).value;

      Prediction Visualization

      julia
      # A helper to run the nn through data \`x\` using parameters \`θ\`
      +nn_forward(x, θ) = model(x, vector_to_parameters(θ, ps))
      +
      +# Plot the data we have.
      +fig = plot_data()
      +
      +# Find the index that provided the highest log posterior in the chain.
      +_, i = findmax(ch[:lp])
      +
      +# Extract the max row value from i.
      +i = i.I[1]
      +
      +# Plot the posterior distribution with a contour plot
      +x1_range = collect(range(-6; stop=6, length=25))
      +x2_range = collect(range(-6; stop=6, length=25))
      +Z = [nn_forward([x1, x2], θ[i, :])[1] for x1 in x1_range, x2 in x2_range]
      +contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
      +fig

      The contour plot above shows that the MAP method is not too bad at classifying our data. Now we can visualize our predictions.

      `,33)),s("mjx-container",t,[(A(),i("svg",E,a[0]||(a[0]=[n('',1)]))),a[1]||(a[1]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"},"~")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("msub",null,[s("mo",{"data-mjx-texclass":"OP"},"∫"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ")])]),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"},"~")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"θ"),s("mo",{stretchy:"false"},")"),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mi",null,"θ"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"≈"),s("munder",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ"),s("mo",null,"∼"),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mi",null,"θ"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")")])]),s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ")])]),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"}
,"~")])]),s("mo",{stretchy:"false"},")")])],-1))]),a[3]||(a[3]=n(`

The nn_predict function returns the average predicted value from a network parameterized by weights drawn from the MCMC chain.

      julia
      # Return the average predicted value across multiple weights.
      +nn_predict(x, θ, num) = mean([first(nn_forward(x, view(θ, i, :))) for i in 1:10:num])
      nn_predict (generic function with 1 method)

Next, we use the nn_predict function to predict the value at a sample of points where the x1 and x2 coordinates range between -6 and 6. As we can see below, we still have a satisfactory fit to our data, and more importantly, we can now see much more easily where the neural network is uncertain about its predictions: the regions between cluster boundaries.

      Plot the average prediction.

      julia
      fig = plot_data()
      +
      +n_end = 1500
      +x1_range = collect(range(-6; stop=6, length=25))
      +x2_range = collect(range(-6; stop=6, length=25))
      +Z = [nn_predict([x1, x2], θ, n_end)[1] for x1 in x1_range, x2 in x2_range]
      +contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
      +fig
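A natural companion to the averaged prediction (not part of the original tutorial) is to visualize that uncertainty directly as the spread of predictions across posterior samples. A sketch, assuming the nn_forward, θ, n_end, x1_range, and x2_range defined above:

```julia
using Statistics

# Standard deviation of the prediction across thinned posterior samples;
# large values mark regions where the network is uncertain.
nn_std(x, θ, num) = std([first(nn_forward(x, view(θ, i, :))) for i in 1:10:num])
Z_std = [nn_std([x1, x2], θ, n_end) for x1 in x1_range, x2 in x2_range]
contour!(x1_range, x2_range, Z_std; linewidth=3, colormap=:viridis)
```

The high-variance band should trace the boundary between the two clusters, matching the intuition in the paragraph above.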

      Suppose we are interested in how the predictive power of our Bayesian neural network evolved between samples. In that case, the following graph displays an animation of the contour plot generated from the network weights in samples 1 to 5,000.

      julia
      fig = plot_data()
      +Z = [first(nn_forward([x1, x2], θ[1, :])) for x1 in x1_range, x2 in x2_range]
      +c = contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
      +record(fig, "results.gif", 1:250:size(θ, 1)) do i
      +    fig.current_axis[].title = "Iteration: $i"
      +    Z = [first(nn_forward([x1, x2], θ[i, :])) for x1 in x1_range, x2 in x2_range]
      +    c[3] = Z
      +    return fig
      +end
      "results.gif"

      Appendix

      julia
      using InteractiveUtils
      +InteractiveUtils.versioninfo()
      +
      +if @isdefined(MLDataDevices)
      +    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      +        println()
      +        CUDA.versioninfo()
      +    end
      +
      +    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      +        println()
      +        AMDGPU.versioninfo()
      +    end
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
      +Build Info:
      +  Official https://julialang.org/ release
      +Platform Info:
      +  OS: Linux (x86_64-linux-gnu)
      +  CPU: 128 × AMD EPYC 7502 32-Core Processor
      +  WORD_SIZE: 64
      +  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      +Threads: 128 default, 0 interactive, 64 GC (on 128 virtual cores)
      +Environment:
      +  JULIA_CPU_THREADS = 128
      +  JULIA_DEPOT_PATH = /cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      +  JULIA_PKG_SERVER = 
      +  JULIA_NUM_THREADS = 128
      +  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      +  JULIA_PKG_PRECOMPILE_AUTO = 0
      +  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,16))])}const Q=p(l,[["render",h]]);export{I as __pageData,Q as default}; diff --git a/dev/assets/tutorials_intermediate_2_BayesianNN.md.DSd6_ExB.lean.js b/dev/assets/tutorials_intermediate_2_BayesianNN.md.DSd6_ExB.lean.js new file mode 100644 index 0000000000..7286b79630 --- /dev/null +++ b/dev/assets/tutorials_intermediate_2_BayesianNN.md.DSd6_ExB.lean.js @@ -0,0 +1 @@ +import{_ as p,c as i,a2 as n,j as s,o as A}from"./chunks/framework.BetCMmtc.js";const e="/dev/assets/results.CkivesIs.gif",I=JSON.parse('{"title":"Bayesian Neural Network","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/2_BayesianNN.md","filePath":"tutorials/intermediate/2_BayesianNN.md","lastUpdated":null}'),l={name:"tutorials/intermediate/2_BayesianNN.md"},t={class:"MathJax",jax:"SVG",display:"true",style:{direction:"ltr",display:"block","text-align":"center",margin:"1em 0",position:"relative"}},E={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-3.222ex"},xmlns:"http://www.w3.org/2000/svg",width:"46.264ex",height:"6.301ex",role:"img",focusable:"false",viewBox:"0 -1361 20448.8 2785.1","aria-hidden":"true"};function h(r,a,g,k,d,C){return A(),i("div",null,[a[2]||(a[2]=n("",33)),s("mjx-container",t,[(A(),i("svg",E,a[0]||(a[0]=[n("",1)]))),a[1]||(a[1]=s("mjx-assistive-mml",{unselectable:"on",display:"block",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",overflow:"hidden",width:"100%"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML",display:"block"},[s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"},"~")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"="),s("msub",null,[s("mo",{"data-mjx-texclass":"OP"},"∫"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ")])]),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"},"~")])]),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"θ"),s("mo",{stretchy:"false"},")"),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mi",null,"θ"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")"),s("mo",null,"≈"),s("munder",null,[s("mo",{"data-mjx-texclass":"OP"},"∑"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ"),s("mo",null,"∼"),s("mi",null,"p"),s("mo",{stretchy:"false"},"("),s("mi",null,"θ"),s("mo",{"data-mjx-texclass":"ORD",stretchy:"false"},"|"),s("mi",null,"X"),s("mo",null,","),s("mi",null,"α"),s("mo",{stretchy:"false"},")")])]),s("msub",null,[s("mi",null,"f"),s("mrow",{"data-mjx-texclass":"ORD"},[s("mi",null,"θ")])]),s("mo",{stretchy:"false"},"("),s("mrow",{"data-mjx-texclass":"ORD"},[s("mover",null,[s("mi",null,"x"),s("mo",{stretchy:"false"},"~")])]),s("mo",{stretchy:"false"},")")])],-1))]),a[3]||(a[3]=n("",16))])}const Q=p(l,[["render",h]]);export{I as __pageData,Q as default}; diff --git a/dev/assets/tutorials_intermediate_3_HyperNet.md.BEqA10sc.lean.js b/dev/assets/tutorials_intermediate_3_HyperNet.md.BEqA10sc.lean.js deleted file mode 100644 index dd86f299c8..0000000000 --- 
a/dev/assets/tutorials_intermediate_3_HyperNet.md.BEqA10sc.lean.js +++ /dev/null @@ -1,417 +0,0 @@ -import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E=JSON.parse('{"title":"Training a HyperNetwork on MNIST and FashionMNIST","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/3_HyperNet.md","filePath":"tutorials/intermediate/3_HyperNet.md","lastUpdated":null}'),t={name:"tutorials/intermediate/3_HyperNet.md"};function l(e,s,h,k,c,r){return p(),i("div",null,s[0]||(s[0]=[n(`

      Training a HyperNetwork on MNIST and FashionMNIST

      Package Imports

      julia
      using Lux, ComponentArrays, LuxCUDA, MLDatasets, MLUtils, OneHotArrays, Optimisers,
      -      Printf, Random, Zygote
      -
      -CUDA.allowscalar(false)
      Precompiling ComponentArrays...
      -   1001.1 ms  ✓ ComponentArrays
      -  1 dependency successfully precompiled in 1 seconds. 45 already precompiled.
      -Precompiling MLDataDevicesComponentArraysExt...
      -    661.5 ms  ✓ MLDataDevices → MLDataDevicesComponentArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.
      -Precompiling LuxComponentArraysExt...
      -    533.0 ms  ✓ ComponentArrays → ComponentArraysOptimisersExt
      -   1563.8 ms  ✓ Lux → LuxComponentArraysExt
      -   1970.8 ms  ✓ ComponentArrays → ComponentArraysKernelAbstractionsExt
      -  3 dependencies successfully precompiled in 2 seconds. 111 already precompiled.
      -Precompiling LuxCUDA...
      -   1325.0 ms  ✓ LLVM → BFloat16sExt
      -   2729.2 ms  ✓ CUDA_Runtime_jll
      -   2014.3 ms  ✓ CUDNN_jll
      -   4589.2 ms  ✓ GPUArrays
      -  45598.6 ms  ✓ DataFrames
      -  51987.5 ms  ✓ CUDA
      -   5027.8 ms  ✓ Atomix → AtomixCUDAExt
      -   8136.7 ms  ✓ cuDNN
      -   5323.2 ms  ✓ LuxCUDA
      -  9 dependencies successfully precompiled in 117 seconds. 93 already precompiled.
      -Precompiling MLDataDevicesGPUArraysExt...
      -   1645.2 ms  ✓ MLDataDevices → MLDataDevicesGPUArraysExt
      -  1 dependency successfully precompiled in 2 seconds. 57 already precompiled.
      -Precompiling WeightInitializersGPUArraysExt...
      -   1718.4 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      -  1 dependency successfully precompiled in 2 seconds. 60 already precompiled.
      -Precompiling ComponentArraysGPUArraysExt...
      -   1879.8 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
      -  1 dependency successfully precompiled in 2 seconds. 84 already precompiled.
      -Precompiling ParsersExt...
      -    486.4 ms  ✓ InlineStrings → ParsersExt
      -  1 dependency successfully precompiled in 1 seconds. 9 already precompiled.
      -Precompiling ArrayInterfaceCUDAExt...
      -   4955.2 ms  ✓ ArrayInterface → ArrayInterfaceCUDAExt
      -  1 dependency successfully precompiled in 5 seconds. 103 already precompiled.
      -Precompiling NNlibCUDAExt...
      -   4974.6 ms  ✓ CUDA → ChainRulesCoreExt
      -   5317.6 ms  ✓ NNlib → NNlibCUDAExt
      -  2 dependencies successfully precompiled in 6 seconds. 104 already precompiled.
      -Precompiling MLDataDevicesCUDAExt...
      -   4994.5 ms  ✓ MLDataDevices → MLDataDevicesCUDAExt
      -  1 dependency successfully precompiled in 5 seconds. 106 already precompiled.
      -Precompiling LuxLibCUDAExt...
      -   5181.0 ms  ✓ CUDA → EnzymeCoreExt
      -   5284.5 ms  ✓ CUDA → SpecialFunctionsExt
      -   5833.5 ms  ✓ LuxLib → LuxLibCUDAExt
      -  3 dependencies successfully precompiled in 6 seconds. 169 already precompiled.
      -Precompiling WeightInitializersCUDAExt...
      -   5018.4 ms  ✓ WeightInitializers → WeightInitializersCUDAExt
      -  1 dependency successfully precompiled in 5 seconds. 111 already precompiled.
      -Precompiling NNlibCUDACUDNNExt...
      -   5394.3 ms  ✓ NNlib → NNlibCUDACUDNNExt
      -  1 dependency successfully precompiled in 6 seconds. 108 already precompiled.
      -Precompiling MLDataDevicescuDNNExt...
      -   5182.8 ms  ✓ MLDataDevices → MLDataDevicescuDNNExt
      -  1 dependency successfully precompiled in 6 seconds. 109 already precompiled.
      -Precompiling LuxLibcuDNNExt...
      -   5880.1 ms  ✓ LuxLib → LuxLibcuDNNExt
      -  1 dependency successfully precompiled in 6 seconds. 176 already precompiled.
      -Precompiling MLDatasets...
      -    371.9 ms  ✓ ContextVariablesX
      -    497.1 ms  ✓ LoggingExtras
      -    818.8 ms  ✓ StructTypes
      -    555.4 ms  ✓ BangBang → BangBangChainRulesCoreExt
      -    532.6 ms  ✓ ExceptionUnwrapping
      -    630.5 ms  ✓ Accessors → TestExt
      -   1245.3 ms  ✓ SplittablesBase
      -   1345.1 ms  ✓ OpenMPI_jll
      -   1434.5 ms  ✓ MPICH_jll
      -    762.2 ms  ✓ WeakRefStrings
      -   2204.6 ms  ✓ AtomsBase
      -   1248.2 ms  ✓ MPItrampoline_jll
      -   1965.3 ms  ✓ ImageShow
      -   1508.6 ms  ✓ NPZ
      -   2277.0 ms  ✓ Pickle
      -   1654.9 ms  ✓ BangBang → BangBangDataFramesExt
      -    561.1 ms  ✓ FLoopsBase
      -  11194.3 ms  ✓ JSON3
      -   2865.5 ms  ✓ Transducers
      -  18568.5 ms  ✓ HTTP
      -   2237.4 ms  ✓ Chemfiles
      -   1487.7 ms  ✓ HDF5_jll
      -   1513.8 ms  ✓ Transducers → TransducersDataFramesExt
      -    703.0 ms  ✓ Transducers → TransducersAdaptExt
      -   5329.5 ms  ✓ FLoops
      -  33710.2 ms  ✓ JLD2
      -   1859.5 ms  ✓ FileIO → HTTPExt
      -  19361.0 ms  ✓ CSV
      -   3128.6 ms  ✓ DataDeps
      -   6299.5 ms  ✓ MLUtils
      -   7534.5 ms  ✓ HDF5
      -   2337.4 ms  ✓ MAT
      -   8931.5 ms  ✓ MLDatasets
      -  33 dependencies successfully precompiled in 59 seconds. 166 already precompiled.
      -Precompiling MLDataDevicesMLUtilsExt...
      -   1604.5 ms  ✓ MLDataDevices → MLDataDevicesMLUtilsExt
      -  1 dependency successfully precompiled in 2 seconds. 102 already precompiled.
      -Precompiling LuxMLUtilsExt...
      -   2242.9 ms  ✓ Lux → LuxMLUtilsExt
      -  1 dependency successfully precompiled in 3 seconds. 167 already precompiled.
      -Precompiling OneHotArrays...
      -    958.0 ms  ✓ OneHotArrays
      -  1 dependency successfully precompiled in 1 seconds. 28 already precompiled.
      -Precompiling MLDataDevicesOneHotArraysExt...
      -    741.6 ms  ✓ MLDataDevices → MLDataDevicesOneHotArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 35 already precompiled.
      -Precompiling Zygote...
      -    712.4 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      -   1059.8 ms  ✓ ZygoteRules
      -   5340.2 ms  ✓ ChainRules
      -  32833.6 ms  ✓ Zygote
      -  4 dependencies successfully precompiled in 38 seconds. 98 already precompiled.
      -Precompiling ArrayInterfaceChainRulesExt...
      -    789.6 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      -  1 dependency successfully precompiled in 1 seconds. 39 already precompiled.
      -Precompiling MLDataDevicesChainRulesExt...
      -    836.9 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      -  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      -Precompiling MLDataDevicesZygoteExt...
      -   1601.4 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
      -  1 dependency successfully precompiled in 2 seconds. 109 already precompiled.
      -Precompiling LuxZygoteExt...
      -   2791.8 ms  ✓ Lux → LuxZygoteExt
      -  1 dependency successfully precompiled in 3 seconds. 166 already precompiled.
      -Precompiling ComponentArraysZygoteExt...
      -   1592.1 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      -  1 dependency successfully precompiled in 2 seconds. 117 already precompiled.
      -Precompiling ZygoteColorsExt...
      -   1803.2 ms  ✓ Zygote → ZygoteColorsExt
      -  1 dependency successfully precompiled in 2 seconds. 105 already precompiled.

      Loading Datasets

      julia
      function load_dataset(::Type{dset}, n_train::Union{Nothing, Int},
      -        n_eval::Union{Nothing, Int}, batchsize::Int) where {dset}
      -    if n_train === nothing
      -        imgs, labels = dset(:train)
      -    else
      -        imgs, labels = dset(:train)[1:n_train]
      -    end
      -    x_train, y_train = reshape(imgs, 28, 28, 1, n_train), onehotbatch(labels, 0:9)
      -
      -    if n_eval === nothing
      -        imgs, labels = dset(:test)
      -    else
      -        imgs, labels = dset(:test)[1:n_eval]
      -    end
      -    x_test, y_test = reshape(imgs, 28, 28, 1, n_eval), onehotbatch(labels, 0:9)
      -
      -    return (
      -        DataLoader((x_train, y_train); batchsize=min(batchsize, n_train), shuffle=true),
      -        DataLoader((x_test, y_test); batchsize=min(batchsize, n_eval), shuffle=false)
      -    )
      -end
      -
      -function load_datasets(batchsize=256)
      -    n_train = parse(Bool, get(ENV, "CI", "false")) ? 1024 : nothing
      -    n_eval = parse(Bool, get(ENV, "CI", "false")) ? 32 : nothing
      -    return load_dataset.((MNIST, FashionMNIST), n_train, n_eval, batchsize)
      -end
      load_datasets (generic function with 2 methods)

      Implement a HyperNet Layer

      julia
      function HyperNet(
      -        weight_generator::Lux.AbstractLuxLayer, core_network::Lux.AbstractLuxLayer)
      -    ca_axes = Lux.initialparameters(Random.default_rng(), core_network) |>
      -              ComponentArray |>
      -              getaxes
      -    return @compact(; ca_axes, weight_generator, core_network, dispatch=:HyperNet) do (x, y)
      -        # Generate the weights
      -        ps_new = ComponentArray(vec(weight_generator(x)), ca_axes)
      -        @return core_network(y, ps_new)
      -    end
      -end
      HyperNet (generic function with 1 method)

Defining functions on the CompactLuxLayer requires some understanding of how the layer is structured, so we don't recommend doing it unless you are familiar with the internals. In this case, we simply write it to ignore the initialization of the core_network parameters.

      julia
      function Lux.initialparameters(rng::AbstractRNG, hn::CompactLuxLayer{:HyperNet})
      -    return (; weight_generator=Lux.initialparameters(rng, hn.layers.weight_generator),)
      -end

      Create and Initialize the HyperNet

      julia
      function create_model()
      -    # Doesn't need to be a MLP can have any Lux Layer
      -    core_network = Chain(FlattenLayer(), Dense(784, 256, relu), Dense(256, 10))
      -    weight_generator = Chain(
      -        Embedding(2 => 32),
      -        Dense(32, 64, relu),
      -        Dense(64, Lux.parameterlength(core_network))
      -    )
      -
      -    model = HyperNet(weight_generator, core_network)
      -    return model
      -end
      create_model (generic function with 1 method)

      Define Utility Functions

      julia
      const loss = CrossEntropyLoss(; logits=Val(true))
      -
      -function accuracy(model, ps, st, dataloader, data_idx)
      -    total_correct, total = 0, 0
      -    st = Lux.testmode(st)
      -    for (x, y) in dataloader
      -        target_class = onecold(y)
      -        predicted_class = onecold(first(model((data_idx, x), ps, st)))
      -        total_correct += sum(target_class .== predicted_class)
      -        total += length(target_class)
      -    end
      -    return total_correct / total
      -end
      accuracy (generic function with 1 method)

      Training

      julia
      function train()
      -    model = create_model()
      -    dataloaders = load_datasets()
      -
      -    dev = gpu_device()
      -    rng = Xoshiro(0)
      -    ps, st = Lux.setup(rng, model) |> dev
      -
      -    train_state = Training.TrainState(model, ps, st, Adam(0.001f0))
      -
      -    ### Lets train the model
      -    nepochs = 50
      -    for epoch in 1:nepochs, data_idx in 1:2
      -        train_dataloader, test_dataloader = dataloaders[data_idx] .|> dev
      -
      -        stime = time()
      -        for (x, y) in train_dataloader
      -            (_, _, _, train_state) = Training.single_train_step!(
      -                AutoZygote(), loss, ((data_idx, x), y), train_state)
      -        end
      -        ttime = time() - stime
      -
      -        train_acc = round(
      -            accuracy(model, train_state.parameters,
      -                train_state.states, train_dataloader, data_idx) * 100;
      -            digits=2)
      -        test_acc = round(
      -            accuracy(model, train_state.parameters,
      -                train_state.states, test_dataloader, data_idx) * 100;
      -            digits=2)
      -
      -        data_name = data_idx == 1 ? "MNIST" : "FashionMNIST"
      -
      -        @printf "[%3d/%3d]\\t%12s\\tTime %3.5fs\\tTraining Accuracy: %3.2f%%\\tTest \\
      -                 Accuracy: %3.2f%%\\n" epoch nepochs data_name ttime train_acc test_acc
      -    end
      -
      -    println()
      -
      -    test_acc_list = [0.0, 0.0]
      -    for data_idx in 1:2
      -        train_dataloader, test_dataloader = dataloaders[data_idx] .|> dev
      -        train_acc = round(
      -            accuracy(model, train_state.parameters,
      -                train_state.states, train_dataloader, data_idx) * 100;
      -            digits=2)
      -        test_acc = round(
      -            accuracy(model, train_state.parameters,
      -                train_state.states, test_dataloader, data_idx) * 100;
      -            digits=2)
      -
      -        data_name = data_idx == 1 ? "MNIST" : "FashionMNIST"
      -
      -        @printf "[FINAL]\\t%12s\\tTraining Accuracy: %3.2f%%\\tTest Accuracy: \\
      -                 %3.2f%%\\n" data_name train_acc test_acc
      -        test_acc_list[data_idx] = test_acc
      -    end
      -    return test_acc_list
      -end
      -
      -test_acc_list = train()
      [  1/ 50]	       MNIST	Time 88.66826s	Training Accuracy: 56.45%	Test Accuracy: 56.25%
      -[  1/ 50]	FashionMNIST	Time 0.03836s	Training Accuracy: 53.71%	Test Accuracy: 53.12%
      -[  2/ 50]	       MNIST	Time 0.08196s	Training Accuracy: 66.50%	Test Accuracy: 65.62%
      -[  2/ 50]	FashionMNIST	Time 0.03446s	Training Accuracy: 59.96%	Test Accuracy: 50.00%
      -[  3/ 50]	       MNIST	Time 0.03057s	Training Accuracy: 79.20%	Test Accuracy: 65.62%
      -[  3/ 50]	FashionMNIST	Time 0.03504s	Training Accuracy: 65.53%	Test Accuracy: 59.38%
      -[  4/ 50]	       MNIST	Time 0.03339s	Training Accuracy: 77.05%	Test Accuracy: 62.50%
      -[  4/ 50]	FashionMNIST	Time 0.05684s	Training Accuracy: 67.19%	Test Accuracy: 71.88%
      -[  5/ 50]	       MNIST	Time 0.02395s	Training Accuracy: 83.30%	Test Accuracy: 68.75%
      -[  5/ 50]	FashionMNIST	Time 0.02341s	Training Accuracy: 72.66%	Test Accuracy: 68.75%
      -[  6/ 50]	       MNIST	Time 0.02367s	Training Accuracy: 88.38%	Test Accuracy: 81.25%
      -[  6/ 50]	FashionMNIST	Time 0.02576s	Training Accuracy: 74.51%	Test Accuracy: 62.50%
      -[  7/ 50]	       MNIST	Time 0.03584s	Training Accuracy: 90.53%	Test Accuracy: 81.25%
      -[  7/ 50]	FashionMNIST	Time 0.02394s	Training Accuracy: 73.44%	Test Accuracy: 71.88%
      -[  8/ 50]	       MNIST	Time 0.02397s	Training Accuracy: 91.99%	Test Accuracy: 78.12%
      -[  8/ 50]	FashionMNIST	Time 0.02374s	Training Accuracy: 77.34%	Test Accuracy: 78.12%
      -[  9/ 50]	       MNIST	Time 0.03774s	Training Accuracy: 94.43%	Test Accuracy: 81.25%
      -[  9/ 50]	FashionMNIST	Time 0.02355s	Training Accuracy: 81.35%	Test Accuracy: 75.00%
      -[ 10/ 50]	       MNIST	Time 0.02356s	Training Accuracy: 96.29%	Test Accuracy: 81.25%
      -[ 10/ 50]	FashionMNIST	Time 0.03722s	Training Accuracy: 79.98%	Test Accuracy: 56.25%
      -[ 11/ 50]	       MNIST	Time 0.02387s	Training Accuracy: 97.46%	Test Accuracy: 84.38%
      -[ 11/ 50]	FashionMNIST	Time 0.02326s	Training Accuracy: 77.15%	Test Accuracy: 68.75%
      -[ 12/ 50]	       MNIST	Time 0.02407s	Training Accuracy: 97.46%	Test Accuracy: 84.38%
      -[ 12/ 50]	FashionMNIST	Time 0.02445s	Training Accuracy: 80.57%	Test Accuracy: 71.88%
      -[ 13/ 50]	       MNIST	Time 0.03150s	Training Accuracy: 98.63%	Test Accuracy: 84.38%
      -[ 13/ 50]	FashionMNIST	Time 0.02388s	Training Accuracy: 80.08%	Test Accuracy: 68.75%
      -[ 14/ 50]	       MNIST	Time 0.02402s	Training Accuracy: 98.93%	Test Accuracy: 84.38%
      -[ 14/ 50]	FashionMNIST	Time 0.03091s	Training Accuracy: 81.25%	Test Accuracy: 62.50%
      -[ 15/ 50]	       MNIST	Time 0.02384s	Training Accuracy: 99.51%	Test Accuracy: 84.38%
      -[ 15/ 50]	FashionMNIST	Time 0.02378s	Training Accuracy: 81.05%	Test Accuracy: 65.62%
      -[ 16/ 50]	       MNIST	Time 0.02343s	Training Accuracy: 99.71%	Test Accuracy: 84.38%
      -[ 16/ 50]	FashionMNIST	Time 0.02407s	Training Accuracy: 82.52%	Test Accuracy: 62.50%
      -[ 17/ 50]	       MNIST	Time 0.02576s	Training Accuracy: 99.90%	Test Accuracy: 84.38%
      -[ 17/ 50]	FashionMNIST	Time 0.02382s	Training Accuracy: 83.79%	Test Accuracy: 62.50%
      -[ 18/ 50]	       MNIST	Time 0.02349s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 18/ 50]	FashionMNIST	Time 0.03048s	Training Accuracy: 84.47%	Test Accuracy: 68.75%
      -[ 19/ 50]	       MNIST	Time 0.02385s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 19/ 50]	FashionMNIST	Time 0.02377s	Training Accuracy: 85.35%	Test Accuracy: 65.62%
      -[ 20/ 50]	       MNIST	Time 0.02316s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 20/ 50]	FashionMNIST	Time 0.02373s	Training Accuracy: 86.82%	Test Accuracy: 59.38%
      -[ 21/ 50]	       MNIST	Time 0.02517s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 21/ 50]	FashionMNIST	Time 0.02388s	Training Accuracy: 87.79%	Test Accuracy: 59.38%
      -[ 22/ 50]	       MNIST	Time 0.02330s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 22/ 50]	FashionMNIST	Time 0.02971s	Training Accuracy: 87.40%	Test Accuracy: 65.62%
      -[ 23/ 50]	       MNIST	Time 0.02482s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 23/ 50]	FashionMNIST	Time 0.02376s	Training Accuracy: 87.60%	Test Accuracy: 62.50%
      -[ 24/ 50]	       MNIST	Time 0.02382s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 24/ 50]	FashionMNIST	Time 0.02386s	Training Accuracy: 87.79%	Test Accuracy: 68.75%
      -[ 25/ 50]	       MNIST	Time 0.02470s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 25/ 50]	FashionMNIST	Time 0.02365s	Training Accuracy: 88.96%	Test Accuracy: 65.62%
      -[ 26/ 50]	       MNIST	Time 0.02345s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 26/ 50]	FashionMNIST	Time 0.02828s	Training Accuracy: 89.55%	Test Accuracy: 71.88%
      -[ 27/ 50]	       MNIST	Time 0.02378s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 27/ 50]	FashionMNIST	Time 0.02444s	Training Accuracy: 90.62%	Test Accuracy: 68.75%
      -[ 28/ 50]	       MNIST	Time 0.02371s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 28/ 50]	FashionMNIST	Time 0.02400s	Training Accuracy: 91.21%	Test Accuracy: 75.00%
      -[ 29/ 50]	       MNIST	Time 0.02480s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 29/ 50]	FashionMNIST	Time 0.02456s	Training Accuracy: 91.99%	Test Accuracy: 75.00%
      -[ 30/ 50]	       MNIST	Time 0.02380s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 30/ 50]	FashionMNIST	Time 0.02846s	Training Accuracy: 92.38%	Test Accuracy: 75.00%
      -[ 31/ 50]	       MNIST	Time 0.02401s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 31/ 50]	FashionMNIST	Time 0.02382s	Training Accuracy: 93.07%	Test Accuracy: 71.88%
      -[ 32/ 50]	       MNIST	Time 0.02334s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 32/ 50]	FashionMNIST	Time 0.02383s	Training Accuracy: 92.97%	Test Accuracy: 75.00%
      -[ 33/ 50]	       MNIST	Time 0.02533s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 33/ 50]	FashionMNIST	Time 0.02379s	Training Accuracy: 92.68%	Test Accuracy: 75.00%
      -[ 34/ 50]	       MNIST	Time 0.02396s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 34/ 50]	FashionMNIST	Time 0.02806s	Training Accuracy: 93.36%	Test Accuracy: 75.00%
      -[ 35/ 50]	       MNIST	Time 0.02376s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 35/ 50]	FashionMNIST	Time 0.02383s	Training Accuracy: 93.65%	Test Accuracy: 75.00%
      -[ 36/ 50]	       MNIST	Time 0.02370s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 36/ 50]	FashionMNIST	Time 0.02325s	Training Accuracy: 93.46%	Test Accuracy: 75.00%
      -[ 37/ 50]	       MNIST	Time 0.02495s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 37/ 50]	FashionMNIST	Time 0.02470s	Training Accuracy: 93.26%	Test Accuracy: 75.00%
      -[ 38/ 50]	       MNIST	Time 0.02389s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 38/ 50]	FashionMNIST	Time 0.03001s	Training Accuracy: 94.24%	Test Accuracy: 68.75%
      -[ 39/ 50]	       MNIST	Time 0.02408s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 39/ 50]	FashionMNIST	Time 0.02401s	Training Accuracy: 94.04%	Test Accuracy: 75.00%
      -[ 40/ 50]	       MNIST	Time 0.02407s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 40/ 50]	FashionMNIST	Time 0.02338s	Training Accuracy: 94.92%	Test Accuracy: 71.88%
      -[ 41/ 50]	       MNIST	Time 0.02520s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 41/ 50]	FashionMNIST	Time 0.02382s	Training Accuracy: 94.53%	Test Accuracy: 71.88%
      -[ 42/ 50]	       MNIST	Time 0.02382s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 42/ 50]	FashionMNIST	Time 0.02841s	Training Accuracy: 94.63%	Test Accuracy: 71.88%
      -[ 43/ 50]	       MNIST	Time 0.02395s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 43/ 50]	FashionMNIST	Time 0.02380s	Training Accuracy: 95.61%	Test Accuracy: 65.62%
      -[ 44/ 50]	       MNIST	Time 0.02326s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 44/ 50]	FashionMNIST	Time 0.02379s	Training Accuracy: 95.51%	Test Accuracy: 71.88%
      -[ 45/ 50]	       MNIST	Time 0.02467s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 45/ 50]	FashionMNIST	Time 0.02389s	Training Accuracy: 95.90%	Test Accuracy: 65.62%
      -[ 46/ 50]	       MNIST	Time 0.02387s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 46/ 50]	FashionMNIST	Time 0.02879s	Training Accuracy: 95.61%	Test Accuracy: 68.75%
      -[ 47/ 50]	       MNIST	Time 0.02407s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 47/ 50]	FashionMNIST	Time 0.02429s	Training Accuracy: 96.00%	Test Accuracy: 68.75%
      -[ 48/ 50]	       MNIST	Time 0.02374s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 48/ 50]	FashionMNIST	Time 0.02387s	Training Accuracy: 96.19%	Test Accuracy: 68.75%
      -[ 49/ 50]	       MNIST	Time 0.02529s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 49/ 50]	FashionMNIST	Time 0.02356s	Training Accuracy: 96.00%	Test Accuracy: 71.88%
      -[ 50/ 50]	       MNIST	Time 0.02376s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 50/ 50]	FashionMNIST	Time 0.02772s	Training Accuracy: 96.88%	Test Accuracy: 68.75%
      -
      -[FINAL]	       MNIST	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[FINAL]	FashionMNIST	Training Accuracy: 96.88%	Test Accuracy: 68.75%

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 48 × AMD EPYC 7402 24-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 2
      -  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 48
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate
      -
      -CUDA runtime 12.6, artifact installation
      -CUDA driver 12.6
      -NVIDIA driver 560.35.3
      -
      -CUDA libraries: 
      -- CUBLAS: 12.6.4
      -- CURAND: 10.3.7
      -- CUFFT: 11.3.0
      -- CUSOLVER: 11.7.1
      -- CUSPARSE: 12.5.4
      -- CUPTI: 2024.3.2 (API 24.0.0)
      -- NVML: 12.0.0+560.35.3
      -
      -Julia packages: 
      -- CUDA: 5.6.1
      -- CUDA_Driver_jll: 0.10.4+0
      -- CUDA_Runtime_jll: 0.15.5+0
      -
      -Toolchain:
      -- Julia: 1.11.2
      -- LLVM: 16.0.6
      -
      -Environment:
      -- JULIA_CUDA_HARD_MEMORY_LIMIT: 100%
      -
      -1 device:
      -  0: NVIDIA A100-PCIE-40GB MIG 1g.5gb (sm_80, 3.170 GiB / 4.750 GiB available)

      This page was generated using Literate.jl.

      `,26)]))}const y=a(t,[["render",l]]);export{E as __pageData,y as default}; diff --git a/dev/assets/tutorials_intermediate_3_HyperNet.md.BEqA10sc.js b/dev/assets/tutorials_intermediate_3_HyperNet.md.CRyPOfhj.js similarity index 68% rename from dev/assets/tutorials_intermediate_3_HyperNet.md.BEqA10sc.js rename to dev/assets/tutorials_intermediate_3_HyperNet.md.CRyPOfhj.js index dd86f299c8..fdb4b54a7a 100644 --- a/dev/assets/tutorials_intermediate_3_HyperNet.md.BEqA10sc.js +++ b/dev/assets/tutorials_intermediate_3_HyperNet.md.CRyPOfhj.js @@ -1,156 +1,76 @@ -import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E=JSON.parse('{"title":"Training a HyperNetwork on MNIST and FashionMNIST","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/3_HyperNet.md","filePath":"tutorials/intermediate/3_HyperNet.md","lastUpdated":null}'),t={name:"tutorials/intermediate/3_HyperNet.md"};function l(e,s,h,k,c,r){return p(),i("div",null,s[0]||(s[0]=[n(`

      Training a HyperNetwork on MNIST and FashionMNIST

      Package Imports

      julia
      using Lux, ComponentArrays, LuxCUDA, MLDatasets, MLUtils, OneHotArrays, Optimisers,
      +import{_ as i,c as a,a2 as n,o as p}from"./chunks/framework.BetCMmtc.js";const d=JSON.parse('{"title":"Training a HyperNetwork on MNIST and FashionMNIST","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/3_HyperNet.md","filePath":"tutorials/intermediate/3_HyperNet.md","lastUpdated":null}'),t={name:"tutorials/intermediate/3_HyperNet.md"};function l(e,s,h,k,r,c){return p(),a("div",null,s[0]||(s[0]=[n(`

      Training a HyperNetwork on MNIST and FashionMNIST

      Package Imports

      julia
      using Lux, ComponentArrays, LuxCUDA, MLDatasets, MLUtils, OneHotArrays, Optimisers,
             Printf, Random, Zygote
       
      -CUDA.allowscalar(false)
      Precompiling ComponentArrays...
      -   1001.1 ms  ✓ ComponentArrays
      -  1 dependency successfully precompiled in 1 seconds. 45 already precompiled.
      -Precompiling MLDataDevicesComponentArraysExt...
      -    661.5 ms  ✓ MLDataDevices → MLDataDevicesComponentArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.
      -Precompiling LuxComponentArraysExt...
      -    533.0 ms  ✓ ComponentArrays → ComponentArraysOptimisersExt
      -   1563.8 ms  ✓ Lux → LuxComponentArraysExt
      -   1970.8 ms  ✓ ComponentArrays → ComponentArraysKernelAbstractionsExt
      -  3 dependencies successfully precompiled in 2 seconds. 111 already precompiled.
      +CUDA.allowscalar(false)
      Precompiling LuxComponentArraysExt...
      +   1590.7 ms  ✓ Lux → LuxComponentArraysExt
      +  1 dependency successfully precompiled in 2 seconds. 113 already precompiled.
       Precompiling LuxCUDA...
      -   1325.0 ms  ✓ LLVM → BFloat16sExt
      -   2729.2 ms  ✓ CUDA_Runtime_jll
      -   2014.3 ms  ✓ CUDNN_jll
      -   4589.2 ms  ✓ GPUArrays
      -  45598.6 ms  ✓ DataFrames
      -  51987.5 ms  ✓ CUDA
      -   5027.8 ms  ✓ Atomix → AtomixCUDAExt
      -   8136.7 ms  ✓ cuDNN
      -   5323.2 ms  ✓ LuxCUDA
      -  9 dependencies successfully precompiled in 117 seconds. 93 already precompiled.
      -Precompiling MLDataDevicesGPUArraysExt...
      -   1645.2 ms  ✓ MLDataDevices → MLDataDevicesGPUArraysExt
      -  1 dependency successfully precompiled in 2 seconds. 57 already precompiled.
      -Precompiling WeightInitializersGPUArraysExt...
      -   1718.4 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      -  1 dependency successfully precompiled in 2 seconds. 60 already precompiled.
      +  45981.1 ms  ✓ CUDA
      +   4883.1 ms  ✓ Atomix → AtomixCUDAExt
      +   8068.2 ms  ✓ cuDNN
      +   5238.1 ms  ✓ LuxCUDA
      +  4 dependencies successfully precompiled in 64 seconds. 98 already precompiled.
       Precompiling ComponentArraysGPUArraysExt...
      -   1879.8 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
      +   1844.3 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
         1 dependency successfully precompiled in 2 seconds. 84 already precompiled.
      -Precompiling ParsersExt...
      -    486.4 ms  ✓ InlineStrings → ParsersExt
      -  1 dependency successfully precompiled in 1 seconds. 9 already precompiled.
       Precompiling ArrayInterfaceCUDAExt...
      -   4955.2 ms  ✓ ArrayInterface → ArrayInterfaceCUDAExt
      +   4736.3 ms  ✓ ArrayInterface → ArrayInterfaceCUDAExt
         1 dependency successfully precompiled in 5 seconds. 103 already precompiled.
       Precompiling NNlibCUDAExt...
      -   4974.6 ms  ✓ CUDA → ChainRulesCoreExt
      -   5317.6 ms  ✓ NNlib → NNlibCUDAExt
      +   4930.1 ms  ✓ CUDA → ChainRulesCoreExt
      +   5731.7 ms  ✓ NNlib → NNlibCUDAExt
         2 dependencies successfully precompiled in 6 seconds. 104 already precompiled.
       Precompiling MLDataDevicesCUDAExt...
      -   4994.5 ms  ✓ MLDataDevices → MLDataDevicesCUDAExt
      +   4751.9 ms  ✓ MLDataDevices → MLDataDevicesCUDAExt
         1 dependency successfully precompiled in 5 seconds. 106 already precompiled.
       Precompiling LuxLibCUDAExt...
      -   5181.0 ms  ✓ CUDA → EnzymeCoreExt
      -   5284.5 ms  ✓ CUDA → SpecialFunctionsExt
      -   5833.5 ms  ✓ LuxLib → LuxLibCUDAExt
      +   5070.0 ms  ✓ CUDA → EnzymeCoreExt
      +   5159.6 ms  ✓ CUDA → SpecialFunctionsExt
      +   5610.4 ms  ✓ LuxLib → LuxLibCUDAExt
         3 dependencies successfully precompiled in 6 seconds. 169 already precompiled.
       Precompiling WeightInitializersCUDAExt...
      -   5018.4 ms  ✓ WeightInitializers → WeightInitializersCUDAExt
      +   4901.3 ms  ✓ WeightInitializers → WeightInitializersCUDAExt
         1 dependency successfully precompiled in 5 seconds. 111 already precompiled.
       Precompiling NNlibCUDACUDNNExt...
      -   5394.3 ms  ✓ NNlib → NNlibCUDACUDNNExt
      +   5615.2 ms  ✓ NNlib → NNlibCUDACUDNNExt
         1 dependency successfully precompiled in 6 seconds. 108 already precompiled.
       Precompiling MLDataDevicescuDNNExt...
      -   5182.8 ms  ✓ MLDataDevices → MLDataDevicescuDNNExt
      -  1 dependency successfully precompiled in 6 seconds. 109 already precompiled.
      +   4950.2 ms  ✓ MLDataDevices → MLDataDevicescuDNNExt
      +  1 dependency successfully precompiled in 5 seconds. 109 already precompiled.
       Precompiling LuxLibcuDNNExt...
      -   5880.1 ms  ✓ LuxLib → LuxLibcuDNNExt
      +   5761.4 ms  ✓ LuxLib → LuxLibcuDNNExt
         1 dependency successfully precompiled in 6 seconds. 176 already precompiled.
      -Precompiling MLDatasets...
      -    371.9 ms  ✓ ContextVariablesX
      -    497.1 ms  ✓ LoggingExtras
      -    818.8 ms  ✓ StructTypes
      -    555.4 ms  ✓ BangBang → BangBangChainRulesCoreExt
      -    532.6 ms  ✓ ExceptionUnwrapping
      -    630.5 ms  ✓ Accessors → TestExt
      -   1245.3 ms  ✓ SplittablesBase
      -   1345.1 ms  ✓ OpenMPI_jll
      -   1434.5 ms  ✓ MPICH_jll
      -    762.2 ms  ✓ WeakRefStrings
      -   2204.6 ms  ✓ AtomsBase
      -   1248.2 ms  ✓ MPItrampoline_jll
      -   1965.3 ms  ✓ ImageShow
      -   1508.6 ms  ✓ NPZ
      -   2277.0 ms  ✓ Pickle
      -   1654.9 ms  ✓ BangBang → BangBangDataFramesExt
      -    561.1 ms  ✓ FLoopsBase
      -  11194.3 ms  ✓ JSON3
      -   2865.5 ms  ✓ Transducers
      -  18568.5 ms  ✓ HTTP
      -   2237.4 ms  ✓ Chemfiles
      -   1487.7 ms  ✓ HDF5_jll
      -   1513.8 ms  ✓ Transducers → TransducersDataFramesExt
      -    703.0 ms  ✓ Transducers → TransducersAdaptExt
      -   5329.5 ms  ✓ FLoops
      -  33710.2 ms  ✓ JLD2
      -   1859.5 ms  ✓ FileIO → HTTPExt
      -  19361.0 ms  ✓ CSV
      -   3128.6 ms  ✓ DataDeps
      -   6299.5 ms  ✓ MLUtils
      -   7534.5 ms  ✓ HDF5
      -   2337.4 ms  ✓ MAT
      -   8931.5 ms  ✓ MLDatasets
      -  33 dependencies successfully precompiled in 59 seconds. 166 already precompiled.
      -Precompiling MLDataDevicesMLUtilsExt...
      -   1604.5 ms  ✓ MLDataDevices → MLDataDevicesMLUtilsExt
      -  1 dependency successfully precompiled in 2 seconds. 102 already precompiled.
       Precompiling LuxMLUtilsExt...
      -   2242.9 ms  ✓ Lux → LuxMLUtilsExt
      -  1 dependency successfully precompiled in 3 seconds. 167 already precompiled.
      -Precompiling OneHotArrays...
      -    958.0 ms  ✓ OneHotArrays
      -  1 dependency successfully precompiled in 1 seconds. 28 already precompiled.
      -Precompiling MLDataDevicesOneHotArraysExt...
      -    741.6 ms  ✓ MLDataDevices → MLDataDevicesOneHotArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 35 already precompiled.
      -Precompiling Zygote...
      -    712.4 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      -   1059.8 ms  ✓ ZygoteRules
      -   5340.2 ms  ✓ ChainRules
      -  32833.6 ms  ✓ Zygote
      -  4 dependencies successfully precompiled in 38 seconds. 98 already precompiled.
      -Precompiling ArrayInterfaceChainRulesExt...
      -    789.6 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      -  1 dependency successfully precompiled in 1 seconds. 39 already precompiled.
      -Precompiling MLDataDevicesChainRulesExt...
      -    836.9 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      -  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      -Precompiling MLDataDevicesZygoteExt...
      -   1601.4 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
      -  1 dependency successfully precompiled in 2 seconds. 109 already precompiled.
      +   2084.7 ms  ✓ Lux → LuxMLUtilsExt
      +  1 dependency successfully precompiled in 2 seconds. 167 already precompiled.
       Precompiling LuxZygoteExt...
      -   2791.8 ms  ✓ Lux → LuxZygoteExt
      +   2880.7 ms  ✓ Lux → LuxZygoteExt
         1 dependency successfully precompiled in 3 seconds. 166 already precompiled.
      -Precompiling ComponentArraysZygoteExt...
      -   1592.1 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      -  1 dependency successfully precompiled in 2 seconds. 117 already precompiled.
       Precompiling ZygoteColorsExt...
      -   1803.2 ms  ✓ Zygote → ZygoteColorsExt
      +   1843.1 ms  ✓ Zygote → ZygoteColorsExt
         1 dependency successfully precompiled in 2 seconds. 105 already precompiled.

      Loading Datasets

      julia
      function load_dataset(::Type{dset}, n_train::Union{Nothing, Int},
               n_eval::Union{Nothing, Int}, batchsize::Int) where {dset}
      -    if n_train === nothing
      -        imgs, labels = dset(:train)
      +    (; features, targets) = if n_train === nothing
      +        tmp = dset(:train)
      +        tmp[1:length(tmp)]
           else
      -        imgs, labels = dset(:train)[1:n_train]
      +        dset(:train)[1:n_train]
           end
      -    x_train, y_train = reshape(imgs, 28, 28, 1, n_train), onehotbatch(labels, 0:9)
      +    x_train, y_train = reshape(features, 28, 28, 1, :), onehotbatch(targets, 0:9)
       
      -    if n_eval === nothing
      -        imgs, labels = dset(:test)
      +    (; features, targets) = if n_eval === nothing
      +        tmp = dset(:test)
      +        tmp[1:length(tmp)]
           else
      -        imgs, labels = dset(:test)[1:n_eval]
      +        dset(:test)[1:n_eval]
           end
      -    x_test, y_test = reshape(imgs, 28, 28, 1, n_eval), onehotbatch(labels, 0:9)
      +    x_test, y_test = reshape(features, 28, 28, 1, :), onehotbatch(targets, 0:9)
       
           return (
      -        DataLoader((x_train, y_train); batchsize=min(batchsize, n_train), shuffle=true),
      -        DataLoader((x_test, y_test); batchsize=min(batchsize, n_eval), shuffle=false)
      +        DataLoader(
      +            (x_train, y_train); batchsize=min(batchsize, size(x_train, 4)), shuffle=true),
      +        DataLoader(
      +            (x_test, y_test); batchsize=min(batchsize, size(x_test, 4)), shuffle=false)
           )
       end
       
      @@ -253,109 +173,109 @@ import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E
           return test_acc_list
       end
       
      -test_acc_list = train()
      [  1/ 50]	       MNIST	Time 88.66826s	Training Accuracy: 56.45%	Test Accuracy: 56.25%
      -[  1/ 50]	FashionMNIST	Time 0.03836s	Training Accuracy: 53.71%	Test Accuracy: 53.12%
      -[  2/ 50]	       MNIST	Time 0.08196s	Training Accuracy: 66.50%	Test Accuracy: 65.62%
      -[  2/ 50]	FashionMNIST	Time 0.03446s	Training Accuracy: 59.96%	Test Accuracy: 50.00%
      -[  3/ 50]	       MNIST	Time 0.03057s	Training Accuracy: 79.20%	Test Accuracy: 65.62%
      -[  3/ 50]	FashionMNIST	Time 0.03504s	Training Accuracy: 65.53%	Test Accuracy: 59.38%
      -[  4/ 50]	       MNIST	Time 0.03339s	Training Accuracy: 77.05%	Test Accuracy: 62.50%
      -[  4/ 50]	FashionMNIST	Time 0.05684s	Training Accuracy: 67.19%	Test Accuracy: 71.88%
      -[  5/ 50]	       MNIST	Time 0.02395s	Training Accuracy: 83.30%	Test Accuracy: 68.75%
      -[  5/ 50]	FashionMNIST	Time 0.02341s	Training Accuracy: 72.66%	Test Accuracy: 68.75%
      -[  6/ 50]	       MNIST	Time 0.02367s	Training Accuracy: 88.38%	Test Accuracy: 81.25%
      -[  6/ 50]	FashionMNIST	Time 0.02576s	Training Accuracy: 74.51%	Test Accuracy: 62.50%
      -[  7/ 50]	       MNIST	Time 0.03584s	Training Accuracy: 90.53%	Test Accuracy: 81.25%
      -[  7/ 50]	FashionMNIST	Time 0.02394s	Training Accuracy: 73.44%	Test Accuracy: 71.88%
      -[  8/ 50]	       MNIST	Time 0.02397s	Training Accuracy: 91.99%	Test Accuracy: 78.12%
      -[  8/ 50]	FashionMNIST	Time 0.02374s	Training Accuracy: 77.34%	Test Accuracy: 78.12%
      -[  9/ 50]	       MNIST	Time 0.03774s	Training Accuracy: 94.43%	Test Accuracy: 81.25%
      -[  9/ 50]	FashionMNIST	Time 0.02355s	Training Accuracy: 81.35%	Test Accuracy: 75.00%
      -[ 10/ 50]	       MNIST	Time 0.02356s	Training Accuracy: 96.29%	Test Accuracy: 81.25%
      -[ 10/ 50]	FashionMNIST	Time 0.03722s	Training Accuracy: 79.98%	Test Accuracy: 56.25%
      -[ 11/ 50]	       MNIST	Time 0.02387s	Training Accuracy: 97.46%	Test Accuracy: 84.38%
      -[ 11/ 50]	FashionMNIST	Time 0.02326s	Training Accuracy: 77.15%	Test Accuracy: 68.75%
      -[ 12/ 50]	       MNIST	Time 0.02407s	Training Accuracy: 97.46%	Test Accuracy: 84.38%
      -[ 12/ 50]	FashionMNIST	Time 0.02445s	Training Accuracy: 80.57%	Test Accuracy: 71.88%
      -[ 13/ 50]	       MNIST	Time 0.03150s	Training Accuracy: 98.63%	Test Accuracy: 84.38%
      -[ 13/ 50]	FashionMNIST	Time 0.02388s	Training Accuracy: 80.08%	Test Accuracy: 68.75%
      -[ 14/ 50]	       MNIST	Time 0.02402s	Training Accuracy: 98.93%	Test Accuracy: 84.38%
      -[ 14/ 50]	FashionMNIST	Time 0.03091s	Training Accuracy: 81.25%	Test Accuracy: 62.50%
      -[ 15/ 50]	       MNIST	Time 0.02384s	Training Accuracy: 99.51%	Test Accuracy: 84.38%
      -[ 15/ 50]	FashionMNIST	Time 0.02378s	Training Accuracy: 81.05%	Test Accuracy: 65.62%
      -[ 16/ 50]	       MNIST	Time 0.02343s	Training Accuracy: 99.71%	Test Accuracy: 84.38%
      -[ 16/ 50]	FashionMNIST	Time 0.02407s	Training Accuracy: 82.52%	Test Accuracy: 62.50%
      -[ 17/ 50]	       MNIST	Time 0.02576s	Training Accuracy: 99.90%	Test Accuracy: 84.38%
      -[ 17/ 50]	FashionMNIST	Time 0.02382s	Training Accuracy: 83.79%	Test Accuracy: 62.50%
      -[ 18/ 50]	       MNIST	Time 0.02349s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 18/ 50]	FashionMNIST	Time 0.03048s	Training Accuracy: 84.47%	Test Accuracy: 68.75%
      -[ 19/ 50]	       MNIST	Time 0.02385s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 19/ 50]	FashionMNIST	Time 0.02377s	Training Accuracy: 85.35%	Test Accuracy: 65.62%
      -[ 20/ 50]	       MNIST	Time 0.02316s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 20/ 50]	FashionMNIST	Time 0.02373s	Training Accuracy: 86.82%	Test Accuracy: 59.38%
      -[ 21/ 50]	       MNIST	Time 0.02517s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 21/ 50]	FashionMNIST	Time 0.02388s	Training Accuracy: 87.79%	Test Accuracy: 59.38%
      -[ 22/ 50]	       MNIST	Time 0.02330s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 22/ 50]	FashionMNIST	Time 0.02971s	Training Accuracy: 87.40%	Test Accuracy: 65.62%
      -[ 23/ 50]	       MNIST	Time 0.02482s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 23/ 50]	FashionMNIST	Time 0.02376s	Training Accuracy: 87.60%	Test Accuracy: 62.50%
      -[ 24/ 50]	       MNIST	Time 0.02382s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 24/ 50]	FashionMNIST	Time 0.02386s	Training Accuracy: 87.79%	Test Accuracy: 68.75%
      -[ 25/ 50]	       MNIST	Time 0.02470s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 25/ 50]	FashionMNIST	Time 0.02365s	Training Accuracy: 88.96%	Test Accuracy: 65.62%
      -[ 26/ 50]	       MNIST	Time 0.02345s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 26/ 50]	FashionMNIST	Time 0.02828s	Training Accuracy: 89.55%	Test Accuracy: 71.88%
      -[ 27/ 50]	       MNIST	Time 0.02378s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 27/ 50]	FashionMNIST	Time 0.02444s	Training Accuracy: 90.62%	Test Accuracy: 68.75%
      -[ 28/ 50]	       MNIST	Time 0.02371s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 28/ 50]	FashionMNIST	Time 0.02400s	Training Accuracy: 91.21%	Test Accuracy: 75.00%
      -[ 29/ 50]	       MNIST	Time 0.02480s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 29/ 50]	FashionMNIST	Time 0.02456s	Training Accuracy: 91.99%	Test Accuracy: 75.00%
      -[ 30/ 50]	       MNIST	Time 0.02380s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 30/ 50]	FashionMNIST	Time 0.02846s	Training Accuracy: 92.38%	Test Accuracy: 75.00%
      -[ 31/ 50]	       MNIST	Time 0.02401s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 31/ 50]	FashionMNIST	Time 0.02382s	Training Accuracy: 93.07%	Test Accuracy: 71.88%
      -[ 32/ 50]	       MNIST	Time 0.02334s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 32/ 50]	FashionMNIST	Time 0.02383s	Training Accuracy: 92.97%	Test Accuracy: 75.00%
      -[ 33/ 50]	       MNIST	Time 0.02533s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 33/ 50]	FashionMNIST	Time 0.02379s	Training Accuracy: 92.68%	Test Accuracy: 75.00%
      -[ 34/ 50]	       MNIST	Time 0.02396s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 34/ 50]	FashionMNIST	Time 0.02806s	Training Accuracy: 93.36%	Test Accuracy: 75.00%
      -[ 35/ 50]	       MNIST	Time 0.02376s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 35/ 50]	FashionMNIST	Time 0.02383s	Training Accuracy: 93.65%	Test Accuracy: 75.00%
      -[ 36/ 50]	       MNIST	Time 0.02370s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 36/ 50]	FashionMNIST	Time 0.02325s	Training Accuracy: 93.46%	Test Accuracy: 75.00%
      -[ 37/ 50]	       MNIST	Time 0.02495s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 37/ 50]	FashionMNIST	Time 0.02470s	Training Accuracy: 93.26%	Test Accuracy: 75.00%
      -[ 38/ 50]	       MNIST	Time 0.02389s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 38/ 50]	FashionMNIST	Time 0.03001s	Training Accuracy: 94.24%	Test Accuracy: 68.75%
      -[ 39/ 50]	       MNIST	Time 0.02408s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 39/ 50]	FashionMNIST	Time 0.02401s	Training Accuracy: 94.04%	Test Accuracy: 75.00%
      -[ 40/ 50]	       MNIST	Time 0.02407s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 40/ 50]	FashionMNIST	Time 0.02338s	Training Accuracy: 94.92%	Test Accuracy: 71.88%
      -[ 41/ 50]	       MNIST	Time 0.02520s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 41/ 50]	FashionMNIST	Time 0.02382s	Training Accuracy: 94.53%	Test Accuracy: 71.88%
      -[ 42/ 50]	       MNIST	Time 0.02382s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 42/ 50]	FashionMNIST	Time 0.02841s	Training Accuracy: 94.63%	Test Accuracy: 71.88%
      -[ 43/ 50]	       MNIST	Time 0.02395s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 43/ 50]	FashionMNIST	Time 0.02380s	Training Accuracy: 95.61%	Test Accuracy: 65.62%
      -[ 44/ 50]	       MNIST	Time 0.02326s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 44/ 50]	FashionMNIST	Time 0.02379s	Training Accuracy: 95.51%	Test Accuracy: 71.88%
      -[ 45/ 50]	       MNIST	Time 0.02467s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 45/ 50]	FashionMNIST	Time 0.02389s	Training Accuracy: 95.90%	Test Accuracy: 65.62%
      -[ 46/ 50]	       MNIST	Time 0.02387s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 46/ 50]	FashionMNIST	Time 0.02879s	Training Accuracy: 95.61%	Test Accuracy: 68.75%
      -[ 47/ 50]	       MNIST	Time 0.02407s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 47/ 50]	FashionMNIST	Time 0.02429s	Training Accuracy: 96.00%	Test Accuracy: 68.75%
      -[ 48/ 50]	       MNIST	Time 0.02374s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 48/ 50]	FashionMNIST	Time 0.02387s	Training Accuracy: 96.19%	Test Accuracy: 68.75%
      -[ 49/ 50]	       MNIST	Time 0.02529s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 49/ 50]	FashionMNIST	Time 0.02356s	Training Accuracy: 96.00%	Test Accuracy: 71.88%
      -[ 50/ 50]	       MNIST	Time 0.02376s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 50/ 50]	FashionMNIST	Time 0.02772s	Training Accuracy: 96.88%	Test Accuracy: 68.75%
      +test_acc_list = train()
      [  1/ 50]	       MNIST	Time 90.93891s	Training Accuracy: 58.30%	Test Accuracy: 50.00%
      +[  1/ 50]	FashionMNIST	Time 0.03589s	Training Accuracy: 52.25%	Test Accuracy: 40.62%
      +[  2/ 50]	       MNIST	Time 0.03529s	Training Accuracy: 65.82%	Test Accuracy: 59.38%
      +[  2/ 50]	FashionMNIST	Time 0.03677s	Training Accuracy: 61.43%	Test Accuracy: 53.12%
      +[  3/ 50]	       MNIST	Time 0.03784s	Training Accuracy: 78.71%	Test Accuracy: 62.50%
      +[  3/ 50]	FashionMNIST	Time 0.02366s	Training Accuracy: 63.87%	Test Accuracy: 65.62%
      +[  4/ 50]	       MNIST	Time 0.02413s	Training Accuracy: 78.91%	Test Accuracy: 59.38%
      +[  4/ 50]	FashionMNIST	Time 0.02402s	Training Accuracy: 62.70%	Test Accuracy: 50.00%
      +[  5/ 50]	       MNIST	Time 0.02468s	Training Accuracy: 83.01%	Test Accuracy: 71.88%
      +[  5/ 50]	FashionMNIST	Time 0.02561s	Training Accuracy: 66.60%	Test Accuracy: 59.38%
      +[  6/ 50]	       MNIST	Time 0.02672s	Training Accuracy: 87.40%	Test Accuracy: 71.88%
      +[  6/ 50]	FashionMNIST	Time 0.04283s	Training Accuracy: 75.39%	Test Accuracy: 56.25%
      +[  7/ 50]	       MNIST	Time 0.02878s	Training Accuracy: 90.92%	Test Accuracy: 78.12%
      +[  7/ 50]	FashionMNIST	Time 0.02569s	Training Accuracy: 77.73%	Test Accuracy: 65.62%
      +[  8/ 50]	       MNIST	Time 0.02505s	Training Accuracy: 91.99%	Test Accuracy: 78.12%
      +[  8/ 50]	FashionMNIST	Time 0.02606s	Training Accuracy: 75.68%	Test Accuracy: 71.88%
      +[  9/ 50]	       MNIST	Time 0.03860s	Training Accuracy: 95.41%	Test Accuracy: 78.12%
      +[  9/ 50]	FashionMNIST	Time 0.02483s	Training Accuracy: 80.57%	Test Accuracy: 71.88%
      +[ 10/ 50]	       MNIST	Time 0.02384s	Training Accuracy: 96.00%	Test Accuracy: 81.25%
      +[ 10/ 50]	FashionMNIST	Time 0.02516s	Training Accuracy: 80.08%	Test Accuracy: 78.12%
      +[ 11/ 50]	       MNIST	Time 0.04125s	Training Accuracy: 97.07%	Test Accuracy: 81.25%
      +[ 11/ 50]	FashionMNIST	Time 0.02846s	Training Accuracy: 82.13%	Test Accuracy: 75.00%
      +[ 12/ 50]	       MNIST	Time 0.02746s	Training Accuracy: 98.05%	Test Accuracy: 78.12%
      +[ 12/ 50]	FashionMNIST	Time 0.03960s	Training Accuracy: 83.30%	Test Accuracy: 68.75%
      +[ 13/ 50]	       MNIST	Time 0.02454s	Training Accuracy: 99.02%	Test Accuracy: 84.38%
      +[ 13/ 50]	FashionMNIST	Time 0.02544s	Training Accuracy: 84.57%	Test Accuracy: 78.12%
      +[ 14/ 50]	       MNIST	Time 0.03625s	Training Accuracy: 99.22%	Test Accuracy: 84.38%
      +[ 14/ 50]	FashionMNIST	Time 0.02589s	Training Accuracy: 85.94%	Test Accuracy: 71.88%
      +[ 15/ 50]	       MNIST	Time 0.02439s	Training Accuracy: 99.61%	Test Accuracy: 84.38%
      +[ 15/ 50]	FashionMNIST	Time 0.03808s	Training Accuracy: 86.72%	Test Accuracy: 78.12%
      +[ 16/ 50]	       MNIST	Time 0.02549s	Training Accuracy: 99.90%	Test Accuracy: 81.25%
      +[ 16/ 50]	FashionMNIST	Time 0.02385s	Training Accuracy: 87.60%	Test Accuracy: 65.62%
      +[ 17/ 50]	       MNIST	Time 0.03981s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 17/ 50]	FashionMNIST	Time 0.02388s	Training Accuracy: 89.26%	Test Accuracy: 62.50%
      +[ 18/ 50]	       MNIST	Time 0.02439s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 18/ 50]	FashionMNIST	Time 0.03753s	Training Accuracy: 88.48%	Test Accuracy: 68.75%
      +[ 19/ 50]	       MNIST	Time 0.02457s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 19/ 50]	FashionMNIST	Time 0.02341s	Training Accuracy: 89.84%	Test Accuracy: 71.88%
      +[ 20/ 50]	       MNIST	Time 0.04014s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 20/ 50]	FashionMNIST	Time 0.02331s	Training Accuracy: 88.67%	Test Accuracy: 75.00%
      +[ 21/ 50]	       MNIST	Time 0.02383s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 21/ 50]	FashionMNIST	Time 0.03570s	Training Accuracy: 89.94%	Test Accuracy: 68.75%
      +[ 22/ 50]	       MNIST	Time 0.02272s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 22/ 50]	FashionMNIST	Time 0.02405s	Training Accuracy: 90.82%	Test Accuracy: 75.00%
      +[ 23/ 50]	       MNIST	Time 0.03896s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 23/ 50]	FashionMNIST	Time 0.02609s	Training Accuracy: 91.50%	Test Accuracy: 75.00%
      +[ 24/ 50]	       MNIST	Time 0.02517s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 24/ 50]	FashionMNIST	Time 0.04117s	Training Accuracy: 90.53%	Test Accuracy: 78.12%
      +[ 25/ 50]	       MNIST	Time 0.02548s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 25/ 50]	FashionMNIST	Time 0.02606s	Training Accuracy: 89.45%	Test Accuracy: 75.00%
      +[ 26/ 50]	       MNIST	Time 0.04266s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 26/ 50]	FashionMNIST	Time 0.02453s	Training Accuracy: 88.96%	Test Accuracy: 71.88%
      +[ 27/ 50]	       MNIST	Time 0.02379s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 27/ 50]	FashionMNIST	Time 0.04063s	Training Accuracy: 89.84%	Test Accuracy: 68.75%
      +[ 28/ 50]	       MNIST	Time 0.02613s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      +[ 28/ 50]	FashionMNIST	Time 0.02591s	Training Accuracy: 90.04%	Test Accuracy: 68.75%
      +[ 29/ 50]	       MNIST	Time 0.04262s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      +[ 29/ 50]	FashionMNIST	Time 0.02624s	Training Accuracy: 89.94%	Test Accuracy: 75.00%
      +[ 30/ 50]	       MNIST	Time 0.02549s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 30/ 50]	FashionMNIST	Time 0.04256s	Training Accuracy: 90.82%	Test Accuracy: 75.00%
      +[ 31/ 50]	       MNIST	Time 0.02368s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 31/ 50]	FashionMNIST	Time 0.02326s	Training Accuracy: 92.29%	Test Accuracy: 71.88%
      +[ 32/ 50]	       MNIST	Time 0.03570s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 32/ 50]	FashionMNIST	Time 0.02291s	Training Accuracy: 92.97%	Test Accuracy: 68.75%
      +[ 33/ 50]	       MNIST	Time 0.02354s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 33/ 50]	FashionMNIST	Time 0.03821s	Training Accuracy: 93.75%	Test Accuracy: 75.00%
      +[ 34/ 50]	       MNIST	Time 0.02333s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 34/ 50]	FashionMNIST	Time 0.02589s	Training Accuracy: 93.16%	Test Accuracy: 68.75%
      +[ 35/ 50]	       MNIST	Time 0.04239s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 35/ 50]	FashionMNIST	Time 0.02489s	Training Accuracy: 94.04%	Test Accuracy: 71.88%
      +[ 36/ 50]	       MNIST	Time 0.02313s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 36/ 50]	FashionMNIST	Time 0.03971s	Training Accuracy: 94.53%	Test Accuracy: 71.88%
      +[ 37/ 50]	       MNIST	Time 0.02368s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 37/ 50]	FashionMNIST	Time 0.02640s	Training Accuracy: 94.43%	Test Accuracy: 71.88%
      +[ 38/ 50]	       MNIST	Time 0.04166s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 38/ 50]	FashionMNIST	Time 0.02454s	Training Accuracy: 95.12%	Test Accuracy: 75.00%
      +[ 39/ 50]	       MNIST	Time 0.02416s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 39/ 50]	FashionMNIST	Time 0.04181s	Training Accuracy: 95.21%	Test Accuracy: 75.00%
      +[ 40/ 50]	       MNIST	Time 0.02274s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 40/ 50]	FashionMNIST	Time 0.02501s	Training Accuracy: 95.41%	Test Accuracy: 75.00%
      +[ 41/ 50]	       MNIST	Time 0.04484s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 41/ 50]	FashionMNIST	Time 0.02362s	Training Accuracy: 95.51%	Test Accuracy: 75.00%
      +[ 42/ 50]	       MNIST	Time 0.02328s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 42/ 50]	FashionMNIST	Time 0.04095s	Training Accuracy: 96.09%	Test Accuracy: 75.00%
      +[ 43/ 50]	       MNIST	Time 0.02287s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 43/ 50]	FashionMNIST	Time 0.02328s	Training Accuracy: 96.09%	Test Accuracy: 75.00%
      +[ 44/ 50]	       MNIST	Time 0.04195s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 44/ 50]	FashionMNIST	Time 0.02327s	Training Accuracy: 96.29%	Test Accuracy: 75.00%
      +[ 45/ 50]	       MNIST	Time 0.02345s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 45/ 50]	FashionMNIST	Time 0.03968s	Training Accuracy: 96.39%	Test Accuracy: 75.00%
      +[ 46/ 50]	       MNIST	Time 0.02272s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 46/ 50]	FashionMNIST	Time 0.02340s	Training Accuracy: 96.68%	Test Accuracy: 75.00%
      +[ 47/ 50]	       MNIST	Time 0.03976s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 47/ 50]	FashionMNIST	Time 0.02335s	Training Accuracy: 96.58%	Test Accuracy: 78.12%
      +[ 48/ 50]	       MNIST	Time 0.02336s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 48/ 50]	FashionMNIST	Time 0.03924s	Training Accuracy: 96.97%	Test Accuracy: 75.00%
      +[ 49/ 50]	       MNIST	Time 0.02358s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 49/ 50]	FashionMNIST	Time 0.02355s	Training Accuracy: 96.97%	Test Accuracy: 78.12%
      +[ 50/ 50]	       MNIST	Time 0.04359s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 50/ 50]	FashionMNIST	Time 0.02275s	Training Accuracy: 97.17%	Test Accuracy: 75.00%
       
      -[FINAL]	       MNIST	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[FINAL]	FashionMNIST	Training Accuracy: 96.88%	Test Accuracy: 68.75%

      Appendix

      julia
      using InteractiveUtils
      +[FINAL]	       MNIST	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[FINAL]	FashionMNIST	Training Accuracy: 97.17%	Test Accuracy: 75.00%

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
       
       if @isdefined(MLDataDevices)
      @@ -368,8 +288,8 @@ import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -407,11 +327,11 @@ import{_ as a,c as i,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const E
       - CUDA_Runtime_jll: 0.15.5+0
       
       Toolchain:
      -- Julia: 1.11.2
      +- Julia: 1.11.3
       - LLVM: 16.0.6
       
       Environment:
       - JULIA_CUDA_HARD_MEMORY_LIMIT: 100%
       
       1 device:
      -  0: NVIDIA A100-PCIE-40GB MIG 1g.5gb (sm_80, 3.170 GiB / 4.750 GiB available)

      This page was generated using Literate.jl.

`,26)]))}const y=a(t,[["render",l]]);export{E as __pageData,y as default};
+  0: NVIDIA A100-PCIE-40GB MIG 1g.5gb (sm_80, 2.857 GiB / 4.750 GiB available)

      This page was generated using Literate.jl.

`,26)]))}const y=i(t,[["render",l]]);export{d as __pageData,y as default};
diff --git a/dev/assets/tutorials_intermediate_3_HyperNet.md.CRyPOfhj.lean.js b/dev/assets/tutorials_intermediate_3_HyperNet.md.CRyPOfhj.lean.js
new file mode 100644
index 0000000000..759d437c33
--- /dev/null
+++ b/dev/assets/tutorials_intermediate_3_HyperNet.md.CRyPOfhj.lean.js
@@ -0,0 +1 @@
+import{_ as i,c as a,a2 as n,o as p}from"./chunks/framework.BetCMmtc.js";const d=JSON.parse('{"title":"Training a HyperNetwork on MNIST and FashionMNIST","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/3_HyperNet.md","filePath":"tutorials/intermediate/3_HyperNet.md","lastUpdated":null}'),t={name:"tutorials/intermediate/3_HyperNet.md"};function l(e,s,h,k,r,c){return p(),a("div",null,s[0]||(s[0]=[n("",26)]))}const y=i(t,[["render",l]]);export{d as __pageData,y as default};
diff --git a/dev/assets/tutorials_intermediate_4_PINN2DPDE.md.9vdiyw9_.lean.js b/dev/assets/tutorials_intermediate_4_PINN2DPDE.md.9vdiyw9_.lean.js
deleted file mode 100644
index c11f3182bf..0000000000
--- a/dev/assets/tutorials_intermediate_4_PINN2DPDE.md.9vdiyw9_.lean.js
+++ /dev/null
@@ -1,338 +0,0 @@
-import{_ as l,c as n,a2 as a,j as s,a as t,o as p}from"./chunks/framework.I-x9Gl6h.js";const h="/dev/assets/pinn_nested_ad.BvqoGasw.gif",m=JSON.parse('{"title":"Training a PINN on 2D PDE","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/4_PINN2DPDE.md","filePath":"tutorials/intermediate/4_PINN2DPDE.md","lastUpdated":null}'),e={name:"tutorials/intermediate/4_PINN2DPDE.md"},k={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.586ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3795.2 
1000","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.401ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3713.2 1000","aria-hidden":"true"},g={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.109ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3584.2 1000","aria-hidden":"true"};function c(o,i,F,C,u,f){return p(),n("div",null,[i[10]||(i[10]=a(`

      Training a PINN on 2D PDE

In this tutorial we will go over using a PINN to solve 2D PDEs. We will be using the system from the NeuralPDE Tutorials. However, we will use our own custom loss function and the nested AD capabilities of Lux.jl.

This is a demonstration of Lux.jl. For serious use cases of PINNs, please refer to the package NeuralPDE.jl.

      Package Imports

      julia
      using Lux, Optimisers, Random, Printf, Statistics, MLUtils, OnlineStats, CairoMakie,
      -      Reactant, Enzyme
      -
      -const xdev = reactant_device(; force=true)
      -const cdev = cpu_device()
      (::MLDataDevices.CPUDevice) (generic function with 1 method)

      Problem Definition

Since Lux supports efficient nested AD up to 2nd order, we will rewrite the problem in terms of first-order derivatives, so that the gradients of the loss can be computed using 2nd-order AD.
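Concretely, writing v and w for the two auxiliary first derivatives (each modeled by its own network), the 2nd-order problem (assumed here to be the 2D heat equation, which matches the residual terms penalized in the physics loss) becomes a first-order system:

```latex
% First-order reformulation: introduce v = u_x and w = u_y as separate
% network outputs, so the physics residual only needs first derivatives.
\begin{aligned}
\frac{\partial u}{\partial t} &= \frac{\partial v}{\partial x} + \frac{\partial w}{\partial y}, \\
v &= \frac{\partial u}{\partial x}, \qquad w = \frac{\partial u}{\partial y}.
\end{aligned}
```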

      Define the Neural Networks

All the networks take 3 input variables and output a scalar value. Here, we will define a wrapper around the 3 networks, so that we can train them using Training.TrainState.

      julia
      struct PINN{U, V, W} <: Lux.AbstractLuxContainerLayer{(:u, :v, :w)}
      -    u::U
      -    v::V
      -    w::W
      -end
      -
      -function create_mlp(act, hidden_dims)
      -    return Chain(
      -        Dense(3 => hidden_dims, act),
      -        Dense(hidden_dims => hidden_dims, act),
      -        Dense(hidden_dims => hidden_dims, act),
      -        Dense(hidden_dims => 1)
      -    )
      -end
      -
      -function PINN(; hidden_dims::Int=32)
      -    return PINN(
      -        create_mlp(tanh, hidden_dims),
      -        create_mlp(tanh, hidden_dims),
      -        create_mlp(tanh, hidden_dims)
      -    )
      -end
      Main.var"##230".PINN

      Define the Loss Functions

We will define a custom loss function that computes the loss using 2nd-order AD. We will use the following loss function:

      julia
      @views function physics_informed_loss_function(
      -        u::StatefulLuxLayer, v::StatefulLuxLayer, w::StatefulLuxLayer, xyt::AbstractArray
      -)
-    ∂u_∂xyt = Enzyme.gradient(Enzyme.Reverse, sum ∘ u, xyt)[1]
      -    ∂u_∂x, ∂u_∂y, ∂u_∂t = ∂u_∂xyt[1:1, :], ∂u_∂xyt[2:2, :], ∂u_∂xyt[3:3, :]
-    ∂v_∂x = Enzyme.gradient(Enzyme.Reverse, sum ∘ v, xyt)[1][1:1, :]
      -    v_xyt = v(xyt)
-    ∂w_∂y = Enzyme.gradient(Enzyme.Reverse, sum ∘ w, xyt)[1][2:2, :]
      -    w_xyt = w(xyt)
      -    return (
      -        mean(abs2, ∂u_∂t .- ∂v_∂x .- ∂w_∂y) +
      -        mean(abs2, v_xyt .- ∂u_∂x) +
      -        mean(abs2, w_xyt .- ∂u_∂y)
      -    )
      -end
      physics_informed_loss_function (generic function with 1 method)
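As a quick sanity check on the residual this loss enforces, one can verify numerically that the analytical solution used later in the tutorial, u(x, y, t) = exp(x + y)·cos(x + y + 4t), satisfies the assumed PDE u_t = u_xx + u_yy (the PDE is inferred from the residual terms). A hedged Python/NumPy sketch, not part of the tutorial:

```python
import numpy as np

# Analytical solution from the tutorial: u(x, y, t) = exp(x + y) * cos(x + y + 4t).
def u(x, y, t):
    return np.exp(x + y) * np.cos(x + y + 4.0 * t)

# Central finite-difference residual of the assumed PDE u_t = u_xx + u_yy.
def residual(x, y, t, h=1e-4):
    u_t = (u(x, y, t + h) - u(x, y, t - h)) / (2 * h)
    u_xx = (u(x + h, y, t) - 2 * u(x, y, t) + u(x - h, y, t)) / h**2
    u_yy = (u(x, y + h, t) - 2 * u(x, y, t) + u(x, y - h, t)) / h**2
    return u_t - u_xx - u_yy

# The residual vanishes (up to finite-difference error) across the domain.
pts = [(0.1, 0.2, 0.3), (1.0, 1.5, 0.5), (1.9, 1.9, 1.9)]
max_res = max(abs(residual(*p)) for p in pts)
print(max_res)
```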

Additionally, we need to compute the loss with respect to the boundary conditions.

      julia
      function mse_loss_function(u::StatefulLuxLayer, target::AbstractArray, xyt::AbstractArray)
      -    return MSELoss()(u(xyt), target)
      -end
      -
      -function loss_function(model, ps, st, (xyt, target_data, xyt_bc, target_bc))
      -    u_net = StatefulLuxLayer{true}(model.u, ps.u, st.u)
      -    v_net = StatefulLuxLayer{true}(model.v, ps.v, st.v)
      -    w_net = StatefulLuxLayer{true}(model.w, ps.w, st.w)
      -    physics_loss = physics_informed_loss_function(u_net, v_net, w_net, xyt)
      -    data_loss = mse_loss_function(u_net, target_data, xyt)
      -    bc_loss = mse_loss_function(u_net, target_bc, xyt_bc)
      -    loss = physics_loss + data_loss + bc_loss
      -    return (
      -        loss,
      -        (; u=u_net.st, v=v_net.st, w=w_net.st),
      -        (; physics_loss, data_loss, bc_loss)
      -    )
      -end
      loss_function (generic function with 1 method)
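Putting the pieces together, `loss_function` minimizes the sum of three mean-squared terms (notation ours; the mean is taken over the sampled batch):

```latex
% Total objective assembled in loss_function (batch means over sampled points):
\mathcal{L} =
\underbrace{\mathbb{E}\!\left[(u_t - v_x - w_y)^2 + (v - u_x)^2 + (w - u_y)^2\right]}_{\text{physics\_loss}}
+ \underbrace{\mathbb{E}\!\left[(u - u_{\mathrm{data}})^2\right]}_{\text{data\_loss}}
+ \underbrace{\mathbb{E}\!\left[(u - u_{\mathrm{bc}})^2\right]}_{\text{bc\_loss}}
```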

      Generate the Data

We will generate some random data to train the model on. We will take data on a square spatial and temporal domain x ∈ [0, 2], y ∈ [0, 2], and t ∈ [0, 2]. Typically, you want to be smarter about the sampling process, but for the sake of simplicity, we will skip that.
      julia
      analytical_solution(x, y, t) = @. exp(x + y) * cos(x + y + 4t)
      -analytical_solution(xyt) = analytical_solution(xyt[1, :], xyt[2, :], xyt[3, :])
      -
      -begin
      -    grid_len = 16
      -
      -    grid = range(0.0f0, 2.0f0; length=grid_len)
      -    xyt = stack([[elem...] for elem in vec(collect(Iterators.product(grid, grid, grid)))])
      -
      -    target_data = reshape(analytical_solution(xyt), 1, :)
      -
      -    bc_len = 512
      -
      -    x = collect(range(0.0f0, 2.0f0; length=bc_len))
      -    y = collect(range(0.0f0, 2.0f0; length=bc_len))
      -    t = collect(range(0.0f0, 2.0f0; length=bc_len))
      -
      -    xyt_bc = hcat(
      -        stack((x, y, zeros(Float32, bc_len)); dims=1),
      -        stack((zeros(Float32, bc_len), y, t); dims=1),
      -        stack((ones(Float32, bc_len) .* 2, y, t); dims=1),
      -        stack((x, zeros(Float32, bc_len), t); dims=1),
      -        stack((x, ones(Float32, bc_len) .* 2, t); dims=1)
      -    )
      -    target_bc = reshape(analytical_solution(xyt_bc), 1, :)
      -
      -    min_target_bc, max_target_bc = extrema(target_bc)
      -    min_data, max_data = extrema(target_data)
      -    min_pde_val, max_pde_val = min(min_data, min_target_bc), max(max_data, max_target_bc)
      -
      -    xyt = (xyt .- minimum(xyt)) ./ (maximum(xyt) .- minimum(xyt))
      -    xyt_bc = (xyt_bc .- minimum(xyt_bc)) ./ (maximum(xyt_bc) .- minimum(xyt_bc))
      -    target_bc = (target_bc .- min_pde_val) ./ (max_pde_val - min_pde_val)
      -    target_data = (target_data .- min_pde_val) ./ (max_pde_val - min_pde_val)
      -end
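Note that the block above scales both target sets with one shared (min, max) pair, so the PDE targets and boundary targets end up on the same [0, 1] scale. A hedged Python sketch with toy stand-in values (not the tutorial's data):

```python
import numpy as np

# Shared min-max scaling: one common (lo, hi) pair for both target sets,
# mirroring how min_pde_val/max_pde_val are used in the tutorial.
def minmax_scale(a, lo, hi):
    return (a - lo) / (hi - lo)

target_data = np.array([-3.0, 0.0, 2.0])   # toy stand-ins for the real targets
target_bc = np.array([-1.0, 5.0])

lo = min(target_data.min(), target_bc.min())
hi = max(target_data.max(), target_bc.max())

scaled_data = minmax_scale(target_data, lo, hi)
scaled_bc = minmax_scale(target_bc, lo, hi)
print(scaled_data.min(), scaled_bc.max())  # jointly spans [0, 1]
```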

      Training

      julia
      function train_model(
      -        xyt, target_data, xyt_bc, target_bc; seed::Int=0,
      -        maxiters::Int=50000, hidden_dims::Int=32
      -)
      -    rng = Random.default_rng()
      -    Random.seed!(rng, seed)
      -
      -    pinn = PINN(; hidden_dims)
      -    ps, st = Lux.setup(rng, pinn) |> xdev
      -
      -    bc_dataloader = DataLoader(
      -        (xyt_bc, target_bc); batchsize=32, shuffle=true, partial=false
      -    ) |> xdev
      -    pde_dataloader = DataLoader(
      -        (xyt, target_data); batchsize=32, shuffle=true, partial=false
      -    ) |> xdev
      -
      -    train_state = Training.TrainState(pinn, ps, st, Adam(0.05f0))
      -    lr = i -> i < 5000 ? 0.05f0 : (i < 10000 ? 0.005f0 : 0.0005f0)
      -
      -    total_loss_tracker, physics_loss_tracker, data_loss_tracker, bc_loss_tracker = ntuple(
      -        _ -> OnlineStats.CircBuff(Float32, 32; rev=true), 4)
      -
      -    iter = 1
      -    for ((xyt_batch, target_data_batch), (xyt_bc_batch, target_bc_batch)) in zip(
      -        Iterators.cycle(pde_dataloader), Iterators.cycle(bc_dataloader)
      -    )
      -        Optimisers.adjust!(train_state, lr(iter))
      -
      -        _, loss, stats, train_state = Training.single_train_step!(
      -            AutoEnzyme(), loss_function,
      -            (xyt_batch, target_data_batch, xyt_bc_batch, target_bc_batch),
      -            train_state
      -        )
      -
      -        fit!(total_loss_tracker, Float32(loss))
      -        fit!(physics_loss_tracker, Float32(stats.physics_loss))
      -        fit!(data_loss_tracker, Float32(stats.data_loss))
      -        fit!(bc_loss_tracker, Float32(stats.bc_loss))
      -
      -        mean_loss = mean(OnlineStats.value(total_loss_tracker))
      -        mean_physics_loss = mean(OnlineStats.value(physics_loss_tracker))
      -        mean_data_loss = mean(OnlineStats.value(data_loss_tracker))
      -        mean_bc_loss = mean(OnlineStats.value(bc_loss_tracker))
      -
      -        isnan(loss) && throw(ArgumentError("NaN Loss Detected"))
      -
      -        if iter % 1000 == 1 || iter == maxiters
      -            @printf "Iteration: [%6d/%6d] \\t Loss: %.9f (%.9f) \\t Physics Loss: %.9f \\
      -                     (%.9f) \\t Data Loss: %.9f (%.9f) \\t BC \\
      -                     Loss: %.9f (%.9f)\\n" iter maxiters loss mean_loss stats.physics_loss mean_physics_loss stats.data_loss mean_data_loss stats.bc_loss mean_bc_loss
      -        end
      -
      -        iter += 1
-        iter ≥ maxiters && break
      -    end
      -
      -    return StatefulLuxLayer{true}(
      -        pinn, cdev(train_state.parameters), cdev(train_state.states)
      -    )
      -end
      -
      -trained_model = train_model(xyt, target_data, xyt_bc, target_bc)
      -trained_u = Lux.testmode(
      -    StatefulLuxLayer{true}(trained_model.model.u, trained_model.ps.u, trained_model.st.u)
      -)
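The `lr` closure in `train_model` above is a piecewise-constant schedule: a 10x decay at iterations 5000 and 10000, applied each step via `Optimisers.adjust!`. As a standalone sketch (Python, illustrative only):

```python
# Piecewise-constant schedule mirroring the tutorial's lr closure:
# 0.05 until iteration 5000, then 0.005, then 0.0005 from 10000 on.
def lr_schedule(i):
    if i < 5000:
        return 0.05
    if i < 10000:
        return 0.005
    return 0.0005

print([lr_schedule(i) for i in (1, 4999, 5000, 9999, 10000, 50000)])
```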
      2025-01-20 23:01:51.964035: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 4000489932546559643
      -E0120 23:01:52.335396 3951999 buffer_comparator.cc:156] Difference at 16: -nan, expected 11.6059
      -E0120 23:01:52.335702 3951999 buffer_comparator.cc:156] Difference at 17: -nan, expected 14.502
      -E0120 23:01:52.335707 3951999 buffer_comparator.cc:156] Difference at 18: -nan, expected 11.2449
      -E0120 23:01:52.335711 3951999 buffer_comparator.cc:156] Difference at 19: -nan, expected 10.0998
      -E0120 23:01:52.335715 3951999 buffer_comparator.cc:156] Difference at 20: -nan, expected 14.0222
      -E0120 23:01:52.335719 3951999 buffer_comparator.cc:156] Difference at 21: -nan, expected 10.1321
      -E0120 23:01:52.335722 3951999 buffer_comparator.cc:156] Difference at 22: -nan, expected 10.2986
      -E0120 23:01:52.335726 3951999 buffer_comparator.cc:156] Difference at 23: -nan, expected 14.1109
      -E0120 23:01:52.335730 3951999 buffer_comparator.cc:156] Difference at 24: -nan, expected 13.3463
      -E0120 23:01:52.335733 3951999 buffer_comparator.cc:156] Difference at 25: -nan, expected 12.8369
      -2025-01-20 23:01:52.335762: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.338530 3951999 buffer_comparator.cc:156] Difference at 16: -nan, expected 11.6059
      -E0120 23:01:52.338554 3951999 buffer_comparator.cc:156] Difference at 17: -nan, expected 14.502
      -E0120 23:01:52.338558 3951999 buffer_comparator.cc:156] Difference at 18: -nan, expected 11.2449
      -E0120 23:01:52.338562 3951999 buffer_comparator.cc:156] Difference at 19: -nan, expected 10.0998
      -E0120 23:01:52.338565 3951999 buffer_comparator.cc:156] Difference at 20: -nan, expected 14.0222
      -E0120 23:01:52.338569 3951999 buffer_comparator.cc:156] Difference at 21: -nan, expected 10.1321
      -E0120 23:01:52.338573 3951999 buffer_comparator.cc:156] Difference at 22: -nan, expected 10.2986
      -E0120 23:01:52.338576 3951999 buffer_comparator.cc:156] Difference at 23: -nan, expected 14.1109
      -E0120 23:01:52.338580 3951999 buffer_comparator.cc:156] Difference at 24: -nan, expected 13.3463
      -E0120 23:01:52.338584 3951999 buffer_comparator.cc:156] Difference at 25: -nan, expected 12.8369
      -2025-01-20 23:01:52.338590: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.341328 3951999 buffer_comparator.cc:156] Difference at 1024: -nan, expected 11.5293
      -E0120 23:01:52.341348 3951999 buffer_comparator.cc:156] Difference at 1025: -nan, expected 10.1983
      -E0120 23:01:52.341352 3951999 buffer_comparator.cc:156] Difference at 1026: -nan, expected 13.3385
      -E0120 23:01:52.341356 3951999 buffer_comparator.cc:156] Difference at 1027: -nan, expected 12.4705
      -E0120 23:01:52.341359 3951999 buffer_comparator.cc:156] Difference at 1028: -nan, expected 8.94387
      -E0120 23:01:52.341363 3951999 buffer_comparator.cc:156] Difference at 1029: -nan, expected 10.8997
      -E0120 23:01:52.341367 3951999 buffer_comparator.cc:156] Difference at 1030: -nan, expected 10.6486
      -E0120 23:01:52.341370 3951999 buffer_comparator.cc:156] Difference at 1031: -nan, expected 9.73507
      -E0120 23:01:52.341374 3951999 buffer_comparator.cc:156] Difference at 1032: -nan, expected 12.2806
      -E0120 23:01:52.341378 3951999 buffer_comparator.cc:156] Difference at 1033: -nan, expected 10.1883
      -2025-01-20 23:01:52.341384: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.344116 3951999 buffer_comparator.cc:156] Difference at 1040: -nan, expected 9.99799
      -E0120 23:01:52.344135 3951999 buffer_comparator.cc:156] Difference at 1041: -nan, expected 12.209
      -E0120 23:01:52.344140 3951999 buffer_comparator.cc:156] Difference at 1042: -nan, expected 9.4851
      -E0120 23:01:52.344143 3951999 buffer_comparator.cc:156] Difference at 1043: -nan, expected 8.26397
      -E0120 23:01:52.344147 3951999 buffer_comparator.cc:156] Difference at 1044: -nan, expected 11.9253
      -E0120 23:01:52.344152 3951999 buffer_comparator.cc:156] Difference at 1045: -nan, expected 8.99047
      -E0120 23:01:52.344156 3951999 buffer_comparator.cc:156] Difference at 1046: -nan, expected 8.81842
      -E0120 23:01:52.344159 3951999 buffer_comparator.cc:156] Difference at 1047: -nan, expected 12.2714
      -E0120 23:01:52.344163 3951999 buffer_comparator.cc:156] Difference at 1048: -nan, expected 11.1417
      -E0120 23:01:52.344167 3951999 buffer_comparator.cc:156] Difference at 1049: -nan, expected 10.6572
      -2025-01-20 23:01:52.344173: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.346894 3951999 buffer_comparator.cc:156] Difference at 1056: -nan, expected 10.6543
      -E0120 23:01:52.346907 3951999 buffer_comparator.cc:156] Difference at 1057: -nan, expected 11.0945
      -E0120 23:01:52.346910 3951999 buffer_comparator.cc:156] Difference at 1058: -nan, expected 11.1424
      -E0120 23:01:52.346913 3951999 buffer_comparator.cc:156] Difference at 1059: -nan, expected 12.7556
      -E0120 23:01:52.346916 3951999 buffer_comparator.cc:156] Difference at 1060: -nan, expected 12.6932
      -E0120 23:01:52.346919 3951999 buffer_comparator.cc:156] Difference at 1061: -nan, expected 10.0594
      -E0120 23:01:52.346921 3951999 buffer_comparator.cc:156] Difference at 1062: -nan, expected 12.3478
      -E0120 23:01:52.346924 3951999 buffer_comparator.cc:156] Difference at 1063: -nan, expected 10.8381
      -E0120 23:01:52.346926 3951999 buffer_comparator.cc:156] Difference at 1064: -nan, expected 10.409
      -E0120 23:01:52.346929 3951999 buffer_comparator.cc:156] Difference at 1065: -nan, expected 10.3688
      -2025-01-20 23:01:52.346934: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.349626 3951999 buffer_comparator.cc:156] Difference at 1056: -nan, expected 10.6543
      -E0120 23:01:52.349640 3951999 buffer_comparator.cc:156] Difference at 1057: -nan, expected 11.0945
      -E0120 23:01:52.349643 3951999 buffer_comparator.cc:156] Difference at 1058: -nan, expected 11.1424
      -E0120 23:01:52.349646 3951999 buffer_comparator.cc:156] Difference at 1059: -nan, expected 12.7556
      -E0120 23:01:52.349648 3951999 buffer_comparator.cc:156] Difference at 1060: -nan, expected 12.6932
      -E0120 23:01:52.349651 3951999 buffer_comparator.cc:156] Difference at 1061: -nan, expected 10.0594
      -E0120 23:01:52.349653 3951999 buffer_comparator.cc:156] Difference at 1062: -nan, expected 12.3478
      -E0120 23:01:52.349656 3951999 buffer_comparator.cc:156] Difference at 1063: -nan, expected 10.8381
      -E0120 23:01:52.349659 3951999 buffer_comparator.cc:156] Difference at 1064: -nan, expected 10.409
      -E0120 23:01:52.349661 3951999 buffer_comparator.cc:156] Difference at 1065: -nan, expected 10.3688
      -2025-01-20 23:01:52.349666: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.352365 3951999 buffer_comparator.cc:156] Difference at 1056: -nan, expected 10.6543
      -E0120 23:01:52.352384 3951999 buffer_comparator.cc:156] Difference at 1057: -nan, expected 11.0945
      -E0120 23:01:52.352387 3951999 buffer_comparator.cc:156] Difference at 1058: -nan, expected 11.1424
      -E0120 23:01:52.352389 3951999 buffer_comparator.cc:156] Difference at 1059: -nan, expected 12.7556
      -E0120 23:01:52.352392 3951999 buffer_comparator.cc:156] Difference at 1060: -nan, expected 12.6932
      -E0120 23:01:52.352395 3951999 buffer_comparator.cc:156] Difference at 1061: -nan, expected 10.0594
      -E0120 23:01:52.352397 3951999 buffer_comparator.cc:156] Difference at 1062: -nan, expected 12.3478
      -E0120 23:01:52.352400 3951999 buffer_comparator.cc:156] Difference at 1063: -nan, expected 10.8381
      -E0120 23:01:52.352402 3951999 buffer_comparator.cc:156] Difference at 1064: -nan, expected 10.409
      -E0120 23:01:52.352405 3951999 buffer_comparator.cc:156] Difference at 1065: -nan, expected 10.3688
      -2025-01-20 23:01:52.352411: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -Iteration: [     1/ 50000] 	 Loss: 3.158992767 (3.158992767) 	 Physics Loss: 1.982357264 (1.982357264) 	 Data Loss: 0.578243732 (0.578243732) 	 BC Loss: 0.598391712 (0.598391712)
      -Iteration: [  1001/ 50000] 	 Loss: 0.034934171 (0.028678153) 	 Physics Loss: 0.000550666 (0.000427971) 	 Data Loss: 0.022407684 (0.011849238) 	 BC Loss: 0.011975820 (0.016400939)
      -Iteration: [  2001/ 50000] 	 Loss: 0.029342793 (0.031756539) 	 Physics Loss: 0.001803906 (0.001986223) 	 Data Loss: 0.013016323 (0.012399685) 	 BC Loss: 0.014522564 (0.017370628)
      -Iteration: [  3001/ 50000] 	 Loss: 0.021532167 (0.027456025) 	 Physics Loss: 0.002033974 (0.004514552) 	 Data Loss: 0.004263131 (0.008210877) 	 BC Loss: 0.015235063 (0.014730594)
      -Iteration: [  4001/ 50000] 	 Loss: 0.049741872 (0.035063371) 	 Physics Loss: 0.008673832 (0.002975470) 	 Data Loss: 0.016625574 (0.012557827) 	 BC Loss: 0.024442466 (0.019530077)
      -Iteration: [  5001/ 50000] 	 Loss: 0.021555956 (0.037785888) 	 Physics Loss: 0.002584497 (0.009353531) 	 Data Loss: 0.008988982 (0.011811271) 	 BC Loss: 0.009982477 (0.016621085)
      -Iteration: [  6001/ 50000] 	 Loss: 0.022884602 (0.019320106) 	 Physics Loss: 0.000503887 (0.000844118) 	 Data Loss: 0.005671175 (0.006657102) 	 BC Loss: 0.016709540 (0.011818888)
      -Iteration: [  7001/ 50000] 	 Loss: 0.017019382 (0.019912010) 	 Physics Loss: 0.000786213 (0.000926178) 	 Data Loss: 0.006546173 (0.007675517) 	 BC Loss: 0.009686996 (0.011310317)
      -Iteration: [  8001/ 50000] 	 Loss: 0.022041641 (0.018215429) 	 Physics Loss: 0.002748450 (0.002823828) 	 Data Loss: 0.004308246 (0.004818310) 	 BC Loss: 0.014984946 (0.010573289)
      -Iteration: [  9001/ 50000] 	 Loss: 0.020825051 (0.016353965) 	 Physics Loss: 0.001645100 (0.001697728) 	 Data Loss: 0.003023159 (0.004666437) 	 BC Loss: 0.016156793 (0.009989802)
      -Iteration: [ 10001/ 50000] 	 Loss: 0.012419771 (0.014550619) 	 Physics Loss: 0.001211297 (0.002143627) 	 Data Loss: 0.006038338 (0.003847382) 	 BC Loss: 0.005170135 (0.008559611)
      -Iteration: [ 11001/ 50000] 	 Loss: 0.016400268 (0.013275324) 	 Physics Loss: 0.001640767 (0.001365761) 	 Data Loss: 0.003816272 (0.003336374) 	 BC Loss: 0.010943229 (0.008573188)
      -Iteration: [ 12001/ 50000] 	 Loss: 0.012625601 (0.012438955) 	 Physics Loss: 0.001496360 (0.001293942) 	 Data Loss: 0.005017307 (0.003183292) 	 BC Loss: 0.006111934 (0.007961722)
      -Iteration: [ 13001/ 50000] 	 Loss: 0.006276353 (0.011658882) 	 Physics Loss: 0.000723396 (0.001394610) 	 Data Loss: 0.001822183 (0.002704730) 	 BC Loss: 0.003730775 (0.007559542)
      -Iteration: [ 14001/ 50000] 	 Loss: 0.007330933 (0.010494760) 	 Physics Loss: 0.001231446 (0.001130134) 	 Data Loss: 0.003328452 (0.002512253) 	 BC Loss: 0.002771034 (0.006852372)
      -Iteration: [ 15001/ 50000] 	 Loss: 0.012322948 (0.010581179) 	 Physics Loss: 0.001112666 (0.001342141) 	 Data Loss: 0.000318603 (0.002849674) 	 BC Loss: 0.010891679 (0.006389363)
      -Iteration: [ 16001/ 50000] 	 Loss: 0.005651589 (0.011476245) 	 Physics Loss: 0.001261572 (0.001271797) 	 Data Loss: 0.001857541 (0.002460307) 	 BC Loss: 0.002532476 (0.007744140)
      -Iteration: [ 17001/ 50000] 	 Loss: 0.004589280 (0.010282698) 	 Physics Loss: 0.000900613 (0.001197141) 	 Data Loss: 0.000789952 (0.002489067) 	 BC Loss: 0.002898715 (0.006596491)
      -Iteration: [ 18001/ 50000] 	 Loss: 0.013593317 (0.008794412) 	 Physics Loss: 0.001301785 (0.001351097) 	 Data Loss: 0.003135182 (0.001751710) 	 BC Loss: 0.009156350 (0.005691605)
      -Iteration: [ 19001/ 50000] 	 Loss: 0.009044455 (0.009721650) 	 Physics Loss: 0.001797066 (0.001625932) 	 Data Loss: 0.003098861 (0.001854344) 	 BC Loss: 0.004148528 (0.006241375)
      -Iteration: [ 20001/ 50000] 	 Loss: 0.004531536 (0.007789471) 	 Physics Loss: 0.002314395 (0.001649175) 	 Data Loss: 0.001491809 (0.001308756) 	 BC Loss: 0.000725332 (0.004831538)
      -Iteration: [ 21001/ 50000] 	 Loss: 0.004263383 (0.006220151) 	 Physics Loss: 0.001818714 (0.001836546) 	 Data Loss: 0.001014963 (0.001413772) 	 BC Loss: 0.001429706 (0.002969833)
      -Iteration: [ 22001/ 50000] 	 Loss: 0.006228818 (0.005886041) 	 Physics Loss: 0.003274539 (0.002052142) 	 Data Loss: 0.002419390 (0.000916688) 	 BC Loss: 0.000534889 (0.002917211)
      -Iteration: [ 23001/ 50000] 	 Loss: 0.004727511 (0.004308250) 	 Physics Loss: 0.001700793 (0.001910425) 	 Data Loss: 0.001145163 (0.000927515) 	 BC Loss: 0.001881554 (0.001470311)
      -Iteration: [ 24001/ 50000] 	 Loss: 0.005070512 (0.003963022) 	 Physics Loss: 0.001458622 (0.001738792) 	 Data Loss: 0.000377079 (0.000693607) 	 BC Loss: 0.003234812 (0.001530622)
      -Iteration: [ 25001/ 50000] 	 Loss: 0.002241082 (0.003130429) 	 Physics Loss: 0.000983181 (0.001482069) 	 Data Loss: 0.000401148 (0.000641221) 	 BC Loss: 0.000856754 (0.001007139)
      -Iteration: [ 26001/ 50000] 	 Loss: 0.002955514 (0.002503448) 	 Physics Loss: 0.001489272 (0.001270391) 	 Data Loss: 0.000252445 (0.000556399) 	 BC Loss: 0.001213797 (0.000676658)
      -Iteration: [ 27001/ 50000] 	 Loss: 0.003096203 (0.002668173) 	 Physics Loss: 0.001007635 (0.001394353) 	 Data Loss: 0.000529087 (0.000551110) 	 BC Loss: 0.001559482 (0.000722710)
      -Iteration: [ 28001/ 50000] 	 Loss: 0.001113268 (0.002219696) 	 Physics Loss: 0.000669667 (0.001248004) 	 Data Loss: 0.000210634 (0.000505303) 	 BC Loss: 0.000232967 (0.000466388)
      -Iteration: [ 29001/ 50000] 	 Loss: 0.001584558 (0.001840178) 	 Physics Loss: 0.000857728 (0.000936380) 	 Data Loss: 0.000405950 (0.000491625) 	 BC Loss: 0.000320880 (0.000412173)
      -Iteration: [ 30001/ 50000] 	 Loss: 0.001524673 (0.001886792) 	 Physics Loss: 0.000554989 (0.001022693) 	 Data Loss: 0.000461125 (0.000466423) 	 BC Loss: 0.000508558 (0.000397676)
      -Iteration: [ 31001/ 50000] 	 Loss: 0.001906899 (0.002045441) 	 Physics Loss: 0.000959212 (0.001217105) 	 Data Loss: 0.000290614 (0.000451727) 	 BC Loss: 0.000657073 (0.000376609)
      -Iteration: [ 32001/ 50000] 	 Loss: 0.003575745 (0.002317760) 	 Physics Loss: 0.002933637 (0.001479890) 	 Data Loss: 0.000278991 (0.000500438) 	 BC Loss: 0.000363118 (0.000337431)
      -Iteration: [ 33001/ 50000] 	 Loss: 0.001054667 (0.001392871) 	 Physics Loss: 0.000542178 (0.000709339) 	 Data Loss: 0.000279479 (0.000395160) 	 BC Loss: 0.000233010 (0.000288372)
      -Iteration: [ 34001/ 50000] 	 Loss: 0.001258306 (0.001430126) 	 Physics Loss: 0.000604900 (0.000740890) 	 Data Loss: 0.000279625 (0.000439273) 	 BC Loss: 0.000373781 (0.000249964)
      -Iteration: [ 35001/ 50000] 	 Loss: 0.001458327 (0.001451151) 	 Physics Loss: 0.001126388 (0.000859504) 	 Data Loss: 0.000209119 (0.000364929) 	 BC Loss: 0.000122820 (0.000226719)
      -Iteration: [ 36001/ 50000] 	 Loss: 0.001965113 (0.001363893) 	 Physics Loss: 0.000819583 (0.000818373) 	 Data Loss: 0.000884497 (0.000343106) 	 BC Loss: 0.000261034 (0.000202413)
      -Iteration: [ 37001/ 50000] 	 Loss: 0.001001803 (0.001306025) 	 Physics Loss: 0.000458097 (0.000750440) 	 Data Loss: 0.000366133 (0.000375299) 	 BC Loss: 0.000177574 (0.000180287)
      -Iteration: [ 38001/ 50000] 	 Loss: 0.001594038 (0.001202732) 	 Physics Loss: 0.000834267 (0.000656006) 	 Data Loss: 0.000438084 (0.000355202) 	 BC Loss: 0.000321687 (0.000191524)
      -Iteration: [ 39001/ 50000] 	 Loss: 0.001096051 (0.001220055) 	 Physics Loss: 0.000612216 (0.000665410) 	 Data Loss: 0.000219414 (0.000392687) 	 BC Loss: 0.000264422 (0.000161957)
      -Iteration: [ 40001/ 50000] 	 Loss: 0.001038662 (0.001444635) 	 Physics Loss: 0.000485165 (0.000887432) 	 Data Loss: 0.000353080 (0.000370217) 	 BC Loss: 0.000200418 (0.000186986)
      -Iteration: [ 41001/ 50000] 	 Loss: 0.000912517 (0.001242174) 	 Physics Loss: 0.000559338 (0.000678301) 	 Data Loss: 0.000281135 (0.000411240) 	 BC Loss: 0.000072045 (0.000152633)
      -Iteration: [ 42001/ 50000] 	 Loss: 0.001224924 (0.001465745) 	 Physics Loss: 0.001037403 (0.000994991) 	 Data Loss: 0.000128774 (0.000311622) 	 BC Loss: 0.000058747 (0.000159133)
      -Iteration: [ 43001/ 50000] 	 Loss: 0.000964010 (0.001390206) 	 Physics Loss: 0.000636658 (0.000894072) 	 Data Loss: 0.000201709 (0.000336276) 	 BC Loss: 0.000125642 (0.000159858)
      -Iteration: [ 44001/ 50000] 	 Loss: 0.000622213 (0.001140883) 	 Physics Loss: 0.000339070 (0.000657654) 	 Data Loss: 0.000136122 (0.000339579) 	 BC Loss: 0.000147021 (0.000143649)
      -Iteration: [ 45001/ 50000] 	 Loss: 0.000991343 (0.001148664) 	 Physics Loss: 0.000418792 (0.000707322) 	 Data Loss: 0.000449356 (0.000315701) 	 BC Loss: 0.000123195 (0.000125641)
      -Iteration: [ 46001/ 50000] 	 Loss: 0.000918692 (0.000882140) 	 Physics Loss: 0.000450692 (0.000441957) 	 Data Loss: 0.000271820 (0.000319554) 	 BC Loss: 0.000196180 (0.000120629)
      -Iteration: [ 47001/ 50000] 	 Loss: 0.001575661 (0.001149394) 	 Physics Loss: 0.001348543 (0.000694363) 	 Data Loss: 0.000169103 (0.000319502) 	 BC Loss: 0.000058015 (0.000135529)
      -Iteration: [ 48001/ 50000] 	 Loss: 0.001518319 (0.001056366) 	 Physics Loss: 0.001149551 (0.000625095) 	 Data Loss: 0.000176694 (0.000317998) 	 BC Loss: 0.000192074 (0.000113272)
      -Iteration: [ 49001/ 50000] 	 Loss: 0.002202400 (0.001657575) 	 Physics Loss: 0.001789718 (0.001203460) 	 Data Loss: 0.000310050 (0.000312800) 	 BC Loss: 0.000102632 (0.000141316)

      Visualizing the Results

      julia
      ts, xs, ys = 0.0f0:0.05f0:2.0f0, 0.0f0:0.02f0:2.0f0, 0.0f0:0.02f0:2.0f0
      -grid = stack([[elem...] for elem in vec(collect(Iterators.product(xs, ys, ts)))])
      -
      -u_real = reshape(analytical_solution(grid), length(xs), length(ys), length(ts))
      -
      -grid_normalized = (grid .- minimum(grid)) ./ (maximum(grid) .- minimum(grid))
      -u_pred = reshape(trained_u(grid_normalized), length(xs), length(ys), length(ts))
      -u_pred = u_pred .* (max_pde_val - min_pde_val) .+ min_pde_val
      -
      -begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="x", ylabel="y")
      -    errs = [abs.(u_pred[:, :, i] .- u_real[:, :, i]) for i in 1:length(ts)]
      -    Colorbar(fig[1, 2]; limits=extrema(stack(errs)))
      -
      -    CairoMakie.record(fig, "pinn_nested_ad.gif", 1:length(ts); framerate=10) do i
      -        ax.title = "Abs. Predictor Error | Time: $(ts[i])"
      -        err = errs[i]
      -        contour!(ax, xs, ys, err; levels=10, linewidth=2)
      -        heatmap!(ax, xs, ys, err)
      -        return fig
      -    end
      -
      -    fig
      -end

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 48 × AMD EPYC 7402 24-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 2
      -  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 48
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,12))])}const _=l(e,[["render",c]]);export{m as __pageData,_ as default}; diff --git a/dev/assets/tutorials_intermediate_4_PINN2DPDE.md.9vdiyw9_.js b/dev/assets/tutorials_intermediate_4_PINN2DPDE.md.BGeYvkn-.js similarity index 91% rename from dev/assets/tutorials_intermediate_4_PINN2DPDE.md.9vdiyw9_.js rename to dev/assets/tutorials_intermediate_4_PINN2DPDE.md.BGeYvkn-.js index c11f3182bf..12ad452211 100644 --- a/dev/assets/tutorials_intermediate_4_PINN2DPDE.md.9vdiyw9_.js +++ b/dev/assets/tutorials_intermediate_4_PINN2DPDE.md.BGeYvkn-.js @@ -1,4 +1,4 @@ -import{_ as l,c as n,a2 as a,j as s,a as t,o as p}from"./chunks/framework.I-x9Gl6h.js";const h="/dev/assets/pinn_nested_ad.BvqoGasw.gif",m=JSON.parse('{"title":"Training a PINN on 2D PDE","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/4_PINN2DPDE.md","filePath":"tutorials/intermediate/4_PINN2DPDE.md","lastUpdated":null}'),e={name:"tutorials/intermediate/4_PINN2DPDE.md"},k={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.586ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3795.2 1000","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.401ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3713.2 1000","aria-hidden":"true"},g={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.109ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3584.2 1000","aria-hidden":"true"};function c(o,i,F,C,u,f){return p(),n("div",null,[i[10]||(i[10]=a(`

      Training a PINN on 2D PDE

      In this tutorial we will go over using a PINN to solve 2D PDEs. We will be using the system from NeuralPDE Tutorials. However, we will be using our custom loss function and use nested AD capabilities of Lux.jl.

      This is a demonstration of Lux.jl. For serious usecases of PINNs, please refer to the package: NeuralPDE.jl.

      Package Imports

      julia
      using Lux, Optimisers, Random, Printf, Statistics, MLUtils, OnlineStats, CairoMakie,
      +import{_ as l,c as n,a2 as a,j as s,a as t,o as p}from"./chunks/framework.BetCMmtc.js";const h="/dev/assets/pinn_nested_ad.BvqoGasw.gif",m=JSON.parse('{"title":"Training a PINN on 2D PDE","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/4_PINN2DPDE.md","filePath":"tutorials/intermediate/4_PINN2DPDE.md","lastUpdated":null}'),e={name:"tutorials/intermediate/4_PINN2DPDE.md"},k={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.586ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3795.2 1000","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.401ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3713.2 1000","aria-hidden":"true"},g={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.109ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3584.2 1000","aria-hidden":"true"};function c(o,i,F,C,u,f){return p(),n("div",null,[i[10]||(i[10]=a(`

      Training a PINN on 2D PDE

      In this tutorial we will go over using a PINN to solve 2D PDEs. We will be using the system from NeuralPDE Tutorials. However, we will be using our custom loss function and use nested AD capabilities of Lux.jl.

      This is a demonstration of Lux.jl. For serious usecases of PINNs, please refer to the package: NeuralPDE.jl.

      Package Imports

      julia
      using Lux, Optimisers, Random, Printf, Statistics, MLUtils, OnlineStats, CairoMakie,
             Reactant, Enzyme
       
       const xdev = reactant_device(; force=true)
      @@ -153,84 +153,84 @@ import{_ as l,c as n,a2 as a,j as s,a as t,o as p}from"./chunks/framework.I-x9Gl
       trained_model = train_model(xyt, target_data, xyt_bc, target_bc)
       trained_u = Lux.testmode(
           StatefulLuxLayer{true}(trained_model.model.u, trained_model.ps.u, trained_model.st.u)
      -)
      2025-01-20 23:01:51.964035: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 4000489932546559643
      -E0120 23:01:52.335396 3951999 buffer_comparator.cc:156] Difference at 16: -nan, expected 11.6059
      -E0120 23:01:52.335702 3951999 buffer_comparator.cc:156] Difference at 17: -nan, expected 14.502
      -E0120 23:01:52.335707 3951999 buffer_comparator.cc:156] Difference at 18: -nan, expected 11.2449
      -E0120 23:01:52.335711 3951999 buffer_comparator.cc:156] Difference at 19: -nan, expected 10.0998
      -E0120 23:01:52.335715 3951999 buffer_comparator.cc:156] Difference at 20: -nan, expected 14.0222
      -E0120 23:01:52.335719 3951999 buffer_comparator.cc:156] Difference at 21: -nan, expected 10.1321
      -E0120 23:01:52.335722 3951999 buffer_comparator.cc:156] Difference at 22: -nan, expected 10.2986
      -E0120 23:01:52.335726 3951999 buffer_comparator.cc:156] Difference at 23: -nan, expected 14.1109
      -E0120 23:01:52.335730 3951999 buffer_comparator.cc:156] Difference at 24: -nan, expected 13.3463
      -E0120 23:01:52.335733 3951999 buffer_comparator.cc:156] Difference at 25: -nan, expected 12.8369
      -2025-01-20 23:01:52.335762: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.338530 3951999 buffer_comparator.cc:156] Difference at 16: -nan, expected 11.6059
      -E0120 23:01:52.338554 3951999 buffer_comparator.cc:156] Difference at 17: -nan, expected 14.502
      -E0120 23:01:52.338558 3951999 buffer_comparator.cc:156] Difference at 18: -nan, expected 11.2449
      -E0120 23:01:52.338562 3951999 buffer_comparator.cc:156] Difference at 19: -nan, expected 10.0998
      -E0120 23:01:52.338565 3951999 buffer_comparator.cc:156] Difference at 20: -nan, expected 14.0222
      -E0120 23:01:52.338569 3951999 buffer_comparator.cc:156] Difference at 21: -nan, expected 10.1321
      -E0120 23:01:52.338573 3951999 buffer_comparator.cc:156] Difference at 22: -nan, expected 10.2986
      -E0120 23:01:52.338576 3951999 buffer_comparator.cc:156] Difference at 23: -nan, expected 14.1109
      -E0120 23:01:52.338580 3951999 buffer_comparator.cc:156] Difference at 24: -nan, expected 13.3463
      -E0120 23:01:52.338584 3951999 buffer_comparator.cc:156] Difference at 25: -nan, expected 12.8369
      -2025-01-20 23:01:52.338590: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.341328 3951999 buffer_comparator.cc:156] Difference at 1024: -nan, expected 11.5293
      -E0120 23:01:52.341348 3951999 buffer_comparator.cc:156] Difference at 1025: -nan, expected 10.1983
      -E0120 23:01:52.341352 3951999 buffer_comparator.cc:156] Difference at 1026: -nan, expected 13.3385
      -E0120 23:01:52.341356 3951999 buffer_comparator.cc:156] Difference at 1027: -nan, expected 12.4705
      -E0120 23:01:52.341359 3951999 buffer_comparator.cc:156] Difference at 1028: -nan, expected 8.94387
      -E0120 23:01:52.341363 3951999 buffer_comparator.cc:156] Difference at 1029: -nan, expected 10.8997
      -E0120 23:01:52.341367 3951999 buffer_comparator.cc:156] Difference at 1030: -nan, expected 10.6486
      -E0120 23:01:52.341370 3951999 buffer_comparator.cc:156] Difference at 1031: -nan, expected 9.73507
      -E0120 23:01:52.341374 3951999 buffer_comparator.cc:156] Difference at 1032: -nan, expected 12.2806
      -E0120 23:01:52.341378 3951999 buffer_comparator.cc:156] Difference at 1033: -nan, expected 10.1883
      -2025-01-20 23:01:52.341384: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.344116 3951999 buffer_comparator.cc:156] Difference at 1040: -nan, expected 9.99799
      -E0120 23:01:52.344135 3951999 buffer_comparator.cc:156] Difference at 1041: -nan, expected 12.209
      -E0120 23:01:52.344140 3951999 buffer_comparator.cc:156] Difference at 1042: -nan, expected 9.4851
      -E0120 23:01:52.344143 3951999 buffer_comparator.cc:156] Difference at 1043: -nan, expected 8.26397
      -E0120 23:01:52.344147 3951999 buffer_comparator.cc:156] Difference at 1044: -nan, expected 11.9253
      -E0120 23:01:52.344152 3951999 buffer_comparator.cc:156] Difference at 1045: -nan, expected 8.99047
      -E0120 23:01:52.344156 3951999 buffer_comparator.cc:156] Difference at 1046: -nan, expected 8.81842
      -E0120 23:01:52.344159 3951999 buffer_comparator.cc:156] Difference at 1047: -nan, expected 12.2714
      -E0120 23:01:52.344163 3951999 buffer_comparator.cc:156] Difference at 1048: -nan, expected 11.1417
      -E0120 23:01:52.344167 3951999 buffer_comparator.cc:156] Difference at 1049: -nan, expected 10.6572
      -2025-01-20 23:01:52.344173: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.346894 3951999 buffer_comparator.cc:156] Difference at 1056: -nan, expected 10.6543
      -E0120 23:01:52.346907 3951999 buffer_comparator.cc:156] Difference at 1057: -nan, expected 11.0945
      -E0120 23:01:52.346910 3951999 buffer_comparator.cc:156] Difference at 1058: -nan, expected 11.1424
      -E0120 23:01:52.346913 3951999 buffer_comparator.cc:156] Difference at 1059: -nan, expected 12.7556
      -E0120 23:01:52.346916 3951999 buffer_comparator.cc:156] Difference at 1060: -nan, expected 12.6932
      -E0120 23:01:52.346919 3951999 buffer_comparator.cc:156] Difference at 1061: -nan, expected 10.0594
      -E0120 23:01:52.346921 3951999 buffer_comparator.cc:156] Difference at 1062: -nan, expected 12.3478
      -E0120 23:01:52.346924 3951999 buffer_comparator.cc:156] Difference at 1063: -nan, expected 10.8381
      -E0120 23:01:52.346926 3951999 buffer_comparator.cc:156] Difference at 1064: -nan, expected 10.409
      -E0120 23:01:52.346929 3951999 buffer_comparator.cc:156] Difference at 1065: -nan, expected 10.3688
      -2025-01-20 23:01:52.346934: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.349626 3951999 buffer_comparator.cc:156] Difference at 1056: -nan, expected 10.6543
      -E0120 23:01:52.349640 3951999 buffer_comparator.cc:156] Difference at 1057: -nan, expected 11.0945
      -E0120 23:01:52.349643 3951999 buffer_comparator.cc:156] Difference at 1058: -nan, expected 11.1424
      -E0120 23:01:52.349646 3951999 buffer_comparator.cc:156] Difference at 1059: -nan, expected 12.7556
      -E0120 23:01:52.349648 3951999 buffer_comparator.cc:156] Difference at 1060: -nan, expected 12.6932
      -E0120 23:01:52.349651 3951999 buffer_comparator.cc:156] Difference at 1061: -nan, expected 10.0594
      -E0120 23:01:52.349653 3951999 buffer_comparator.cc:156] Difference at 1062: -nan, expected 12.3478
      -E0120 23:01:52.349656 3951999 buffer_comparator.cc:156] Difference at 1063: -nan, expected 10.8381
      -E0120 23:01:52.349659 3951999 buffer_comparator.cc:156] Difference at 1064: -nan, expected 10.409
      -E0120 23:01:52.349661 3951999 buffer_comparator.cc:156] Difference at 1065: -nan, expected 10.3688
      -2025-01-20 23:01:52.349666: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.352365 3951999 buffer_comparator.cc:156] Difference at 1056: -nan, expected 10.6543
      -E0120 23:01:52.352384 3951999 buffer_comparator.cc:156] Difference at 1057: -nan, expected 11.0945
      -E0120 23:01:52.352387 3951999 buffer_comparator.cc:156] Difference at 1058: -nan, expected 11.1424
      -E0120 23:01:52.352389 3951999 buffer_comparator.cc:156] Difference at 1059: -nan, expected 12.7556
      -E0120 23:01:52.352392 3951999 buffer_comparator.cc:156] Difference at 1060: -nan, expected 12.6932
      -E0120 23:01:52.352395 3951999 buffer_comparator.cc:156] Difference at 1061: -nan, expected 10.0594
      -E0120 23:01:52.352397 3951999 buffer_comparator.cc:156] Difference at 1062: -nan, expected 12.3478
      -E0120 23:01:52.352400 3951999 buffer_comparator.cc:156] Difference at 1063: -nan, expected 10.8381
      -E0120 23:01:52.352402 3951999 buffer_comparator.cc:156] Difference at 1064: -nan, expected 10.409
      -E0120 23:01:52.352405 3951999 buffer_comparator.cc:156] Difference at 1065: -nan, expected 10.3688
      -2025-01-20 23:01:52.352411: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +)
      2025-01-24 04:44:46.958300: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 17932355638179910565
      +E0124 04:44:47.189670 1598851 buffer_comparator.cc:156] Difference at 16: 0, expected 11.6059
      +E0124 04:44:47.189735 1598851 buffer_comparator.cc:156] Difference at 17: 0, expected 14.502
      +E0124 04:44:47.189743 1598851 buffer_comparator.cc:156] Difference at 18: 0, expected 11.2449
      +E0124 04:44:47.189750 1598851 buffer_comparator.cc:156] Difference at 19: 0, expected 10.0998
      +E0124 04:44:47.189756 1598851 buffer_comparator.cc:156] Difference at 20: 0, expected 14.0222
      +E0124 04:44:47.189763 1598851 buffer_comparator.cc:156] Difference at 21: 0, expected 10.1321
      +E0124 04:44:47.189769 1598851 buffer_comparator.cc:156] Difference at 22: 0, expected 10.2986
      +E0124 04:44:47.189776 1598851 buffer_comparator.cc:156] Difference at 23: 0, expected 14.1109
      +E0124 04:44:47.189782 1598851 buffer_comparator.cc:156] Difference at 24: 0, expected 13.3463
      +E0124 04:44:47.189788 1598851 buffer_comparator.cc:156] Difference at 25: 0, expected 12.8369
      +2025-01-24 04:44:47.189808: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:44:47.192718 1598851 buffer_comparator.cc:156] Difference at 16: 0, expected 11.6059
      +E0124 04:44:47.192749 1598851 buffer_comparator.cc:156] Difference at 17: 0, expected 14.502
      +E0124 04:44:47.192756 1598851 buffer_comparator.cc:156] Difference at 18: 0, expected 11.2449
      +E0124 04:44:47.192762 1598851 buffer_comparator.cc:156] Difference at 19: 0, expected 10.0998
      +E0124 04:44:47.192769 1598851 buffer_comparator.cc:156] Difference at 20: 0, expected 14.0222
      +E0124 04:44:47.192775 1598851 buffer_comparator.cc:156] Difference at 21: 0, expected 10.1321
      +E0124 04:44:47.192781 1598851 buffer_comparator.cc:156] Difference at 22: 0, expected 10.2986
      +E0124 04:44:47.192788 1598851 buffer_comparator.cc:156] Difference at 23: 0, expected 14.1109
      +E0124 04:44:47.192794 1598851 buffer_comparator.cc:156] Difference at 24: 0, expected 13.3463
      +E0124 04:44:47.192800 1598851 buffer_comparator.cc:156] Difference at 25: 0, expected 12.8369
      +2025-01-24 04:44:47.192810: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:44:47.195610 1598851 buffer_comparator.cc:156] Difference at 1024: 0, expected 11.5293
      +E0124 04:44:47.195632 1598851 buffer_comparator.cc:156] Difference at 1025: 0, expected 10.1983
      +E0124 04:44:47.195636 1598851 buffer_comparator.cc:156] Difference at 1026: 0, expected 13.3385
      +E0124 04:44:47.195641 1598851 buffer_comparator.cc:156] Difference at 1027: 0, expected 12.4705
      +E0124 04:44:47.195645 1598851 buffer_comparator.cc:156] Difference at 1028: 0, expected 8.94387
      +E0124 04:44:47.195649 1598851 buffer_comparator.cc:156] Difference at 1029: 0, expected 10.8997
      +E0124 04:44:47.195653 1598851 buffer_comparator.cc:156] Difference at 1030: 0, expected 10.6486
      +E0124 04:44:47.195657 1598851 buffer_comparator.cc:156] Difference at 1031: 0, expected 9.73507
      +E0124 04:44:47.195662 1598851 buffer_comparator.cc:156] Difference at 1032: 0, expected 12.2806
      +E0124 04:44:47.195666 1598851 buffer_comparator.cc:156] Difference at 1033: 0, expected 10.1883
      +2025-01-24 04:44:47.195673: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:44:47.198396 1598851 buffer_comparator.cc:156] Difference at 1040: 0, expected 9.99799
      +E0124 04:44:47.198418 1598851 buffer_comparator.cc:156] Difference at 1041: 0, expected 12.209
      +E0124 04:44:47.198423 1598851 buffer_comparator.cc:156] Difference at 1042: 0, expected 9.4851
      +E0124 04:44:47.198427 1598851 buffer_comparator.cc:156] Difference at 1043: 0, expected 8.26397
      +E0124 04:44:47.198431 1598851 buffer_comparator.cc:156] Difference at 1044: 0, expected 11.9253
      +E0124 04:44:47.198436 1598851 buffer_comparator.cc:156] Difference at 1045: 0, expected 8.99047
      +E0124 04:44:47.198442 1598851 buffer_comparator.cc:156] Difference at 1046: 0, expected 8.81842
      +E0124 04:44:47.198446 1598851 buffer_comparator.cc:156] Difference at 1047: 0, expected 12.2714
      +E0124 04:44:47.198450 1598851 buffer_comparator.cc:156] Difference at 1048: 0, expected 11.1417
      +E0124 04:44:47.198454 1598851 buffer_comparator.cc:156] Difference at 1049: 0, expected 10.6572
      +2025-01-24 04:44:47.198461: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:44:47.201184 1598851 buffer_comparator.cc:156] Difference at 1056: 0, expected 10.6543
      +E0124 04:44:47.201209 1598851 buffer_comparator.cc:156] Difference at 1057: 0, expected 11.0945
      +E0124 04:44:47.201213 1598851 buffer_comparator.cc:156] Difference at 1058: 0, expected 11.1424
      +E0124 04:44:47.201218 1598851 buffer_comparator.cc:156] Difference at 1059: 0, expected 12.7556
      +E0124 04:44:47.201222 1598851 buffer_comparator.cc:156] Difference at 1060: 0, expected 12.6932
      +E0124 04:44:47.201226 1598851 buffer_comparator.cc:156] Difference at 1061: 0, expected 10.0594
      +E0124 04:44:47.201230 1598851 buffer_comparator.cc:156] Difference at 1062: 0, expected 12.3478
      +E0124 04:44:47.201234 1598851 buffer_comparator.cc:156] Difference at 1063: 0, expected 10.8381
      +E0124 04:44:47.201239 1598851 buffer_comparator.cc:156] Difference at 1064: 0, expected 10.409
      +E0124 04:44:47.201243 1598851 buffer_comparator.cc:156] Difference at 1065: 0, expected 10.3688
      +2025-01-24 04:44:47.201250: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
       Iteration: [     1/ 50000] 	 Loss: 3.158992767 (3.158992767) 	 Physics Loss: 1.982357264 (1.982357264) 	 Data Loss: 0.578243732 (0.578243732) 	 BC Loss: 0.598391712 (0.598391712)
       Iteration: [  1001/ 50000] 	 Loss: 0.034934171 (0.028678153) 	 Physics Loss: 0.000550666 (0.000427971) 	 Data Loss: 0.022407684 (0.011849238) 	 BC Loss: 0.011975820 (0.016400939)
       Iteration: [  2001/ 50000] 	 Loss: 0.029342793 (0.031756539) 	 Physics Loss: 0.001803906 (0.001986223) 	 Data Loss: 0.013016323 (0.012399685) 	 BC Loss: 0.014522564 (0.017370628)
      @@ -317,8 +317,8 @@ import{_ as l,c as n,a2 as a,j as s,a as t,o as p}from"./chunks/framework.I-x9Gl
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      diff --git a/dev/assets/tutorials_intermediate_4_PINN2DPDE.md.BGeYvkn-.lean.js b/dev/assets/tutorials_intermediate_4_PINN2DPDE.md.BGeYvkn-.lean.js
      new file mode 100644
      index 0000000000..a5c943407c
      --- /dev/null
      +++ b/dev/assets/tutorials_intermediate_4_PINN2DPDE.md.BGeYvkn-.lean.js
      @@ -0,0 +1 @@
      +import{_ as l,c as n,a2 as a,j as s,a as t,o as p}from"./chunks/framework.BetCMmtc.js";const h="/dev/assets/pinn_nested_ad.BvqoGasw.gif",m=JSON.parse('{"title":"Training a PINN on 2D PDE","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/4_PINN2DPDE.md","filePath":"tutorials/intermediate/4_PINN2DPDE.md","lastUpdated":null}'),e={name:"tutorials/intermediate/4_PINN2DPDE.md"},k={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},r={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.586ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3795.2 1000","aria-hidden":"true"},E={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},d={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.401ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3713.2 1000","aria-hidden":"true"},g={class:"MathJax",jax:"SVG",style:{direction:"ltr",position:"relative"}},y={style:{overflow:"visible","min-height":"1px","min-width":"1px","vertical-align":"-0.566ex"},xmlns:"http://www.w3.org/2000/svg",width:"8.109ex",height:"2.262ex",role:"img",focusable:"false",viewBox:"0 -750 3584.2 1000","aria-hidden":"true"};function c(o,i,F,C,u,f){return p(),n("div",null,[i[10]||(i[10]=a("",20)),s("p",null,[i[6]||(i[6]=t("We will generate some random data to train the model on. 
We will take data on a square spatial and temporal domain ")),s("mjx-container",k,[(p(),n("svg",r,i[0]||(i[0]=[a("",1)]))),i[1]||(i[1]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"x"),s("mo",null,"∈"),s("mo",{stretchy:"false"},"["),s("mn",null,"0"),s("mo",null,","),s("mn",null,"2"),s("mo",{stretchy:"false"},"]")])],-1))]),i[7]||(i[7]=t(", ")),s("mjx-container",E,[(p(),n("svg",d,i[2]||(i[2]=[a("",1)]))),i[3]||(i[3]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"y"),s("mo",null,"∈"),s("mo",{stretchy:"false"},"["),s("mn",null,"0"),s("mo",null,","),s("mn",null,"2"),s("mo",{stretchy:"false"},"]")])],-1))]),i[8]||(i[8]=t(", and ")),s("mjx-container",g,[(p(),n("svg",y,i[4]||(i[4]=[a("",1)]))),i[5]||(i[5]=s("mjx-assistive-mml",{unselectable:"on",display:"inline",style:{top:"0px",left:"0px",clip:"rect(1px, 1px, 1px, 1px)","-webkit-touch-callout":"none","-webkit-user-select":"none","-khtml-user-select":"none","-moz-user-select":"none","-ms-user-select":"none","user-select":"none",position:"absolute",padding:"1px 0px 0px 
0px",border:"0px",display:"block",width:"auto",overflow:"hidden"}},[s("math",{xmlns:"http://www.w3.org/1998/Math/MathML"},[s("mi",null,"t"),s("mo",null,"∈"),s("mo",{stretchy:"false"},"["),s("mn",null,"0"),s("mo",null,","),s("mn",null,"2"),s("mo",{stretchy:"false"},"]")])],-1))]),i[9]||(i[9]=t(". Typically, you want to be smarter about the sampling process, but for the sake of simplicity, we will skip that."))]),i[11]||(i[11]=a("",12))])}const _=l(e,[["render",c]]);export{m as __pageData,_ as default};
      diff --git a/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.23tMd-dw.js b/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.23tMd-dw.js
      deleted file mode 100644
      index adf17c77de..0000000000
      --- a/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.23tMd-dw.js
      +++ /dev/null
      @@ -1,386 +0,0 @@
      -import{_ as i,c as a,a2 as n,o as l}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Convolutional VAE for MNIST using Reactant","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/5_ConvolutionalVAE.md","filePath":"tutorials/intermediate/5_ConvolutionalVAE.md","lastUpdated":null}'),h={name:"tutorials/intermediate/5_ConvolutionalVAE.md"};function p(t,s,k,e,r,A){return l(),a("div",null,s[0]||(s[0]=[n(`

      Convolutional VAE for MNIST using Reactant

       Convolutional variational autoencoder (CVAE) implementation in Lux using MNIST. This is based on the CVAE implementation in MLX.

      julia
      using Lux, Reactant, MLDatasets, Random, Statistics, Enzyme, MLUtils, DataAugmentation,
      -      ConcreteStructs, OneHotArrays, ImageShow, Images, Printf, Optimisers
      -
      -const xdev = reactant_device(; force=true)
      -const cdev = cpu_device()
      (::MLDataDevices.CPUDevice) (generic function with 1 method)

      Model Definition

       First, we will define the encoder. It maps the input to a normal distribution in latent space and samples a latent vector from that distribution.

      julia
      function cvae_encoder(
      -        rng=Random.default_rng(); num_latent_dims::Int,
      -        image_shape::Dims{3}, max_num_filters::Int
      -)
       -    flattened_dim = prod(image_shape[1:2] .÷ 8) * max_num_filters
      -    return @compact(;
      -        embed=Chain(
      -            Chain(
      -                Conv((3, 3), image_shape[3] => max_num_filters ÷ 4; stride=2, pad=1),
      -                BatchNorm(max_num_filters ÷ 4, leakyrelu)
      -            ),
      -            Chain(
      -                Conv((3, 3), max_num_filters ÷ 4 => max_num_filters ÷ 2; stride=2, pad=1),
      -                BatchNorm(max_num_filters ÷ 2, leakyrelu)
      -            ),
      -            Chain(
      -                Conv((3, 3), max_num_filters ÷ 2 => max_num_filters; stride=2, pad=1),
      -                BatchNorm(max_num_filters, leakyrelu)
      -            ),
      -            FlattenLayer()
      -        ),
      -        proj_mu=Dense(flattened_dim, num_latent_dims; init_bias=zeros32),
      -        proj_log_var=Dense(flattened_dim, num_latent_dims; init_bias=zeros32),
      -        rng) do x
      -        y = embed(x)
      -
      -        μ = proj_mu(y)
      -        logσ² = proj_log_var(y)
      -
      -        T = eltype(logσ²)
      -        logσ² = clamp.(logσ², -T(20.0f0), T(10.0f0))
      -        σ = exp.(logσ² .* T(0.5))
      -
      -        # Generate a tensor of random values from a normal distribution
      -        rng = Lux.replicate(rng)
      -        ϵ = randn_like(rng, σ)
      -
       -        # Reparameterization trick to backpropagate through sampling
      -        z = ϵ .* σ .+ μ
      -
      -        @return z, μ, logσ²
      -    end
      -end
      cvae_encoder (generic function with 2 methods)
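       The reparameterization step above is what lets gradients flow through the sampling: the randomness is isolated in ϵ while μ and logσ² stay differentiable. A minimal standalone sketch of that step (the `reparameterize` helper is hypothetical, not part of the tutorial, and uses only the standard library):

```julia
using Random

# Reparameterization trick: z = μ + σ ⊙ ϵ with ϵ ~ N(0, I), so z is a sample
# from N(μ, σ²) while μ and logσ² remain differentiable parameters.
function reparameterize(rng::AbstractRNG, μ::AbstractArray, logσ²::AbstractArray)
    σ = exp.(logσ² .* eltype(logσ²)(0.5))  # σ = exp(logσ² / 2)
    ϵ = randn(rng, eltype(μ), size(μ))     # noise is independent of the parameters
    return ϵ .* σ .+ μ
end

z = reparameterize(Xoshiro(0), zeros(Float32, 8, 4), zeros(Float32, 8, 4))
size(z)  # (8, 4); with μ = 0 and logσ² = 0 this is a standard-normal sample
```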

      Similarly we define the decoder.

      julia
      function cvae_decoder(; num_latent_dims::Int, image_shape::Dims{3}, max_num_filters::Int)
       -    flattened_dim = prod(image_shape[1:2] .÷ 8) * max_num_filters
      -    return @compact(;
      -        linear=Dense(num_latent_dims, flattened_dim),
      -        upchain=Chain(
      -            Chain(
      -                Upsample(2),
      -                Conv((3, 3), max_num_filters => max_num_filters ÷ 2; stride=1, pad=1),
      -                BatchNorm(max_num_filters ÷ 2, leakyrelu)
      -            ),
      -            Chain(
      -                Upsample(2),
      -                Conv((3, 3), max_num_filters ÷ 2 => max_num_filters ÷ 4; stride=1, pad=1),
      -                BatchNorm(max_num_filters ÷ 4, leakyrelu)
      -            ),
      -            Chain(
      -                Upsample(2),
      -                Conv((3, 3), max_num_filters ÷ 4 => image_shape[3],
      -                    sigmoid; stride=1, pad=1)
      -            )
      -        ),
      -        max_num_filters) do x
      -        y = linear(x)
      -        img = reshape(y, image_shape[1] ÷ 8, image_shape[2] ÷ 8, max_num_filters, :)
      -        @return upchain(img)
      -    end
      -end
      -
      -@concrete struct CVAE <: Lux.AbstractLuxContainerLayer{(:encoder, :decoder)}
      -    encoder <: Lux.AbstractLuxLayer
      -    decoder <: Lux.AbstractLuxLayer
      -end
      -
      -function CVAE(rng=Random.default_rng(); num_latent_dims::Int,
      -        image_shape::Dims{3}, max_num_filters::Int)
      -    decoder = cvae_decoder(; num_latent_dims, image_shape, max_num_filters)
      -    encoder = cvae_encoder(rng; num_latent_dims, image_shape, max_num_filters)
      -    return CVAE(encoder, decoder)
      -end
      -
      -function (cvae::CVAE)(x, ps, st)
      -    (z, μ, logσ²), st_enc = cvae.encoder(x, ps.encoder, st.encoder)
      -    x_rec, st_dec = cvae.decoder(z, ps.decoder, st.decoder)
      -    return (x_rec, μ, logσ²), (; encoder=st_enc, decoder=st_dec)
      -end
      -
      -function encode(cvae::CVAE, x, ps, st)
      -    (z, _, _), st_enc = cvae.encoder(x, ps.encoder, st.encoder)
      -    return z, (; encoder=st_enc, st.decoder)
      -end
      -
      -function decode(cvae::CVAE, z, ps, st)
      -    x_rec, st_dec = cvae.decoder(z, ps.decoder, st.decoder)
      -    return x_rec, (; decoder=st_dec, st.encoder)
      -end
      decode (generic function with 1 method)

      Loading MNIST

      julia
      @concrete struct TensorDataset
      -    dataset
      -    transform
      -    total_samples::Int
      -end
      -
      -Base.length(ds::TensorDataset) = ds.total_samples
      -
      -function Base.getindex(ds::TensorDataset, idxs::Union{Vector{<:Integer}, AbstractRange})
      -    img = Image.(eachslice(convert2image(ds.dataset, idxs); dims=3))
       -    return stack(parent ∘ itemdata ∘ Base.Fix1(apply, ds.transform), img)
      -end
      -
      -function loadmnist(batchsize, image_size::Dims{2})
      -    # Load MNIST: Only 1500 for demonstration purposes on CI
      -    train_dataset = MNIST(; split=:train)
      -    N = parse(Bool, get(ENV, "CI", "false")) ? 1500 : length(train_dataset)
      -
      -    train_transform = ScaleKeepAspect(image_size) |> ImageToTensor()
      -    trainset = TensorDataset(train_dataset, train_transform, N)
      -    trainloader = DataLoader(trainset; batchsize, shuffle=true, partial=false)
      -
      -    return trainloader
      -end
      loadmnist (generic function with 1 method)
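       Since `partial=false` drops the final incomplete batch, the loader yields `N ÷ batchsize` batches per epoch. A quick standalone check of that arithmetic (hypothetical helper, no MLUtils required):

```julia
# Full batches yielded by a DataLoader constructed with partial=false
nbatches(N::Int, batchsize::Int) = N ÷ batchsize

nbatches(1500, 128)  # 11 — matching the "Iter 11" entries in the training log
```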

      Helper Functions

      Generate an Image Grid from a list of images

      julia
      function create_image_grid(imgs::AbstractArray, grid_rows::Int, grid_cols::Int)
      -    total_images = grid_rows * grid_cols
      -    imgs = map(eachslice(imgs[:, :, :, 1:total_images]; dims=4)) do img
      -        cimg = size(img, 3) == 1 ? colorview(Gray, view(img, :, :, 1)) :
      -               colorview(RGB, permutedims(img, (3, 1, 2)))
      -        return cimg'
      -    end
      -    return create_image_grid(imgs, grid_rows, grid_cols)
      -end
      -
      -function create_image_grid(images::Vector, grid_rows::Int, grid_cols::Int)
      -    # Check if the number of images matches the grid
      -    total_images = grid_rows * grid_cols
      -    @assert length(images) == total_images
      -
      -    # Get the size of a single image (assuming all images are the same size)
      -    img_height, img_width = size(images[1])
      -
      -    # Create a blank grid canvas
      -    grid_height = img_height * grid_rows
      -    grid_width = img_width * grid_cols
      -    grid_canvas = similar(images[1], grid_height, grid_width)
      -
      -    # Place each image in the correct position on the canvas
      -    for idx in 1:total_images
      -        row = div(idx - 1, grid_cols) + 1
      -        col = mod(idx - 1, grid_cols) + 1
      -
      -        start_row = (row - 1) * img_height + 1
      -        start_col = (col - 1) * img_width + 1
      -
      -        grid_canvas[start_row:(start_row + img_height - 1), start_col:(start_col + img_width - 1)] .= images[idx]
      -    end
      -
      -    return grid_canvas
      -end
      -
      -function loss_function(model, ps, st, X)
      -    (y, μ, logσ²), st = model(X, ps, st)
      -    reconstruction_loss = MSELoss(; agg=sum)(y, X)
      -    kldiv_loss = -sum(1 .+ logσ² .- μ .^ 2 .- exp.(logσ²)) / 2
      -    loss = reconstruction_loss + kldiv_loss
      -    return loss, st, (; y, μ, logσ², reconstruction_loss, kldiv_loss)
      -end
      -
      -function generate_images(
      -        model, ps, st; num_samples::Int=128, num_latent_dims::Int, decode_compiled=nothing)
      -    z = randn(Float32, num_latent_dims, num_samples) |> get_device((ps, st))
      -    if decode_compiled === nothing
      -        images, _ = decode(model, z, ps, Lux.testmode(st))
      -    else
      -        images, _ = decode_compiled(model, z, ps, Lux.testmode(st))
      -        images = images |> cpu_device()
      -    end
      -    return create_image_grid(images, 8, num_samples ÷ 8)
      -end
      -
      -function reconstruct_images(model, ps, st, X)
      -    (recon, _, _), _ = model(X, ps, Lux.testmode(st))
      -    recon = recon |> cpu_device()
      -    return create_image_grid(recon, 8, size(X, ndims(X)) ÷ 8)
      -end
      reconstruct_images (generic function with 1 method)
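       The `loss_function` above adds the reconstruction MSE to the closed-form KL divergence between the encoder's N(μ, σ²) and the standard-normal prior, KL = -½ Σ (1 + logσ² − μ² − σ²). A standalone check of that term (the `kldiv` helper is hypothetical and simply mirrors the expression in `loss_function`):

```julia
# Closed-form KL(N(μ, σ²) ‖ N(0, 1)), summed over latent dimensions,
# with σ² = exp(logσ²), written exactly as in loss_function above.
kldiv(μ, logσ²) = -sum(1 .+ logσ² .- μ .^ 2 .- exp.(logσ²)) / 2

kldiv(zeros(Float32, 8), zeros(Float32, 8))  # zero: the prior is matched exactly
```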

      Training the Model

      julia
      function main(; batchsize=128, image_size=(64, 64), num_latent_dims=8, max_num_filters=64,
      -        seed=0, epochs=50, weight_decay=1e-5, learning_rate=1e-3, num_samples=batchsize)
      -    rng = Xoshiro()
      -    Random.seed!(rng, seed)
      -
      -    cvae = CVAE(rng; num_latent_dims, image_shape=(image_size..., 1), max_num_filters)
      -    ps, st = Lux.setup(rng, cvae) |> xdev
      -
      -    z = randn(Float32, num_latent_dims, num_samples) |> xdev
      -    decode_compiled = @compile decode(cvae, z, ps, Lux.testmode(st))
      -    x = randn(Float32, image_size..., 1, batchsize) |> xdev
      -    cvae_compiled = @compile cvae(x, ps, Lux.testmode(st))
      -
      -    train_dataloader = loadmnist(batchsize, image_size) |> xdev
      -
      -    opt = AdamW(; eta=learning_rate, lambda=weight_decay)
      -
      -    train_state = Training.TrainState(cvae, ps, st, opt)
      -
      -    @printf "Total Trainable Parameters: %0.4f M\\n" (Lux.parameterlength(ps)/1e6)
      -
      -    is_vscode = isdefined(Main, :VSCodeServer)
      -    empty_row, model_img_full = nothing, nothing
      -
      -    for epoch in 1:epochs
      -        loss_total = 0.0f0
      -        total_samples = 0
      -
      -        start_time = time()
      -        for (i, X) in enumerate(train_dataloader)
      -            (_, loss, _, train_state) = Training.single_train_step!(
      -                AutoEnzyme(), loss_function, X, train_state; return_gradients=Val(false)
      -            )
      -
      -            loss_total += loss
      -            total_samples += size(X, ndims(X))
      -
      -            if i % 250 == 0 || i == length(train_dataloader)
      -                throughput = total_samples / (time() - start_time)
      -                @printf "Epoch %d, Iter %d, Loss: %.7f, Throughput: %.6f im/s\\n" epoch i loss throughput
      -            end
      -        end
      -        total_time = time() - start_time
      -
      -        train_loss = loss_total / length(train_dataloader)
      -        throughput = total_samples / total_time
      -        @printf "Epoch %d, Train Loss: %.7f, Time: %.4fs, Throughput: %.6f im/s\\n" epoch train_loss total_time throughput
      -
      -        if is_vscode || epoch == epochs
      -            recon_images = reconstruct_images(
      -                cvae_compiled, train_state.parameters, train_state.states,
      -                first(train_dataloader))
      -            gen_images = generate_images(
      -                cvae, train_state.parameters, train_state.states;
      -                num_samples, num_latent_dims, decode_compiled)
      -            if empty_row === nothing
      -                empty_row = similar(gen_images, image_size[1], size(gen_images, 2))
      -                fill!(empty_row, 0)
      -            end
      -            model_img_full = vcat(recon_images, empty_row, gen_images)
      -            is_vscode && display(model_img_full)
      -        end
      -    end
      -
      -    return model_img_full
      -end
      -
      -img = main()
      ┌ Warning: \`training\` is set to \`Val{false}()\` but is being used within an autodiff call (gradient, jacobian, etc...). This might lead to incorrect results. If you are using a \`Lux.jl\` model, set it to training mode using \`LuxCore.trainmode\`.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:324
      -2025-01-20 23:18:11.917010: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 11895952635772526774
      -Total Trainable Parameters: 0.1493 M
      -Epoch 1, Iter 11, Loss: 44447.5625000, Throughput: 21.315431 im/s
      -Epoch 1, Train Loss: 62683.6601562, Time: 66.4955s, Throughput: 21.174372 im/s
      -Epoch 2, Iter 11, Loss: 31240.5175781, Throughput: 1604.194354 im/s
      -Epoch 2, Train Loss: 35970.2460938, Time: 0.8781s, Throughput: 1603.482196 im/s
      -Epoch 3, Iter 11, Loss: 25488.8515625, Throughput: 1647.041705 im/s
      -Epoch 3, Train Loss: 27952.8320312, Time: 0.8554s, Throughput: 1646.061570 im/s
      -Epoch 4, Iter 11, Loss: 22313.5644531, Throughput: 1692.354699 im/s
      -Epoch 4, Train Loss: 23722.8613281, Time: 0.8322s, Throughput: 1691.830117 im/s
      -Epoch 5, Iter 11, Loss: 19402.6035156, Throughput: 1711.697829 im/s
      -Epoch 5, Train Loss: 21031.9238281, Time: 0.8229s, Throughput: 1710.922240 im/s
      -Epoch 6, Iter 11, Loss: 18538.2871094, Throughput: 1669.197785 im/s
      -Epoch 6, Train Loss: 19078.5078125, Time: 0.8439s, Throughput: 1668.527162 im/s
      -Epoch 7, Iter 11, Loss: 16468.9140625, Throughput: 1752.542727 im/s
      -Epoch 7, Train Loss: 17712.4550781, Time: 0.8037s, Throughput: 1751.821146 im/s
      -Epoch 8, Iter 11, Loss: 16625.4785156, Throughput: 1705.391086 im/s
      -Epoch 8, Train Loss: 16778.6386719, Time: 0.8261s, Throughput: 1704.311769 im/s
      -Epoch 9, Iter 11, Loss: 15829.8203125, Throughput: 1735.845295 im/s
      -Epoch 9, Train Loss: 16046.4257812, Time: 0.8115s, Throughput: 1735.101199 im/s
      -Epoch 10, Iter 11, Loss: 15746.0839844, Throughput: 1788.674827 im/s
      -Epoch 10, Train Loss: 15447.8466797, Time: 0.7874s, Throughput: 1788.122411 im/s
      -Epoch 11, Iter 11, Loss: 15215.0312500, Throughput: 1759.938214 im/s
      -Epoch 11, Train Loss: 15064.4023438, Time: 0.8002s, Throughput: 1759.511911 im/s
      -Epoch 12, Iter 11, Loss: 14532.9414062, Throughput: 1778.945628 im/s
      -Epoch 12, Train Loss: 14696.0908203, Time: 0.7917s, Throughput: 1778.393313 im/s
      -Epoch 13, Iter 11, Loss: 14967.7109375, Throughput: 1797.568888 im/s
      -Epoch 13, Train Loss: 14457.2812500, Time: 0.7835s, Throughput: 1796.965581 im/s
      -Epoch 14, Iter 11, Loss: 14077.5800781, Throughput: 1789.538796 im/s
      -Epoch 14, Train Loss: 14026.6181641, Time: 0.7905s, Throughput: 1781.119643 im/s
      -Epoch 15, Iter 11, Loss: 13518.2919922, Throughput: 1743.134711 im/s
      -Epoch 15, Train Loss: 13632.0664062, Time: 0.8084s, Throughput: 1741.795939 im/s
      -Epoch 16, Iter 11, Loss: 13380.7929688, Throughput: 1819.513944 im/s
      -Epoch 16, Train Loss: 13282.1279297, Time: 0.7743s, Throughput: 1818.383929 im/s
      -Epoch 17, Iter 11, Loss: 12565.8125000, Throughput: 1825.887155 im/s
      -Epoch 17, Train Loss: 13235.3505859, Time: 0.7716s, Throughput: 1824.810108 im/s
      -Epoch 18, Iter 11, Loss: 12630.6835938, Throughput: 1754.058005 im/s
      -Epoch 18, Train Loss: 13102.6396484, Time: 0.8031s, Throughput: 1753.240960 im/s
      -Epoch 19, Iter 11, Loss: 12953.1679688, Throughput: 1817.470635 im/s
      -Epoch 19, Train Loss: 12875.8691406, Time: 0.7751s, Throughput: 1816.633136 im/s
      -Epoch 20, Iter 11, Loss: 12055.4677734, Throughput: 1825.307005 im/s
      -Epoch 20, Train Loss: 12572.4384766, Time: 0.7717s, Throughput: 1824.538367 im/s
      -Epoch 21, Iter 11, Loss: 13185.9873047, Throughput: 1810.913795 im/s
      -Epoch 21, Train Loss: 12500.9931641, Time: 0.7779s, Throughput: 1809.975813 im/s
      -Epoch 22, Iter 11, Loss: 12371.1367188, Throughput: 1812.958520 im/s
      -Epoch 22, Train Loss: 12382.7285156, Time: 0.7770s, Throughput: 1812.155757 im/s
      -Epoch 23, Iter 11, Loss: 12146.3300781, Throughput: 1825.927238 im/s
      -Epoch 23, Train Loss: 12117.9287109, Time: 0.7715s, Throughput: 1824.916684 im/s
      -Epoch 24, Iter 11, Loss: 12079.3994141, Throughput: 1401.442699 im/s
      -Epoch 24, Train Loss: 11997.0742188, Time: 1.0057s, Throughput: 1399.970944 im/s
      -Epoch 25, Iter 11, Loss: 11770.6054688, Throughput: 1620.130117 im/s
      -Epoch 25, Train Loss: 11831.6191406, Time: 0.8694s, Throughput: 1619.544968 im/s
      -Epoch 26, Iter 11, Loss: 12617.6123047, Throughput: 1716.463947 im/s
      -Epoch 26, Train Loss: 11800.9433594, Time: 0.8206s, Throughput: 1715.782232 im/s
      -Epoch 27, Iter 11, Loss: 11695.0839844, Throughput: 1838.151494 im/s
      -Epoch 27, Train Loss: 11604.6308594, Time: 0.7662s, Throughput: 1837.748226 im/s
      -Epoch 28, Iter 11, Loss: 11695.5996094, Throughput: 1806.788953 im/s
      -Epoch 28, Train Loss: 11651.4423828, Time: 0.7798s, Throughput: 1805.595737 im/s
      -Epoch 29, Iter 11, Loss: 11384.9062500, Throughput: 1784.109790 im/s
      -Epoch 29, Train Loss: 11569.6884766, Time: 0.7895s, Throughput: 1783.328058 im/s
      -Epoch 30, Iter 11, Loss: 10836.7968750, Throughput: 1764.955002 im/s
      -Epoch 30, Train Loss: 11339.8691406, Time: 0.8096s, Throughput: 1739.121482 im/s
      -Epoch 31, Iter 11, Loss: 11125.7089844, Throughput: 1698.424038 im/s
      -Epoch 31, Train Loss: 11223.2167969, Time: 0.8293s, Throughput: 1697.741932 im/s
      -Epoch 32, Iter 11, Loss: 11135.4785156, Throughput: 1667.973904 im/s
      -Epoch 32, Train Loss: 11096.8427734, Time: 0.8445s, Throughput: 1667.330154 im/s
      -Epoch 33, Iter 11, Loss: 12250.8027344, Throughput: 1685.428124 im/s
      -Epoch 33, Train Loss: 11205.4375000, Time: 0.8375s, Throughput: 1681.246007 im/s
      -Epoch 34, Iter 11, Loss: 11169.9980469, Throughput: 1724.074720 im/s
      -Epoch 34, Train Loss: 11285.1865234, Time: 0.8170s, Throughput: 1723.332131 im/s
      -Epoch 35, Iter 11, Loss: 10973.0673828, Throughput: 1673.721785 im/s
      -Epoch 35, Train Loss: 11315.8671875, Time: 0.8416s, Throughput: 1672.952260 im/s
      -Epoch 36, Iter 11, Loss: 10784.8652344, Throughput: 1689.407790 im/s
      -Epoch 36, Train Loss: 11000.4267578, Time: 0.8340s, Throughput: 1688.216841 im/s
      -Epoch 37, Iter 11, Loss: 10987.7080078, Throughput: 1660.775089 im/s
      -Epoch 37, Train Loss: 10908.0439453, Time: 0.8481s, Throughput: 1660.126616 im/s
      -Epoch 38, Iter 11, Loss: 10832.6396484, Throughput: 1662.204075 im/s
      -Epoch 38, Train Loss: 10761.0380859, Time: 0.8475s, Throughput: 1661.407709 im/s
      -Epoch 39, Iter 11, Loss: 10657.9746094, Throughput: 1650.764429 im/s
      -Epoch 39, Train Loss: 10716.2685547, Time: 0.8534s, Throughput: 1649.948099 im/s
      -Epoch 40, Iter 11, Loss: 11321.4941406, Throughput: 1726.715686 im/s
      -Epoch 40, Train Loss: 10696.7226562, Time: 0.8157s, Throughput: 1726.082812 im/s
      -Epoch 41, Iter 11, Loss: 10473.2099609, Throughput: 1808.169746 im/s
      -Epoch 41, Train Loss: 10672.8632812, Time: 0.7792s, Throughput: 1807.083633 im/s
      -Epoch 42, Iter 11, Loss: 10543.8730469, Throughput: 1697.991372 im/s
      -Epoch 42, Train Loss: 10663.1875000, Time: 0.8297s, Throughput: 1696.974060 im/s
      -Epoch 43, Iter 11, Loss: 11151.8281250, Throughput: 1742.673312 im/s
      -Epoch 43, Train Loss: 10626.8242188, Time: 0.8083s, Throughput: 1741.864781 im/s
      -Epoch 44, Iter 11, Loss: 10089.9355469, Throughput: 1790.705991 im/s
      -Epoch 44, Train Loss: 10622.2226562, Time: 0.7866s, Throughput: 1789.904907 im/s
      -Epoch 45, Iter 11, Loss: 10384.1025391, Throughput: 1799.232128 im/s
      -Epoch 45, Train Loss: 10386.0625000, Time: 0.7829s, Throughput: 1798.365896 im/s
      -Epoch 46, Iter 11, Loss: 10393.1582031, Throughput: 1808.824368 im/s
      -Epoch 46, Train Loss: 10321.5947266, Time: 0.7788s, Throughput: 1807.981533 im/s
      -Epoch 47, Iter 11, Loss: 10101.5517578, Throughput: 1813.404992 im/s
      -Epoch 47, Train Loss: 10217.5341797, Time: 0.7768s, Throughput: 1812.646342 im/s
      -Epoch 48, Iter 11, Loss: 10764.6113281, Throughput: 1812.191346 im/s
      -Epoch 48, Train Loss: 10252.5224609, Time: 0.7773s, Throughput: 1811.459271 im/s
      -Epoch 49, Iter 11, Loss: 10491.2128906, Throughput: 1725.773609 im/s
      -Epoch 49, Train Loss: 10155.1474609, Time: 0.8162s, Throughput: 1725.088512 im/s
      -Epoch 50, Iter 11, Loss: 10332.0097656, Throughput: 1723.435230 im/s
      -Epoch 50, Train Loss: 10062.9257812, Time: 0.8173s, Throughput: 1722.734901 im/s

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 48 × AMD EPYC 7402 24-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 2
      -  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 48
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,28)]))}const d=i(h,[["render",p]]);export{g as __pageData,d as default}; diff --git a/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.23tMd-dw.lean.js b/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.23tMd-dw.lean.js deleted file mode 100644 index adf17c77de..0000000000 --- a/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.23tMd-dw.lean.js +++ /dev/null @@ -1,386 +0,0 @@ -import{_ as i,c as a,a2 as n,o as l}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Convolutional VAE for MNIST using Reactant","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/5_ConvolutionalVAE.md","filePath":"tutorials/intermediate/5_ConvolutionalVAE.md","lastUpdated":null}'),h={name:"tutorials/intermediate/5_ConvolutionalVAE.md"};function p(t,s,k,e,r,A){return l(),a("div",null,s[0]||(s[0]=[n(`

      Convolutional VAE for MNIST using Reactant

      Convolutional variational autoencoder (CVAE) implementation in MLX using MNIST. This is based on the CVAE implementation in MLX.

      julia
      using Lux, Reactant, MLDatasets, Random, Statistics, Enzyme, MLUtils, DataAugmentation,
      -      ConcreteStructs, OneHotArrays, ImageShow, Images, Printf, Optimisers
      -
      -const xdev = reactant_device(; force=true)
      -const cdev = cpu_device()
      (::MLDataDevices.CPUDevice) (generic function with 1 method)

      Model Definition

      First we will define the encoder.It maps the input to a normal distribution in latent space and sample a latent vector from that distribution.

      julia
      function cvae_encoder(
      -        rng=Random.default_rng(); num_latent_dims::Int,
      -        image_shape::Dims{3}, max_num_filters::Int
      -)
      -    flattened_dim = prod(image_shape[1:2]  8) * max_num_filters
      -    return @compact(;
      -        embed=Chain(
      -            Chain(
      -                Conv((3, 3), image_shape[3] => max_num_filters ÷ 4; stride=2, pad=1),
      -                BatchNorm(max_num_filters ÷ 4, leakyrelu)
      -            ),
      -            Chain(
      -                Conv((3, 3), max_num_filters ÷ 4 => max_num_filters ÷ 2; stride=2, pad=1),
      -                BatchNorm(max_num_filters ÷ 2, leakyrelu)
      -            ),
      -            Chain(
      -                Conv((3, 3), max_num_filters ÷ 2 => max_num_filters; stride=2, pad=1),
      -                BatchNorm(max_num_filters, leakyrelu)
      -            ),
      -            FlattenLayer()
      -        ),
      -        proj_mu=Dense(flattened_dim, num_latent_dims; init_bias=zeros32),
      -        proj_log_var=Dense(flattened_dim, num_latent_dims; init_bias=zeros32),
      -        rng) do x
      -        y = embed(x)
      -
      -        μ = proj_mu(y)
      -        logσ² = proj_log_var(y)
      -
      -        T = eltype(logσ²)
      -        logσ² = clamp.(logσ², -T(20.0f0), T(10.0f0))
      -        σ = exp.(logσ² .* T(0.5))
      -
      -        # Generate a tensor of random values from a normal distribution
      -        rng = Lux.replicate(rng)
      -        ϵ = randn_like(rng, σ)
      -
      -        # Reparameterization trick to backpropagate through sampling
      -        z = ϵ .* σ .+ μ
      -
      -        @return z, μ, logσ²
      -    end
      -end
      cvae_encoder (generic function with 2 methods)

      Similarly we define the decoder.

      julia
      function cvae_decoder(; num_latent_dims::Int, image_shape::Dims{3}, max_num_filters::Int)
      -    flattened_dim = prod(image_shape[1:2]  8) * max_num_filters
      -    return @compact(;
      -        linear=Dense(num_latent_dims, flattened_dim),
      -        upchain=Chain(
      -            Chain(
      -                Upsample(2),
      -                Conv((3, 3), max_num_filters => max_num_filters ÷ 2; stride=1, pad=1),
      -                BatchNorm(max_num_filters ÷ 2, leakyrelu)
      -            ),
      -            Chain(
      -                Upsample(2),
      -                Conv((3, 3), max_num_filters ÷ 2 => max_num_filters ÷ 4; stride=1, pad=1),
      -                BatchNorm(max_num_filters ÷ 4, leakyrelu)
      -            ),
      -            Chain(
      -                Upsample(2),
      -                Conv((3, 3), max_num_filters ÷ 4 => image_shape[3],
      -                    sigmoid; stride=1, pad=1)
      -            )
      -        ),
      -        max_num_filters) do x
      -        y = linear(x)
      -        img = reshape(y, image_shape[1] ÷ 8, image_shape[2] ÷ 8, max_num_filters, :)
      -        @return upchain(img)
      -    end
      -end
      -
      -@concrete struct CVAE <: Lux.AbstractLuxContainerLayer{(:encoder, :decoder)}
      -    encoder <: Lux.AbstractLuxLayer
      -    decoder <: Lux.AbstractLuxLayer
      -end
      -
      -function CVAE(rng=Random.default_rng(); num_latent_dims::Int,
      -        image_shape::Dims{3}, max_num_filters::Int)
      -    decoder = cvae_decoder(; num_latent_dims, image_shape, max_num_filters)
      -    encoder = cvae_encoder(rng; num_latent_dims, image_shape, max_num_filters)
      -    return CVAE(encoder, decoder)
      -end
      -
      -function (cvae::CVAE)(x, ps, st)
      -    (z, μ, logσ²), st_enc = cvae.encoder(x, ps.encoder, st.encoder)
      -    x_rec, st_dec = cvae.decoder(z, ps.decoder, st.decoder)
      -    return (x_rec, μ, logσ²), (; encoder=st_enc, decoder=st_dec)
      -end
      -
      -function encode(cvae::CVAE, x, ps, st)
      -    (z, _, _), st_enc = cvae.encoder(x, ps.encoder, st.encoder)
      -    return z, (; encoder=st_enc, st.decoder)
      -end
      -
      -function decode(cvae::CVAE, z, ps, st)
      -    x_rec, st_dec = cvae.decoder(z, ps.decoder, st.decoder)
      -    return x_rec, (; decoder=st_dec, st.encoder)
      -end
      decode (generic function with 1 method)

      Loading MNIST

      julia
      @concrete struct TensorDataset
      -    dataset
      -    transform
      -    total_samples::Int
      -end
      -
      -Base.length(ds::TensorDataset) = ds.total_samples
      -
      -function Base.getindex(ds::TensorDataset, idxs::Union{Vector{<:Integer}, AbstractRange})
      -    img = Image.(eachslice(convert2image(ds.dataset, idxs); dims=3))
      -    return stack(parent  itemdata  Base.Fix1(apply, ds.transform), img)
      -end
      -
      -function loadmnist(batchsize, image_size::Dims{2})
      -    # Load MNIST: Only 1500 for demonstration purposes on CI
      -    train_dataset = MNIST(; split=:train)
      -    N = parse(Bool, get(ENV, "CI", "false")) ? 1500 : length(train_dataset)
      -
      -    train_transform = ScaleKeepAspect(image_size) |> ImageToTensor()
      -    trainset = TensorDataset(train_dataset, train_transform, N)
      -    trainloader = DataLoader(trainset; batchsize, shuffle=true, partial=false)
      -
      -    return trainloader
      -end
      loadmnist (generic function with 1 method)

      Helper Functions

      Generate an Image Grid from a list of images

      julia
      function create_image_grid(imgs::AbstractArray, grid_rows::Int, grid_cols::Int)
      -    total_images = grid_rows * grid_cols
      -    imgs = map(eachslice(imgs[:, :, :, 1:total_images]; dims=4)) do img
      -        cimg = size(img, 3) == 1 ? colorview(Gray, view(img, :, :, 1)) :
      -               colorview(RGB, permutedims(img, (3, 1, 2)))
      -        return cimg'
      -    end
      -    return create_image_grid(imgs, grid_rows, grid_cols)
      -end
      -
      -function create_image_grid(images::Vector, grid_rows::Int, grid_cols::Int)
      -    # Check if the number of images matches the grid
      -    total_images = grid_rows * grid_cols
      -    @assert length(images) == total_images
      -
      -    # Get the size of a single image (assuming all images are the same size)
      -    img_height, img_width = size(images[1])
      -
      -    # Create a blank grid canvas
      -    grid_height = img_height * grid_rows
      -    grid_width = img_width * grid_cols
      -    grid_canvas = similar(images[1], grid_height, grid_width)
      -
      -    # Place each image in the correct position on the canvas
      -    for idx in 1:total_images
      -        row = div(idx - 1, grid_cols) + 1
      -        col = mod(idx - 1, grid_cols) + 1
      -
      -        start_row = (row - 1) * img_height + 1
      -        start_col = (col - 1) * img_width + 1
      -
      -        grid_canvas[start_row:(start_row + img_height - 1), start_col:(start_col + img_width - 1)] .= images[idx]
      -    end
      -
      -    return grid_canvas
      -end
      -
      -function loss_function(model, ps, st, X)
      -    (y, μ, logσ²), st = model(X, ps, st)
      -    reconstruction_loss = MSELoss(; agg=sum)(y, X)
      -    kldiv_loss = -sum(1 .+ logσ² .- μ .^ 2 .- exp.(logσ²)) / 2
      -    loss = reconstruction_loss + kldiv_loss
      -    return loss, st, (; y, μ, logσ², reconstruction_loss, kldiv_loss)
      -end
      -
      -function generate_images(
      -        model, ps, st; num_samples::Int=128, num_latent_dims::Int, decode_compiled=nothing)
      -    z = randn(Float32, num_latent_dims, num_samples) |> get_device((ps, st))
      -    if decode_compiled === nothing
      -        images, _ = decode(model, z, ps, Lux.testmode(st))
      -    else
      -        images, _ = decode_compiled(model, z, ps, Lux.testmode(st))
      -        images = images |> cpu_device()
      -    end
      -    return create_image_grid(images, 8, num_samples ÷ 8)
      -end
      -
      -function reconstruct_images(model, ps, st, X)
      -    (recon, _, _), _ = model(X, ps, Lux.testmode(st))
      -    recon = recon |> cpu_device()
      -    return create_image_grid(recon, 8, size(X, ndims(X)) ÷ 8)
      -end
      reconstruct_images (generic function with 1 method)

      Training the Model

      julia
      function main(; batchsize=128, image_size=(64, 64), num_latent_dims=8, max_num_filters=64,
      -        seed=0, epochs=50, weight_decay=1e-5, learning_rate=1e-3, num_samples=batchsize)
      -    rng = Xoshiro()
      -    Random.seed!(rng, seed)
      -
      -    cvae = CVAE(rng; num_latent_dims, image_shape=(image_size..., 1), max_num_filters)
      -    ps, st = Lux.setup(rng, cvae) |> xdev
      -
      -    z = randn(Float32, num_latent_dims, num_samples) |> xdev
      -    decode_compiled = @compile decode(cvae, z, ps, Lux.testmode(st))
      -    x = randn(Float32, image_size..., 1, batchsize) |> xdev
      -    cvae_compiled = @compile cvae(x, ps, Lux.testmode(st))
      -
      -    train_dataloader = loadmnist(batchsize, image_size) |> xdev
      -
      -    opt = AdamW(; eta=learning_rate, lambda=weight_decay)
      -
      -    train_state = Training.TrainState(cvae, ps, st, opt)
      -
      -    @printf "Total Trainable Parameters: %0.4f M\\n" (Lux.parameterlength(ps)/1e6)
      -
      -    is_vscode = isdefined(Main, :VSCodeServer)
      -    empty_row, model_img_full = nothing, nothing
      -
      -    for epoch in 1:epochs
      -        loss_total = 0.0f0
      -        total_samples = 0
      -
      -        start_time = time()
      -        for (i, X) in enumerate(train_dataloader)
      -            (_, loss, _, train_state) = Training.single_train_step!(
      -                AutoEnzyme(), loss_function, X, train_state; return_gradients=Val(false)
      -            )
      -
      -            loss_total += loss
      -            total_samples += size(X, ndims(X))
      -
      -            if i % 250 == 0 || i == length(train_dataloader)
      -                throughput = total_samples / (time() - start_time)
      -                @printf "Epoch %d, Iter %d, Loss: %.7f, Throughput: %.6f im/s\\n" epoch i loss throughput
      -            end
      -        end
      -        total_time = time() - start_time
      -
      -        train_loss = loss_total / length(train_dataloader)
      -        throughput = total_samples / total_time
      -        @printf "Epoch %d, Train Loss: %.7f, Time: %.4fs, Throughput: %.6f im/s\\n" epoch train_loss total_time throughput
      -
      -        if is_vscode || epoch == epochs
      -            recon_images = reconstruct_images(
      -                cvae_compiled, train_state.parameters, train_state.states,
      -                first(train_dataloader))
      -            gen_images = generate_images(
      -                cvae, train_state.parameters, train_state.states;
      -                num_samples, num_latent_dims, decode_compiled)
      -            if empty_row === nothing
      -                empty_row = similar(gen_images, image_size[1], size(gen_images, 2))
      -                fill!(empty_row, 0)
      -            end
      -            model_img_full = vcat(recon_images, empty_row, gen_images)
      -            is_vscode && display(model_img_full)
      -        end
      -    end
      -
      -    return model_img_full
      -end
      -
      -img = main()
      ┌ Warning: \`training\` is set to \`Val{false}()\` but is being used within an autodiff call (gradient, jacobian, etc...). This might lead to incorrect results. If you are using a \`Lux.jl\` model, set it to training mode using \`LuxCore.trainmode\`.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:324
      -2025-01-20 23:18:11.917010: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 11895952635772526774
      -Total Trainable Parameters: 0.1493 M
      -Epoch 1, Iter 11, Loss: 44447.5625000, Throughput: 21.315431 im/s
      -Epoch 1, Train Loss: 62683.6601562, Time: 66.4955s, Throughput: 21.174372 im/s
      -Epoch 2, Iter 11, Loss: 31240.5175781, Throughput: 1604.194354 im/s
      -Epoch 2, Train Loss: 35970.2460938, Time: 0.8781s, Throughput: 1603.482196 im/s
      -Epoch 3, Iter 11, Loss: 25488.8515625, Throughput: 1647.041705 im/s
      -Epoch 3, Train Loss: 27952.8320312, Time: 0.8554s, Throughput: 1646.061570 im/s
      -Epoch 4, Iter 11, Loss: 22313.5644531, Throughput: 1692.354699 im/s
      -Epoch 4, Train Loss: 23722.8613281, Time: 0.8322s, Throughput: 1691.830117 im/s
      -Epoch 5, Iter 11, Loss: 19402.6035156, Throughput: 1711.697829 im/s
      -Epoch 5, Train Loss: 21031.9238281, Time: 0.8229s, Throughput: 1710.922240 im/s
      -Epoch 6, Iter 11, Loss: 18538.2871094, Throughput: 1669.197785 im/s
      -Epoch 6, Train Loss: 19078.5078125, Time: 0.8439s, Throughput: 1668.527162 im/s
      -Epoch 7, Iter 11, Loss: 16468.9140625, Throughput: 1752.542727 im/s
      -Epoch 7, Train Loss: 17712.4550781, Time: 0.8037s, Throughput: 1751.821146 im/s
      -Epoch 8, Iter 11, Loss: 16625.4785156, Throughput: 1705.391086 im/s
      -Epoch 8, Train Loss: 16778.6386719, Time: 0.8261s, Throughput: 1704.311769 im/s
      -Epoch 9, Iter 11, Loss: 15829.8203125, Throughput: 1735.845295 im/s
      -Epoch 9, Train Loss: 16046.4257812, Time: 0.8115s, Throughput: 1735.101199 im/s
      -Epoch 10, Iter 11, Loss: 15746.0839844, Throughput: 1788.674827 im/s
      -Epoch 10, Train Loss: 15447.8466797, Time: 0.7874s, Throughput: 1788.122411 im/s
      -Epoch 11, Iter 11, Loss: 15215.0312500, Throughput: 1759.938214 im/s
      -Epoch 11, Train Loss: 15064.4023438, Time: 0.8002s, Throughput: 1759.511911 im/s
      -Epoch 12, Iter 11, Loss: 14532.9414062, Throughput: 1778.945628 im/s
      -Epoch 12, Train Loss: 14696.0908203, Time: 0.7917s, Throughput: 1778.393313 im/s
      -Epoch 13, Iter 11, Loss: 14967.7109375, Throughput: 1797.568888 im/s
      -Epoch 13, Train Loss: 14457.2812500, Time: 0.7835s, Throughput: 1796.965581 im/s
      -Epoch 14, Iter 11, Loss: 14077.5800781, Throughput: 1789.538796 im/s
      -Epoch 14, Train Loss: 14026.6181641, Time: 0.7905s, Throughput: 1781.119643 im/s
      -Epoch 15, Iter 11, Loss: 13518.2919922, Throughput: 1743.134711 im/s
      -Epoch 15, Train Loss: 13632.0664062, Time: 0.8084s, Throughput: 1741.795939 im/s
      -Epoch 16, Iter 11, Loss: 13380.7929688, Throughput: 1819.513944 im/s
      -Epoch 16, Train Loss: 13282.1279297, Time: 0.7743s, Throughput: 1818.383929 im/s
      -Epoch 17, Iter 11, Loss: 12565.8125000, Throughput: 1825.887155 im/s
      -Epoch 17, Train Loss: 13235.3505859, Time: 0.7716s, Throughput: 1824.810108 im/s
      -Epoch 18, Iter 11, Loss: 12630.6835938, Throughput: 1754.058005 im/s
      -Epoch 18, Train Loss: 13102.6396484, Time: 0.8031s, Throughput: 1753.240960 im/s
      -Epoch 19, Iter 11, Loss: 12953.1679688, Throughput: 1817.470635 im/s
      -Epoch 19, Train Loss: 12875.8691406, Time: 0.7751s, Throughput: 1816.633136 im/s
      -Epoch 20, Iter 11, Loss: 12055.4677734, Throughput: 1825.307005 im/s
      -Epoch 20, Train Loss: 12572.4384766, Time: 0.7717s, Throughput: 1824.538367 im/s
      -Epoch 21, Iter 11, Loss: 13185.9873047, Throughput: 1810.913795 im/s
      -Epoch 21, Train Loss: 12500.9931641, Time: 0.7779s, Throughput: 1809.975813 im/s
      -Epoch 22, Iter 11, Loss: 12371.1367188, Throughput: 1812.958520 im/s
      -Epoch 22, Train Loss: 12382.7285156, Time: 0.7770s, Throughput: 1812.155757 im/s
      -Epoch 23, Iter 11, Loss: 12146.3300781, Throughput: 1825.927238 im/s
      -Epoch 23, Train Loss: 12117.9287109, Time: 0.7715s, Throughput: 1824.916684 im/s
      -Epoch 24, Iter 11, Loss: 12079.3994141, Throughput: 1401.442699 im/s
      -Epoch 24, Train Loss: 11997.0742188, Time: 1.0057s, Throughput: 1399.970944 im/s
      -Epoch 25, Iter 11, Loss: 11770.6054688, Throughput: 1620.130117 im/s
      -Epoch 25, Train Loss: 11831.6191406, Time: 0.8694s, Throughput: 1619.544968 im/s
      -Epoch 26, Iter 11, Loss: 12617.6123047, Throughput: 1716.463947 im/s
      -Epoch 26, Train Loss: 11800.9433594, Time: 0.8206s, Throughput: 1715.782232 im/s
      -Epoch 27, Iter 11, Loss: 11695.0839844, Throughput: 1838.151494 im/s
      -Epoch 27, Train Loss: 11604.6308594, Time: 0.7662s, Throughput: 1837.748226 im/s
      -Epoch 28, Iter 11, Loss: 11695.5996094, Throughput: 1806.788953 im/s
      -Epoch 28, Train Loss: 11651.4423828, Time: 0.7798s, Throughput: 1805.595737 im/s
      -Epoch 29, Iter 11, Loss: 11384.9062500, Throughput: 1784.109790 im/s
      -Epoch 29, Train Loss: 11569.6884766, Time: 0.7895s, Throughput: 1783.328058 im/s
      -Epoch 30, Iter 11, Loss: 10836.7968750, Throughput: 1764.955002 im/s
      -Epoch 30, Train Loss: 11339.8691406, Time: 0.8096s, Throughput: 1739.121482 im/s
      -Epoch 31, Iter 11, Loss: 11125.7089844, Throughput: 1698.424038 im/s
      -Epoch 31, Train Loss: 11223.2167969, Time: 0.8293s, Throughput: 1697.741932 im/s
      -Epoch 32, Iter 11, Loss: 11135.4785156, Throughput: 1667.973904 im/s
      -Epoch 32, Train Loss: 11096.8427734, Time: 0.8445s, Throughput: 1667.330154 im/s
      -Epoch 33, Iter 11, Loss: 12250.8027344, Throughput: 1685.428124 im/s
      -Epoch 33, Train Loss: 11205.4375000, Time: 0.8375s, Throughput: 1681.246007 im/s
      -Epoch 34, Iter 11, Loss: 11169.9980469, Throughput: 1724.074720 im/s
      -Epoch 34, Train Loss: 11285.1865234, Time: 0.8170s, Throughput: 1723.332131 im/s
      -Epoch 35, Iter 11, Loss: 10973.0673828, Throughput: 1673.721785 im/s
      -Epoch 35, Train Loss: 11315.8671875, Time: 0.8416s, Throughput: 1672.952260 im/s
      -Epoch 36, Iter 11, Loss: 10784.8652344, Throughput: 1689.407790 im/s
      -Epoch 36, Train Loss: 11000.4267578, Time: 0.8340s, Throughput: 1688.216841 im/s
      -Epoch 37, Iter 11, Loss: 10987.7080078, Throughput: 1660.775089 im/s
      -Epoch 37, Train Loss: 10908.0439453, Time: 0.8481s, Throughput: 1660.126616 im/s
      -Epoch 38, Iter 11, Loss: 10832.6396484, Throughput: 1662.204075 im/s
      -Epoch 38, Train Loss: 10761.0380859, Time: 0.8475s, Throughput: 1661.407709 im/s
      -Epoch 39, Iter 11, Loss: 10657.9746094, Throughput: 1650.764429 im/s
      -Epoch 39, Train Loss: 10716.2685547, Time: 0.8534s, Throughput: 1649.948099 im/s
      -Epoch 40, Iter 11, Loss: 11321.4941406, Throughput: 1726.715686 im/s
      -Epoch 40, Train Loss: 10696.7226562, Time: 0.8157s, Throughput: 1726.082812 im/s
      -Epoch 41, Iter 11, Loss: 10473.2099609, Throughput: 1808.169746 im/s
      -Epoch 41, Train Loss: 10672.8632812, Time: 0.7792s, Throughput: 1807.083633 im/s
      -Epoch 42, Iter 11, Loss: 10543.8730469, Throughput: 1697.991372 im/s
      -Epoch 42, Train Loss: 10663.1875000, Time: 0.8297s, Throughput: 1696.974060 im/s
      -Epoch 43, Iter 11, Loss: 11151.8281250, Throughput: 1742.673312 im/s
      -Epoch 43, Train Loss: 10626.8242188, Time: 0.8083s, Throughput: 1741.864781 im/s
      -Epoch 44, Iter 11, Loss: 10089.9355469, Throughput: 1790.705991 im/s
      -Epoch 44, Train Loss: 10622.2226562, Time: 0.7866s, Throughput: 1789.904907 im/s
      -Epoch 45, Iter 11, Loss: 10384.1025391, Throughput: 1799.232128 im/s
      -Epoch 45, Train Loss: 10386.0625000, Time: 0.7829s, Throughput: 1798.365896 im/s
      -Epoch 46, Iter 11, Loss: 10393.1582031, Throughput: 1808.824368 im/s
      -Epoch 46, Train Loss: 10321.5947266, Time: 0.7788s, Throughput: 1807.981533 im/s
      -Epoch 47, Iter 11, Loss: 10101.5517578, Throughput: 1813.404992 im/s
      -Epoch 47, Train Loss: 10217.5341797, Time: 0.7768s, Throughput: 1812.646342 im/s
      -Epoch 48, Iter 11, Loss: 10764.6113281, Throughput: 1812.191346 im/s
      -Epoch 48, Train Loss: 10252.5224609, Time: 0.7773s, Throughput: 1811.459271 im/s
      -Epoch 49, Iter 11, Loss: 10491.2128906, Throughput: 1725.773609 im/s
      -Epoch 49, Train Loss: 10155.1474609, Time: 0.8162s, Throughput: 1725.088512 im/s
      -Epoch 50, Iter 11, Loss: 10332.0097656, Throughput: 1723.435230 im/s
      -Epoch 50, Train Loss: 10062.9257812, Time: 0.8173s, Throughput: 1722.734901 im/s

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 48 × AMD EPYC 7402 24-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 2
      -  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 48
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,28)]))}const d=i(h,[["render",p]]);export{g as __pageData,d as default}; diff --git a/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.BARx5_au.js b/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.BARx5_au.js new file mode 100644 index 0000000000..ff23e42a2f --- /dev/null +++ b/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.BARx5_au.js @@ -0,0 +1,386 @@ +import{_ as i,c as a,a2 as n,o as h}from"./chunks/framework.BetCMmtc.js";const d=JSON.parse('{"title":"Convolutional VAE for MNIST using Reactant","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/5_ConvolutionalVAE.md","filePath":"tutorials/intermediate/5_ConvolutionalVAE.md","lastUpdated":null}'),p={name:"tutorials/intermediate/5_ConvolutionalVAE.md"};function l(t,s,k,e,r,A){return h(),a("div",null,s[0]||(s[0]=[n(`

      Convolutional VAE for MNIST using Reactant

      Convolutional variational autoencoder (CVAE) implementation in Lux using MNIST. This is based on the CVAE implementation in MLX.

      julia
      using Lux, Reactant, MLDatasets, Random, Statistics, Enzyme, MLUtils, DataAugmentation,
      +      ConcreteStructs, OneHotArrays, ImageShow, Images, Printf, Optimisers
      +
      +const xdev = reactant_device(; force=true)
      +const cdev = cpu_device()
      (::MLDataDevices.CPUDevice) (generic function with 1 method)

      Model Definition

      First we will define the encoder. It maps the input to a normal distribution in latent space and samples a latent vector from that distribution.

      julia
      function cvae_encoder(
      +        rng=Random.default_rng(); num_latent_dims::Int,
      +        image_shape::Dims{3}, max_num_filters::Int
      +)
      +    flattened_dim = prod(image_shape[1:2]  8) * max_num_filters
      +    return @compact(;
      +        embed=Chain(
      +            Chain(
      +                Conv((3, 3), image_shape[3] => max_num_filters ÷ 4; stride=2, pad=1),
      +                BatchNorm(max_num_filters ÷ 4, leakyrelu)
      +            ),
      +            Chain(
      +                Conv((3, 3), max_num_filters ÷ 4 => max_num_filters ÷ 2; stride=2, pad=1),
      +                BatchNorm(max_num_filters ÷ 2, leakyrelu)
      +            ),
      +            Chain(
      +                Conv((3, 3), max_num_filters ÷ 2 => max_num_filters; stride=2, pad=1),
      +                BatchNorm(max_num_filters, leakyrelu)
      +            ),
      +            FlattenLayer()
      +        ),
      +        proj_mu=Dense(flattened_dim, num_latent_dims; init_bias=zeros32),
      +        proj_log_var=Dense(flattened_dim, num_latent_dims; init_bias=zeros32),
      +        rng) do x
      +        y = embed(x)
      +
      +        μ = proj_mu(y)
      +        logσ² = proj_log_var(y)
      +
      +        T = eltype(logσ²)
      +        logσ² = clamp.(logσ², -T(20.0f0), T(10.0f0))
      +        σ = exp.(logσ² .* T(0.5))
      +
      +        # Generate a tensor of random values from a normal distribution
      +        rng = Lux.replicate(rng)
      +        ϵ = randn_like(rng, σ)
      +
      +        # Reparameterization trick to backpropagate through sampling
      +        z = ϵ .* σ .+ μ
      +
      +        @return z, μ, logσ²
      +    end
      +end
      cvae_encoder (generic function with 2 methods)
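      The sampling step inside the encoder is the reparameterization trick: rather than sampling z directly from the predicted distribution (which would block gradients), it samples a fixed standard normal ϵ and shifts/scales it. As a sketch of what the code above computes:

```latex
\sigma = \exp\!\left(\tfrac{1}{2}\log\sigma^2\right), \qquad
\epsilon \sim \mathcal{N}(0, I), \qquad
z = \mu + \sigma \odot \epsilon
```

      This keeps the path from the encoder parameters to z differentiable, since the randomness enters only through ϵ.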

      Similarly we define the decoder.

      julia
      function cvae_decoder(; num_latent_dims::Int, image_shape::Dims{3}, max_num_filters::Int)
      +    flattened_dim = prod(image_shape[1:2]  8) * max_num_filters
      +    return @compact(;
      +        linear=Dense(num_latent_dims, flattened_dim),
      +        upchain=Chain(
      +            Chain(
      +                Upsample(2),
      +                Conv((3, 3), max_num_filters => max_num_filters ÷ 2; stride=1, pad=1),
      +                BatchNorm(max_num_filters ÷ 2, leakyrelu)
      +            ),
      +            Chain(
      +                Upsample(2),
      +                Conv((3, 3), max_num_filters ÷ 2 => max_num_filters ÷ 4; stride=1, pad=1),
      +                BatchNorm(max_num_filters ÷ 4, leakyrelu)
      +            ),
      +            Chain(
      +                Upsample(2),
      +                Conv((3, 3), max_num_filters ÷ 4 => image_shape[3],
      +                    sigmoid; stride=1, pad=1)
      +            )
      +        ),
      +        max_num_filters) do x
      +        y = linear(x)
      +        img = reshape(y, image_shape[1] ÷ 8, image_shape[2] ÷ 8, max_num_filters, :)
      +        @return upchain(img)
      +    end
      +end
      +
      +@concrete struct CVAE <: Lux.AbstractLuxContainerLayer{(:encoder, :decoder)}
      +    encoder <: Lux.AbstractLuxLayer
      +    decoder <: Lux.AbstractLuxLayer
      +end
      +
      +function CVAE(rng=Random.default_rng(); num_latent_dims::Int,
      +        image_shape::Dims{3}, max_num_filters::Int)
      +    decoder = cvae_decoder(; num_latent_dims, image_shape, max_num_filters)
      +    encoder = cvae_encoder(rng; num_latent_dims, image_shape, max_num_filters)
      +    return CVAE(encoder, decoder)
      +end
      +
      +function (cvae::CVAE)(x, ps, st)
      +    (z, μ, logσ²), st_enc = cvae.encoder(x, ps.encoder, st.encoder)
      +    x_rec, st_dec = cvae.decoder(z, ps.decoder, st.decoder)
      +    return (x_rec, μ, logσ²), (; encoder=st_enc, decoder=st_dec)
      +end
      +
      +function encode(cvae::CVAE, x, ps, st)
      +    (z, _, _), st_enc = cvae.encoder(x, ps.encoder, st.encoder)
      +    return z, (; encoder=st_enc, st.decoder)
      +end
      +
      +function decode(cvae::CVAE, z, ps, st)
      +    x_rec, st_dec = cvae.decoder(z, ps.decoder, st.decoder)
      +    return x_rec, (; decoder=st_dec, st.encoder)
      +end
      decode (generic function with 1 method)

      Loading MNIST

      julia
      @concrete struct TensorDataset
      +    dataset
      +    transform
      +    total_samples::Int
      +end
      +
      +Base.length(ds::TensorDataset) = ds.total_samples
      +
      +function Base.getindex(ds::TensorDataset, idxs::Union{Vector{<:Integer}, AbstractRange})
      +    img = Image.(eachslice(convert2image(ds.dataset, idxs); dims=3))
      +    return stack(parent  itemdata  Base.Fix1(apply, ds.transform), img)
      +end
      +
      +function loadmnist(batchsize, image_size::Dims{2})
      +    # Load MNIST: Only 1500 for demonstration purposes on CI
      +    train_dataset = MNIST(; split=:train)
      +    N = parse(Bool, get(ENV, "CI", "false")) ? 1500 : length(train_dataset)
      +
      +    train_transform = ScaleKeepAspect(image_size) |> ImageToTensor()
      +    trainset = TensorDataset(train_dataset, train_transform, N)
      +    trainloader = DataLoader(trainset; batchsize, shuffle=true, partial=false)
      +
      +    return trainloader
      +end
      loadmnist (generic function with 1 method)

      Helper Functions

      Generate an Image Grid from a list of images

      julia
      function create_image_grid(imgs::AbstractArray, grid_rows::Int, grid_cols::Int)
      +    total_images = grid_rows * grid_cols
      +    imgs = map(eachslice(imgs[:, :, :, 1:total_images]; dims=4)) do img
      +        cimg = size(img, 3) == 1 ? colorview(Gray, view(img, :, :, 1)) :
      +               colorview(RGB, permutedims(img, (3, 1, 2)))
      +        return cimg'
      +    end
      +    return create_image_grid(imgs, grid_rows, grid_cols)
      +end
      +
      +function create_image_grid(images::Vector, grid_rows::Int, grid_cols::Int)
      +    # Check if the number of images matches the grid
      +    total_images = grid_rows * grid_cols
      +    @assert length(images) == total_images
      +
      +    # Get the size of a single image (assuming all images are the same size)
      +    img_height, img_width = size(images[1])
      +
      +    # Create a blank grid canvas
      +    grid_height = img_height * grid_rows
      +    grid_width = img_width * grid_cols
      +    grid_canvas = similar(images[1], grid_height, grid_width)
      +
      +    # Place each image in the correct position on the canvas
      +    for idx in 1:total_images
      +        row = div(idx - 1, grid_cols) + 1
      +        col = mod(idx - 1, grid_cols) + 1
      +
      +        start_row = (row - 1) * img_height + 1
      +        start_col = (col - 1) * img_width + 1
      +
      +        grid_canvas[start_row:(start_row + img_height - 1), start_col:(start_col + img_width - 1)] .= images[idx]
      +    end
      +
      +    return grid_canvas
      +end
      +
      +function loss_function(model, ps, st, X)
      +    (y, μ, logσ²), st = model(X, ps, st)
      +    reconstruction_loss = MSELoss(; agg=sum)(y, X)
      +    kldiv_loss = -sum(1 .+ logσ² .- μ .^ 2 .- exp.(logσ²)) / 2
      +    loss = reconstruction_loss + kldiv_loss
      +    return loss, st, (; y, μ, logσ², reconstruction_loss, kldiv_loss)
      +end
      +
      +function generate_images(
      +        model, ps, st; num_samples::Int=128, num_latent_dims::Int, decode_compiled=nothing)
      +    z = randn(Float32, num_latent_dims, num_samples) |> get_device((ps, st))
      +    if decode_compiled === nothing
      +        images, _ = decode(model, z, ps, Lux.testmode(st))
      +    else
      +        images, _ = decode_compiled(model, z, ps, Lux.testmode(st))
      +        images = images |> cpu_device()
      +    end
      +    return create_image_grid(images, 8, num_samples ÷ 8)
      +end
      +
      +function reconstruct_images(model, ps, st, X)
      +    (recon, _, _), _ = model(X, ps, Lux.testmode(st))
      +    recon = recon |> cpu_device()
      +    return create_image_grid(recon, 8, size(X, ndims(X)) ÷ 8)
      +end
      reconstruct_images (generic function with 1 method)
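      The kldiv_loss term in loss_function above is the closed-form KL divergence between the encoder distribution N(μ, diag(σ²)) and the standard normal prior, with exp.(logσ²) supplying σ²:

```latex
D_{\mathrm{KL}}\!\left(\mathcal{N}(\mu, \sigma^2) \,\|\, \mathcal{N}(0, I)\right)
  = -\frac{1}{2} \sum_i \left(1 + \log\sigma_i^2 - \mu_i^2 - \sigma_i^2\right)
```

      The total loss is this term plus the summed MSE reconstruction loss, matching the standard VAE evidence lower bound (up to the choice of reconstruction likelihood).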

      Training the Model

      julia
      function main(; batchsize=128, image_size=(64, 64), num_latent_dims=8, max_num_filters=64,
      +        seed=0, epochs=50, weight_decay=1e-5, learning_rate=1e-3, num_samples=batchsize)
      +    rng = Xoshiro()
      +    Random.seed!(rng, seed)
      +
      +    cvae = CVAE(rng; num_latent_dims, image_shape=(image_size..., 1), max_num_filters)
      +    ps, st = Lux.setup(rng, cvae) |> xdev
      +
      +    z = randn(Float32, num_latent_dims, num_samples) |> xdev
      +    decode_compiled = @compile decode(cvae, z, ps, Lux.testmode(st))
      +    x = randn(Float32, image_size..., 1, batchsize) |> xdev
      +    cvae_compiled = @compile cvae(x, ps, Lux.testmode(st))
      +
      +    train_dataloader = loadmnist(batchsize, image_size) |> xdev
      +
      +    opt = AdamW(; eta=learning_rate, lambda=weight_decay)
      +
      +    train_state = Training.TrainState(cvae, ps, st, opt)
      +
      +    @printf "Total Trainable Parameters: %0.4f M\\n" (Lux.parameterlength(ps)/1e6)
      +
      +    is_vscode = isdefined(Main, :VSCodeServer)
      +    empty_row, model_img_full = nothing, nothing
      +
      +    for epoch in 1:epochs
      +        loss_total = 0.0f0
      +        total_samples = 0
      +
      +        start_time = time()
      +        for (i, X) in enumerate(train_dataloader)
      +            (_, loss, _, train_state) = Training.single_train_step!(
      +                AutoEnzyme(), loss_function, X, train_state; return_gradients=Val(false)
      +            )
      +
      +            loss_total += loss
      +            total_samples += size(X, ndims(X))
      +
      +            if i % 250 == 0 || i == length(train_dataloader)
      +                throughput = total_samples / (time() - start_time)
      +                @printf "Epoch %d, Iter %d, Loss: %.7f, Throughput: %.6f im/s\\n" epoch i loss throughput
      +            end
      +        end
      +        total_time = time() - start_time
      +
      +        train_loss = loss_total / length(train_dataloader)
      +        throughput = total_samples / total_time
      +        @printf "Epoch %d, Train Loss: %.7f, Time: %.4fs, Throughput: %.6f im/s\\n" epoch train_loss total_time throughput
      +
      +        if is_vscode || epoch == epochs
      +            recon_images = reconstruct_images(
      +                cvae_compiled, train_state.parameters, train_state.states,
      +                first(train_dataloader))
      +            gen_images = generate_images(
      +                cvae, train_state.parameters, train_state.states;
      +                num_samples, num_latent_dims, decode_compiled)
      +            if empty_row === nothing
      +                empty_row = similar(gen_images, image_size[1], size(gen_images, 2))
      +                fill!(empty_row, 0)
      +            end
      +            model_img_full = vcat(recon_images, empty_row, gen_images)
      +            is_vscode && display(model_img_full)
      +        end
      +    end
      +
      +    return model_img_full
      +end
      +
      +img = main()
      ┌ Warning: \`training\` is set to \`Val{false}()\` but is being used within an autodiff call (gradient, jacobian, etc...). This might lead to incorrect results. If you are using a \`Lux.jl\` model, set it to training mode using \`LuxCore.trainmode\`.
      +└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-16/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:324
      +2025-01-24 05:04:18.737046: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 12154174806817483476
      +Total Trainable Parameters: 0.1493 M
      +Epoch 1, Iter 11, Loss: 44148.5234375, Throughput: 21.693584 im/s
      +Epoch 1, Train Loss: 62822.9492188, Time: 65.3384s, Throughput: 21.549355 im/s
      +Epoch 2, Iter 11, Loss: 31040.7773438, Throughput: 2029.239494 im/s
      +Epoch 2, Train Loss: 36056.7773438, Time: 0.6942s, Throughput: 2028.260295 im/s
      +Epoch 3, Iter 11, Loss: 26118.6953125, Throughput: 2142.259650 im/s
      +Epoch 3, Train Loss: 27947.4707031, Time: 0.6576s, Throughput: 2141.216502 im/s
      +Epoch 4, Iter 11, Loss: 21178.0957031, Throughput: 2137.962330 im/s
      +Epoch 4, Train Loss: 23641.0703125, Time: 0.6589s, Throughput: 2136.940373 im/s
      +Epoch 5, Iter 11, Loss: 20625.9550781, Throughput: 2126.326315 im/s
      +Epoch 5, Train Loss: 20930.2343750, Time: 0.6624s, Throughput: 2125.485262 im/s
      +Epoch 6, Iter 11, Loss: 18482.9648438, Throughput: 2144.402726 im/s
      +Epoch 6, Train Loss: 18999.2207031, Time: 0.6569s, Throughput: 2143.551206 im/s
      +Epoch 7, Iter 11, Loss: 17671.8437500, Throughput: 2131.305874 im/s
      +Epoch 7, Train Loss: 17797.9199219, Time: 0.6610s, Throughput: 2130.061296 im/s
      +Epoch 8, Iter 11, Loss: 17184.2773438, Throughput: 2143.805657 im/s
      +Epoch 8, Train Loss: 16943.1210938, Time: 0.6570s, Throughput: 2142.963165 im/s
      +Epoch 9, Iter 11, Loss: 16169.7519531, Throughput: 2137.329390 im/s
      +Epoch 9, Train Loss: 16133.9785156, Time: 0.6590s, Throughput: 2136.488888 im/s
      +Epoch 10, Iter 11, Loss: 15327.4101562, Throughput: 2147.595323 im/s
      +Epoch 10, Train Loss: 15587.2353516, Time: 0.6559s, Throughput: 2146.557116 im/s
      +Epoch 11, Iter 11, Loss: 14786.2734375, Throughput: 2159.555169 im/s
      +Epoch 11, Train Loss: 15082.0771484, Time: 0.6523s, Throughput: 2158.651333 im/s
      +Epoch 12, Iter 11, Loss: 14255.7041016, Throughput: 2158.227698 im/s
      +Epoch 12, Train Loss: 14557.6679688, Time: 0.6526s, Throughput: 2157.371470 im/s
      +Epoch 13, Iter 11, Loss: 14895.6621094, Throughput: 2139.947919 im/s
      +Epoch 13, Train Loss: 14324.0869141, Time: 0.6583s, Throughput: 2138.907794 im/s
      +Epoch 14, Iter 11, Loss: 13865.3027344, Throughput: 2124.660927 im/s
      +Epoch 14, Train Loss: 14012.7285156, Time: 0.6629s, Throughput: 2123.863200 im/s
      +Epoch 15, Iter 11, Loss: 13444.6328125, Throughput: 2149.116195 im/s
      +Epoch 15, Train Loss: 13671.2695312, Time: 0.6554s, Throughput: 2148.257021 im/s
      +Epoch 16, Iter 11, Loss: 12813.3994141, Throughput: 2140.130161 im/s
      +Epoch 16, Train Loss: 13489.2880859, Time: 0.6582s, Throughput: 2139.300629 im/s
      +Epoch 17, Iter 11, Loss: 13994.4716797, Throughput: 2151.480708 im/s
      +Epoch 17, Train Loss: 13345.4775391, Time: 0.6548s, Throughput: 2150.376099 im/s
      +Epoch 18, Iter 11, Loss: 13775.0205078, Throughput: 2156.532458 im/s
      +Epoch 18, Train Loss: 13114.6132812, Time: 0.6532s, Throughput: 2155.703542 im/s
      +Epoch 19, Iter 11, Loss: 12336.1689453, Throughput: 2136.979036 im/s
      +Epoch 19, Train Loss: 12843.8837891, Time: 0.6592s, Throughput: 2136.042229 im/s
      +Epoch 20, Iter 11, Loss: 12736.4951172, Throughput: 2153.497780 im/s
      +Epoch 20, Train Loss: 12550.9990234, Time: 0.6541s, Throughput: 2152.628038 im/s
      +Epoch 21, Iter 11, Loss: 12392.1289062, Throughput: 2148.699421 im/s
      +Epoch 21, Train Loss: 12481.2226562, Time: 0.6556s, Throughput: 2147.589075 im/s
      +Epoch 22, Iter 11, Loss: 12498.3085938, Throughput: 2151.149206 im/s
      +Epoch 22, Train Loss: 12354.0312500, Time: 0.6548s, Throughput: 2150.160011 im/s
      +Epoch 23, Iter 11, Loss: 12620.2519531, Throughput: 2162.225307 im/s
      +Epoch 23, Train Loss: 12183.5683594, Time: 0.6515s, Throughput: 2161.187146 im/s
      +Epoch 24, Iter 11, Loss: 11805.5263672, Throughput: 2133.556567 im/s
      +Epoch 24, Train Loss: 12003.1621094, Time: 0.6602s, Throughput: 2132.684370 im/s
      +Epoch 25, Iter 11, Loss: 12150.3369141, Throughput: 2148.001512 im/s
      +Epoch 25, Train Loss: 11926.1621094, Time: 0.6559s, Throughput: 2146.786528 im/s
      +Epoch 26, Iter 11, Loss: 12146.6865234, Throughput: 2148.060891 im/s
      +Epoch 26, Train Loss: 11826.6103516, Time: 0.6558s, Throughput: 2147.153377 im/s
      +Epoch 27, Iter 11, Loss: 12543.7255859, Throughput: 2159.248016 im/s
      +Epoch 27, Train Loss: 11727.1728516, Time: 0.6524s, Throughput: 2158.162235 im/s
      +Epoch 28, Iter 11, Loss: 11681.7880859, Throughput: 2157.735638 im/s
      +Epoch 28, Train Loss: 11689.2412109, Time: 0.6529s, Throughput: 2156.505684 im/s
      +Epoch 29, Iter 11, Loss: 11415.7578125, Throughput: 2136.720018 im/s
      +Epoch 29, Train Loss: 11647.7333984, Time: 0.6593s, Throughput: 2135.650591 im/s
      +Epoch 30, Iter 11, Loss: 11249.3818359, Throughput: 2146.128855 im/s
      +Epoch 30, Train Loss: 11345.8574219, Time: 0.6563s, Throughput: 2145.236221 im/s
      +Epoch 31, Iter 11, Loss: 10843.5869141, Throughput: 2151.786438 im/s
      +Epoch 31, Train Loss: 11344.9736328, Time: 0.6546s, Throughput: 2150.777074 im/s
      +Epoch 32, Iter 11, Loss: 11195.3828125, Throughput: 2154.317928 im/s
      +Epoch 32, Train Loss: 11353.8515625, Time: 0.6538s, Throughput: 2153.409047 im/s
      +Epoch 33, Iter 11, Loss: 11060.8808594, Throughput: 2141.569024 im/s
      +Epoch 33, Train Loss: 11371.8496094, Time: 0.6578s, Throughput: 2140.559134 im/s
      +Epoch 34, Iter 11, Loss: 10756.1621094, Throughput: 2126.117329 im/s
      +Epoch 34, Train Loss: 11223.6240234, Time: 0.6625s, Throughput: 2125.161722 im/s
      +Epoch 35, Iter 11, Loss: 10554.7626953, Throughput: 2145.066353 im/s
      +Epoch 35, Train Loss: 11073.7353516, Time: 0.6567s, Throughput: 2144.053941 im/s
      +Epoch 36, Iter 11, Loss: 10349.1953125, Throughput: 2142.992715 im/s
      +Epoch 36, Train Loss: 10935.1337891, Time: 0.6574s, Throughput: 2141.920108 im/s
      +Epoch 37, Iter 11, Loss: 11334.4550781, Throughput: 2130.512374 im/s
      +Epoch 37, Train Loss: 10844.4248047, Time: 0.6612s, Throughput: 2129.446078 im/s
      +Epoch 38, Iter 11, Loss: 11061.8173828, Throughput: 2157.797922 im/s
      +Epoch 38, Train Loss: 10925.8564453, Time: 0.6528s, Throughput: 2156.764007 im/s
      +Epoch 39, Iter 11, Loss: 10529.3828125, Throughput: 2132.393282 im/s
      +Epoch 39, Train Loss: 10848.6083984, Time: 0.6606s, Throughput: 2131.405103 im/s
      +Epoch 40, Iter 11, Loss: 10760.2636719, Throughput: 2128.605625 im/s
      +Epoch 40, Train Loss: 10779.3212891, Time: 0.6618s, Throughput: 2127.617886 im/s
      +Epoch 41, Iter 11, Loss: 11441.4560547, Throughput: 2162.281516 im/s
      +Epoch 41, Train Loss: 10626.4257812, Time: 0.6515s, Throughput: 2161.246465 im/s
      +Epoch 42, Iter 11, Loss: 11216.9746094, Throughput: 2133.911968 im/s
      +Epoch 42, Train Loss: 10579.4189453, Time: 0.6601s, Throughput: 2132.967832 im/s
      +Epoch 43, Iter 11, Loss: 11094.8496094, Throughput: 2148.804968 im/s
      +Epoch 43, Train Loss: 10495.9335938, Time: 0.6555s, Throughput: 2147.831206 im/s
      +Epoch 44, Iter 11, Loss: 10862.9042969, Throughput: 2126.862364 im/s
      +Epoch 44, Train Loss: 10442.4248047, Time: 0.6623s, Throughput: 2125.893078 im/s
      +Epoch 45, Iter 11, Loss: 10530.5800781, Throughput: 2141.151290 im/s
      +Epoch 45, Train Loss: 10395.5947266, Time: 0.6579s, Throughput: 2140.002976 im/s
      +Epoch 46, Iter 11, Loss: 10718.9667969, Throughput: 2145.232324 im/s
      +Epoch 46, Train Loss: 10428.6806641, Time: 0.6566s, Throughput: 2144.337320 im/s
      +Epoch 47, Iter 11, Loss: 9776.3222656, Throughput: 2153.596730 im/s
      +Epoch 47, Train Loss: 10375.9951172, Time: 0.6541s, Throughput: 2152.418557 im/s
      +Epoch 48, Iter 11, Loss: 10743.1513672, Throughput: 2158.230853 im/s
      +Epoch 48, Train Loss: 10290.3066406, Time: 0.6527s, Throughput: 2157.172884 im/s
      +Epoch 49, Iter 11, Loss: 10566.3828125, Throughput: 2130.489316 im/s
      +Epoch 49, Train Loss: 10200.8916016, Time: 0.6611s, Throughput: 2129.628840 im/s
      +Epoch 50, Iter 11, Loss: 9890.2968750, Throughput: 2162.534099 im/s
      +Epoch 50, Train Loss: 10062.1943359, Time: 0.6514s, Throughput: 2161.448175 im/s

      Appendix

      julia
      using InteractiveUtils
      +InteractiveUtils.versioninfo()
      +
      +if @isdefined(MLDataDevices)
      +    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      +        println()
      +        CUDA.versioninfo()
      +    end
      +
      +    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      +        println()
      +        AMDGPU.versioninfo()
      +    end
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
      +Build Info:
      +  Official https://julialang.org/ release
      +Platform Info:
      +  OS: Linux (x86_64-linux-gnu)
      +  CPU: 48 × AMD EPYC 7402 24-Core Processor
      +  WORD_SIZE: 64
      +  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      +Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
      +Environment:
      +  JULIA_CPU_THREADS = 2
      +  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      +  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
      +  JULIA_PKG_SERVER = 
      +  JULIA_NUM_THREADS = 48
      +  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      +  JULIA_PKG_PRECOMPILE_AUTO = 0
      +  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,28)]))}const g=i(p,[["render",l]]);export{d as __pageData,g as default}; diff --git a/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.BARx5_au.lean.js b/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.BARx5_au.lean.js new file mode 100644 index 0000000000..a5899bfaa6 --- /dev/null +++ b/dev/assets/tutorials_intermediate_5_ConvolutionalVAE.md.BARx5_au.lean.js @@ -0,0 +1 @@ +import{_ as i,c as a,a2 as n,o as h}from"./chunks/framework.BetCMmtc.js";const d=JSON.parse('{"title":"Convolutional VAE for MNIST using Reactant","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/5_ConvolutionalVAE.md","filePath":"tutorials/intermediate/5_ConvolutionalVAE.md","lastUpdated":null}'),p={name:"tutorials/intermediate/5_ConvolutionalVAE.md"};function l(t,s,k,e,r,A){return h(),a("div",null,s[0]||(s[0]=[n("",28)]))}const g=i(p,[["render",l]]);export{d as __pageData,g as default}; diff --git a/dev/assets/tutorials_intermediate_6_GCN_Cora.md.DyovN6WF.js b/dev/assets/tutorials_intermediate_6_GCN_Cora.md.CpGmSjYV.js similarity index 52% rename from dev/assets/tutorials_intermediate_6_GCN_Cora.md.DyovN6WF.js rename to dev/assets/tutorials_intermediate_6_GCN_Cora.md.CpGmSjYV.js index d8df13ba07..50743366c3 100644 --- a/dev/assets/tutorials_intermediate_6_GCN_Cora.md.DyovN6WF.js +++ b/dev/assets/tutorials_intermediate_6_GCN_Cora.md.CpGmSjYV.js @@ -1,4 +1,4 @@ -import{_ as s,c as e,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const u=JSON.parse('{"title":"Graph Convolutional Networks on Cora","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/6_GCN_Cora.md","filePath":"tutorials/intermediate/6_GCN_Cora.md","lastUpdated":null}'),c={name:"tutorials/intermediate/6_GCN_Cora.md"};function i(t,a,r,l,f,o){return p(),e("div",null,a[0]||(a[0]=[n(`

      Graph Convolutional Networks on Cora

      This example is based on GCN MLX tutorial. While we are doing this manually, we recommend directly using GNNLux.jl.

      julia
      using Lux, Reactant, MLDatasets, Random, Statistics, Enzyme, GNNGraphs, ConcreteStructs,
      +import{_ as s,c as e,a2 as n,o as p}from"./chunks/framework.BetCMmtc.js";const u=JSON.parse('{"title":"Graph Convolutional Networks on Cora","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/6_GCN_Cora.md","filePath":"tutorials/intermediate/6_GCN_Cora.md","lastUpdated":null}'),c={name:"tutorials/intermediate/6_GCN_Cora.md"};function i(t,a,r,l,f,o){return p(),e("div",null,a[0]||(a[0]=[n(`

      Graph Convolutional Networks on Cora

      This example is based on GCN MLX tutorial. While we are doing this manually, we recommend directly using GNNLux.jl.

      julia
      using Lux, Reactant, MLDatasets, Random, Statistics, Enzyme, GNNGraphs, ConcreteStructs,
             Printf, OneHotArrays, Optimisers
       
       const xdev = reactant_device(; force=true)
      @@ -117,1190 +117,1190 @@ import{_ as s,c as e,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const u
       end
       
       main()
      ┌ Warning: \`replicate\` doesn't work for \`TaskLocalRNG\`. Returning the same \`TaskLocalRNG\`.
      -└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      +└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
       Total Trainable Parameters: 0.0964 M
       ┌ Warning: \`training\` is set to \`Val{false}()\` but is being used within an autodiff call (gradient, jacobian, etc...). This might lead to incorrect results. If you are using a \`Lux.jl\` model, set it to training mode using \`LuxCore.trainmode\`.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:324
      -2025-01-20 22:59:29.678566: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 3250673994701609345
      -2025-01-20 22:59:30.518900: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 108 bytes spill stores, 108 bytes spill loads
      +└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:324
      +2025-01-24 04:57:23.183688: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 6197855343288158519
      +2025-01-24 04:57:23.727855: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 24 bytes spill stores, 24 bytes spill loads
       
      -2025-01-20 22:59:30.527924: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 24 bytes spill stores, 24 bytes spill loads
      +2025-01-24 04:57:23.739295: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 108 bytes spill stores, 108 bytes spill loads
       
      -2025-01-20 22:59:30.647255: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 108 bytes spill stores, 108 bytes spill loads
      +2025-01-24 04:57:23.838790: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 328 bytes spill stores, 328 bytes spill loads
       
      -2025-01-20 22:59:30.698953: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 328 bytes spill stores, 328 bytes spill loads
      +2025-01-24 04:57:23.953902: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_35_0', 48 bytes spill stores, 48 bytes spill loads
       
      -2025-01-20 22:59:30.738672: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 376 bytes spill stores, 376 bytes spill loads
      +2025-01-24 04:57:23.989053: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 132 bytes spill stores, 132 bytes spill loads
       
      -2025-01-20 22:59:30.764969: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 856 bytes spill stores, 808 bytes spill loads
      +2025-01-24 04:57:24.022760: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 108 bytes spill stores, 108 bytes spill loads
       
      -2025-01-20 22:59:30.821027: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_35_0', 48 bytes spill stores, 48 bytes spill loads
      +2025-01-24 04:57:24.125019: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 376 bytes spill stores, 376 bytes spill loads
       
      -2025-01-20 22:59:30.833276: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 132 bytes spill stores, 132 bytes spill loads
      +2025-01-24 04:57:24.127351: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 856 bytes spill stores, 808 bytes spill loads
       
      -E0120 22:59:31.077662 3944451 buffer_comparator.cc:156] Difference at 112: 513.993, expected 949.79
      -E0120 22:59:31.078796 3944451 buffer_comparator.cc:156] Difference at 113: 357.807, expected 1213.3
      -E0120 22:59:31.078805 3944451 buffer_comparator.cc:156] Difference at 114: 585.471, expected 1837.44
      -E0120 22:59:31.078812 3944451 buffer_comparator.cc:156] Difference at 115: 420.444, expected 1600.27
      -E0120 22:59:31.078818 3944451 buffer_comparator.cc:156] Difference at 116: 302.398, expected 1117.2
      -E0120 22:59:31.078825 3944451 buffer_comparator.cc:156] Difference at 117: 386.144, expected 1790.83
      -E0120 22:59:31.078832 3944451 buffer_comparator.cc:156] Difference at 118: 587.071, expected 1291.05
      -E0120 22:59:31.078839 3944451 buffer_comparator.cc:156] Difference at 119: 508.94, expected 943.242
      -E0120 22:59:31.078846 3944451 buffer_comparator.cc:156] Difference at 120: 358.918, expected 1198.86
      -E0120 22:59:31.078852 3944451 buffer_comparator.cc:156] Difference at 121: 578.094, expected 1820.15
      -2025-01-20 22:59:31.078872: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.081918 3944451 buffer_comparator.cc:156] Difference at 112: 513.993, expected 949.79
      -E0120 22:59:31.081947 3944451 buffer_comparator.cc:156] Difference at 113: 357.807, expected 1213.3
      -E0120 22:59:31.081954 3944451 buffer_comparator.cc:156] Difference at 114: 585.471, expected 1837.44
      -E0120 22:59:31.081961 3944451 buffer_comparator.cc:156] Difference at 115: 420.444, expected 1600.27
      -E0120 22:59:31.081967 3944451 buffer_comparator.cc:156] Difference at 116: 302.398, expected 1117.2
      -E0120 22:59:31.081974 3944451 buffer_comparator.cc:156] Difference at 117: 386.144, expected 1790.83
      -E0120 22:59:31.081981 3944451 buffer_comparator.cc:156] Difference at 118: 587.071, expected 1291.05
      -E0120 22:59:31.081987 3944451 buffer_comparator.cc:156] Difference at 119: 508.94, expected 943.242
      -E0120 22:59:31.081994 3944451 buffer_comparator.cc:156] Difference at 120: 358.918, expected 1198.86
      -E0120 22:59:31.082002 3944451 buffer_comparator.cc:156] Difference at 121: 578.094, expected 1820.15
      -2025-01-20 22:59:31.082013: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.084990 3944451 buffer_comparator.cc:156] Difference at 112: 513.993, expected 949.79
      -E0120 22:59:31.085004 3944451 buffer_comparator.cc:156] Difference at 113: 357.807, expected 1213.3
      -E0120 22:59:31.085007 3944451 buffer_comparator.cc:156] Difference at 114: 585.471, expected 1837.44
      -E0120 22:59:31.085010 3944451 buffer_comparator.cc:156] Difference at 115: 420.444, expected 1600.27
      -E0120 22:59:31.085013 3944451 buffer_comparator.cc:156] Difference at 116: 302.398, expected 1117.2
      -E0120 22:59:31.085016 3944451 buffer_comparator.cc:156] Difference at 117: 386.144, expected 1790.83
      -E0120 22:59:31.085019 3944451 buffer_comparator.cc:156] Difference at 118: 587.071, expected 1291.05
      -E0120 22:59:31.085022 3944451 buffer_comparator.cc:156] Difference at 119: 508.94, expected 943.242
      -E0120 22:59:31.085025 3944451 buffer_comparator.cc:156] Difference at 120: 358.918, expected 1198.86
      -E0120 22:59:31.085028 3944451 buffer_comparator.cc:156] Difference at 121: 578.094, expected 1820.15
      -2025-01-20 22:59:31.085032: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.087802 3944451 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.95
      -E0120 22:59:31.087817 3944451 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      -E0120 22:59:31.087821 3944451 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.8
      -E0120 22:59:31.087824 3944451 buffer_comparator.cc:156] Difference at 224: 1185.44, expected 942.345
      -E0120 22:59:31.087827 3944451 buffer_comparator.cc:156] Difference at 225: 1033.21, expected 1208.53
      -E0120 22:59:31.087830 3944451 buffer_comparator.cc:156] Difference at 226: 723.209, expected 1824.94
      -E0120 22:59:31.087833 3944451 buffer_comparator.cc:156] Difference at 227: 1155.3, expected 1592.15
      -E0120 22:59:31.087836 3944451 buffer_comparator.cc:156] Difference at 228: 842.032, expected 1119.85
      -E0120 22:59:31.087839 3944451 buffer_comparator.cc:156] Difference at 229: 632.011, expected 1778.8
      -E0120 22:59:31.087842 3944451 buffer_comparator.cc:156] Difference at 230: 809.938, expected 1283.28
      -2025-01-20 22:59:31.087847: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.090676 3944451 buffer_comparator.cc:156] Difference at 224: 1185.44, expected 942.345
      -E0120 22:59:31.090690 3944451 buffer_comparator.cc:156] Difference at 225: 1033.21, expected 1208.53
      -E0120 22:59:31.090693 3944451 buffer_comparator.cc:156] Difference at 226: 723.209, expected 1824.94
      -E0120 22:59:31.090696 3944451 buffer_comparator.cc:156] Difference at 227: 1155.3, expected 1592.15
      -E0120 22:59:31.090699 3944451 buffer_comparator.cc:156] Difference at 228: 842.032, expected 1119.85
      -E0120 22:59:31.090702 3944451 buffer_comparator.cc:156] Difference at 229: 632.011, expected 1778.8
      -E0120 22:59:31.090705 3944451 buffer_comparator.cc:156] Difference at 230: 809.938, expected 1283.28
      -E0120 22:59:31.090708 3944451 buffer_comparator.cc:156] Difference at 231: 1217.57, expected 935.373
      -E0120 22:59:31.090711 3944451 buffer_comparator.cc:156] Difference at 232: 1063.63, expected 1192.72
      -E0120 22:59:31.090714 3944451 buffer_comparator.cc:156] Difference at 233: 740.205, expected 1803.13
      -2025-01-20 22:59:31.090718: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.093486 3944451 buffer_comparator.cc:156] Difference at 224: 1185.44, expected 942.345
      -E0120 22:59:31.093502 3944451 buffer_comparator.cc:156] Difference at 225: 1033.21, expected 1208.53
      -E0120 22:59:31.093507 3944451 buffer_comparator.cc:156] Difference at 226: 723.209, expected 1824.94
      -E0120 22:59:31.093510 3944451 buffer_comparator.cc:156] Difference at 227: 1155.3, expected 1592.15
      -E0120 22:59:31.093513 3944451 buffer_comparator.cc:156] Difference at 228: 842.032, expected 1119.85
      -E0120 22:59:31.093516 3944451 buffer_comparator.cc:156] Difference at 229: 632.011, expected 1778.8
      -E0120 22:59:31.093519 3944451 buffer_comparator.cc:156] Difference at 230: 809.938, expected 1283.28
      -E0120 22:59:31.093522 3944451 buffer_comparator.cc:156] Difference at 231: 1217.57, expected 935.373
      -E0120 22:59:31.093525 3944451 buffer_comparator.cc:156] Difference at 232: 1063.63, expected 1192.72
      -E0120 22:59:31.093528 3944451 buffer_comparator.cc:156] Difference at 233: 740.205, expected 1803.13
      -2025-01-20 22:59:31.093533: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.096300 3944451 buffer_comparator.cc:156] Difference at 224: 1185.44, expected 942.345
      -E0120 22:59:31.096314 3944451 buffer_comparator.cc:156] Difference at 225: 1033.21, expected 1208.53
      -E0120 22:59:31.096317 3944451 buffer_comparator.cc:156] Difference at 226: 723.209, expected 1824.94
      -E0120 22:59:31.096321 3944451 buffer_comparator.cc:156] Difference at 227: 1155.3, expected 1592.15
      -E0120 22:59:31.096323 3944451 buffer_comparator.cc:156] Difference at 228: 842.032, expected 1119.85
      -E0120 22:59:31.096326 3944451 buffer_comparator.cc:156] Difference at 229: 632.011, expected 1778.8
      -E0120 22:59:31.096329 3944451 buffer_comparator.cc:156] Difference at 230: 809.938, expected 1283.28
      -E0120 22:59:31.096332 3944451 buffer_comparator.cc:156] Difference at 231: 1217.57, expected 935.373
      -E0120 22:59:31.096335 3944451 buffer_comparator.cc:156] Difference at 232: 1063.63, expected 1192.72
      -E0120 22:59:31.096338 3944451 buffer_comparator.cc:156] Difference at 233: 740.205, expected 1803.13
      -2025-01-20 22:59:31.096343: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.099157 3944451 buffer_comparator.cc:156] Difference at 448: 1213.2, expected 948.676
      -E0120 22:59:31.099170 3944451 buffer_comparator.cc:156] Difference at 449: 1055.43, expected 1198.64
      -E0120 22:59:31.099174 3944451 buffer_comparator.cc:156] Difference at 450: 735.576, expected 1813.49
      -E0120 22:59:31.099176 3944451 buffer_comparator.cc:156] Difference at 451: 1184.55, expected 1575.23
      -E0120 22:59:31.099179 3944451 buffer_comparator.cc:156] Difference at 452: 859.094, expected 1104.71
      -E0120 22:59:31.099182 3944451 buffer_comparator.cc:156] Difference at 453: 619.996, expected 1764.87
      -E0120 22:59:31.099185 3944451 buffer_comparator.cc:156] Difference at 454: 795.493, expected 1269.69
      -E0120 22:59:31.099188 3944451 buffer_comparator.cc:156] Difference at 455: 1199.74, expected 952.925
      -E0120 22:59:31.099191 3944451 buffer_comparator.cc:156] Difference at 456: 1044.81, expected 1213.59
      -E0120 22:59:31.099194 3944451 buffer_comparator.cc:156] Difference at 457: 732.124, expected 1821.28
      -2025-01-20 22:59:31.099199: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.101952 3944451 buffer_comparator.cc:156] Difference at 448: 1213.2, expected 948.676
      -E0120 22:59:31.101965 3944451 buffer_comparator.cc:156] Difference at 449: 1055.43, expected 1198.64
      -E0120 22:59:31.101968 3944451 buffer_comparator.cc:156] Difference at 450: 735.576, expected 1813.49
      -E0120 22:59:31.101971 3944451 buffer_comparator.cc:156] Difference at 451: 1184.55, expected 1575.23
      -E0120 22:59:31.101974 3944451 buffer_comparator.cc:156] Difference at 452: 859.094, expected 1104.71
      -E0120 22:59:31.101977 3944451 buffer_comparator.cc:156] Difference at 453: 619.996, expected 1764.87
      -E0120 22:59:31.101980 3944451 buffer_comparator.cc:156] Difference at 454: 795.493, expected 1269.69
      -E0120 22:59:31.101985 3944451 buffer_comparator.cc:156] Difference at 455: 1199.74, expected 952.925
      -E0120 22:59:31.101988 3944451 buffer_comparator.cc:156] Difference at 456: 1044.81, expected 1213.59
      -E0120 22:59:31.101991 3944451 buffer_comparator.cc:156] Difference at 457: 732.124, expected 1821.28
      -2025-01-20 22:59:31.101995: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.116093 3944451 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      -E0120 22:59:31.116107 3944451 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      -E0120 22:59:31.116110 3944451 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      -E0120 22:59:31.116113 3944451 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      -E0120 22:59:31.116116 3944451 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      -E0120 22:59:31.116119 3944451 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      -E0120 22:59:31.116122 3944451 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      -E0120 22:59:31.116125 3944451 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      -E0120 22:59:31.116128 3944451 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      -E0120 22:59:31.116131 3944451 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      -2025-01-20 22:59:31.116136: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.121781 3944451 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.95
      -E0120 22:59:31.121795 3944451 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      -E0120 22:59:31.121799 3944451 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.8
      -E0120 22:59:31.121802 3944451 buffer_comparator.cc:156] Difference at 266: 1047.35, expected 934.413
      -E0120 22:59:31.121807 3944451 buffer_comparator.cc:156] Difference at 270: 1246.8, expected 1101.54
      -E0120 22:59:31.121810 3944451 buffer_comparator.cc:156] Difference at 417: 1222.47, expected 1095.88
      -E0120 22:59:31.121814 3944451 buffer_comparator.cc:156] Difference at 521: 1725.84, expected 1550.38
      -E0120 22:59:31.121817 3944451 buffer_comparator.cc:156] Difference at 522: 1233.24, expected 1093.76
      -E0120 22:59:31.121820 3944451 buffer_comparator.cc:156] Difference at 620: 1247.46, expected 1121.08
      -E0120 22:59:31.121823 3944451 buffer_comparator.cc:156] Difference at 690: 1251.43, expected 1120.61
      -2025-01-20 22:59:31.121828: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.124551 3944451 buffer_comparator.cc:156] Difference at 0: 1057.27, expected 928.593
      -E0120 22:59:31.124566 3944451 buffer_comparator.cc:156] Difference at 1: 1319.15, expected 1186.89
      -E0120 22:59:31.124569 3944451 buffer_comparator.cc:156] Difference at 2: 2004.43, expected 1796.77
      -E0120 22:59:31.124572 3944451 buffer_comparator.cc:156] Difference at 3: 1745.74, expected 1565.84
      -E0120 22:59:31.124575 3944451 buffer_comparator.cc:156] Difference at 4: 1252.2, expected 1095.49
      -E0120 22:59:31.124578 3944451 buffer_comparator.cc:156] Difference at 7: 1175.57, expected 951.95
      -E0120 22:59:31.124581 3944451 buffer_comparator.cc:156] Difference at 8: 1398.75, expected 1216.24
      -E0120 22:59:31.124583 3944451 buffer_comparator.cc:156] Difference at 9: 2125.62, expected 1833.76
      -E0120 22:59:31.124586 3944451 buffer_comparator.cc:156] Difference at 10: 1878.38, expected 1592.37
      -E0120 22:59:31.124589 3944451 buffer_comparator.cc:156] Difference at 11: 1362.67, expected 1121.95
      -2025-01-20 22:59:31.124594: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.141858 3944451 buffer_comparator.cc:156] Difference at 1792: 1216.65, expected 926.778
      -E0120 22:59:31.141873 3944451 buffer_comparator.cc:156] Difference at 1793: 1058.09, expected 1190.76
      -E0120 22:59:31.141877 3944451 buffer_comparator.cc:156] Difference at 1794: 743.338, expected 1807.71
      -E0120 22:59:31.141880 3944451 buffer_comparator.cc:156] Difference at 1795: 1184.75, expected 1565.59
      -E0120 22:59:31.141883 3944451 buffer_comparator.cc:156] Difference at 1796: 852.404, expected 1101.04
      -E0120 22:59:31.141886 3944451 buffer_comparator.cc:156] Difference at 1797: 626.131, expected 1756.21
      -E0120 22:59:31.141889 3944451 buffer_comparator.cc:156] Difference at 1798: 799.781, expected 1272.34
      -E0120 22:59:31.141892 3944451 buffer_comparator.cc:156] Difference at 1799: 1209.98, expected 944.465
      -E0120 22:59:31.141895 3944451 buffer_comparator.cc:156] Difference at 1800: 1057.15, expected 1200.58
      -E0120 22:59:31.141898 3944451 buffer_comparator.cc:156] Difference at 1801: 742.39, expected 1808.36
      -2025-01-20 22:59:31.141903: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.156378 3944451 buffer_comparator.cc:156] Difference at 16: 0, expected 1111.43
      -E0120 22:59:31.156394 3944451 buffer_comparator.cc:156] Difference at 17: 0, expected 1083.84
      -E0120 22:59:31.156397 3944451 buffer_comparator.cc:156] Difference at 18: 0, expected 1092.57
      -E0120 22:59:31.156400 3944451 buffer_comparator.cc:156] Difference at 19: 0, expected 1118.75
      -E0120 22:59:31.156403 3944451 buffer_comparator.cc:156] Difference at 20: 0, expected 1088.49
      -E0120 22:59:31.156406 3944451 buffer_comparator.cc:156] Difference at 21: 0, expected 1083.52
      -E0120 22:59:31.156409 3944451 buffer_comparator.cc:156] Difference at 22: 0, expected 1097.64
      -E0120 22:59:31.156412 3944451 buffer_comparator.cc:156] Difference at 23: 0, expected 1122.75
      -E0120 22:59:31.156415 3944451 buffer_comparator.cc:156] Difference at 24: 0, expected 1084.65
      -E0120 22:59:31.156418 3944451 buffer_comparator.cc:156] Difference at 25: 0, expected 1084.58
      -2025-01-20 22:59:31.156423: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.167007 3944451 buffer_comparator.cc:156] Difference at 64: 0, expected 1106.21
      -E0120 22:59:31.167022 3944451 buffer_comparator.cc:156] Difference at 65: 0, expected 1087.83
      -E0120 22:59:31.167025 3944451 buffer_comparator.cc:156] Difference at 66: 0, expected 1090.54
      -E0120 22:59:31.167028 3944451 buffer_comparator.cc:156] Difference at 67: 0, expected 1104.23
      -E0120 22:59:31.167031 3944451 buffer_comparator.cc:156] Difference at 68: 0, expected 1104.3
      -E0120 22:59:31.167034 3944451 buffer_comparator.cc:156] Difference at 69: 0, expected 1093.45
      -E0120 22:59:31.167037 3944451 buffer_comparator.cc:156] Difference at 70: 0, expected 1091.52
      -E0120 22:59:31.167039 3944451 buffer_comparator.cc:156] Difference at 71: 0, expected 1110.4
      -E0120 22:59:31.167042 3944451 buffer_comparator.cc:156] Difference at 72: 0, expected 1106.92
      -E0120 22:59:31.167045 3944451 buffer_comparator.cc:156] Difference at 73: 0, expected 1088.44
      -2025-01-20 22:59:31.167050: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.171423 3944451 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      -E0120 22:59:31.171437 3944451 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      -E0120 22:59:31.171440 3944451 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      -E0120 22:59:31.171443 3944451 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      -E0120 22:59:31.171446 3944451 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      -E0120 22:59:31.171449 3944451 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      -E0120 22:59:31.171452 3944451 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      -E0120 22:59:31.171455 3944451 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      -E0120 22:59:31.171457 3944451 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      -E0120 22:59:31.171460 3944451 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      -2025-01-20 22:59:31.171465: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.208666 3944451 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      -E0120 22:59:31.208681 3944451 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      -E0120 22:59:31.208684 3944451 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      -E0120 22:59:31.208687 3944451 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      -E0120 22:59:31.208691 3944451 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      -E0120 22:59:31.208694 3944451 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      -E0120 22:59:31.208696 3944451 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      -E0120 22:59:31.208699 3944451 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      -E0120 22:59:31.208702 3944451 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      -E0120 22:59:31.208705 3944451 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      -2025-01-20 22:59:31.208709: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.211900 3944451 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      -E0120 22:59:31.211913 3944451 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      -E0120 22:59:31.211916 3944451 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      -E0120 22:59:31.211919 3944451 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      -E0120 22:59:31.211922 3944451 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      -E0120 22:59:31.211925 3944451 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      -E0120 22:59:31.211928 3944451 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      -E0120 22:59:31.211930 3944451 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      -E0120 22:59:31.211933 3944451 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      -E0120 22:59:31.211936 3944451 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      -2025-01-20 22:59:31.211941: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.215087 3944451 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      -E0120 22:59:31.215101 3944451 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      -E0120 22:59:31.215104 3944451 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      -E0120 22:59:31.215107 3944451 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      -E0120 22:59:31.215110 3944451 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      -E0120 22:59:31.215113 3944451 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      -E0120 22:59:31.215116 3944451 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      -E0120 22:59:31.215118 3944451 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      -E0120 22:59:31.215121 3944451 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      -E0120 22:59:31.215124 3944451 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      -2025-01-20 22:59:31.215129: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.218258 3944451 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      -E0120 22:59:31.218273 3944451 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      -E0120 22:59:31.218276 3944451 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      -E0120 22:59:31.218279 3944451 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      -E0120 22:59:31.218282 3944451 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      -E0120 22:59:31.218285 3944451 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      -E0120 22:59:31.218288 3944451 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      -E0120 22:59:31.218291 3944451 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      -E0120 22:59:31.218293 3944451 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      -E0120 22:59:31.218296 3944451 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      -2025-01-20 22:59:31.218301: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.221413 3944451 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      -E0120 22:59:31.221439 3944451 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      -E0120 22:59:31.221442 3944451 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      -E0120 22:59:31.221445 3944451 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      -E0120 22:59:31.221448 3944451 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      -E0120 22:59:31.221450 3944451 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      -E0120 22:59:31.221453 3944451 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      -E0120 22:59:31.221456 3944451 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      -E0120 22:59:31.221459 3944451 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      -E0120 22:59:31.221462 3944451 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      -2025-01-20 22:59:31.221466: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.224647 3944451 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      -E0120 22:59:31.224662 3944451 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      -E0120 22:59:31.224665 3944451 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      -E0120 22:59:31.224668 3944451 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      -E0120 22:59:31.224671 3944451 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      -E0120 22:59:31.224673 3944451 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      -E0120 22:59:31.224676 3944451 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      -E0120 22:59:31.224679 3944451 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      -E0120 22:59:31.224682 3944451 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      -E0120 22:59:31.224685 3944451 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      -2025-01-20 22:59:31.224689: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.227897 3944451 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      -E0120 22:59:31.227913 3944451 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      -E0120 22:59:31.227916 3944451 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      -E0120 22:59:31.227918 3944451 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      -E0120 22:59:31.227921 3944451 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      -E0120 22:59:31.227924 3944451 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      -E0120 22:59:31.227927 3944451 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      -E0120 22:59:31.227930 3944451 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      -E0120 22:59:31.227932 3944451 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      -E0120 22:59:31.227935 3944451 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      -2025-01-20 22:59:31.227940: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.231171 3944451 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      -E0120 22:59:31.231197 3944451 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      -E0120 22:59:31.231200 3944451 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      -E0120 22:59:31.231203 3944451 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      -E0120 22:59:31.231206 3944451 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      -E0120 22:59:31.231209 3944451 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      -E0120 22:59:31.231213 3944451 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      -E0120 22:59:31.231216 3944451 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      -E0120 22:59:31.231219 3944451 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      -E0120 22:59:31.231221 3944451 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      -2025-01-20 22:59:31.231226: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.234459 3944451 buffer_comparator.cc:156] Difference at 256: 0, expected 1091.26
      -E0120 22:59:31.234474 3944451 buffer_comparator.cc:156] Difference at 257: 0, expected 1117.91
      -E0120 22:59:31.234477 3944451 buffer_comparator.cc:156] Difference at 258: 0, expected 1086.11
      -E0120 22:59:31.234480 3944451 buffer_comparator.cc:156] Difference at 259: 0, expected 1095.59
      -E0120 22:59:31.234483 3944451 buffer_comparator.cc:156] Difference at 260: 0, expected 1098.42
      -E0120 22:59:31.234486 3944451 buffer_comparator.cc:156] Difference at 261: 0, expected 1113.28
      -E0120 22:59:31.234488 3944451 buffer_comparator.cc:156] Difference at 262: 0, expected 1088.03
      -E0120 22:59:31.234491 3944451 buffer_comparator.cc:156] Difference at 263: 0, expected 1093.88
      -E0120 22:59:31.234494 3944451 buffer_comparator.cc:156] Difference at 264: 0, expected 1115.18
      -E0120 22:59:31.234497 3944451 buffer_comparator.cc:156] Difference at 265: 0, expected 1104.89
      -2025-01-20 22:59:31.234501: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -2025-01-20 22:59:35.131435: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_30', 216 bytes spill stores, 216 bytes spill loads
      +E0124 04:57:24.392006 1635447 buffer_comparator.cc:156] Difference at 16: 0, expected 1111.43
      +E0124 04:57:24.392070 1635447 buffer_comparator.cc:156] Difference at 17: 0, expected 1083.84
      +E0124 04:57:24.392074 1635447 buffer_comparator.cc:156] Difference at 18: 0, expected 1092.57
      +E0124 04:57:24.392079 1635447 buffer_comparator.cc:156] Difference at 19: 0, expected 1118.75
      +E0124 04:57:24.392083 1635447 buffer_comparator.cc:156] Difference at 20: 0, expected 1088.49
      +E0124 04:57:24.392087 1635447 buffer_comparator.cc:156] Difference at 21: 0, expected 1083.52
      +E0124 04:57:24.392091 1635447 buffer_comparator.cc:156] Difference at 22: 0, expected 1097.64
      +E0124 04:57:24.392095 1635447 buffer_comparator.cc:156] Difference at 23: 0, expected 1122.75
      +E0124 04:57:24.392098 1635447 buffer_comparator.cc:156] Difference at 24: 0, expected 1084.65
      +E0124 04:57:24.392102 1635447 buffer_comparator.cc:156] Difference at 25: 0, expected 1084.58
      +2025-01-24 04:57:24.392132: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.403031 1635447 buffer_comparator.cc:156] Difference at 64: 0, expected 1106.21
      +E0124 04:57:24.403047 1635447 buffer_comparator.cc:156] Difference at 65: 0, expected 1087.83
      +E0124 04:57:24.403051 1635447 buffer_comparator.cc:156] Difference at 66: 0, expected 1090.54
      +E0124 04:57:24.403054 1635447 buffer_comparator.cc:156] Difference at 67: 0, expected 1104.23
      +E0124 04:57:24.403056 1635447 buffer_comparator.cc:156] Difference at 68: 0, expected 1104.3
      +E0124 04:57:24.403059 1635447 buffer_comparator.cc:156] Difference at 69: 0, expected 1093.45
      +E0124 04:57:24.403062 1635447 buffer_comparator.cc:156] Difference at 70: 0, expected 1091.52
      +E0124 04:57:24.403065 1635447 buffer_comparator.cc:156] Difference at 71: 0, expected 1110.4
      +E0124 04:57:24.403068 1635447 buffer_comparator.cc:156] Difference at 72: 0, expected 1106.92
      +E0124 04:57:24.403071 1635447 buffer_comparator.cc:156] Difference at 73: 0, expected 1088.44
      +2025-01-24 04:57:24.403075: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.407521 1635447 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      +E0124 04:57:24.407535 1635447 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      +E0124 04:57:24.407538 1635447 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      +E0124 04:57:24.407541 1635447 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      +E0124 04:57:24.407544 1635447 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      +E0124 04:57:24.407547 1635447 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      +E0124 04:57:24.407549 1635447 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      +E0124 04:57:24.407552 1635447 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      +E0124 04:57:24.407555 1635447 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      +E0124 04:57:24.407558 1635447 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      +2025-01-24 04:57:24.407562: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +[… the "Difference at 128…137" block above repeats 13 more times with advancing timestamps, each followed by the same gemm_fusion_autotuner "Results do not match the reference" message; output continues …]
      +E0124 04:57:24.466368 1635447 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      +E0124 04:57:24.466371 1635447 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      +E0124 04:57:24.466374 1635447 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      +E0124 04:57:24.466377 1635447 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      +E0124 04:57:24.466379 1635447 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      +E0124 04:57:24.466382 1635447 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      +2025-01-24 04:57:24.466387: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.470526 1635447 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      +E0124 04:57:24.470539 1635447 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      +E0124 04:57:24.470542 1635447 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      +E0124 04:57:24.470545 1635447 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      +E0124 04:57:24.470548 1635447 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      +E0124 04:57:24.470551 1635447 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      +E0124 04:57:24.470554 1635447 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      +E0124 04:57:24.470556 1635447 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      +E0124 04:57:24.470559 1635447 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      +E0124 04:57:24.470562 1635447 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      +2025-01-24 04:57:24.470567: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.474657 1635447 buffer_comparator.cc:156] Difference at 256: 0, expected 1091.26
      +E0124 04:57:24.474672 1635447 buffer_comparator.cc:156] Difference at 257: 0, expected 1117.91
      +E0124 04:57:24.474675 1635447 buffer_comparator.cc:156] Difference at 258: 0, expected 1086.11
      +E0124 04:57:24.474678 1635447 buffer_comparator.cc:156] Difference at 259: 0, expected 1095.59
      +E0124 04:57:24.474681 1635447 buffer_comparator.cc:156] Difference at 260: 0, expected 1098.42
      +E0124 04:57:24.474683 1635447 buffer_comparator.cc:156] Difference at 261: 0, expected 1113.28
      +E0124 04:57:24.474686 1635447 buffer_comparator.cc:156] Difference at 262: 0, expected 1088.03
      +E0124 04:57:24.474689 1635447 buffer_comparator.cc:156] Difference at 263: 0, expected 1093.88
      +E0124 04:57:24.474694 1635447 buffer_comparator.cc:156] Difference at 264: 0, expected 1115.18
      +E0124 04:57:24.474696 1635447 buffer_comparator.cc:156] Difference at 265: 0, expected 1104.89
      +2025-01-24 04:57:24.474701: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.486858 1635447 buffer_comparator.cc:156] Difference at 112: 513.993, expected 949.79
      +E0124 04:57:24.486914 1635447 buffer_comparator.cc:156] Difference at 113: 357.807, expected 1213.3
      +E0124 04:57:24.486917 1635447 buffer_comparator.cc:156] Difference at 114: 585.471, expected 1837.44
      +E0124 04:57:24.486920 1635447 buffer_comparator.cc:156] Difference at 115: 420.444, expected 1600.27
      +E0124 04:57:24.486923 1635447 buffer_comparator.cc:156] Difference at 116: 302.398, expected 1117.2
      +E0124 04:57:24.486926 1635447 buffer_comparator.cc:156] Difference at 117: 386.144, expected 1790.83
      +E0124 04:57:24.486929 1635447 buffer_comparator.cc:156] Difference at 118: 587.071, expected 1291.05
      +E0124 04:57:24.486932 1635447 buffer_comparator.cc:156] Difference at 119: 508.94, expected 943.242
      +E0124 04:57:24.486936 1635447 buffer_comparator.cc:156] Difference at 120: 358.918, expected 1198.86
      +E0124 04:57:24.486938 1635447 buffer_comparator.cc:156] Difference at 121: 578.094, expected 1820.15
      +2025-01-24 04:57:24.486948: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.489855 1635447 buffer_comparator.cc:156] Difference at 112: 513.993, expected 949.79
      +E0124 04:57:24.489869 1635447 buffer_comparator.cc:156] Difference at 113: 357.807, expected 1213.3
      +E0124 04:57:24.489872 1635447 buffer_comparator.cc:156] Difference at 114: 585.471, expected 1837.44
      +E0124 04:57:24.489875 1635447 buffer_comparator.cc:156] Difference at 115: 420.444, expected 1600.27
      +E0124 04:57:24.489878 1635447 buffer_comparator.cc:156] Difference at 116: 302.398, expected 1117.2
      +E0124 04:57:24.489881 1635447 buffer_comparator.cc:156] Difference at 117: 386.144, expected 1790.83
      +E0124 04:57:24.489884 1635447 buffer_comparator.cc:156] Difference at 118: 587.071, expected 1291.05
      +E0124 04:57:24.489887 1635447 buffer_comparator.cc:156] Difference at 119: 508.94, expected 943.242
      +E0124 04:57:24.489890 1635447 buffer_comparator.cc:156] Difference at 120: 358.918, expected 1198.86
      +E0124 04:57:24.489893 1635447 buffer_comparator.cc:156] Difference at 121: 578.094, expected 1820.15
      +2025-01-24 04:57:24.489898: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.492857 1635447 buffer_comparator.cc:156] Difference at 112: 513.993, expected 949.79
      +E0124 04:57:24.492871 1635447 buffer_comparator.cc:156] Difference at 113: 357.807, expected 1213.3
      +E0124 04:57:24.492874 1635447 buffer_comparator.cc:156] Difference at 114: 585.471, expected 1837.44
      +E0124 04:57:24.492877 1635447 buffer_comparator.cc:156] Difference at 115: 420.444, expected 1600.27
      +E0124 04:57:24.492880 1635447 buffer_comparator.cc:156] Difference at 116: 302.398, expected 1117.2
      +E0124 04:57:24.492883 1635447 buffer_comparator.cc:156] Difference at 117: 386.144, expected 1790.83
      +E0124 04:57:24.492886 1635447 buffer_comparator.cc:156] Difference at 118: 587.071, expected 1291.05
      +E0124 04:57:24.492889 1635447 buffer_comparator.cc:156] Difference at 119: 508.94, expected 943.242
      +E0124 04:57:24.492892 1635447 buffer_comparator.cc:156] Difference at 120: 358.918, expected 1198.86
      +E0124 04:57:24.492895 1635447 buffer_comparator.cc:156] Difference at 121: 578.094, expected 1820.15
      +2025-01-24 04:57:24.492899: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.495770 1635447 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.95
      +E0124 04:57:24.495785 1635447 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      +E0124 04:57:24.495788 1635447 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.8
      +E0124 04:57:24.495792 1635447 buffer_comparator.cc:156] Difference at 224: 1187.54, expected 942.345
      +E0124 04:57:24.495795 1635447 buffer_comparator.cc:156] Difference at 225: 1035.45, expected 1208.53
      +E0124 04:57:24.495798 1635447 buffer_comparator.cc:156] Difference at 226: 725.657, expected 1824.94
      +E0124 04:57:24.495801 1635447 buffer_comparator.cc:156] Difference at 227: 1158.04, expected 1592.15
      +E0124 04:57:24.495804 1635447 buffer_comparator.cc:156] Difference at 228: 847.142, expected 1119.85
      +E0124 04:57:24.495806 1635447 buffer_comparator.cc:156] Difference at 229: 635.433, expected 1778.8
      +E0124 04:57:24.495809 1635447 buffer_comparator.cc:156] Difference at 230: 811.507, expected 1283.28
      +2025-01-24 04:57:24.495814: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.498765 1635447 buffer_comparator.cc:156] Difference at 224: 1187.54, expected 942.345
      +E0124 04:57:24.498780 1635447 buffer_comparator.cc:156] Difference at 225: 1035.45, expected 1208.53
      +E0124 04:57:24.498783 1635447 buffer_comparator.cc:156] Difference at 226: 725.657, expected 1824.94
      +E0124 04:57:24.498786 1635447 buffer_comparator.cc:156] Difference at 227: 1158.04, expected 1592.15
      +E0124 04:57:24.498789 1635447 buffer_comparator.cc:156] Difference at 228: 847.142, expected 1119.85
      +E0124 04:57:24.498792 1635447 buffer_comparator.cc:156] Difference at 229: 635.433, expected 1778.8
      +E0124 04:57:24.498795 1635447 buffer_comparator.cc:156] Difference at 230: 811.507, expected 1283.28
      +E0124 04:57:24.498798 1635447 buffer_comparator.cc:156] Difference at 231: 1220.34, expected 935.373
      +E0124 04:57:24.498801 1635447 buffer_comparator.cc:156] Difference at 232: 1066.07, expected 1192.72
      +E0124 04:57:24.498804 1635447 buffer_comparator.cc:156] Difference at 233: 743.671, expected 1803.13
      +2025-01-24 04:57:24.498809: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.501694 1635447 buffer_comparator.cc:156] Difference at 224: 1187.54, expected 942.345
      +E0124 04:57:24.501707 1635447 buffer_comparator.cc:156] Difference at 225: 1035.45, expected 1208.53
      +E0124 04:57:24.501710 1635447 buffer_comparator.cc:156] Difference at 226: 725.657, expected 1824.94
      +E0124 04:57:24.501713 1635447 buffer_comparator.cc:156] Difference at 227: 1158.04, expected 1592.15
      +E0124 04:57:24.501716 1635447 buffer_comparator.cc:156] Difference at 228: 847.142, expected 1119.85
      +E0124 04:57:24.501719 1635447 buffer_comparator.cc:156] Difference at 229: 635.433, expected 1778.8
      +E0124 04:57:24.501722 1635447 buffer_comparator.cc:156] Difference at 230: 811.507, expected 1283.28
      +E0124 04:57:24.501725 1635447 buffer_comparator.cc:156] Difference at 231: 1220.34, expected 935.373
      +E0124 04:57:24.501728 1635447 buffer_comparator.cc:156] Difference at 232: 1066.07, expected 1192.72
      +E0124 04:57:24.501730 1635447 buffer_comparator.cc:156] Difference at 233: 743.671, expected 1803.13
      +2025-01-24 04:57:24.501735: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.504267 1635447 buffer_comparator.cc:156] Difference at 224: 1187.54, expected 942.345
      +E0124 04:57:24.504281 1635447 buffer_comparator.cc:156] Difference at 225: 1035.45, expected 1208.53
      +E0124 04:57:24.504284 1635447 buffer_comparator.cc:156] Difference at 226: 725.657, expected 1824.94
      +E0124 04:57:24.504287 1635447 buffer_comparator.cc:156] Difference at 227: 1158.04, expected 1592.15
      +E0124 04:57:24.504290 1635447 buffer_comparator.cc:156] Difference at 228: 847.142, expected 1119.85
      +E0124 04:57:24.504293 1635447 buffer_comparator.cc:156] Difference at 229: 635.433, expected 1778.8
      +E0124 04:57:24.504307 1635447 buffer_comparator.cc:156] Difference at 230: 811.507, expected 1283.28
      +E0124 04:57:24.504310 1635447 buffer_comparator.cc:156] Difference at 231: 1220.34, expected 935.373
      +E0124 04:57:24.504313 1635447 buffer_comparator.cc:156] Difference at 232: 1066.07, expected 1192.72
      +E0124 04:57:24.504316 1635447 buffer_comparator.cc:156] Difference at 233: 743.671, expected 1803.13
      +2025-01-24 04:57:24.504321: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.506769 1635447 buffer_comparator.cc:156] Difference at 448: 1215.5, expected 948.676
      +E0124 04:57:24.506783 1635447 buffer_comparator.cc:156] Difference at 449: 1058.51, expected 1198.64
      +E0124 04:57:24.506786 1635447 buffer_comparator.cc:156] Difference at 450: 738.69, expected 1813.49
      +E0124 04:57:24.506789 1635447 buffer_comparator.cc:156] Difference at 451: 1187.68, expected 1575.23
      +E0124 04:57:24.506792 1635447 buffer_comparator.cc:156] Difference at 452: 862.098, expected 1104.71
      +E0124 04:57:24.506795 1635447 buffer_comparator.cc:156] Difference at 453: 623.212, expected 1764.87
      +E0124 04:57:24.506798 1635447 buffer_comparator.cc:156] Difference at 454: 798.796, expected 1269.69
      +E0124 04:57:24.506801 1635447 buffer_comparator.cc:156] Difference at 455: 1203.21, expected 952.925
      +E0124 04:57:24.506804 1635447 buffer_comparator.cc:156] Difference at 456: 1047.72, expected 1213.59
      +E0124 04:57:24.506806 1635447 buffer_comparator.cc:156] Difference at 457: 733.884, expected 1821.28
      +2025-01-24 04:57:24.506811: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.509228 1635447 buffer_comparator.cc:156] Difference at 448: 1215.5, expected 948.676
      +E0124 04:57:24.509244 1635447 buffer_comparator.cc:156] Difference at 449: 1058.51, expected 1198.64
      +E0124 04:57:24.509247 1635447 buffer_comparator.cc:156] Difference at 450: 738.69, expected 1813.49
      +E0124 04:57:24.509250 1635447 buffer_comparator.cc:156] Difference at 451: 1187.68, expected 1575.23
      +E0124 04:57:24.509253 1635447 buffer_comparator.cc:156] Difference at 452: 862.098, expected 1104.71
      +E0124 04:57:24.509256 1635447 buffer_comparator.cc:156] Difference at 453: 623.212, expected 1764.87
      +E0124 04:57:24.509259 1635447 buffer_comparator.cc:156] Difference at 454: 798.796, expected 1269.69
      +E0124 04:57:24.509262 1635447 buffer_comparator.cc:156] Difference at 455: 1203.21, expected 952.925
      +E0124 04:57:24.509265 1635447 buffer_comparator.cc:156] Difference at 456: 1047.72, expected 1213.59
      +E0124 04:57:24.509268 1635447 buffer_comparator.cc:156] Difference at 457: 733.884, expected 1821.28
      +2025-01-24 04:57:24.509272: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.511686 1635447 buffer_comparator.cc:156] Difference at 448: 1215.5, expected 948.676
      +E0124 04:57:24.511699 1635447 buffer_comparator.cc:156] Difference at 449: 1058.51, expected 1198.64
      +E0124 04:57:24.511702 1635447 buffer_comparator.cc:156] Difference at 450: 738.69, expected 1813.49
      +E0124 04:57:24.511705 1635447 buffer_comparator.cc:156] Difference at 451: 1187.68, expected 1575.23
      +E0124 04:57:24.511708 1635447 buffer_comparator.cc:156] Difference at 452: 862.098, expected 1104.71
      +E0124 04:57:24.511711 1635447 buffer_comparator.cc:156] Difference at 453: 623.212, expected 1764.87
      +E0124 04:57:24.511714 1635447 buffer_comparator.cc:156] Difference at 454: 798.796, expected 1269.69
      +E0124 04:57:24.511716 1635447 buffer_comparator.cc:156] Difference at 455: 1203.21, expected 952.925
      +E0124 04:57:24.511719 1635447 buffer_comparator.cc:156] Difference at 456: 1047.72, expected 1213.59
      +E0124 04:57:24.511722 1635447 buffer_comparator.cc:156] Difference at 457: 733.884, expected 1821.28
      +2025-01-24 04:57:24.511727: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.514123 1635447 buffer_comparator.cc:156] Difference at 448: 1214.22, expected 948.676
      +E0124 04:57:24.514139 1635447 buffer_comparator.cc:156] Difference at 449: 1056.45, expected 1198.64
      +E0124 04:57:24.514142 1635447 buffer_comparator.cc:156] Difference at 450: 736.847, expected 1813.49
      +E0124 04:57:24.514145 1635447 buffer_comparator.cc:156] Difference at 451: 1184.91, expected 1575.23
      +E0124 04:57:24.514148 1635447 buffer_comparator.cc:156] Difference at 452: 859.942, expected 1104.71
      +E0124 04:57:24.514151 1635447 buffer_comparator.cc:156] Difference at 453: 620.77, expected 1764.87
      +E0124 04:57:24.514154 1635447 buffer_comparator.cc:156] Difference at 454: 796.75, expected 1269.69
      +E0124 04:57:24.514157 1635447 buffer_comparator.cc:156] Difference at 455: 1201.02, expected 952.925
      +E0124 04:57:24.514160 1635447 buffer_comparator.cc:156] Difference at 456: 1045.45, expected 1213.59
      +E0124 04:57:24.514162 1635447 buffer_comparator.cc:156] Difference at 457: 732.834, expected 1821.28
      +2025-01-24 04:57:24.514167: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.516575 1635447 buffer_comparator.cc:156] Difference at 448: 1214.22, expected 948.676
      +E0124 04:57:24.516588 1635447 buffer_comparator.cc:156] Difference at 449: 1056.45, expected 1198.64
      +E0124 04:57:24.516591 1635447 buffer_comparator.cc:156] Difference at 450: 736.847, expected 1813.49
      +E0124 04:57:24.516594 1635447 buffer_comparator.cc:156] Difference at 451: 1184.91, expected 1575.23
      +E0124 04:57:24.516597 1635447 buffer_comparator.cc:156] Difference at 452: 859.942, expected 1104.71
      +E0124 04:57:24.516599 1635447 buffer_comparator.cc:156] Difference at 453: 620.77, expected 1764.87
      +E0124 04:57:24.516602 1635447 buffer_comparator.cc:156] Difference at 454: 796.75, expected 1269.69
      +E0124 04:57:24.516605 1635447 buffer_comparator.cc:156] Difference at 455: 1201.02, expected 952.925
      +E0124 04:57:24.516608 1635447 buffer_comparator.cc:156] Difference at 456: 1045.45, expected 1213.59
      +E0124 04:57:24.516611 1635447 buffer_comparator.cc:156] Difference at 457: 732.834, expected 1821.28
      +2025-01-24 04:57:24.516616: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.519012 1635447 buffer_comparator.cc:156] Difference at 448: 1214.22, expected 948.676
      +E0124 04:57:24.519025 1635447 buffer_comparator.cc:156] Difference at 449: 1056.45, expected 1198.64
      +E0124 04:57:24.519028 1635447 buffer_comparator.cc:156] Difference at 450: 736.847, expected 1813.49
      +E0124 04:57:24.519031 1635447 buffer_comparator.cc:156] Difference at 451: 1184.91, expected 1575.23
      +E0124 04:57:24.519034 1635447 buffer_comparator.cc:156] Difference at 452: 859.942, expected 1104.71
      +E0124 04:57:24.519037 1635447 buffer_comparator.cc:156] Difference at 453: 620.77, expected 1764.87
      +E0124 04:57:24.519040 1635447 buffer_comparator.cc:156] Difference at 454: 796.75, expected 1269.69
      +E0124 04:57:24.519043 1635447 buffer_comparator.cc:156] Difference at 455: 1201.02, expected 952.925
      +E0124 04:57:24.519046 1635447 buffer_comparator.cc:156] Difference at 456: 1045.45, expected 1213.59
      +E0124 04:57:24.519049 1635447 buffer_comparator.cc:156] Difference at 457: 732.834, expected 1821.28
      +2025-01-24 04:57:24.519053: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.521568 1635447 buffer_comparator.cc:156] Difference at 896: 1199.33, expected 958.128
      +E0124 04:57:24.521580 1635447 buffer_comparator.cc:156] Difference at 897: 1048.92, expected 1218.67
      +E0124 04:57:24.521583 1635447 buffer_comparator.cc:156] Difference at 898: 734.914, expected 1826.79
      +E0124 04:57:24.521586 1635447 buffer_comparator.cc:156] Difference at 899: 1178.58, expected 1593.43
      +E0124 04:57:24.521590 1635447 buffer_comparator.cc:156] Difference at 900: 842.885, expected 1119.04
      +E0124 04:57:24.521593 1635447 buffer_comparator.cc:156] Difference at 901: 628.641, expected 1796.71
      +E0124 04:57:24.521596 1635447 buffer_comparator.cc:156] Difference at 902: 793.715, expected 1279.87
      +E0124 04:57:24.521599 1635447 buffer_comparator.cc:156] Difference at 903: 1203.49, expected 941.479
      +E0124 04:57:24.521602 1635447 buffer_comparator.cc:156] Difference at 904: 1050.86, expected 1202.97
      +E0124 04:57:24.521605 1635447 buffer_comparator.cc:156] Difference at 905: 741.148, expected 1817.41
      +2025-01-24 04:57:24.521610: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.524110 1635447 buffer_comparator.cc:156] Difference at 896: 1199.33, expected 958.128
      +E0124 04:57:24.524126 1635447 buffer_comparator.cc:156] Difference at 897: 1048.92, expected 1218.67
      +E0124 04:57:24.524129 1635447 buffer_comparator.cc:156] Difference at 898: 734.914, expected 1826.79
      +E0124 04:57:24.524132 1635447 buffer_comparator.cc:156] Difference at 899: 1178.58, expected 1593.43
      +E0124 04:57:24.524135 1635447 buffer_comparator.cc:156] Difference at 900: 842.885, expected 1119.04
      +E0124 04:57:24.524138 1635447 buffer_comparator.cc:156] Difference at 901: 628.641, expected 1796.71
      +E0124 04:57:24.524141 1635447 buffer_comparator.cc:156] Difference at 902: 793.715, expected 1279.87
      +E0124 04:57:24.524144 1635447 buffer_comparator.cc:156] Difference at 903: 1203.49, expected 941.479
      +E0124 04:57:24.524147 1635447 buffer_comparator.cc:156] Difference at 904: 1050.86, expected 1202.97
      +E0124 04:57:24.524149 1635447 buffer_comparator.cc:156] Difference at 905: 741.148, expected 1817.41
      +2025-01-24 04:57:24.524154: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.526553 1635447 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.95
      +E0124 04:57:24.526566 1635447 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      +E0124 04:57:24.526570 1635447 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.8
      +E0124 04:57:24.526574 1635447 buffer_comparator.cc:156] Difference at 266: 1047.35, expected 934.413
      +E0124 04:57:24.526577 1635447 buffer_comparator.cc:156] Difference at 270: 1246.8, expected 1101.54
      +E0124 04:57:24.526580 1635447 buffer_comparator.cc:156] Difference at 417: 1222.47, expected 1095.88
      +E0124 04:57:24.526583 1635447 buffer_comparator.cc:156] Difference at 521: 1725.84, expected 1550.38
      +E0124 04:57:24.526586 1635447 buffer_comparator.cc:156] Difference at 522: 1233.24, expected 1093.76
      +E0124 04:57:24.526590 1635447 buffer_comparator.cc:156] Difference at 620: 1247.46, expected 1121.08
      +E0124 04:57:24.526593 1635447 buffer_comparator.cc:156] Difference at 690: 1251.43, expected 1120.61
      +2025-01-24 04:57:24.526597: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.528978 1635447 buffer_comparator.cc:156] Difference at 0: 1057.27, expected 928.593
      +E0124 04:57:24.528990 1635447 buffer_comparator.cc:156] Difference at 1: 1319.15, expected 1186.89
      +E0124 04:57:24.528993 1635447 buffer_comparator.cc:156] Difference at 2: 2004.43, expected 1796.77
      +E0124 04:57:24.528996 1635447 buffer_comparator.cc:156] Difference at 3: 1745.74, expected 1565.84
      +E0124 04:57:24.528999 1635447 buffer_comparator.cc:156] Difference at 4: 1252.2, expected 1095.49
      +E0124 04:57:24.529002 1635447 buffer_comparator.cc:156] Difference at 7: 1175.57, expected 951.95
      +E0124 04:57:24.529005 1635447 buffer_comparator.cc:156] Difference at 8: 1398.75, expected 1216.24
      +E0124 04:57:24.529008 1635447 buffer_comparator.cc:156] Difference at 9: 2125.62, expected 1833.76
      +E0124 04:57:24.529011 1635447 buffer_comparator.cc:156] Difference at 10: 1878.38, expected 1592.37
      +E0124 04:57:24.529015 1635447 buffer_comparator.cc:156] Difference at 11: 1362.67, expected 1121.95
      +2025-01-24 04:57:24.529020: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.531492 1635447 buffer_comparator.cc:156] Difference at 896: 1204.66, expected 958.128
      +E0124 04:57:24.531508 1635447 buffer_comparator.cc:156] Difference at 897: 1053.28, expected 1218.67
      +E0124 04:57:24.531511 1635447 buffer_comparator.cc:156] Difference at 898: 740.998, expected 1826.79
      +E0124 04:57:24.531514 1635447 buffer_comparator.cc:156] Difference at 899: 1185.71, expected 1593.43
      +E0124 04:57:24.531517 1635447 buffer_comparator.cc:156] Difference at 900: 850.478, expected 1119.04
      +E0124 04:57:24.531520 1635447 buffer_comparator.cc:156] Difference at 901: 634.712, expected 1796.71
      +E0124 04:57:24.531523 1635447 buffer_comparator.cc:156] Difference at 902: 799.593, expected 1279.87
      +E0124 04:57:24.531526 1635447 buffer_comparator.cc:156] Difference at 903: 1208.15, expected 941.479
      +E0124 04:57:24.531529 1635447 buffer_comparator.cc:156] Difference at 904: 1055.09, expected 1202.97
      +E0124 04:57:24.531531 1635447 buffer_comparator.cc:156] Difference at 905: 746.267, expected 1817.41
      +2025-01-24 04:57:24.531536: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.533971 1635447 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      +E0124 04:57:24.533985 1635447 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      +E0124 04:57:24.533988 1635447 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      +E0124 04:57:24.533991 1635447 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      +E0124 04:57:24.533994 1635447 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      +E0124 04:57:24.533997 1635447 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      +E0124 04:57:24.534000 1635447 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      +E0124 04:57:24.534003 1635447 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      +E0124 04:57:24.534006 1635447 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      +E0124 04:57:24.534009 1635447 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      +2025-01-24 04:57:24.534014: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.536420 1635447 buffer_comparator.cc:156] Difference at 896: 1199.33, expected 958.128
      +E0124 04:57:24.536433 1635447 buffer_comparator.cc:156] Difference at 897: 1048.92, expected 1218.67
      +E0124 04:57:24.536436 1635447 buffer_comparator.cc:156] Difference at 898: 734.914, expected 1826.79
      +E0124 04:57:24.536439 1635447 buffer_comparator.cc:156] Difference at 899: 1178.58, expected 1593.43
      +E0124 04:57:24.536442 1635447 buffer_comparator.cc:156] Difference at 900: 842.885, expected 1119.04
      +E0124 04:57:24.536445 1635447 buffer_comparator.cc:156] Difference at 901: 628.641, expected 1796.71
      +E0124 04:57:24.536448 1635447 buffer_comparator.cc:156] Difference at 902: 793.715, expected 1279.87
      +E0124 04:57:24.536451 1635447 buffer_comparator.cc:156] Difference at 903: 1203.49, expected 941.479
      +E0124 04:57:24.536454 1635447 buffer_comparator.cc:156] Difference at 904: 1050.86, expected 1202.97
      +E0124 04:57:24.536457 1635447 buffer_comparator.cc:156] Difference at 905: 741.148, expected 1817.41
      +2025-01-24 04:57:24.536461: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.538923 1635447 buffer_comparator.cc:156] Difference at 896: 1199.33, expected 958.128
      +E0124 04:57:24.538936 1635447 buffer_comparator.cc:156] Difference at 897: 1048.92, expected 1218.67
      +E0124 04:57:24.538941 1635447 buffer_comparator.cc:156] Difference at 898: 734.914, expected 1826.79
      +E0124 04:57:24.538944 1635447 buffer_comparator.cc:156] Difference at 899: 1178.58, expected 1593.43
      +E0124 04:57:24.538947 1635447 buffer_comparator.cc:156] Difference at 900: 842.885, expected 1119.04
      +E0124 04:57:24.538950 1635447 buffer_comparator.cc:156] Difference at 901: 628.641, expected 1796.71
      +E0124 04:57:24.538953 1635447 buffer_comparator.cc:156] Difference at 902: 793.715, expected 1279.87
      +E0124 04:57:24.538955 1635447 buffer_comparator.cc:156] Difference at 903: 1203.49, expected 941.479
      +E0124 04:57:24.538958 1635447 buffer_comparator.cc:156] Difference at 904: 1050.86, expected 1202.97
      +E0124 04:57:24.538961 1635447 buffer_comparator.cc:156] Difference at 905: 741.148, expected 1817.41
      +2025-01-24 04:57:24.538966: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.541431 1635447 buffer_comparator.cc:156] Difference at 896: 1199.33, expected 958.128
      +E0124 04:57:24.541445 1635447 buffer_comparator.cc:156] Difference at 897: 1048.92, expected 1218.67
      +E0124 04:57:24.541448 1635447 buffer_comparator.cc:156] Difference at 898: 734.914, expected 1826.79
      +E0124 04:57:24.541451 1635447 buffer_comparator.cc:156] Difference at 899: 1178.58, expected 1593.43
      +E0124 04:57:24.541454 1635447 buffer_comparator.cc:156] Difference at 900: 842.885, expected 1119.04
      +E0124 04:57:24.541457 1635447 buffer_comparator.cc:156] Difference at 901: 628.641, expected 1796.71
      +E0124 04:57:24.541460 1635447 buffer_comparator.cc:156] Difference at 902: 793.715, expected 1279.87
      +E0124 04:57:24.541463 1635447 buffer_comparator.cc:156] Difference at 903: 1203.49, expected 941.479
      +E0124 04:57:24.541466 1635447 buffer_comparator.cc:156] Difference at 904: 1050.86, expected 1202.97
      +E0124 04:57:24.541469 1635447 buffer_comparator.cc:156] Difference at 905: 741.148, expected 1817.41
      +2025-01-24 04:57:24.541474: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.544077 1635447 buffer_comparator.cc:156] Difference at 1792: 1217.57, expected 926.778
      +E0124 04:57:24.544090 1635447 buffer_comparator.cc:156] Difference at 1793: 1059.66, expected 1190.76
      +E0124 04:57:24.544093 1635447 buffer_comparator.cc:156] Difference at 1794: 744.124, expected 1807.71
      +E0124 04:57:24.544096 1635447 buffer_comparator.cc:156] Difference at 1795: 1186.2, expected 1565.59
      +E0124 04:57:24.544099 1635447 buffer_comparator.cc:156] Difference at 1796: 853.454, expected 1101.04
      +E0124 04:57:24.544102 1635447 buffer_comparator.cc:156] Difference at 1797: 626.917, expected 1756.21
      +E0124 04:57:24.544105 1635447 buffer_comparator.cc:156] Difference at 1798: 800.954, expected 1272.34
      +E0124 04:57:24.544108 1635447 buffer_comparator.cc:156] Difference at 1799: 1211.6, expected 944.465
      +E0124 04:57:24.544111 1635447 buffer_comparator.cc:156] Difference at 1800: 1058.71, expected 1200.58
      +E0124 04:57:24.544114 1635447 buffer_comparator.cc:156] Difference at 1801: 743.269, expected 1808.36
      +2025-01-24 04:57:24.544118: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.546747 1635447 buffer_comparator.cc:156] Difference at 1792: 1217.57, expected 926.778
      +E0124 04:57:24.546760 1635447 buffer_comparator.cc:156] Difference at 1793: 1059.66, expected 1190.76
      +E0124 04:57:24.546764 1635447 buffer_comparator.cc:156] Difference at 1794: 744.124, expected 1807.71
      +E0124 04:57:24.546767 1635447 buffer_comparator.cc:156] Difference at 1795: 1186.2, expected 1565.59
      +E0124 04:57:24.546770 1635447 buffer_comparator.cc:156] Difference at 1796: 853.454, expected 1101.04
      +E0124 04:57:24.546773 1635447 buffer_comparator.cc:156] Difference at 1797: 626.917, expected 1756.21
      +E0124 04:57:24.546777 1635447 buffer_comparator.cc:156] Difference at 1798: 800.954, expected 1272.34
      +E0124 04:57:24.546780 1635447 buffer_comparator.cc:156] Difference at 1799: 1211.6, expected 944.465
      +E0124 04:57:24.546783 1635447 buffer_comparator.cc:156] Difference at 1800: 1058.71, expected 1200.58
      +E0124 04:57:24.546786 1635447 buffer_comparator.cc:156] Difference at 1801: 743.269, expected 1808.36
      +2025-01-24 04:57:24.546791: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.549367 1635447 buffer_comparator.cc:156] Difference at 1792: 1217.57, expected 926.778
      +E0124 04:57:24.549379 1635447 buffer_comparator.cc:156] Difference at 1793: 1059.66, expected 1190.76
      +E0124 04:57:24.549382 1635447 buffer_comparator.cc:156] Difference at 1794: 744.124, expected 1807.71
      +E0124 04:57:24.549386 1635447 buffer_comparator.cc:156] Difference at 1795: 1186.2, expected 1565.59
      +E0124 04:57:24.549389 1635447 buffer_comparator.cc:156] Difference at 1796: 853.454, expected 1101.04
      +E0124 04:57:24.549392 1635447 buffer_comparator.cc:156] Difference at 1797: 626.917, expected 1756.21
      +E0124 04:57:24.549395 1635447 buffer_comparator.cc:156] Difference at 1798: 800.954, expected 1272.34
      +E0124 04:57:24.549397 1635447 buffer_comparator.cc:156] Difference at 1799: 1211.6, expected 944.465
      +E0124 04:57:24.549400 1635447 buffer_comparator.cc:156] Difference at 1800: 1058.71, expected 1200.58
      +E0124 04:57:24.549403 1635447 buffer_comparator.cc:156] Difference at 1801: 743.269, expected 1808.36
      +2025-01-24 04:57:24.549408: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +2025-01-24 04:57:28.024854: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_30', 216 bytes spill stores, 216 bytes spill loads
       
      -E0120 22:59:35.144561 3944451 buffer_comparator.cc:156] Difference at 112: 3.96357, expected 1872.36
      -E0120 22:59:35.144631 3944451 buffer_comparator.cc:156] Difference at 113: 4.08387, expected 1994.81
      -E0120 22:59:35.144634 3944451 buffer_comparator.cc:156] Difference at 114: 3.43697, expected 964.469
      -E0120 22:59:35.144638 3944451 buffer_comparator.cc:156] Difference at 115: 3.49205, expected 1581.68
      -E0120 22:59:35.144641 3944451 buffer_comparator.cc:156] Difference at 116: 4.636, expected 1238.51
      -E0120 22:59:35.144644 3944451 buffer_comparator.cc:156] Difference at 117: 3.27176, expected 1214.35
      -E0120 22:59:35.144647 3944451 buffer_comparator.cc:156] Difference at 118: 4.36485, expected 1823.85
      -E0120 22:59:35.144650 3944451 buffer_comparator.cc:156] Difference at 119: 4.08502, expected 1866.65
      -E0120 22:59:35.144653 3944451 buffer_comparator.cc:156] Difference at 120: 4.00569, expected 1990.53
      -E0120 22:59:35.144656 3944451 buffer_comparator.cc:156] Difference at 121: 3.21014, expected 963.479
      -2025-01-20 22:59:35.144669: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.147349 3944451 buffer_comparator.cc:156] Difference at 0: 2299.91, expected 1852.66
      -E0120 22:59:35.147362 3944451 buffer_comparator.cc:156] Difference at 1: 2365.05, expected 1976.49
      -E0120 22:59:35.147366 3944451 buffer_comparator.cc:156] Difference at 2: 1113.52, expected 959.045
      -E0120 22:59:35.147369 3944451 buffer_comparator.cc:156] Difference at 3: 1907.18, expected 1563.86
      -E0120 22:59:35.147372 3944451 buffer_comparator.cc:156] Difference at 4: 1468.14, expected 1234.14
      -E0120 22:59:35.147375 3944451 buffer_comparator.cc:156] Difference at 5: 1437.09, expected 1193.4
      -E0120 22:59:35.147378 3944451 buffer_comparator.cc:156] Difference at 6: 2153.91, expected 1811.25
      -E0120 22:59:35.147381 3944451 buffer_comparator.cc:156] Difference at 14: 2152.45, expected 1862.99
      -E0120 22:59:35.147384 3944451 buffer_comparator.cc:156] Difference at 15: 2224.9, expected 1998.11
      -E0120 22:59:35.147389 3944451 buffer_comparator.cc:156] Difference at 17: 1795.48, expected 1583.4
      -2025-01-20 22:59:35.147394: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.150038 3944451 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      -E0120 22:59:35.150058 3944451 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      -E0120 22:59:35.150061 3944451 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      -E0120 22:59:35.150064 3944451 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      -E0120 22:59:35.150067 3944451 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      -E0120 22:59:35.150070 3944451 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      -E0120 22:59:35.150073 3944451 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      -E0120 22:59:35.150076 3944451 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      -E0120 22:59:35.150079 3944451 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      -E0120 22:59:35.150081 3944451 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      -2025-01-20 22:59:35.150086: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.152723 3944451 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      -E0120 22:59:35.152734 3944451 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      -E0120 22:59:35.152738 3944451 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      -E0120 22:59:35.152741 3944451 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      -E0120 22:59:35.152744 3944451 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      -E0120 22:59:35.152746 3944451 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      -E0120 22:59:35.152749 3944451 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      -E0120 22:59:35.152752 3944451 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      -E0120 22:59:35.152755 3944451 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      -E0120 22:59:35.152758 3944451 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      -2025-01-20 22:59:35.152763: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.155412 3944451 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      -E0120 22:59:35.155425 3944451 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      -E0120 22:59:35.155428 3944451 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      -E0120 22:59:35.155431 3944451 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      -E0120 22:59:35.155434 3944451 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      -E0120 22:59:35.155436 3944451 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      -E0120 22:59:35.155439 3944451 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      -E0120 22:59:35.155442 3944451 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      -E0120 22:59:35.155445 3944451 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      -E0120 22:59:35.155448 3944451 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      -2025-01-20 22:59:35.155453: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.158094 3944451 buffer_comparator.cc:156] Difference at 0: 2999.66, expected 1852.66
      -E0120 22:59:35.158106 3944451 buffer_comparator.cc:156] Difference at 1: 3136.74, expected 1976.49
      -E0120 22:59:35.158112 3944451 buffer_comparator.cc:156] Difference at 2: 1562.51, expected 959.045
      -E0120 22:59:35.158115 3944451 buffer_comparator.cc:156] Difference at 3: 2497.23, expected 1563.86
      -E0120 22:59:35.158118 3944451 buffer_comparator.cc:156] Difference at 4: 1995.79, expected 1234.14
      -E0120 22:59:35.158121 3944451 buffer_comparator.cc:156] Difference at 5: 1957.18, expected 1193.4
      -E0120 22:59:35.158124 3944451 buffer_comparator.cc:156] Difference at 6: 2863.57, expected 1811.25
      -E0120 22:59:35.158127 3944451 buffer_comparator.cc:156] Difference at 7: 2789.26, expected 1869.16
      -E0120 22:59:35.158130 3944451 buffer_comparator.cc:156] Difference at 8: 2929.61, expected 1993.31
      -E0120 22:59:35.158133 3944451 buffer_comparator.cc:156] Difference at 9: 1452.92, expected 968.067
      -2025-01-20 22:59:35.158138: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.160769 3944451 buffer_comparator.cc:156] Difference at 112: 3.96357, expected 1872.36
      -E0120 22:59:35.160780 3944451 buffer_comparator.cc:156] Difference at 113: 4.08387, expected 1994.81
      -E0120 22:59:35.160783 3944451 buffer_comparator.cc:156] Difference at 114: 3.43697, expected 964.469
      -E0120 22:59:35.160786 3944451 buffer_comparator.cc:156] Difference at 115: 3.49205, expected 1581.68
      -E0120 22:59:35.160789 3944451 buffer_comparator.cc:156] Difference at 116: 4.636, expected 1238.51
      -E0120 22:59:35.160792 3944451 buffer_comparator.cc:156] Difference at 117: 3.27176, expected 1214.35
      -E0120 22:59:35.160795 3944451 buffer_comparator.cc:156] Difference at 118: 4.36485, expected 1823.85
      -E0120 22:59:35.160798 3944451 buffer_comparator.cc:156] Difference at 119: 4.08502, expected 1866.65
      -E0120 22:59:35.160801 3944451 buffer_comparator.cc:156] Difference at 120: 4.00569, expected 1990.53
      -E0120 22:59:35.160804 3944451 buffer_comparator.cc:156] Difference at 121: 3.21014, expected 963.479
      -2025-01-20 22:59:35.160808: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.163529 3944451 buffer_comparator.cc:156] Difference at 224: 5.86135, expected 1844.62
      -E0120 22:59:35.163541 3944451 buffer_comparator.cc:156] Difference at 225: 4.06782, expected 1964.03
      -E0120 22:59:35.163544 3944451 buffer_comparator.cc:156] Difference at 226: 4.13873, expected 949.238
      -E0120 22:59:35.163547 3944451 buffer_comparator.cc:156] Difference at 227: 5.5797, expected 1560.45
      -E0120 22:59:35.163550 3944451 buffer_comparator.cc:156] Difference at 228: 5.25988, expected 1215.36
      -E0120 22:59:35.163553 3944451 buffer_comparator.cc:156] Difference at 229: 5.30797, expected 1186.63
      -E0120 22:59:35.163556 3944451 buffer_comparator.cc:156] Difference at 230: 4.14199, expected 1795.53
      -E0120 22:59:35.163558 3944451 buffer_comparator.cc:156] Difference at 231: 3.45867, expected 1837.81
      -E0120 22:59:35.163561 3944451 buffer_comparator.cc:156] Difference at 232: 3.85107, expected 1966.39
      -E0120 22:59:35.163564 3944451 buffer_comparator.cc:156] Difference at 233: 3.28131, expected 950.028
      -2025-01-20 22:59:35.163569: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.166200 3944451 buffer_comparator.cc:156] Difference at 224: 3.22034, expected 1844.62
      -E0120 22:59:35.166212 3944451 buffer_comparator.cc:156] Difference at 225: 2.49684, expected 1964.03
      -E0120 22:59:35.166215 3944451 buffer_comparator.cc:156] Difference at 226: 1.42502, expected 949.238
      -E0120 22:59:35.166218 3944451 buffer_comparator.cc:156] Difference at 227: 2.53522, expected 1560.45
      -E0120 22:59:35.166221 3944451 buffer_comparator.cc:156] Difference at 228: 2.01038, expected 1215.36
      -E0120 22:59:35.166224 3944451 buffer_comparator.cc:156] Difference at 229: 2.60944, expected 1186.63
      -Epoch   1	Train Loss: 15.782338	Train Acc: 21.4286%	Val Loss: 7.079204	Val Acc: 23.8000%
      -Epoch   2	Train Loss: 8.093776	Train Acc: 22.1429%	Val Loss: 3.137945	Val Acc: 28.0000%
      -Epoch   3	Train Loss: 2.983344	Train Acc: 43.5714%	Val Loss: 1.921002	Val Acc: 39.6000%
      -Epoch   4	Train Loss: 1.953735	Train Acc: 58.5714%	Val Loss: 2.015625	Val Acc: 42.0000%
      -Epoch   5	Train Loss: 1.699691	Train Acc: 62.8571%	Val Loss: 1.914729	Val Acc: 44.4000%
      -Epoch   6	Train Loss: 1.365187	Train Acc: 70.7143%	Val Loss: 1.669696	Val Acc: 52.6000%
      -Epoch   7	Train Loss: 1.127103	Train Acc: 73.5714%	Val Loss: 1.538632	Val Acc: 58.6000%
      -Epoch   8	Train Loss: 0.973196	Train Acc: 74.2857%	Val Loss: 1.553579	Val Acc: 60.4000%
      -Epoch   9	Train Loss: 0.915362	Train Acc: 75.0000%	Val Loss: 1.562308	Val Acc: 59.4000%
      -Epoch  10	Train Loss: 0.901013	Train Acc: 79.2857%	Val Loss: 1.568935	Val Acc: 59.8000%
      -Epoch  11	Train Loss: 0.794685	Train Acc: 80.0000%	Val Loss: 1.644144	Val Acc: 57.6000%
      -Epoch  12	Train Loss: 0.754806	Train Acc: 80.7143%	Val Loss: 1.734306	Val Acc: 56.2000%
      -Epoch  13	Train Loss: 0.725162	Train Acc: 81.4286%	Val Loss: 1.803399	Val Acc: 57.4000%
      -Epoch  14	Train Loss: 0.691019	Train Acc: 82.8571%	Val Loss: 1.826088	Val Acc: 57.6000%
      -Epoch  15	Train Loss: 0.648507	Train Acc: 84.2857%	Val Loss: 1.802276	Val Acc: 58.0000%
      -Epoch  16	Train Loss: 0.599733	Train Acc: 86.4286%	Val Loss: 1.755903	Val Acc: 59.0000%
      -Epoch  17	Train Loss: 0.569892	Train Acc: 86.4286%	Val Loss: 1.712767	Val Acc: 59.8000%
      -Epoch  18	Train Loss: 0.556019	Train Acc: 86.4286%	Val Loss: 1.689576	Val Acc: 60.6000%
      -Epoch  19	Train Loss: 0.542816	Train Acc: 87.8571%	Val Loss: 1.693934	Val Acc: 61.0000%
      -Epoch  20	Train Loss: 0.514605	Train Acc: 87.8571%	Val Loss: 1.706551	Val Acc: 61.2000%
      -Epoch  21	Train Loss: 0.496801	Train Acc: 87.8571%	Val Loss: 1.721584	Val Acc: 61.6000%
      -Epoch  22	Train Loss: 0.483525	Train Acc: 87.8571%	Val Loss: 1.733926	Val Acc: 62.0000%
      -Epoch  23	Train Loss: 0.472684	Train Acc: 89.2857%	Val Loss: 1.741387	Val Acc: 62.8000%
      -Epoch  24	Train Loss: 0.462024	Train Acc: 90.0000%	Val Loss: 1.743580	Val Acc: 63.4000%
      -Epoch  25	Train Loss: 0.449647	Train Acc: 90.0000%	Val Loss: 1.739714	Val Acc: 64.2000%
      -Epoch  26	Train Loss: 0.435166	Train Acc: 90.0000%	Val Loss: 1.733291	Val Acc: 64.4000%
      -Epoch  27	Train Loss: 0.419638	Train Acc: 91.4286%	Val Loss: 1.727418	Val Acc: 64.8000%
      -Early Stopping at Epoch 27
      -2025-01-20 23:00:08.518435: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_35_0', 48 bytes spill stores, 48 bytes spill loads
      +E0124 04:57:28.471448 1635447 buffer_comparator.cc:156] Difference at 112: 3.96357, expected 1872.36
      +E0124 04:57:28.471523 1635447 buffer_comparator.cc:156] Difference at 113: 4.08387, expected 1994.81
      +E0124 04:57:28.471527 1635447 buffer_comparator.cc:156] Difference at 114: 3.43697, expected 964.469
      +E0124 04:57:28.471532 1635447 buffer_comparator.cc:156] Difference at 115: 3.49205, expected 1581.68
      +E0124 04:57:28.471536 1635447 buffer_comparator.cc:156] Difference at 116: 4.636, expected 1238.51
      +E0124 04:57:28.471540 1635447 buffer_comparator.cc:156] Difference at 117: 3.27176, expected 1214.35
      +E0124 04:57:28.471544 1635447 buffer_comparator.cc:156] Difference at 118: 4.36485, expected 1823.85
      +E0124 04:57:28.471548 1635447 buffer_comparator.cc:156] Difference at 119: 4.08502, expected 1866.65
      +E0124 04:57:28.471552 1635447 buffer_comparator.cc:156] Difference at 120: 4.00569, expected 1990.53
      +E0124 04:57:28.471556 1635447 buffer_comparator.cc:156] Difference at 121: 3.21014, expected 963.479
      +2025-01-24 04:57:28.471568: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.474398 1635447 buffer_comparator.cc:156] Difference at 0: 2299.91, expected 1852.66
      +E0124 04:57:28.474414 1635447 buffer_comparator.cc:156] Difference at 1: 2365.05, expected 1976.49
      +E0124 04:57:28.474418 1635447 buffer_comparator.cc:156] Difference at 2: 1113.52, expected 959.045
      +E0124 04:57:28.474423 1635447 buffer_comparator.cc:156] Difference at 3: 1907.18, expected 1563.86
      +E0124 04:57:28.474427 1635447 buffer_comparator.cc:156] Difference at 4: 1468.14, expected 1234.14
      +E0124 04:57:28.474431 1635447 buffer_comparator.cc:156] Difference at 5: 1437.09, expected 1193.4
      +E0124 04:57:28.474435 1635447 buffer_comparator.cc:156] Difference at 6: 2153.91, expected 1811.25
      +E0124 04:57:28.474439 1635447 buffer_comparator.cc:156] Difference at 14: 2152.45, expected 1862.99
      +E0124 04:57:28.474445 1635447 buffer_comparator.cc:156] Difference at 15: 2224.9, expected 1998.11
      +E0124 04:57:28.474450 1635447 buffer_comparator.cc:156] Difference at 17: 1795.48, expected 1583.4
      +2025-01-24 04:57:28.474457: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.477270 1635447 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      +E0124 04:57:28.477285 1635447 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      +E0124 04:57:28.477290 1635447 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      +E0124 04:57:28.477294 1635447 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      +E0124 04:57:28.477298 1635447 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      +E0124 04:57:28.477302 1635447 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      +E0124 04:57:28.477306 1635447 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      +E0124 04:57:28.477310 1635447 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      +E0124 04:57:28.477314 1635447 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      +E0124 04:57:28.477318 1635447 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      +2025-01-24 04:57:28.477324: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.480112 1635447 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      +E0124 04:57:28.480124 1635447 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      +E0124 04:57:28.480127 1635447 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      +E0124 04:57:28.480130 1635447 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      +E0124 04:57:28.480133 1635447 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      +E0124 04:57:28.480136 1635447 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      +E0124 04:57:28.480139 1635447 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      +E0124 04:57:28.480142 1635447 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      +E0124 04:57:28.480144 1635447 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      +E0124 04:57:28.480147 1635447 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      +2025-01-24 04:57:28.480152: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.482900 1635447 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      +E0124 04:57:28.482911 1635447 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      +E0124 04:57:28.482914 1635447 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      +E0124 04:57:28.482917 1635447 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      +E0124 04:57:28.482920 1635447 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      +E0124 04:57:28.482923 1635447 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      +E0124 04:57:28.482926 1635447 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      +E0124 04:57:28.482929 1635447 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      +E0124 04:57:28.482932 1635447 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      +E0124 04:57:28.482934 1635447 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      +2025-01-24 04:57:28.482939: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.485704 1635447 buffer_comparator.cc:156] Difference at 0: 2999.66, expected 1852.66
      +E0124 04:57:28.485717 1635447 buffer_comparator.cc:156] Difference at 1: 3136.74, expected 1976.49
      +E0124 04:57:28.485720 1635447 buffer_comparator.cc:156] Difference at 2: 1562.51, expected 959.045
      +E0124 04:57:28.485723 1635447 buffer_comparator.cc:156] Difference at 3: 2497.23, expected 1563.86
      +E0124 04:57:28.485726 1635447 buffer_comparator.cc:156] Difference at 4: 1995.79, expected 1234.14
      +E0124 04:57:28.485729 1635447 buffer_comparator.cc:156] Difference at 5: 1957.18, expected 1193.4
      +E0124 04:57:28.485732 1635447 buffer_comparator.cc:156] Difference at 6: 2863.57, expected 1811.25
      +E0124 04:57:28.485735 1635447 buffer_comparator.cc:156] Difference at 7: 2789.26, expected 1869.16
      +E0124 04:57:28.485738 1635447 buffer_comparator.cc:156] Difference at 8: 2929.61, expected 1993.31
      +E0124 04:57:28.485741 1635447 buffer_comparator.cc:156] Difference at 9: 1452.92, expected 968.067
      +2025-01-24 04:57:28.485745: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.488509 1635447 buffer_comparator.cc:156] Difference at 112: 3.96357, expected 1872.36
      +E0124 04:57:28.488524 1635447 buffer_comparator.cc:156] Difference at 113: 4.08387, expected 1994.81
      +E0124 04:57:28.488527 1635447 buffer_comparator.cc:156] Difference at 114: 3.43697, expected 964.469
      +E0124 04:57:28.488530 1635447 buffer_comparator.cc:156] Difference at 115: 3.49205, expected 1581.68
      +E0124 04:57:28.488533 1635447 buffer_comparator.cc:156] Difference at 116: 4.636, expected 1238.51
      +E0124 04:57:28.488535 1635447 buffer_comparator.cc:156] Difference at 117: 3.27176, expected 1214.35
      +E0124 04:57:28.488538 1635447 buffer_comparator.cc:156] Difference at 118: 4.36485, expected 1823.85
      +E0124 04:57:28.488541 1635447 buffer_comparator.cc:156] Difference at 119: 4.08502, expected 1866.65
      +E0124 04:57:28.488544 1635447 buffer_comparator.cc:156] Difference at 120: 4.00569, expected 1990.53
      +E0124 04:57:28.488547 1635447 buffer_comparator.cc:156] Difference at 121: 3.21014, expected 963.479
      +2025-01-24 04:57:28.488551: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.491393 1635447 buffer_comparator.cc:156] Difference at 224: 5.86135, expected 1844.62
      +E0124 04:57:28.491408 1635447 buffer_comparator.cc:156] Difference at 225: 4.06782, expected 1964.03
      +E0124 04:57:28.491411 1635447 buffer_comparator.cc:156] Difference at 226: 4.13873, expected 949.238
      +E0124 04:57:28.491414 1635447 buffer_comparator.cc:156] Difference at 227: 5.5797, expected 1560.45
      +E0124 04:57:28.491417 1635447 buffer_comparator.cc:156] Difference at 228: 5.25988, expected 1215.36
      +E0124 04:57:28.491420 1635447 buffer_comparator.cc:156] Difference at 229: 5.30797, expected 1186.63
      +E0124 04:57:28.491423 1635447 buffer_comparator.cc:156] Difference at 230: 4.14199, expected 1795.53
      +E0124 04:57:28.491426 1635447 buffer_comparator.cc:156] Difference at 231: 3.45867, expected 1837.81
      +E0124 04:57:28.491428 1635447 buffer_comparator.cc:156] Difference at 232: 3.85107, expected 1966.39
      +E0124 04:57:28.491431 1635447 buffer_comparator.cc:156] Difference at 233: 3.28131, expected 950.028
      +2025-01-24 04:57:28.491436: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.494189 1635447 buffer_comparator.cc:156] Difference at 224: 3.22034, expected 1844.62
      +E0124 04:57:28.494205 1635447 buffer_comparator.cc:156] Difference at 225: 2.49684, expected 1964.03
      +E0124 04:57:28.494208 1635447 buffer_comparator.cc:156] Difference at 226: 1.42502, expected 949.238
      +E0124 04:57:28.494211 1635447 buffer_comparator.cc:156] Difference at 227: 2.53522, expected 1560.45
      +E0124 04:57:28.494214 1635447 buffer_comparator.cc:156] Difference at 228: 2.01038, expected 1215.36
      +Epoch   1	Train Loss: 15.798144	Train Acc: 22.8571%	Val Loss: 7.463610	Val Acc: 24.8000%
      +Epoch   2	Train Loss: 8.541474	Train Acc: 22.1429%	Val Loss: 3.486367	Val Acc: 30.0000%
      +Epoch   3	Train Loss: 3.427734	Train Acc: 43.5714%	Val Loss: 2.192992	Val Acc: 39.8000%
      +Epoch   4	Train Loss: 2.246899	Train Acc: 56.4286%	Val Loss: 1.773665	Val Acc: 44.2000%
      +Epoch   5	Train Loss: 1.623290	Train Acc: 65.0000%	Val Loss: 1.666274	Val Acc: 49.2000%
      +Epoch   6	Train Loss: 1.454367	Train Acc: 70.7143%	Val Loss: 1.493687	Val Acc: 55.0000%
      +Epoch   7	Train Loss: 1.245771	Train Acc: 70.7143%	Val Loss: 1.406759	Val Acc: 59.2000%
      +Epoch   8	Train Loss: 1.096862	Train Acc: 73.5714%	Val Loss: 1.398893	Val Acc: 61.6000%
      +Epoch   9	Train Loss: 0.985403	Train Acc: 74.2857%	Val Loss: 1.414804	Val Acc: 63.4000%
      +Epoch  10	Train Loss: 0.951388	Train Acc: 76.4286%	Val Loss: 1.408753	Val Acc: 63.6000%
      +Epoch  11	Train Loss: 0.843768	Train Acc: 77.8571%	Val Loss: 1.405046	Val Acc: 62.8000%
      +Epoch  12	Train Loss: 0.733548	Train Acc: 77.8571%	Val Loss: 1.433094	Val Acc: 61.8000%
      +Epoch  13	Train Loss: 0.719941	Train Acc: 79.2857%	Val Loss: 1.442757	Val Acc: 63.0000%
      +Epoch  14	Train Loss: 0.655222	Train Acc: 80.7143%	Val Loss: 1.453139	Val Acc: 63.4000%
      +Epoch  15	Train Loss: 0.623899	Train Acc: 82.8571%	Val Loss: 1.455770	Val Acc: 63.8000%
      +Epoch  16	Train Loss: 0.584657	Train Acc: 83.5714%	Val Loss: 1.447179	Val Acc: 64.0000%
      +Epoch  17	Train Loss: 0.542308	Train Acc: 84.2857%	Val Loss: 1.435064	Val Acc: 64.8000%
      +Epoch  18	Train Loss: 0.512126	Train Acc: 85.0000%	Val Loss: 1.426513	Val Acc: 65.4000%
      +Epoch  19	Train Loss: 0.490881	Train Acc: 85.0000%	Val Loss: 1.425405	Val Acc: 65.2000%
      +Epoch  20	Train Loss: 0.473799	Train Acc: 86.4286%	Val Loss: 1.433259	Val Acc: 66.0000%
      +Epoch  21	Train Loss: 0.458639	Train Acc: 87.1429%	Val Loss: 1.450437	Val Acc: 65.4000%
      +Epoch  22	Train Loss: 0.439167	Train Acc: 87.1429%	Val Loss: 1.475696	Val Acc: 65.6000%
      +Epoch  23	Train Loss: 0.420514	Train Acc: 88.5714%	Val Loss: 1.507743	Val Acc: 64.6000%
      +Epoch  24	Train Loss: 0.405530	Train Acc: 89.2857%	Val Loss: 1.545164	Val Acc: 64.4000%
      +Epoch  25	Train Loss: 0.393426	Train Acc: 90.0000%	Val Loss: 1.584067	Val Acc: 63.8000%
      +Epoch  26	Train Loss: 0.383347	Train Acc: 89.2857%	Val Loss: 1.620508	Val Acc: 64.6000%
      +Epoch  27	Train Loss: 0.374202	Train Acc: 88.5714%	Val Loss: 1.651417	Val Acc: 64.2000%
      +Epoch  28	Train Loss: 0.364877	Train Acc: 88.5714%	Val Loss: 1.672666	Val Acc: 64.0000%
      +Early Stopping at Epoch 28
      +2025-01-24 04:57:59.569522: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_35_0', 48 bytes spill stores, 48 bytes spill loads
       
      -E0120 23:00:08.819090 3944451 buffer_comparator.cc:156] Difference at 112: 304.19, expected 949.794
      -E0120 23:00:08.819165 3944451 buffer_comparator.cc:156] Difference at 113: 215.63, expected 1213.31
      -E0120 23:00:08.819173 3944451 buffer_comparator.cc:156] Difference at 114: 345.751, expected 1837.45
      -E0120 23:00:08.819180 3944451 buffer_comparator.cc:156] Difference at 115: 254.606, expected 1600.28
      -E0120 23:00:08.819187 3944451 buffer_comparator.cc:156] Difference at 116: 181.189, expected 1117.21
      -E0120 23:00:08.819194 3944451 buffer_comparator.cc:156] Difference at 117: 228.353, expected 1790.84
      -E0120 23:00:08.819201 3944451 buffer_comparator.cc:156] Difference at 118: 351.051, expected 1291.05
      -E0120 23:00:08.819208 3944451 buffer_comparator.cc:156] Difference at 119: 304.235, expected 943.247
      -E0120 23:00:08.819215 3944451 buffer_comparator.cc:156] Difference at 120: 216.539, expected 1198.87
      -E0120 23:00:08.819221 3944451 buffer_comparator.cc:156] Difference at 121: 345.609, expected 1820.16
      -2025-01-20 23:00:08.819239: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.826144 3944451 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      -E0120 23:00:08.826178 3944451 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      -E0120 23:00:08.826185 3944451 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      -E0120 23:00:08.826194 3944451 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      -E0120 23:00:08.826201 3944451 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      -E0120 23:00:08.826207 3944451 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      -E0120 23:00:08.826214 3944451 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      -E0120 23:00:08.826221 3944451 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      -E0120 23:00:08.826227 3944451 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      -E0120 23:00:08.826234 3944451 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      -2025-01-20 23:00:08.826244: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.834833 3944451 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      -E0120 23:00:08.834849 3944451 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      -E0120 23:00:08.834852 3944451 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      -E0120 23:00:08.834855 3944451 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      -E0120 23:00:08.834858 3944451 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      -E0120 23:00:08.834861 3944451 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      -E0120 23:00:08.834864 3944451 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      -E0120 23:00:08.834866 3944451 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      -E0120 23:00:08.834869 3944451 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      -E0120 23:00:08.834872 3944451 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      -2025-01-20 23:00:08.834877: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.847021 3944451 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      -E0120 23:00:08.847034 3944451 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      -E0120 23:00:08.847037 3944451 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      -E0120 23:00:08.847040 3944451 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      -E0120 23:00:08.847043 3944451 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      -E0120 23:00:08.847046 3944451 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      -E0120 23:00:08.847051 3944451 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      -E0120 23:00:08.847054 3944451 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      -E0120 23:00:08.847057 3944451 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      -E0120 23:00:08.847060 3944451 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      -2025-01-20 23:00:08.847064: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.851143 3944451 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.955
      -E0120 23:00:08.851156 3944451 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      -E0120 23:00:08.851160 3944451 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.81
      -E0120 23:00:08.851164 3944451 buffer_comparator.cc:156] Difference at 266: 1047.35, expected 934.417
      -E0120 23:00:08.851167 3944451 buffer_comparator.cc:156] Difference at 270: 1246.8, expected 1101.55
      -E0120 23:00:08.851170 3944451 buffer_comparator.cc:156] Difference at 417: 1222.47, expected 1095.88
      -E0120 23:00:08.851173 3944451 buffer_comparator.cc:156] Difference at 521: 1725.84, expected 1550.38
      -E0120 23:00:08.851176 3944451 buffer_comparator.cc:156] Difference at 522: 1233.24, expected 1093.77
      -E0120 23:00:08.851179 3944451 buffer_comparator.cc:156] Difference at 620: 1247.46, expected 1121.09
      -E0120 23:00:08.851182 3944451 buffer_comparator.cc:156] Difference at 690: 1251.43, expected 1120.62
      -2025-01-20 23:00:08.851187: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.853170 3944451 buffer_comparator.cc:156] Difference at 0: 1057.27, expected 928.598
      -E0120 23:00:08.853183 3944451 buffer_comparator.cc:156] Difference at 1: 1319.15, expected 1186.9
      -E0120 23:00:08.853186 3944451 buffer_comparator.cc:156] Difference at 2: 2004.43, expected 1796.78
      -E0120 23:00:08.853189 3944451 buffer_comparator.cc:156] Difference at 3: 1745.74, expected 1565.85
      -E0120 23:00:08.853192 3944451 buffer_comparator.cc:156] Difference at 4: 1252.2, expected 1095.49
      -E0120 23:00:08.853195 3944451 buffer_comparator.cc:156] Difference at 7: 1175.57, expected 951.955
      -E0120 23:00:08.853198 3944451 buffer_comparator.cc:156] Difference at 8: 1398.75, expected 1216.24
      -E0120 23:00:08.853201 3944451 buffer_comparator.cc:156] Difference at 9: 2125.62, expected 1833.77
      -E0120 23:00:08.853204 3944451 buffer_comparator.cc:156] Difference at 10: 1878.38, expected 1592.38
      -E0120 23:00:08.853207 3944451 buffer_comparator.cc:156] Difference at 11: 1362.67, expected 1121.95
      -2025-01-20 23:00:08.853212: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.857271 3944451 buffer_comparator.cc:156] Difference at 896: 485.098, expected 958.133
      -E0120 23:00:08.857285 3944451 buffer_comparator.cc:156] Difference at 897: 732.587, expected 1218.67
      -E0120 23:00:08.857288 3944451 buffer_comparator.cc:156] Difference at 898: 635.29, expected 1826.8
      -E0120 23:00:08.857291 3944451 buffer_comparator.cc:156] Difference at 899: 446.948, expected 1593.44
      -E0120 23:00:08.857294 3944451 buffer_comparator.cc:156] Difference at 900: 712.745, expected 1119.04
      -E0120 23:00:08.857297 3944451 buffer_comparator.cc:156] Difference at 901: 516.07, expected 1796.72
      -E0120 23:00:08.857300 3944451 buffer_comparator.cc:156] Difference at 902: 373.095, expected 1279.87
      -E0120 23:00:08.857303 3944451 buffer_comparator.cc:156] Difference at 903: 483.905, expected 941.483
      -E0120 23:00:08.857306 3944451 buffer_comparator.cc:156] Difference at 904: 721.412, expected 1202.98
      -E0120 23:00:08.857309 3944451 buffer_comparator.cc:156] Difference at 905: 633.571, expected 1817.42
      -2025-01-20 23:00:08.857314: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.865569 3944451 buffer_comparator.cc:156] Difference at 1793: 1450.89, expected 1190.76
      -E0120 23:00:08.865583 3944451 buffer_comparator.cc:156] Difference at 1794: 1267.6, expected 1807.71
      -E0120 23:00:08.865586 3944451 buffer_comparator.cc:156] Difference at 1795: 881.963, expected 1565.59
      -E0120 23:00:08.865589 3944451 buffer_comparator.cc:156] Difference at 1796: 1413.49, expected 1101.05
      -E0120 23:00:08.865592 3944451 buffer_comparator.cc:156] Difference at 1797: 1005.6, expected 1756.22
      -E0120 23:00:08.865595 3944451 buffer_comparator.cc:156] Difference at 1798: 764.123, expected 1272.35
      -E0120 23:00:08.865598 3944451 buffer_comparator.cc:156] Difference at 1800: 1466.23, expected 1200.59
      -E0120 23:00:08.865601 3944451 buffer_comparator.cc:156] Difference at 1801: 1286.98, expected 1808.37
      -E0120 23:00:08.865604 3944451 buffer_comparator.cc:156] Difference at 1802: 899.199, expected 1570.73
      -E0120 23:00:08.865606 3944451 buffer_comparator.cc:156] Difference at 1803: 1441.04, expected 1102.47
      -2025-01-20 23:00:08.865611: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -2025-01-20 23:00:10.392691: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_30', 216 bytes spill stores, 216 bytes spill loads
      +E0124 04:57:59.973802 1635447 buffer_comparator.cc:156] Difference at 112: 304.19, expected 949.794
      +E0124 04:57:59.973880 1635447 buffer_comparator.cc:156] Difference at 113: 215.63, expected 1213.31
      +E0124 04:57:59.973888 1635447 buffer_comparator.cc:156] Difference at 114: 345.751, expected 1837.45
      +E0124 04:57:59.973895 1635447 buffer_comparator.cc:156] Difference at 115: 254.606, expected 1600.28
      +E0124 04:57:59.973902 1635447 buffer_comparator.cc:156] Difference at 116: 181.189, expected 1117.21
      +E0124 04:57:59.973909 1635447 buffer_comparator.cc:156] Difference at 117: 228.353, expected 1790.84
      +E0124 04:57:59.973916 1635447 buffer_comparator.cc:156] Difference at 118: 351.051, expected 1291.05
      +E0124 04:57:59.973923 1635447 buffer_comparator.cc:156] Difference at 119: 304.235, expected 943.247
      +E0124 04:57:59.973929 1635447 buffer_comparator.cc:156] Difference at 120: 216.539, expected 1198.87
      +E0124 04:57:59.973936 1635447 buffer_comparator.cc:156] Difference at 121: 345.609, expected 1820.16
      +2025-01-24 04:57:59.973954: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.976363 1635447 buffer_comparator.cc:156] Difference at 112: 304.19, expected 949.794
      +E0124 04:57:59.976393 1635447 buffer_comparator.cc:156] Difference at 113: 215.63, expected 1213.31
      +E0124 04:57:59.976400 1635447 buffer_comparator.cc:156] Difference at 114: 345.751, expected 1837.45
      +E0124 04:57:59.976406 1635447 buffer_comparator.cc:156] Difference at 115: 254.606, expected 1600.28
      +E0124 04:57:59.976413 1635447 buffer_comparator.cc:156] Difference at 116: 181.189, expected 1117.21
      +E0124 04:57:59.976420 1635447 buffer_comparator.cc:156] Difference at 117: 228.353, expected 1790.84
      +E0124 04:57:59.976426 1635447 buffer_comparator.cc:156] Difference at 118: 351.051, expected 1291.05
      +E0124 04:57:59.976433 1635447 buffer_comparator.cc:156] Difference at 119: 304.235, expected 943.247
      +E0124 04:57:59.976440 1635447 buffer_comparator.cc:156] Difference at 120: 216.539, expected 1198.87
      +E0124 04:57:59.976446 1635447 buffer_comparator.cc:156] Difference at 121: 345.609, expected 1820.16
      +2025-01-24 04:57:59.976457: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.978935 1635447 buffer_comparator.cc:156] Difference at 112: 304.19, expected 949.794
      +E0124 04:57:59.978971 1635447 buffer_comparator.cc:156] Difference at 113: 215.63, expected 1213.31
      +E0124 04:57:59.978978 1635447 buffer_comparator.cc:156] Difference at 114: 345.751, expected 1837.45
      +E0124 04:57:59.978985 1635447 buffer_comparator.cc:156] Difference at 115: 254.606, expected 1600.28
      +E0124 04:57:59.978992 1635447 buffer_comparator.cc:156] Difference at 116: 181.189, expected 1117.21
      +E0124 04:57:59.978998 1635447 buffer_comparator.cc:156] Difference at 117: 228.353, expected 1790.84
      +E0124 04:57:59.979005 1635447 buffer_comparator.cc:156] Difference at 118: 351.051, expected 1291.05
      +E0124 04:57:59.979011 1635447 buffer_comparator.cc:156] Difference at 119: 304.235, expected 943.247
      +E0124 04:57:59.979018 1635447 buffer_comparator.cc:156] Difference at 120: 216.539, expected 1198.87
      +E0124 04:57:59.979025 1635447 buffer_comparator.cc:156] Difference at 121: 345.609, expected 1820.16
      +2025-01-24 04:57:59.979035: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.981276 1635447 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      +E0124 04:57:59.981291 1635447 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      +E0124 04:57:59.981294 1635447 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      +E0124 04:57:59.981299 1635447 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      +E0124 04:57:59.981302 1635447 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      +E0124 04:57:59.981305 1635447 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      +E0124 04:57:59.981308 1635447 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      +E0124 04:57:59.981310 1635447 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      +E0124 04:57:59.981313 1635447 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      +E0124 04:57:59.981316 1635447 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      +2025-01-24 04:57:59.981321: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.983564 1635447 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      +E0124 04:57:59.983579 1635447 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      +E0124 04:57:59.983583 1635447 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      +E0124 04:57:59.983586 1635447 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      +E0124 04:57:59.983588 1635447 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      +E0124 04:57:59.983592 1635447 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      +E0124 04:57:59.983594 1635447 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      +E0124 04:57:59.983597 1635447 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      +E0124 04:57:59.983600 1635447 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      +E0124 04:57:59.983603 1635447 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      +2025-01-24 04:57:59.983608: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.985794 1635447 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      +E0124 04:57:59.985820 1635447 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      +E0124 04:57:59.985823 1635447 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      +E0124 04:57:59.985826 1635447 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      +E0124 04:57:59.985829 1635447 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      +E0124 04:57:59.985832 1635447 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      +E0124 04:57:59.985835 1635447 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      +E0124 04:57:59.985838 1635447 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      +E0124 04:57:59.985841 1635447 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      +E0124 04:57:59.985844 1635447 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      +2025-01-24 04:57:59.985848: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.988189 1635447 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      +E0124 04:57:59.988204 1635447 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      +E0124 04:57:59.988207 1635447 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      +E0124 04:57:59.988210 1635447 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      +E0124 04:57:59.988213 1635447 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      +E0124 04:57:59.988216 1635447 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      +E0124 04:57:59.988219 1635447 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      +E0124 04:57:59.988221 1635447 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      +E0124 04:57:59.988226 1635447 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      +E0124 04:57:59.988229 1635447 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      +2025-01-24 04:57:59.988234: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.990451 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:57:59.990466 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:57:59.990470 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:57:59.990473 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:57:59.990476 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:57:59.990479 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:57:59.990482 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:57:59.990485 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:57:59.990488 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:57:59.990491 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:57:59.990495: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.992682 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:57:59.992696 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:57:59.992699 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:57:59.992702 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:57:59.992705 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:57:59.992708 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:57:59.992711 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:57:59.992714 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:57:59.992717 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:57:59.992720 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:57:59.992724: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.994909 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:57:59.994924 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:57:59.994928 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:57:59.994931 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:57:59.994933 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:57:59.994936 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:57:59.994939 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:57:59.994942 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:57:59.994945 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:57:59.994948 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:57:59.994953: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.997149 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:57:59.997165 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:57:59.997168 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:57:59.997171 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:57:59.997174 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:57:59.997177 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:57:59.997180 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:57:59.997183 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:57:59.997186 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:57:59.997188 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:57:59.997193: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.999399 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:57:59.999414 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:57:59.999418 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:57:59.999420 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:57:59.999423 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:57:59.999426 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:57:59.999429 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:57:59.999432 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:57:59.999435 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:57:59.999438 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:57:59.999443: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.001637 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:58:00.001652 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:58:00.001656 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:58:00.001659 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:58:00.001661 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:58:00.001664 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:58:00.001667 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:58:00.001670 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:58:00.001673 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:58:00.001676 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:58:00.001681: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.003933 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.003949 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.003952 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.003955 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.003958 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.003961 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.003965 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.003968 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.003971 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.003974 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.003979: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.006239 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.006253 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.006256 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.006259 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.006262 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.006265 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.006268 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.006271 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.006274 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.006277 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.006282: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.008468 1635447 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.955
      +E0124 04:58:00.008485 1635447 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      +E0124 04:58:00.008489 1635447 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.81
      +E0124 04:58:00.008492 1635447 buffer_comparator.cc:156] Difference at 266: 1047.35, expected 934.417
      +E0124 04:58:00.008495 1635447 buffer_comparator.cc:156] Difference at 270: 1246.8, expected 1101.55
      +E0124 04:58:00.008499 1635447 buffer_comparator.cc:156] Difference at 417: 1222.47, expected 1095.88
      +E0124 04:58:00.008502 1635447 buffer_comparator.cc:156] Difference at 521: 1725.84, expected 1550.38
      +E0124 04:58:00.008505 1635447 buffer_comparator.cc:156] Difference at 522: 1233.24, expected 1093.77
      +E0124 04:58:00.008508 1635447 buffer_comparator.cc:156] Difference at 620: 1247.46, expected 1121.09
      +E0124 04:58:00.008512 1635447 buffer_comparator.cc:156] Difference at 690: 1251.43, expected 1120.62
      +2025-01-24 04:58:00.008516: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.010695 1635447 buffer_comparator.cc:156] Difference at 0: 1057.27, expected 928.598
      +E0124 04:58:00.010711 1635447 buffer_comparator.cc:156] Difference at 1: 1319.15, expected 1186.9
      +E0124 04:58:00.010714 1635447 buffer_comparator.cc:156] Difference at 2: 2004.43, expected 1796.78
      +E0124 04:58:00.010717 1635447 buffer_comparator.cc:156] Difference at 3: 1745.74, expected 1565.85
      +E0124 04:58:00.010720 1635447 buffer_comparator.cc:156] Difference at 4: 1252.2, expected 1095.49
      +E0124 04:58:00.010723 1635447 buffer_comparator.cc:156] Difference at 7: 1175.57, expected 951.955
      +E0124 04:58:00.010726 1635447 buffer_comparator.cc:156] Difference at 8: 1398.75, expected 1216.24
      +E0124 04:58:00.010729 1635447 buffer_comparator.cc:156] Difference at 9: 2125.62, expected 1833.77
      +E0124 04:58:00.010731 1635447 buffer_comparator.cc:156] Difference at 10: 1878.38, expected 1592.38
      +E0124 04:58:00.010734 1635447 buffer_comparator.cc:156] Difference at 11: 1362.67, expected 1121.95
      +2025-01-24 04:58:00.010739: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.012966 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.012981 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.012984 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.012987 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.012990 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.012993 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.012996 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.012999 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.013002 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.013005 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.013010: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.015229 1635447 buffer_comparator.cc:156] Difference at 896: 485.098, expected 958.133
      +E0124 04:58:00.015248 1635447 buffer_comparator.cc:156] Difference at 897: 732.587, expected 1218.67
      +E0124 04:58:00.015251 1635447 buffer_comparator.cc:156] Difference at 898: 635.29, expected 1826.8
      +E0124 04:58:00.015254 1635447 buffer_comparator.cc:156] Difference at 899: 446.948, expected 1593.44
      +E0124 04:58:00.015257 1635447 buffer_comparator.cc:156] Difference at 900: 712.745, expected 1119.04
      +E0124 04:58:00.015260 1635447 buffer_comparator.cc:156] Difference at 901: 516.07, expected 1796.72
      +E0124 04:58:00.015263 1635447 buffer_comparator.cc:156] Difference at 902: 373.095, expected 1279.87
      +E0124 04:58:00.015266 1635447 buffer_comparator.cc:156] Difference at 903: 483.905, expected 941.483
      +E0124 04:58:00.015269 1635447 buffer_comparator.cc:156] Difference at 904: 721.412, expected 1202.98
      +E0124 04:58:00.015272 1635447 buffer_comparator.cc:156] Difference at 905: 633.571, expected 1817.42
      +2025-01-24 04:58:00.015277: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.017492 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.017507 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.017510 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.017513 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.017516 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.017519 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.017522 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.017525 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.017528 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.017531 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.017536: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.019748 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.019764 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.019767 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.019770 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.019775 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.019778 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.019781 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.019784 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.019787 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.019789 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.019794: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.022008 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.022021 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.022024 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.022027 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.022030 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.022033 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.022036 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.022039 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.022042 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.022045 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.022055: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.024415 1635447 buffer_comparator.cc:156] Difference at 1793: 1450.89, expected 1190.76
      +E0124 04:58:00.024429 1635447 buffer_comparator.cc:156] Difference at 1794: 1267.6, expected 1807.71
      +E0124 04:58:00.024432 1635447 buffer_comparator.cc:156] Difference at 1795: 881.963, expected 1565.59
      +E0124 04:58:00.024435 1635447 buffer_comparator.cc:156] Difference at 1796: 1413.49, expected 1101.05
      +E0124 04:58:00.024438 1635447 buffer_comparator.cc:156] Difference at 1797: 1005.6, expected 1756.22
      +E0124 04:58:00.024441 1635447 buffer_comparator.cc:156] Difference at 1798: 764.123, expected 1272.35
      +E0124 04:58:00.024444 1635447 buffer_comparator.cc:156] Difference at 1800: 1466.23, expected 1200.59
      +E0124 04:58:00.024447 1635447 buffer_comparator.cc:156] Difference at 1801: 1286.98, expected 1808.37
      +E0124 04:58:00.024450 1635447 buffer_comparator.cc:156] Difference at 1802: 899.199, expected 1570.73
      +E0124 04:58:00.024453 1635447 buffer_comparator.cc:156] Difference at 1803: 1441.04, expected 1102.47
      +2025-01-24 04:58:00.024457: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.026814 1635447 buffer_comparator.cc:156] Difference at 1793: 1450.89, expected 1190.76
      +E0124 04:58:00.026829 1635447 buffer_comparator.cc:156] Difference at 1794: 1267.6, expected 1807.71
      +E0124 04:58:00.026832 1635447 buffer_comparator.cc:156] Difference at 1795: 881.963, expected 1565.59
      +E0124 04:58:00.026835 1635447 buffer_comparator.cc:156] Difference at 1796: 1413.49, expected 1101.05
      +E0124 04:58:00.026838 1635447 buffer_comparator.cc:156] Difference at 1797: 1005.6, expected 1756.22
      +E0124 04:58:00.026841 1635447 buffer_comparator.cc:156] Difference at 1798: 764.123, expected 1272.35
      +E0124 04:58:00.026844 1635447 buffer_comparator.cc:156] Difference at 1800: 1466.23, expected 1200.59
      +E0124 04:58:00.026847 1635447 buffer_comparator.cc:156] Difference at 1801: 1286.98, expected 1808.37
      +E0124 04:58:00.026851 1635447 buffer_comparator.cc:156] Difference at 1802: 899.199, expected 1570.73
      +E0124 04:58:00.026854 1635447 buffer_comparator.cc:156] Difference at 1803: 1441.04, expected 1102.47
      +2025-01-24 04:58:00.026859: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +2025-01-24 04:58:01.758767: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_30', 216 bytes spill stores, 216 bytes spill loads
       
      -E0120 23:00:10.649785 3944451 buffer_comparator.cc:156] Difference at 112: 615.618, expected 769.985
      -E0120 23:00:10.649889 3944451 buffer_comparator.cc:156] Difference at 113: 621.908, expected 1100
      -E0120 23:00:10.649904 3944451 buffer_comparator.cc:156] Difference at 114: 511.482, expected 1061.37
      -E0120 23:00:10.649912 3944451 buffer_comparator.cc:156] Difference at 115: 351.151, expected 1558.2
      -E0120 23:00:10.649919 3944451 buffer_comparator.cc:156] Difference at 116: 299.162, expected 1573.39
      -E0120 23:00:10.649926 3944451 buffer_comparator.cc:156] Difference at 117: 438.626, expected 1297.47
      -E0120 23:00:10.649933 3944451 buffer_comparator.cc:156] Difference at 118: 419.047, expected 880.235
      -E0120 23:00:10.649939 3944451 buffer_comparator.cc:156] Difference at 119: 614.3, expected 764.244
      -E0120 23:00:10.649946 3944451 buffer_comparator.cc:156] Difference at 120: 624.213, expected 1089.23
      -E0120 23:00:10.649953 3944451 buffer_comparator.cc:156] Difference at 121: 511.707, expected 1044.63
      -2025-01-20 23:00:10.649972: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.668396 3944451 buffer_comparator.cc:156] Difference at 224: 612.82, expected 745.838
      -E0120 23:00:10.668407 3944451 buffer_comparator.cc:156] Difference at 225: 624.08, expected 1079.1
      -E0120 23:00:10.668411 3944451 buffer_comparator.cc:156] Difference at 226: 503.822, expected 1034.99
      -E0120 23:00:10.668414 3944451 buffer_comparator.cc:156] Difference at 227: 349.836, expected 1538.8
      -E0120 23:00:10.668417 3944451 buffer_comparator.cc:156] Difference at 228: 305.595, expected 1554.44
      -E0120 23:00:10.668420 3944451 buffer_comparator.cc:156] Difference at 229: 442.133, expected 1264.82
      -E0120 23:00:10.668423 3944451 buffer_comparator.cc:156] Difference at 230: 429.287, expected 853.966
      -E0120 23:00:10.668426 3944451 buffer_comparator.cc:156] Difference at 231: 623.312, expected 756.177
      -E0120 23:00:10.668429 3944451 buffer_comparator.cc:156] Difference at 232: 633.094, expected 1076.91
      -E0120 23:00:10.668432 3944451 buffer_comparator.cc:156] Difference at 233: 515.065, expected 1029.02
      -2025-01-20 23:00:10.668437: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.671336 3944451 buffer_comparator.cc:156] Difference at 224: 1241.41, expected 745.838
      -E0120 23:00:10.671347 3944451 buffer_comparator.cc:156] Difference at 225: 1260.98, expected 1079.1
      -E0120 23:00:10.671350 3944451 buffer_comparator.cc:156] Difference at 227: 700.675, expected 1538.8
      -E0120 23:00:10.671353 3944451 buffer_comparator.cc:156] Difference at 228: 611.164, expected 1554.44
      -E0120 23:00:10.671356 3944451 buffer_comparator.cc:156] Difference at 229: 881, expected 1264.82
      -E0120 23:00:10.671360 3944451 buffer_comparator.cc:156] Difference at 231: 1239.26, expected 756.177
      -E0120 23:00:10.671363 3944451 buffer_comparator.cc:156] Difference at 232: 1261.85, expected 1076.91
      -E0120 23:00:10.671366 3944451 buffer_comparator.cc:156] Difference at 234: 698.402, expected 1533.54
      -E0120 23:00:10.671369 3944451 buffer_comparator.cc:156] Difference at 235: 603.167, expected 1551.87
      -E0120 23:00:10.671371 3944451 buffer_comparator.cc:156] Difference at 236: 859.047, expected 1264.84
      -2025-01-20 23:00:10.671376: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.677306 3944451 buffer_comparator.cc:156] Difference at 448: 1224.67, expected 770.258
      -E0120 23:00:10.677317 3944451 buffer_comparator.cc:156] Difference at 449: 1238.62, expected 1098.93
      -E0120 23:00:10.677320 3944451 buffer_comparator.cc:156] Difference at 451: 690.154, expected 1560.21
      -E0120 23:00:10.677324 3944451 buffer_comparator.cc:156] Difference at 452: 601.951, expected 1585.41
      -E0120 23:00:10.677327 3944451 buffer_comparator.cc:156] Difference at 453: 877.959, expected 1307.15
      -E0120 23:00:10.677330 3944451 buffer_comparator.cc:156] Difference at 455: 1229.45, expected 760.638
      -E0120 23:00:10.677333 3944451 buffer_comparator.cc:156] Difference at 456: 1249.76, expected 1092.67
      -E0120 23:00:10.677336 3944451 buffer_comparator.cc:156] Difference at 458: 694.593, expected 1551.71
      -E0120 23:00:10.677339 3944451 buffer_comparator.cc:156] Difference at 459: 614.473, expected 1570.73
      -E0120 23:00:10.677342 3944451 buffer_comparator.cc:156] Difference at 460: 884.496, expected 1283.32
      -2025-01-20 23:00:10.677346: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.680191 3944451 buffer_comparator.cc:156] Difference at 448: 1534.66, expected 770.258
      -E0120 23:00:10.680202 3944451 buffer_comparator.cc:156] Difference at 449: 1551.34, expected 1098.93
      -E0120 23:00:10.680206 3944451 buffer_comparator.cc:156] Difference at 450: 1275.31, expected 1056.29
      -E0120 23:00:10.680209 3944451 buffer_comparator.cc:156] Difference at 451: 865.11, expected 1560.21
      -E0120 23:00:10.680212 3944451 buffer_comparator.cc:156] Difference at 452: 753.818, expected 1585.41
      -E0120 23:00:10.680215 3944451 buffer_comparator.cc:156] Difference at 453: 1089.94, expected 1307.15
      -E0120 23:00:10.680218 3944451 buffer_comparator.cc:156] Difference at 454: 1039.51, expected 881.296
      -E0120 23:00:10.680221 3944451 buffer_comparator.cc:156] Difference at 455: 1540, expected 760.638
      -E0120 23:00:10.680224 3944451 buffer_comparator.cc:156] Difference at 456: 1558.94, expected 1092.67
      -E0120 23:00:10.680227 3944451 buffer_comparator.cc:156] Difference at 457: 1282.39, expected 1051.03
      -2025-01-20 23:00:10.680232: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.694973 3944451 buffer_comparator.cc:156] Difference at 896: 1533.56, expected 767.869
      -E0120 23:00:10.694985 3944451 buffer_comparator.cc:156] Difference at 897: 1566.05, expected 1090.2
      -E0120 23:00:10.694988 3944451 buffer_comparator.cc:156] Difference at 898: 1278.69, expected 1050.23
      -E0120 23:00:10.694991 3944451 buffer_comparator.cc:156] Difference at 899: 869.624, expected 1561.6
      -E0120 23:00:10.694994 3944451 buffer_comparator.cc:156] Difference at 900: 763.472, expected 1574.44
      -E0120 23:00:10.694997 3944451 buffer_comparator.cc:156] Difference at 901: 1112.54, expected 1303.84
      -E0120 23:00:10.695002 3944451 buffer_comparator.cc:156] Difference at 902: 1063.49, expected 881.498
      -E0120 23:00:10.695005 3944451 buffer_comparator.cc:156] Difference at 903: 1562.96, expected 755.455
      -E0120 23:00:10.695008 3944451 buffer_comparator.cc:156] Difference at 904: 1595.78, expected 1073.52
      -E0120 23:00:10.695011 3944451 buffer_comparator.cc:156] Difference at 905: 1307.7, expected 1034.81
      -2025-01-20 23:00:10.695016: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.703891 3944451 buffer_comparator.cc:156] Difference at 896: 610.467, expected 767.869
      -E0120 23:00:10.703903 3944451 buffer_comparator.cc:156] Difference at 897: 622.568, expected 1090.2
      -E0120 23:00:10.703906 3944451 buffer_comparator.cc:156] Difference at 898: 502.172, expected 1050.23
      -E0120 23:00:10.703909 3944451 buffer_comparator.cc:156] Difference at 899: 349.792, expected 1561.6
      -E0120 23:00:10.703912 3944451 buffer_comparator.cc:156] Difference at 900: 312.127, expected 1574.44
      -E0120 23:00:10.703915 3944451 buffer_comparator.cc:156] Difference at 901: 449.924, expected 1303.84
      -E0120 23:00:10.703918 3944451 buffer_comparator.cc:156] Difference at 902: 433.368, expected 881.498
      -E0120 23:00:10.703921 3944451 buffer_comparator.cc:156] Difference at 903: 632.775, expected 755.455
      -E0120 23:00:10.703924 3944451 buffer_comparator.cc:156] Difference at 904: 650.408, expected 1073.52
      -E0120 23:00:10.703927 3944451 buffer_comparator.cc:156] Difference at 905: 531.789, expected 1034.81
      -2025-01-20 23:00:10.703932: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.706816 3944451 buffer_comparator.cc:156] Difference at 896: 1238.44, expected 767.869
      -E0120 23:00:10.706827 3944451 buffer_comparator.cc:156] Difference at 897: 1262.77, expected 1090.2
      -E0120 23:00:10.706831 3944451 buffer_comparator.cc:156] Difference at 899: 699.535, expected 1561.6
      -E0120 23:00:10.706834 3944451 buffer_comparator.cc:156] Difference at 900: 611.836, expected 1574.44
      -E0120 23:00:10.706837 3944451 buffer_comparator.cc:156] Difference at 901: 898.399, expected 1303.84
      -E0120 23:00:10.706840 3944451 buffer_comparator.cc:156] Difference at 903: 1249.57, expected 755.455
      -E0120 23:00:10.706843 3944451 buffer_comparator.cc:156] Difference at 904: 1276.06, expected 1073.52
      -E0120 23:00:10.706846 3944451 buffer_comparator.cc:156] Difference at 906: 708.794, expected 1528.61
      -E0120 23:00:10.706849 3944451 buffer_comparator.cc:156] Difference at 907: 604.026, expected 1544.19
      -E0120 23:00:10.706852 3944451 buffer_comparator.cc:156] Difference at 908: 883.464, expected 1276.75
      -2025-01-20 23:00:10.706856: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.712907 3944451 buffer_comparator.cc:156] Difference at 1792: 1245.59, expected 748.592
      -E0120 23:00:10.712918 3944451 buffer_comparator.cc:156] Difference at 1793: 1267.42, expected 1073.49
      -E0120 23:00:10.712921 3944451 buffer_comparator.cc:156] Difference at 1795: 702.928, expected 1535.73
      -E0120 23:00:10.712924 3944451 buffer_comparator.cc:156] Difference at 1796: 600.543, expected 1559.13
      -E0120 23:00:10.712928 3944451 buffer_comparator.cc:156] Difference at 1797: 865.055, expected 1277.09
      -E0120 23:00:10.712931 3944451 buffer_comparator.cc:156] Difference at 1799: 1224.8, expected 752.412
      -E0120 23:00:10.712934 3944451 buffer_comparator.cc:156] Difference at 1800: 1234.41, expected 1077.59
      -E0120 23:00:10.712937 3944451 buffer_comparator.cc:156] Difference at 1802: 688.427, expected 1537.6
      -E0120 23:00:10.712940 3944451 buffer_comparator.cc:156] Difference at 1803: 610.636, expected 1563.06
      -E0120 23:00:10.712943 3944451 buffer_comparator.cc:156] Difference at 1804: 864.306, expected 1270.2
      -2025-01-20 23:00:10.712948: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -Test Loss: 1.548569	Test Acc: 65.5000%

      Appendix

      julia
      using InteractiveUtils
      +E0124 04:58:01.765178 1635447 buffer_comparator.cc:156] Difference at 112: 0, expected 769.985
      +E0124 04:58:01.765247 1635447 buffer_comparator.cc:156] Difference at 113: 0, expected 1100
      +E0124 04:58:01.765255 1635447 buffer_comparator.cc:156] Difference at 114: 0, expected 1061.37
      +E0124 04:58:01.765262 1635447 buffer_comparator.cc:156] Difference at 115: 0, expected 1558.2
      +E0124 04:58:01.765269 1635447 buffer_comparator.cc:156] Difference at 116: 0, expected 1573.39
      +E0124 04:58:01.765275 1635447 buffer_comparator.cc:156] Difference at 117: 0, expected 1297.47
      +E0124 04:58:01.765282 1635447 buffer_comparator.cc:156] Difference at 118: 0, expected 880.235
      +E0124 04:58:01.765288 1635447 buffer_comparator.cc:156] Difference at 119: 0, expected 764.244
      +E0124 04:58:01.765294 1635447 buffer_comparator.cc:156] Difference at 120: 0, expected 1089.23
      +E0124 04:58:01.765301 1635447 buffer_comparator.cc:156] Difference at 121: 0, expected 1044.63
      +2025-01-24 04:58:01.765315: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:01.784469 1635447 buffer_comparator.cc:156] Difference at 224: 0, expected 745.838
      +E0124 04:58:01.784482 1635447 buffer_comparator.cc:156] Difference at 225: 0, expected 1079.1
      +E0124 04:58:01.784485 1635447 buffer_comparator.cc:156] Difference at 226: 0, expected 1034.99
      +E0124 04:58:01.784488 1635447 buffer_comparator.cc:156] Difference at 227: 0, expected 1538.8
      +E0124 04:58:01.784491 1635447 buffer_comparator.cc:156] Difference at 228: 0, expected 1554.44
      +E0124 04:58:01.784494 1635447 buffer_comparator.cc:156] Difference at 229: 0, expected 1264.82
      +E0124 04:58:01.784497 1635447 buffer_comparator.cc:156] Difference at 230: 0, expected 853.966
      +E0124 04:58:01.784499 1635447 buffer_comparator.cc:156] Difference at 231: 0, expected 756.177
      +E0124 04:58:01.784502 1635447 buffer_comparator.cc:156] Difference at 232: 0, expected 1076.91
      +E0124 04:58:01.784505 1635447 buffer_comparator.cc:156] Difference at 233: 0, expected 1029.02
      +2025-01-24 04:58:01.784510: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:01.793736 1635447 buffer_comparator.cc:156] Difference at 448: 0, expected 770.258
      +E0124 04:58:01.793752 1635447 buffer_comparator.cc:156] Difference at 449: 0, expected 1098.93
      +E0124 04:58:01.793755 1635447 buffer_comparator.cc:156] Difference at 450: 0, expected 1056.29
      +E0124 04:58:01.793758 1635447 buffer_comparator.cc:156] Difference at 451: 0, expected 1560.21
      +E0124 04:58:01.793761 1635447 buffer_comparator.cc:156] Difference at 452: 0, expected 1585.41
      +E0124 04:58:01.793764 1635447 buffer_comparator.cc:156] Difference at 453: 0, expected 1307.15
      +E0124 04:58:01.793767 1635447 buffer_comparator.cc:156] Difference at 454: 0, expected 881.296
      +E0124 04:58:01.793770 1635447 buffer_comparator.cc:156] Difference at 455: 0, expected 760.638
      +E0124 04:58:01.793773 1635447 buffer_comparator.cc:156] Difference at 456: 0, expected 1092.67
      +E0124 04:58:01.793775 1635447 buffer_comparator.cc:156] Difference at 457: 0, expected 1051.03
      +2025-01-24 04:58:01.793780: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:01.812194 1635447 buffer_comparator.cc:156] Difference at 896: 0, expected 767.869
      +E0124 04:58:01.812208 1635447 buffer_comparator.cc:156] Difference at 897: 0, expected 1090.2
      +E0124 04:58:01.812211 1635447 buffer_comparator.cc:156] Difference at 898: 0, expected 1050.23
      +E0124 04:58:01.812214 1635447 buffer_comparator.cc:156] Difference at 899: 0, expected 1561.6
      +E0124 04:58:01.812217 1635447 buffer_comparator.cc:156] Difference at 900: 0, expected 1574.44
      +E0124 04:58:01.812220 1635447 buffer_comparator.cc:156] Difference at 901: 0, expected 1303.84
      +E0124 04:58:01.812223 1635447 buffer_comparator.cc:156] Difference at 902: 0, expected 881.498
      +E0124 04:58:01.812225 1635447 buffer_comparator.cc:156] Difference at 903: 0, expected 755.455
      +E0124 04:58:01.812228 1635447 buffer_comparator.cc:156] Difference at 904: 0, expected 1073.52
      +E0124 04:58:01.812231 1635447 buffer_comparator.cc:156] Difference at 905: 0, expected 1034.81
      +2025-01-24 04:58:01.812236: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:01.830899 1635447 buffer_comparator.cc:156] Difference at 1792: 0, expected 748.592
      +E0124 04:58:01.830913 1635447 buffer_comparator.cc:156] Difference at 1793: 0, expected 1073.49
      +E0124 04:58:01.830916 1635447 buffer_comparator.cc:156] Difference at 1794: 0, expected 1027.26
      +E0124 04:58:01.830918 1635447 buffer_comparator.cc:156] Difference at 1795: 0, expected 1535.73
      +E0124 04:58:01.830921 1635447 buffer_comparator.cc:156] Difference at 1796: 0, expected 1559.13
      +E0124 04:58:01.830924 1635447 buffer_comparator.cc:156] Difference at 1797: 0, expected 1277.09
      +E0124 04:58:01.830927 1635447 buffer_comparator.cc:156] Difference at 1798: 0, expected 859.43
      +E0124 04:58:01.830930 1635447 buffer_comparator.cc:156] Difference at 1799: 0, expected 752.412
      +E0124 04:58:01.830933 1635447 buffer_comparator.cc:156] Difference at 1800: 0, expected 1077.59
      +E0124 04:58:01.830936 1635447 buffer_comparator.cc:156] Difference at 1801: 0, expected 1037.98
      +2025-01-24 04:58:01.830940: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +Test Loss: 1.441829	Test Acc: 66.7000%

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
       
       if @isdefined(MLDataDevices)
      @@ -1313,8 +1313,8 @@ import{_ as s,c as e,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const u
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      diff --git a/dev/assets/tutorials_intermediate_6_GCN_Cora.md.CpGmSjYV.lean.js b/dev/assets/tutorials_intermediate_6_GCN_Cora.md.CpGmSjYV.lean.js
      new file mode 100644
      index 0000000000..8095f5cf19
      --- /dev/null
      +++ b/dev/assets/tutorials_intermediate_6_GCN_Cora.md.CpGmSjYV.lean.js
      @@ -0,0 +1 @@
      +import{_ as s,c as e,a2 as n,o as p}from"./chunks/framework.BetCMmtc.js";const u=JSON.parse('{"title":"Graph Convolutional Networks on Cora","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/6_GCN_Cora.md","filePath":"tutorials/intermediate/6_GCN_Cora.md","lastUpdated":null}'),c={name:"tutorials/intermediate/6_GCN_Cora.md"};function i(t,a,r,l,f,o){return p(),e("div",null,a[0]||(a[0]=[n("",21)]))}const d=s(c,[["render",i]]);export{u as __pageData,d as default};
      diff --git a/dev/assets/tutorials_intermediate_6_GCN_Cora.md.DyovN6WF.lean.js b/dev/assets/tutorials_intermediate_6_GCN_Cora.md.DyovN6WF.lean.js
      deleted file mode 100644
      index d8df13ba07..0000000000
      --- a/dev/assets/tutorials_intermediate_6_GCN_Cora.md.DyovN6WF.lean.js
      +++ /dev/null
      @@ -1,1334 +0,0 @@
      -import{_ as s,c as e,a2 as n,o as p}from"./chunks/framework.I-x9Gl6h.js";const u=JSON.parse('{"title":"Graph Convolutional Networks on Cora","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/6_GCN_Cora.md","filePath":"tutorials/intermediate/6_GCN_Cora.md","lastUpdated":null}'),c={name:"tutorials/intermediate/6_GCN_Cora.md"};function i(t,a,r,l,f,o){return p(),e("div",null,a[0]||(a[0]=[n(`

      Graph Convolutional Networks on Cora

This example is based on the GCN MLX tutorial. While we implement the layers manually here, we recommend using GNNLux.jl directly.

      julia
      using Lux, Reactant, MLDatasets, Random, Statistics, Enzyme, GNNGraphs, ConcreteStructs,
      -      Printf, OneHotArrays, Optimisers
      -
      -const xdev = reactant_device(; force=true)
      -const cdev = cpu_device()
      (::MLDataDevices.CPUDevice) (generic function with 1 method)
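These device objects are callable and move data between devices. A minimal sketch using only the CPU device (`reactant_device(; force=true)` would error without a functional Reactant backend, so it is omitted here):

```julia
using MLDataDevices

cdev = cpu_device()
x = rand(Float32, 4, 4)
x_cpu = x |> cdev            # equivalent to cdev(x); a no-op for data already on CPU
@assert x_cpu isa Matrix{Float32}
```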

      Loading Cora Dataset

      julia
      function loadcora()
      -    data = Cora()
      -    gph = data.graphs[1]
      -    gnngraph = GNNGraph(
      -        gph.edge_index; ndata=gph.node_data, edata=gph.edge_data, gph.num_nodes
      -    )
      -    return (
      -        gph.node_data.features,
      -        onehotbatch(gph.node_data.targets, data.metadata["classes"]),
      -        # We use a dense matrix here to avoid incompatibility with Reactant
      -        Matrix(adjacency_matrix(gnngraph)),
-        # We return explicit train/val/test index ranges since Reactant doesn't yet support the gather adjoint
      -        (1:140, 141:640, 1709:2708)
      -    )
      -end
      loadcora (generic function with 1 method)
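The `onehotbatch` call above (from OneHotArrays.jl, loaded earlier) encodes the integer class targets as a matrix with one row per class and one column per node. A small sketch with toy labels:

```julia
using OneHotArrays

targets = [2, 1, 3, 2]              # toy class labels for 4 "nodes"
y = onehotbatch(targets, 1:3)       # 3×4 one-hot matrix, classes along rows
@assert size(y) == (3, 4)
@assert onecold(y, 1:3) == targets  # onecold inverts the encoding
```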

      Model Definition

      julia
      function GCNLayer(args...; kwargs...)
      -    return @compact(; dense=Dense(args...; kwargs...)) do (x, adj)
      -        @return dense(x) * adj
      -    end
      -end
      -
      -function GCN(x_dim, h_dim, out_dim; nb_layers=2, dropout=0.5, kwargs...)
      -    layer_sizes = vcat(x_dim, [h_dim for _ in 1:nb_layers])
      -    gcn_layers = [GCNLayer(in_dim => out_dim; kwargs...)
      -                  for (in_dim, out_dim) in zip(layer_sizes[1:(end - 1)], layer_sizes[2:end])]
      -    last_layer = GCNLayer(layer_sizes[end] => out_dim; kwargs...)
      -    dropout = Dropout(dropout)
      -
      -    return @compact(; gcn_layers, dropout, last_layer) do (x, adj, mask)
      -        for layer in gcn_layers
      -            x = relu.(layer((x, adj)))
      -            x = dropout(x)
      -        end
      -        @return last_layer((x, adj))[:, mask]
      -    end
      -end
      GCN (generic function with 1 method)

      Helper Functions

      julia
      function loss_function(model, ps, st, (x, y, adj, mask))
      -    y_pred, st = model((x, adj, mask), ps, st)
      -    loss = CrossEntropyLoss(; agg=mean, logits=Val(true))(y_pred, y[:, mask])
      -    return loss, st, (; y_pred)
      -end
      -
      -accuracy(y_pred, y) = mean(onecold(y_pred) .== onecold(y)) * 100
      accuracy (generic function with 1 method)

      Training the Model

      julia
      function main(;
      -        hidden_dim::Int=64, dropout::Float64=0.1, nb_layers::Int=2, use_bias::Bool=true,
      -        lr::Float64=0.001, weight_decay::Float64=0.0, patience::Int=20, epochs::Int=200
      -)
      -    rng = Random.default_rng()
      -    Random.seed!(rng, 0)
      -
      -    features, targets, adj, (train_idx, val_idx, test_idx) = loadcora() |> xdev
      -
      -    gcn = GCN(size(features, 1), hidden_dim, size(targets, 1); nb_layers, dropout, use_bias)
      -    ps, st = Lux.setup(rng, gcn) |> xdev
      -    opt = iszero(weight_decay) ? Adam(lr) : AdamW(; eta=lr, lambda=weight_decay)
      -
      -    train_state = Training.TrainState(gcn, ps, st, opt)
      -
      -    @printf "Total Trainable Parameters: %0.4f M\\n" (Lux.parameterlength(ps)/1e6)
      -
      -    val_loss_compiled = @compile loss_function(
      -        gcn, ps, Lux.testmode(st), (features, targets, adj, val_idx))
      -
      -    train_model_compiled = @compile gcn((features, adj, train_idx), ps, Lux.testmode(st))
      -    val_model_compiled = @compile gcn((features, adj, val_idx), ps, Lux.testmode(st))
      -
      -    best_loss_val = Inf
      -    cnt = 0
      -
      -    for epoch in 1:epochs
      -        (_, loss, _, train_state) = Lux.Training.single_train_step!(
      -            AutoEnzyme(), loss_function, (features, targets, adj, train_idx), train_state;
      -            return_gradients=Val(false)
      -        )
      -        train_acc = accuracy(
      -            Array(train_model_compiled((features, adj, train_idx),
      -                train_state.parameters, Lux.testmode(train_state.states))[1]),
      -            Array(targets)[:, train_idx]
      -        )
      -
      -        val_loss = first(val_loss_compiled(
      -            gcn, train_state.parameters, Lux.testmode(train_state.states),
      -            (features, targets, adj, val_idx)))
      -        val_acc = accuracy(
      -            Array(val_model_compiled((features, adj, val_idx),
      -                train_state.parameters, Lux.testmode(train_state.states))[1]),
      -            Array(targets)[:, val_idx]
      -        )
      -
      -        @printf "Epoch %3d\\tTrain Loss: %.6f\\tTrain Acc: %.4f%%\\tVal Loss: %.6f\\t\\
      -                 Val Acc: %.4f%%\\n" epoch loss train_acc val_loss val_acc
      -
      -        if val_loss < best_loss_val
      -            best_loss_val = val_loss
      -            cnt = 0
      -        else
      -            cnt += 1
      -            if cnt == patience
      -                @printf "Early Stopping at Epoch %d\\n" epoch
      -                break
      -            end
      -        end
      -    end
      -
      -    test_loss = @jit(loss_function(
      -        gcn, train_state.parameters, Lux.testmode(train_state.states),
      -        (features, targets, adj, test_idx)))[1]
      -    test_acc = accuracy(
      -        Array(@jit(gcn((features, adj, test_idx),
      -            train_state.parameters, Lux.testmode(train_state.states)))[1]),
      -        Array(targets)[:, test_idx]
      -    )
      -
      -    @printf "Test Loss: %.6f\\tTest Acc: %.4f%%\\n" test_loss test_acc
      -    return
      -end
      -
      -main()
      ┌ Warning: \`replicate\` doesn't work for \`TaskLocalRNG\`. Returning the same \`TaskLocalRNG\`.
      -└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      -Total Trainable Parameters: 0.0964 M
      -┌ Warning: \`training\` is set to \`Val{false}()\` but is being used within an autodiff call (gradient, jacobian, etc...). This might lead to incorrect results. If you are using a \`Lux.jl\` model, set it to training mode using \`LuxCore.trainmode\`.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:324
      -2025-01-20 22:59:29.678566: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 3250673994701609345
      -2025-01-20 22:59:30.518900: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 108 bytes spill stores, 108 bytes spill loads
      -
      -2025-01-20 22:59:30.527924: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 24 bytes spill stores, 24 bytes spill loads
      -
      -2025-01-20 22:59:30.647255: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 108 bytes spill stores, 108 bytes spill loads
      -
      -2025-01-20 22:59:30.698953: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 328 bytes spill stores, 328 bytes spill loads
      -
      -2025-01-20 22:59:30.738672: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 376 bytes spill stores, 376 bytes spill loads
      -
      -2025-01-20 22:59:30.764969: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 856 bytes spill stores, 808 bytes spill loads
      -
      -2025-01-20 22:59:30.821027: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_35_0', 48 bytes spill stores, 48 bytes spill loads
      -
      -2025-01-20 22:59:30.833276: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 132 bytes spill stores, 132 bytes spill loads
      -
      -E0120 22:59:31.077662 3944451 buffer_comparator.cc:156] Difference at 112: 513.993, expected 949.79
      -E0120 22:59:31.078796 3944451 buffer_comparator.cc:156] Difference at 113: 357.807, expected 1213.3
      -E0120 22:59:31.078805 3944451 buffer_comparator.cc:156] Difference at 114: 585.471, expected 1837.44
      -E0120 22:59:31.078812 3944451 buffer_comparator.cc:156] Difference at 115: 420.444, expected 1600.27
      -E0120 22:59:31.078818 3944451 buffer_comparator.cc:156] Difference at 116: 302.398, expected 1117.2
      -E0120 22:59:31.078825 3944451 buffer_comparator.cc:156] Difference at 117: 386.144, expected 1790.83
      -E0120 22:59:31.078832 3944451 buffer_comparator.cc:156] Difference at 118: 587.071, expected 1291.05
      -E0120 22:59:31.078839 3944451 buffer_comparator.cc:156] Difference at 119: 508.94, expected 943.242
      -E0120 22:59:31.078846 3944451 buffer_comparator.cc:156] Difference at 120: 358.918, expected 1198.86
      -E0120 22:59:31.078852 3944451 buffer_comparator.cc:156] Difference at 121: 578.094, expected 1820.15
      -2025-01-20 22:59:31.078872: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.087802 3944451 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.95
      -E0120 22:59:31.087817 3944451 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      -E0120 22:59:31.087821 3944451 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.8
      -E0120 22:59:31.087824 3944451 buffer_comparator.cc:156] Difference at 224: 1185.44, expected 942.345
      -E0120 22:59:31.087827 3944451 buffer_comparator.cc:156] Difference at 225: 1033.21, expected 1208.53
      -E0120 22:59:31.087830 3944451 buffer_comparator.cc:156] Difference at 226: 723.209, expected 1824.94
      -E0120 22:59:31.087833 3944451 buffer_comparator.cc:156] Difference at 227: 1155.3, expected 1592.15
      -E0120 22:59:31.087836 3944451 buffer_comparator.cc:156] Difference at 228: 842.032, expected 1119.85
      -E0120 22:59:31.087839 3944451 buffer_comparator.cc:156] Difference at 229: 632.011, expected 1778.8
      -E0120 22:59:31.087842 3944451 buffer_comparator.cc:156] Difference at 230: 809.938, expected 1283.28
      -2025-01-20 22:59:31.087847: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.090676 3944451 buffer_comparator.cc:156] Difference at 224: 1185.44, expected 942.345
      -E0120 22:59:31.090690 3944451 buffer_comparator.cc:156] Difference at 225: 1033.21, expected 1208.53
      -E0120 22:59:31.090693 3944451 buffer_comparator.cc:156] Difference at 226: 723.209, expected 1824.94
      -E0120 22:59:31.090696 3944451 buffer_comparator.cc:156] Difference at 227: 1155.3, expected 1592.15
      -E0120 22:59:31.090699 3944451 buffer_comparator.cc:156] Difference at 228: 842.032, expected 1119.85
      -E0120 22:59:31.090702 3944451 buffer_comparator.cc:156] Difference at 229: 632.011, expected 1778.8
      -E0120 22:59:31.090705 3944451 buffer_comparator.cc:156] Difference at 230: 809.938, expected 1283.28
      -E0120 22:59:31.090708 3944451 buffer_comparator.cc:156] Difference at 231: 1217.57, expected 935.373
      -E0120 22:59:31.090711 3944451 buffer_comparator.cc:156] Difference at 232: 1063.63, expected 1192.72
      -E0120 22:59:31.090714 3944451 buffer_comparator.cc:156] Difference at 233: 740.205, expected 1803.13
      -2025-01-20 22:59:31.090718: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.099157 3944451 buffer_comparator.cc:156] Difference at 448: 1213.2, expected 948.676
      -E0120 22:59:31.099170 3944451 buffer_comparator.cc:156] Difference at 449: 1055.43, expected 1198.64
      -E0120 22:59:31.099174 3944451 buffer_comparator.cc:156] Difference at 450: 735.576, expected 1813.49
      -E0120 22:59:31.099176 3944451 buffer_comparator.cc:156] Difference at 451: 1184.55, expected 1575.23
      -E0120 22:59:31.099179 3944451 buffer_comparator.cc:156] Difference at 452: 859.094, expected 1104.71
      -E0120 22:59:31.099182 3944451 buffer_comparator.cc:156] Difference at 453: 619.996, expected 1764.87
      -E0120 22:59:31.099185 3944451 buffer_comparator.cc:156] Difference at 454: 795.493, expected 1269.69
      -E0120 22:59:31.099188 3944451 buffer_comparator.cc:156] Difference at 455: 1199.74, expected 952.925
      -E0120 22:59:31.099191 3944451 buffer_comparator.cc:156] Difference at 456: 1044.81, expected 1213.59
      -E0120 22:59:31.099194 3944451 buffer_comparator.cc:156] Difference at 457: 732.124, expected 1821.28
      -2025-01-20 22:59:31.099199: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.116093 3944451 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      -E0120 22:59:31.116107 3944451 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      -E0120 22:59:31.116110 3944451 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      -E0120 22:59:31.116113 3944451 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      -E0120 22:59:31.116116 3944451 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      -E0120 22:59:31.116119 3944451 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      -E0120 22:59:31.116122 3944451 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      -E0120 22:59:31.116125 3944451 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      -E0120 22:59:31.116128 3944451 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      -E0120 22:59:31.116131 3944451 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      -2025-01-20 22:59:31.116136: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.121781 3944451 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.95
      -E0120 22:59:31.121795 3944451 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      -E0120 22:59:31.121799 3944451 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.8
      -E0120 22:59:31.121802 3944451 buffer_comparator.cc:156] Difference at 266: 1047.35, expected 934.413
      -E0120 22:59:31.121807 3944451 buffer_comparator.cc:156] Difference at 270: 1246.8, expected 1101.54
      -E0120 22:59:31.121810 3944451 buffer_comparator.cc:156] Difference at 417: 1222.47, expected 1095.88
      -E0120 22:59:31.121814 3944451 buffer_comparator.cc:156] Difference at 521: 1725.84, expected 1550.38
      -E0120 22:59:31.121817 3944451 buffer_comparator.cc:156] Difference at 522: 1233.24, expected 1093.76
      -E0120 22:59:31.121820 3944451 buffer_comparator.cc:156] Difference at 620: 1247.46, expected 1121.08
      -E0120 22:59:31.121823 3944451 buffer_comparator.cc:156] Difference at 690: 1251.43, expected 1120.61
      -2025-01-20 22:59:31.121828: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.124551 3944451 buffer_comparator.cc:156] Difference at 0: 1057.27, expected 928.593
      -E0120 22:59:31.124566 3944451 buffer_comparator.cc:156] Difference at 1: 1319.15, expected 1186.89
      -E0120 22:59:31.124569 3944451 buffer_comparator.cc:156] Difference at 2: 2004.43, expected 1796.77
      -E0120 22:59:31.124572 3944451 buffer_comparator.cc:156] Difference at 3: 1745.74, expected 1565.84
      -E0120 22:59:31.124575 3944451 buffer_comparator.cc:156] Difference at 4: 1252.2, expected 1095.49
      -E0120 22:59:31.124578 3944451 buffer_comparator.cc:156] Difference at 7: 1175.57, expected 951.95
      -E0120 22:59:31.124581 3944451 buffer_comparator.cc:156] Difference at 8: 1398.75, expected 1216.24
      -E0120 22:59:31.124583 3944451 buffer_comparator.cc:156] Difference at 9: 2125.62, expected 1833.76
      -E0120 22:59:31.124586 3944451 buffer_comparator.cc:156] Difference at 10: 1878.38, expected 1592.37
      -E0120 22:59:31.124589 3944451 buffer_comparator.cc:156] Difference at 11: 1362.67, expected 1121.95
      -2025-01-20 22:59:31.124594: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.127422 3944451 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      -E0120 22:59:31.127436 3944451 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      -E0120 22:59:31.127439 3944451 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      -E0120 22:59:31.127442 3944451 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      -E0120 22:59:31.127445 3944451 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      -E0120 22:59:31.127448 3944451 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      -E0120 22:59:31.127451 3944451 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      -E0120 22:59:31.127454 3944451 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      -E0120 22:59:31.127456 3944451 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      -E0120 22:59:31.127459 3944451 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      -2025-01-20 22:59:31.127464: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.130250 3944451 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      -E0120 22:59:31.130264 3944451 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      -E0120 22:59:31.130267 3944451 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      -E0120 22:59:31.130270 3944451 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      -E0120 22:59:31.130273 3944451 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      -E0120 22:59:31.130276 3944451 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      -E0120 22:59:31.130279 3944451 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      -E0120 22:59:31.130282 3944451 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      -E0120 22:59:31.130285 3944451 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      -E0120 22:59:31.130290 3944451 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      -2025-01-20 22:59:31.130295: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.141858 3944451 buffer_comparator.cc:156] Difference at 1792: 1216.65, expected 926.778
      -E0120 22:59:31.141873 3944451 buffer_comparator.cc:156] Difference at 1793: 1058.09, expected 1190.76
      -E0120 22:59:31.141877 3944451 buffer_comparator.cc:156] Difference at 1794: 743.338, expected 1807.71
      -E0120 22:59:31.141880 3944451 buffer_comparator.cc:156] Difference at 1795: 1184.75, expected 1565.59
      -E0120 22:59:31.141883 3944451 buffer_comparator.cc:156] Difference at 1796: 852.404, expected 1101.04
      -E0120 22:59:31.141886 3944451 buffer_comparator.cc:156] Difference at 1797: 626.131, expected 1756.21
      -E0120 22:59:31.141889 3944451 buffer_comparator.cc:156] Difference at 1798: 799.781, expected 1272.34
      -E0120 22:59:31.141892 3944451 buffer_comparator.cc:156] Difference at 1799: 1209.98, expected 944.465
      -E0120 22:59:31.141895 3944451 buffer_comparator.cc:156] Difference at 1800: 1057.15, expected 1200.58
      -E0120 22:59:31.141898 3944451 buffer_comparator.cc:156] Difference at 1801: 742.39, expected 1808.36
      -2025-01-20 22:59:31.141903: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.156378 3944451 buffer_comparator.cc:156] Difference at 16: 0, expected 1111.43
      -E0120 22:59:31.156394 3944451 buffer_comparator.cc:156] Difference at 17: 0, expected 1083.84
      -E0120 22:59:31.156397 3944451 buffer_comparator.cc:156] Difference at 18: 0, expected 1092.57
      -E0120 22:59:31.156400 3944451 buffer_comparator.cc:156] Difference at 19: 0, expected 1118.75
      -E0120 22:59:31.156403 3944451 buffer_comparator.cc:156] Difference at 20: 0, expected 1088.49
      -E0120 22:59:31.156406 3944451 buffer_comparator.cc:156] Difference at 21: 0, expected 1083.52
      -E0120 22:59:31.156409 3944451 buffer_comparator.cc:156] Difference at 22: 0, expected 1097.64
      -E0120 22:59:31.156412 3944451 buffer_comparator.cc:156] Difference at 23: 0, expected 1122.75
      -E0120 22:59:31.156415 3944451 buffer_comparator.cc:156] Difference at 24: 0, expected 1084.65
      -E0120 22:59:31.156418 3944451 buffer_comparator.cc:156] Difference at 25: 0, expected 1084.58
      -2025-01-20 22:59:31.156423: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.167007 3944451 buffer_comparator.cc:156] Difference at 64: 0, expected 1106.21
      -E0120 22:59:31.167022 3944451 buffer_comparator.cc:156] Difference at 65: 0, expected 1087.83
      -E0120 22:59:31.167025 3944451 buffer_comparator.cc:156] Difference at 66: 0, expected 1090.54
      -E0120 22:59:31.167028 3944451 buffer_comparator.cc:156] Difference at 67: 0, expected 1104.23
      -E0120 22:59:31.167031 3944451 buffer_comparator.cc:156] Difference at 68: 0, expected 1104.3
      -E0120 22:59:31.167034 3944451 buffer_comparator.cc:156] Difference at 69: 0, expected 1093.45
      -E0120 22:59:31.167037 3944451 buffer_comparator.cc:156] Difference at 70: 0, expected 1091.52
      -E0120 22:59:31.167039 3944451 buffer_comparator.cc:156] Difference at 71: 0, expected 1110.4
      -E0120 22:59:31.167042 3944451 buffer_comparator.cc:156] Difference at 72: 0, expected 1106.92
      -E0120 22:59:31.167045 3944451 buffer_comparator.cc:156] Difference at 73: 0, expected 1088.44
      -2025-01-20 22:59:31.167050: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.171423 3944451 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      -E0120 22:59:31.171437 3944451 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      -E0120 22:59:31.171440 3944451 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      -E0120 22:59:31.171443 3944451 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      -E0120 22:59:31.171446 3944451 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      -E0120 22:59:31.171449 3944451 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      -E0120 22:59:31.171452 3944451 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      -E0120 22:59:31.171455 3944451 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      -E0120 22:59:31.171457 3944451 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      -E0120 22:59:31.171460 3944451 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      -2025-01-20 22:59:31.171465: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.234459 3944451 buffer_comparator.cc:156] Difference at 256: 0, expected 1091.26
      -E0120 22:59:31.234474 3944451 buffer_comparator.cc:156] Difference at 257: 0, expected 1117.91
      -E0120 22:59:31.234477 3944451 buffer_comparator.cc:156] Difference at 258: 0, expected 1086.11
      -E0120 22:59:31.234480 3944451 buffer_comparator.cc:156] Difference at 259: 0, expected 1095.59
      -E0120 22:59:31.234483 3944451 buffer_comparator.cc:156] Difference at 260: 0, expected 1098.42
      -E0120 22:59:31.234486 3944451 buffer_comparator.cc:156] Difference at 261: 0, expected 1113.28
      -E0120 22:59:31.234488 3944451 buffer_comparator.cc:156] Difference at 262: 0, expected 1088.03
      -E0120 22:59:31.234491 3944451 buffer_comparator.cc:156] Difference at 263: 0, expected 1093.88
      -E0120 22:59:31.234494 3944451 buffer_comparator.cc:156] Difference at 264: 0, expected 1115.18
      -E0120 22:59:31.234497 3944451 buffer_comparator.cc:156] Difference at 265: 0, expected 1104.89
      -2025-01-20 22:59:31.234501: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -2025-01-20 22:59:35.131435: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_30', 216 bytes spill stores, 216 bytes spill loads
      -
      -E0120 22:59:35.144561 3944451 buffer_comparator.cc:156] Difference at 112: 3.96357, expected 1872.36
      -E0120 22:59:35.144631 3944451 buffer_comparator.cc:156] Difference at 113: 4.08387, expected 1994.81
      -E0120 22:59:35.144634 3944451 buffer_comparator.cc:156] Difference at 114: 3.43697, expected 964.469
      -E0120 22:59:35.144638 3944451 buffer_comparator.cc:156] Difference at 115: 3.49205, expected 1581.68
      -E0120 22:59:35.144641 3944451 buffer_comparator.cc:156] Difference at 116: 4.636, expected 1238.51
      -E0120 22:59:35.144644 3944451 buffer_comparator.cc:156] Difference at 117: 3.27176, expected 1214.35
      -E0120 22:59:35.144647 3944451 buffer_comparator.cc:156] Difference at 118: 4.36485, expected 1823.85
      -E0120 22:59:35.144650 3944451 buffer_comparator.cc:156] Difference at 119: 4.08502, expected 1866.65
      -E0120 22:59:35.144653 3944451 buffer_comparator.cc:156] Difference at 120: 4.00569, expected 1990.53
      -E0120 22:59:35.144656 3944451 buffer_comparator.cc:156] Difference at 121: 3.21014, expected 963.479
      -2025-01-20 22:59:35.144669: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.147349 3944451 buffer_comparator.cc:156] Difference at 0: 2299.91, expected 1852.66
      -E0120 22:59:35.147362 3944451 buffer_comparator.cc:156] Difference at 1: 2365.05, expected 1976.49
      -E0120 22:59:35.147366 3944451 buffer_comparator.cc:156] Difference at 2: 1113.52, expected 959.045
      -E0120 22:59:35.147369 3944451 buffer_comparator.cc:156] Difference at 3: 1907.18, expected 1563.86
      -E0120 22:59:35.147372 3944451 buffer_comparator.cc:156] Difference at 4: 1468.14, expected 1234.14
      -E0120 22:59:35.147375 3944451 buffer_comparator.cc:156] Difference at 5: 1437.09, expected 1193.4
      -E0120 22:59:35.147378 3944451 buffer_comparator.cc:156] Difference at 6: 2153.91, expected 1811.25
      -E0120 22:59:35.147381 3944451 buffer_comparator.cc:156] Difference at 14: 2152.45, expected 1862.99
      -E0120 22:59:35.147384 3944451 buffer_comparator.cc:156] Difference at 15: 2224.9, expected 1998.11
      -E0120 22:59:35.147389 3944451 buffer_comparator.cc:156] Difference at 17: 1795.48, expected 1583.4
      -2025-01-20 22:59:35.147394: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.150038 3944451 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      -E0120 22:59:35.150058 3944451 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      -E0120 22:59:35.150061 3944451 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      -E0120 22:59:35.150064 3944451 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      -E0120 22:59:35.150067 3944451 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      -E0120 22:59:35.150070 3944451 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      -E0120 22:59:35.150073 3944451 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      -E0120 22:59:35.150076 3944451 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      -E0120 22:59:35.150079 3944451 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      -E0120 22:59:35.150081 3944451 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      -2025-01-20 22:59:35.150086: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.158094 3944451 buffer_comparator.cc:156] Difference at 0: 2999.66, expected 1852.66
      -E0120 22:59:35.158106 3944451 buffer_comparator.cc:156] Difference at 1: 3136.74, expected 1976.49
      -E0120 22:59:35.158112 3944451 buffer_comparator.cc:156] Difference at 2: 1562.51, expected 959.045
      -E0120 22:59:35.158115 3944451 buffer_comparator.cc:156] Difference at 3: 2497.23, expected 1563.86
      -E0120 22:59:35.158118 3944451 buffer_comparator.cc:156] Difference at 4: 1995.79, expected 1234.14
      -E0120 22:59:35.158121 3944451 buffer_comparator.cc:156] Difference at 5: 1957.18, expected 1193.4
      -E0120 22:59:35.158124 3944451 buffer_comparator.cc:156] Difference at 6: 2863.57, expected 1811.25
      -E0120 22:59:35.158127 3944451 buffer_comparator.cc:156] Difference at 7: 2789.26, expected 1869.16
      -E0120 22:59:35.158130 3944451 buffer_comparator.cc:156] Difference at 8: 2929.61, expected 1993.31
      -E0120 22:59:35.158133 3944451 buffer_comparator.cc:156] Difference at 9: 1452.92, expected 968.067
      -2025-01-20 22:59:35.158138: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.163529 3944451 buffer_comparator.cc:156] Difference at 224: 5.86135, expected 1844.62
      -E0120 22:59:35.163541 3944451 buffer_comparator.cc:156] Difference at 225: 4.06782, expected 1964.03
      -E0120 22:59:35.163544 3944451 buffer_comparator.cc:156] Difference at 226: 4.13873, expected 949.238
      -E0120 22:59:35.163547 3944451 buffer_comparator.cc:156] Difference at 227: 5.5797, expected 1560.45
      -E0120 22:59:35.163550 3944451 buffer_comparator.cc:156] Difference at 228: 5.25988, expected 1215.36
      -E0120 22:59:35.163553 3944451 buffer_comparator.cc:156] Difference at 229: 5.30797, expected 1186.63
      -E0120 22:59:35.163556 3944451 buffer_comparator.cc:156] Difference at 230: 4.14199, expected 1795.53
      -E0120 22:59:35.163558 3944451 buffer_comparator.cc:156] Difference at 231: 3.45867, expected 1837.81
      -E0120 22:59:35.163561 3944451 buffer_comparator.cc:156] Difference at 232: 3.85107, expected 1966.39
      -E0120 22:59:35.163564 3944451 buffer_comparator.cc:156] Difference at 233: 3.28131, expected 950.028
      -2025-01-20 22:59:35.163569: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.166200 3944451 buffer_comparator.cc:156] Difference at 224: 3.22034, expected 1844.62
      -E0120 22:59:35.166212 3944451 buffer_comparator.cc:156] Difference at 225: 2.49684, expected 1964.03
      -E0120 22:59:35.166215 3944451 buffer_comparator.cc:156] Difference at 226: 1.42502, expected 949.238
      -E0120 22:59:35.166218 3944451 buffer_comparator.cc:156] Difference at 227: 2.53522, expected 1560.45
      -E0120 22:59:35.166221 3944451 buffer_comparator.cc:156] Difference at 228: 2.01038, expected 1215.36
      -E0120 22:59:35.166224 3944451 buffer_comparator.cc:156] Difference at 229: 2.60944, expected 1186.63
      -Epoch   1	Train Loss: 15.782338	Train Acc: 21.4286%	Val Loss: 7.079204	Val Acc: 23.8000%
      -Epoch   2	Train Loss: 8.093776	Train Acc: 22.1429%	Val Loss: 3.137945	Val Acc: 28.0000%
      -Epoch   3	Train Loss: 2.983344	Train Acc: 43.5714%	Val Loss: 1.921002	Val Acc: 39.6000%
      -Epoch   4	Train Loss: 1.953735	Train Acc: 58.5714%	Val Loss: 2.015625	Val Acc: 42.0000%
      -Epoch   5	Train Loss: 1.699691	Train Acc: 62.8571%	Val Loss: 1.914729	Val Acc: 44.4000%
      -Epoch   6	Train Loss: 1.365187	Train Acc: 70.7143%	Val Loss: 1.669696	Val Acc: 52.6000%
      -Epoch   7	Train Loss: 1.127103	Train Acc: 73.5714%	Val Loss: 1.538632	Val Acc: 58.6000%
      -Epoch   8	Train Loss: 0.973196	Train Acc: 74.2857%	Val Loss: 1.553579	Val Acc: 60.4000%
      -Epoch   9	Train Loss: 0.915362	Train Acc: 75.0000%	Val Loss: 1.562308	Val Acc: 59.4000%
      -Epoch  10	Train Loss: 0.901013	Train Acc: 79.2857%	Val Loss: 1.568935	Val Acc: 59.8000%
      -Epoch  11	Train Loss: 0.794685	Train Acc: 80.0000%	Val Loss: 1.644144	Val Acc: 57.6000%
      -Epoch  12	Train Loss: 0.754806	Train Acc: 80.7143%	Val Loss: 1.734306	Val Acc: 56.2000%
      -Epoch  13	Train Loss: 0.725162	Train Acc: 81.4286%	Val Loss: 1.803399	Val Acc: 57.4000%
      -Epoch  14	Train Loss: 0.691019	Train Acc: 82.8571%	Val Loss: 1.826088	Val Acc: 57.6000%
      -Epoch  15	Train Loss: 0.648507	Train Acc: 84.2857%	Val Loss: 1.802276	Val Acc: 58.0000%
      -Epoch  16	Train Loss: 0.599733	Train Acc: 86.4286%	Val Loss: 1.755903	Val Acc: 59.0000%
      -Epoch  17	Train Loss: 0.569892	Train Acc: 86.4286%	Val Loss: 1.712767	Val Acc: 59.8000%
      -Epoch  18	Train Loss: 0.556019	Train Acc: 86.4286%	Val Loss: 1.689576	Val Acc: 60.6000%
      -Epoch  19	Train Loss: 0.542816	Train Acc: 87.8571%	Val Loss: 1.693934	Val Acc: 61.0000%
      -Epoch  20	Train Loss: 0.514605	Train Acc: 87.8571%	Val Loss: 1.706551	Val Acc: 61.2000%
      -Epoch  21	Train Loss: 0.496801	Train Acc: 87.8571%	Val Loss: 1.721584	Val Acc: 61.6000%
      -Epoch  22	Train Loss: 0.483525	Train Acc: 87.8571%	Val Loss: 1.733926	Val Acc: 62.0000%
      -Epoch  23	Train Loss: 0.472684	Train Acc: 89.2857%	Val Loss: 1.741387	Val Acc: 62.8000%
      -Epoch  24	Train Loss: 0.462024	Train Acc: 90.0000%	Val Loss: 1.743580	Val Acc: 63.4000%
      -Epoch  25	Train Loss: 0.449647	Train Acc: 90.0000%	Val Loss: 1.739714	Val Acc: 64.2000%
      -Epoch  26	Train Loss: 0.435166	Train Acc: 90.0000%	Val Loss: 1.733291	Val Acc: 64.4000%
      -Epoch  27	Train Loss: 0.419638	Train Acc: 91.4286%	Val Loss: 1.727418	Val Acc: 64.8000%
      -Early Stopping at Epoch 27
      -2025-01-20 23:00:08.518435: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_35_0', 48 bytes spill stores, 48 bytes spill loads
      -
      -E0120 23:00:08.819090 3944451 buffer_comparator.cc:156] Difference at 112: 304.19, expected 949.794
      -E0120 23:00:08.819165 3944451 buffer_comparator.cc:156] Difference at 113: 215.63, expected 1213.31
      -E0120 23:00:08.819173 3944451 buffer_comparator.cc:156] Difference at 114: 345.751, expected 1837.45
      -E0120 23:00:08.819180 3944451 buffer_comparator.cc:156] Difference at 115: 254.606, expected 1600.28
      -E0120 23:00:08.819187 3944451 buffer_comparator.cc:156] Difference at 116: 181.189, expected 1117.21
      -E0120 23:00:08.819194 3944451 buffer_comparator.cc:156] Difference at 117: 228.353, expected 1790.84
      -E0120 23:00:08.819201 3944451 buffer_comparator.cc:156] Difference at 118: 351.051, expected 1291.05
      -E0120 23:00:08.819208 3944451 buffer_comparator.cc:156] Difference at 119: 304.235, expected 943.247
      -E0120 23:00:08.819215 3944451 buffer_comparator.cc:156] Difference at 120: 216.539, expected 1198.87
      -E0120 23:00:08.819221 3944451 buffer_comparator.cc:156] Difference at 121: 345.609, expected 1820.16
      -2025-01-20 23:00:08.819239: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.826144 3944451 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      -E0120 23:00:08.826178 3944451 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      -E0120 23:00:08.826185 3944451 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      -E0120 23:00:08.826194 3944451 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      -E0120 23:00:08.826201 3944451 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      -E0120 23:00:08.826207 3944451 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      -E0120 23:00:08.826214 3944451 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      -E0120 23:00:08.826221 3944451 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      -E0120 23:00:08.826227 3944451 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      -E0120 23:00:08.826234 3944451 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      -2025-01-20 23:00:08.826244: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.834833 3944451 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      -E0120 23:00:08.834849 3944451 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      -E0120 23:00:08.834852 3944451 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      -E0120 23:00:08.834855 3944451 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      -E0120 23:00:08.834858 3944451 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      -E0120 23:00:08.834861 3944451 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      -E0120 23:00:08.834864 3944451 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      -E0120 23:00:08.834866 3944451 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      -E0120 23:00:08.834869 3944451 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      -E0120 23:00:08.834872 3944451 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      -2025-01-20 23:00:08.834877: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.847021 3944451 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      -E0120 23:00:08.847034 3944451 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      -E0120 23:00:08.847037 3944451 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      -E0120 23:00:08.847040 3944451 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      -E0120 23:00:08.847043 3944451 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      -E0120 23:00:08.847046 3944451 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      -E0120 23:00:08.847051 3944451 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      -E0120 23:00:08.847054 3944451 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      -E0120 23:00:08.847057 3944451 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      -E0120 23:00:08.847060 3944451 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      -2025-01-20 23:00:08.847064: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.851143 3944451 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.955
      -E0120 23:00:08.851156 3944451 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      -E0120 23:00:08.851160 3944451 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.81
      -E0120 23:00:08.851164 3944451 buffer_comparator.cc:156] Difference at 266: 1047.35, expected 934.417
      -E0120 23:00:08.851167 3944451 buffer_comparator.cc:156] Difference at 270: 1246.8, expected 1101.55
      -E0120 23:00:08.851170 3944451 buffer_comparator.cc:156] Difference at 417: 1222.47, expected 1095.88
      -E0120 23:00:08.851173 3944451 buffer_comparator.cc:156] Difference at 521: 1725.84, expected 1550.38
      -E0120 23:00:08.851176 3944451 buffer_comparator.cc:156] Difference at 522: 1233.24, expected 1093.77
      -E0120 23:00:08.851179 3944451 buffer_comparator.cc:156] Difference at 620: 1247.46, expected 1121.09
      -E0120 23:00:08.851182 3944451 buffer_comparator.cc:156] Difference at 690: 1251.43, expected 1120.62
      -2025-01-20 23:00:08.851187: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.853170 3944451 buffer_comparator.cc:156] Difference at 0: 1057.27, expected 928.598
      -E0120 23:00:08.853183 3944451 buffer_comparator.cc:156] Difference at 1: 1319.15, expected 1186.9
      -E0120 23:00:08.853186 3944451 buffer_comparator.cc:156] Difference at 2: 2004.43, expected 1796.78
      -E0120 23:00:08.853189 3944451 buffer_comparator.cc:156] Difference at 3: 1745.74, expected 1565.85
      -E0120 23:00:08.853192 3944451 buffer_comparator.cc:156] Difference at 4: 1252.2, expected 1095.49
      -E0120 23:00:08.853195 3944451 buffer_comparator.cc:156] Difference at 7: 1175.57, expected 951.955
      -E0120 23:00:08.853198 3944451 buffer_comparator.cc:156] Difference at 8: 1398.75, expected 1216.24
      -E0120 23:00:08.853201 3944451 buffer_comparator.cc:156] Difference at 9: 2125.62, expected 1833.77
      -E0120 23:00:08.853204 3944451 buffer_comparator.cc:156] Difference at 10: 1878.38, expected 1592.38
      -E0120 23:00:08.853207 3944451 buffer_comparator.cc:156] Difference at 11: 1362.67, expected 1121.95
      -2025-01-20 23:00:08.853212: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.857271 3944451 buffer_comparator.cc:156] Difference at 896: 485.098, expected 958.133
      -E0120 23:00:08.857285 3944451 buffer_comparator.cc:156] Difference at 897: 732.587, expected 1218.67
      -E0120 23:00:08.857288 3944451 buffer_comparator.cc:156] Difference at 898: 635.29, expected 1826.8
      -E0120 23:00:08.857291 3944451 buffer_comparator.cc:156] Difference at 899: 446.948, expected 1593.44
      -E0120 23:00:08.857294 3944451 buffer_comparator.cc:156] Difference at 900: 712.745, expected 1119.04
      -E0120 23:00:08.857297 3944451 buffer_comparator.cc:156] Difference at 901: 516.07, expected 1796.72
      -E0120 23:00:08.857300 3944451 buffer_comparator.cc:156] Difference at 902: 373.095, expected 1279.87
      -E0120 23:00:08.857303 3944451 buffer_comparator.cc:156] Difference at 903: 483.905, expected 941.483
      -E0120 23:00:08.857306 3944451 buffer_comparator.cc:156] Difference at 904: 721.412, expected 1202.98
      -E0120 23:00:08.857309 3944451 buffer_comparator.cc:156] Difference at 905: 633.571, expected 1817.42
      -2025-01-20 23:00:08.857314: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.865569 3944451 buffer_comparator.cc:156] Difference at 1793: 1450.89, expected 1190.76
      -E0120 23:00:08.865583 3944451 buffer_comparator.cc:156] Difference at 1794: 1267.6, expected 1807.71
      -E0120 23:00:08.865586 3944451 buffer_comparator.cc:156] Difference at 1795: 881.963, expected 1565.59
      -E0120 23:00:08.865589 3944451 buffer_comparator.cc:156] Difference at 1796: 1413.49, expected 1101.05
      -E0120 23:00:08.865592 3944451 buffer_comparator.cc:156] Difference at 1797: 1005.6, expected 1756.22
      -E0120 23:00:08.865595 3944451 buffer_comparator.cc:156] Difference at 1798: 764.123, expected 1272.35
      -E0120 23:00:08.865598 3944451 buffer_comparator.cc:156] Difference at 1800: 1466.23, expected 1200.59
      -E0120 23:00:08.865601 3944451 buffer_comparator.cc:156] Difference at 1801: 1286.98, expected 1808.37
      -E0120 23:00:08.865604 3944451 buffer_comparator.cc:156] Difference at 1802: 899.199, expected 1570.73
      -E0120 23:00:08.865606 3944451 buffer_comparator.cc:156] Difference at 1803: 1441.04, expected 1102.47
      -2025-01-20 23:00:08.865611: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -2025-01-20 23:00:10.392691: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_30', 216 bytes spill stores, 216 bytes spill loads
      -
      -E0120 23:00:10.649785 3944451 buffer_comparator.cc:156] Difference at 112: 615.618, expected 769.985
      -E0120 23:00:10.649889 3944451 buffer_comparator.cc:156] Difference at 113: 621.908, expected 1100
      -E0120 23:00:10.649904 3944451 buffer_comparator.cc:156] Difference at 114: 511.482, expected 1061.37
      -E0120 23:00:10.649912 3944451 buffer_comparator.cc:156] Difference at 115: 351.151, expected 1558.2
      -E0120 23:00:10.649919 3944451 buffer_comparator.cc:156] Difference at 116: 299.162, expected 1573.39
      -E0120 23:00:10.649926 3944451 buffer_comparator.cc:156] Difference at 117: 438.626, expected 1297.47
      -E0120 23:00:10.649933 3944451 buffer_comparator.cc:156] Difference at 118: 419.047, expected 880.235
      -E0120 23:00:10.649939 3944451 buffer_comparator.cc:156] Difference at 119: 614.3, expected 764.244
      -E0120 23:00:10.649946 3944451 buffer_comparator.cc:156] Difference at 120: 624.213, expected 1089.23
      -E0120 23:00:10.649953 3944451 buffer_comparator.cc:156] Difference at 121: 511.707, expected 1044.63
      -2025-01-20 23:00:10.649972: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.668396 3944451 buffer_comparator.cc:156] Difference at 224: 612.82, expected 745.838
      -E0120 23:00:10.668407 3944451 buffer_comparator.cc:156] Difference at 225: 624.08, expected 1079.1
      -E0120 23:00:10.668411 3944451 buffer_comparator.cc:156] Difference at 226: 503.822, expected 1034.99
      -E0120 23:00:10.668414 3944451 buffer_comparator.cc:156] Difference at 227: 349.836, expected 1538.8
      -E0120 23:00:10.668417 3944451 buffer_comparator.cc:156] Difference at 228: 305.595, expected 1554.44
      -E0120 23:00:10.668420 3944451 buffer_comparator.cc:156] Difference at 229: 442.133, expected 1264.82
      -E0120 23:00:10.668423 3944451 buffer_comparator.cc:156] Difference at 230: 429.287, expected 853.966
      -E0120 23:00:10.668426 3944451 buffer_comparator.cc:156] Difference at 231: 623.312, expected 756.177
      -E0120 23:00:10.668429 3944451 buffer_comparator.cc:156] Difference at 232: 633.094, expected 1076.91
      -E0120 23:00:10.668432 3944451 buffer_comparator.cc:156] Difference at 233: 515.065, expected 1029.02
      -2025-01-20 23:00:10.668437: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.671336 3944451 buffer_comparator.cc:156] Difference at 224: 1241.41, expected 745.838
      -E0120 23:00:10.671347 3944451 buffer_comparator.cc:156] Difference at 225: 1260.98, expected 1079.1
      -E0120 23:00:10.671350 3944451 buffer_comparator.cc:156] Difference at 227: 700.675, expected 1538.8
      -E0120 23:00:10.671353 3944451 buffer_comparator.cc:156] Difference at 228: 611.164, expected 1554.44
      -E0120 23:00:10.671356 3944451 buffer_comparator.cc:156] Difference at 229: 881, expected 1264.82
      -E0120 23:00:10.671360 3944451 buffer_comparator.cc:156] Difference at 231: 1239.26, expected 756.177
      -E0120 23:00:10.671363 3944451 buffer_comparator.cc:156] Difference at 232: 1261.85, expected 1076.91
      -E0120 23:00:10.671366 3944451 buffer_comparator.cc:156] Difference at 234: 698.402, expected 1533.54
      -E0120 23:00:10.671369 3944451 buffer_comparator.cc:156] Difference at 235: 603.167, expected 1551.87
      -E0120 23:00:10.671371 3944451 buffer_comparator.cc:156] Difference at 236: 859.047, expected 1264.84
      -2025-01-20 23:00:10.671376: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.677306 3944451 buffer_comparator.cc:156] Difference at 448: 1224.67, expected 770.258
      -E0120 23:00:10.677317 3944451 buffer_comparator.cc:156] Difference at 449: 1238.62, expected 1098.93
      -E0120 23:00:10.677320 3944451 buffer_comparator.cc:156] Difference at 451: 690.154, expected 1560.21
      -E0120 23:00:10.677324 3944451 buffer_comparator.cc:156] Difference at 452: 601.951, expected 1585.41
      -E0120 23:00:10.677327 3944451 buffer_comparator.cc:156] Difference at 453: 877.959, expected 1307.15
      -E0120 23:00:10.677330 3944451 buffer_comparator.cc:156] Difference at 455: 1229.45, expected 760.638
      -E0120 23:00:10.677333 3944451 buffer_comparator.cc:156] Difference at 456: 1249.76, expected 1092.67
      -E0120 23:00:10.677336 3944451 buffer_comparator.cc:156] Difference at 458: 694.593, expected 1551.71
      -E0120 23:00:10.677339 3944451 buffer_comparator.cc:156] Difference at 459: 614.473, expected 1570.73
      -E0120 23:00:10.677342 3944451 buffer_comparator.cc:156] Difference at 460: 884.496, expected 1283.32
      -2025-01-20 23:00:10.677346: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.680191 3944451 buffer_comparator.cc:156] Difference at 448: 1534.66, expected 770.258
      -E0120 23:00:10.680202 3944451 buffer_comparator.cc:156] Difference at 449: 1551.34, expected 1098.93
      -E0120 23:00:10.680206 3944451 buffer_comparator.cc:156] Difference at 450: 1275.31, expected 1056.29
      -E0120 23:00:10.680209 3944451 buffer_comparator.cc:156] Difference at 451: 865.11, expected 1560.21
      -E0120 23:00:10.680212 3944451 buffer_comparator.cc:156] Difference at 452: 753.818, expected 1585.41
      -E0120 23:00:10.680215 3944451 buffer_comparator.cc:156] Difference at 453: 1089.94, expected 1307.15
      -E0120 23:00:10.680218 3944451 buffer_comparator.cc:156] Difference at 454: 1039.51, expected 881.296
      -E0120 23:00:10.680221 3944451 buffer_comparator.cc:156] Difference at 455: 1540, expected 760.638
      -E0120 23:00:10.680224 3944451 buffer_comparator.cc:156] Difference at 456: 1558.94, expected 1092.67
      -E0120 23:00:10.680227 3944451 buffer_comparator.cc:156] Difference at 457: 1282.39, expected 1051.03
      -2025-01-20 23:00:10.680232: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.694973 3944451 buffer_comparator.cc:156] Difference at 896: 1533.56, expected 767.869
      -E0120 23:00:10.694985 3944451 buffer_comparator.cc:156] Difference at 897: 1566.05, expected 1090.2
      -E0120 23:00:10.694988 3944451 buffer_comparator.cc:156] Difference at 898: 1278.69, expected 1050.23
      -E0120 23:00:10.694991 3944451 buffer_comparator.cc:156] Difference at 899: 869.624, expected 1561.6
      -E0120 23:00:10.694994 3944451 buffer_comparator.cc:156] Difference at 900: 763.472, expected 1574.44
      -E0120 23:00:10.694997 3944451 buffer_comparator.cc:156] Difference at 901: 1112.54, expected 1303.84
      -E0120 23:00:10.695002 3944451 buffer_comparator.cc:156] Difference at 902: 1063.49, expected 881.498
      -E0120 23:00:10.695005 3944451 buffer_comparator.cc:156] Difference at 903: 1562.96, expected 755.455
      -E0120 23:00:10.695008 3944451 buffer_comparator.cc:156] Difference at 904: 1595.78, expected 1073.52
      -E0120 23:00:10.695011 3944451 buffer_comparator.cc:156] Difference at 905: 1307.7, expected 1034.81
      -2025-01-20 23:00:10.695016: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.703891 3944451 buffer_comparator.cc:156] Difference at 896: 610.467, expected 767.869
      -E0120 23:00:10.703903 3944451 buffer_comparator.cc:156] Difference at 897: 622.568, expected 1090.2
      -E0120 23:00:10.703906 3944451 buffer_comparator.cc:156] Difference at 898: 502.172, expected 1050.23
      -E0120 23:00:10.703909 3944451 buffer_comparator.cc:156] Difference at 899: 349.792, expected 1561.6
      -E0120 23:00:10.703912 3944451 buffer_comparator.cc:156] Difference at 900: 312.127, expected 1574.44
      -E0120 23:00:10.703915 3944451 buffer_comparator.cc:156] Difference at 901: 449.924, expected 1303.84
      -E0120 23:00:10.703918 3944451 buffer_comparator.cc:156] Difference at 902: 433.368, expected 881.498
      -E0120 23:00:10.703921 3944451 buffer_comparator.cc:156] Difference at 903: 632.775, expected 755.455
      -E0120 23:00:10.703924 3944451 buffer_comparator.cc:156] Difference at 904: 650.408, expected 1073.52
      -E0120 23:00:10.703927 3944451 buffer_comparator.cc:156] Difference at 905: 531.789, expected 1034.81
      -2025-01-20 23:00:10.703932: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.706816 3944451 buffer_comparator.cc:156] Difference at 896: 1238.44, expected 767.869
      -E0120 23:00:10.706827 3944451 buffer_comparator.cc:156] Difference at 897: 1262.77, expected 1090.2
      -E0120 23:00:10.706831 3944451 buffer_comparator.cc:156] Difference at 899: 699.535, expected 1561.6
      -E0120 23:00:10.706834 3944451 buffer_comparator.cc:156] Difference at 900: 611.836, expected 1574.44
      -E0120 23:00:10.706837 3944451 buffer_comparator.cc:156] Difference at 901: 898.399, expected 1303.84
      -E0120 23:00:10.706840 3944451 buffer_comparator.cc:156] Difference at 903: 1249.57, expected 755.455
      -E0120 23:00:10.706843 3944451 buffer_comparator.cc:156] Difference at 904: 1276.06, expected 1073.52
      -E0120 23:00:10.706846 3944451 buffer_comparator.cc:156] Difference at 906: 708.794, expected 1528.61
      -E0120 23:00:10.706849 3944451 buffer_comparator.cc:156] Difference at 907: 604.026, expected 1544.19
      -E0120 23:00:10.706852 3944451 buffer_comparator.cc:156] Difference at 908: 883.464, expected 1276.75
      -2025-01-20 23:00:10.706856: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.712907 3944451 buffer_comparator.cc:156] Difference at 1792: 1245.59, expected 748.592
      -E0120 23:00:10.712918 3944451 buffer_comparator.cc:156] Difference at 1793: 1267.42, expected 1073.49
      -E0120 23:00:10.712921 3944451 buffer_comparator.cc:156] Difference at 1795: 702.928, expected 1535.73
      -E0120 23:00:10.712924 3944451 buffer_comparator.cc:156] Difference at 1796: 600.543, expected 1559.13
      -E0120 23:00:10.712928 3944451 buffer_comparator.cc:156] Difference at 1797: 865.055, expected 1277.09
      -E0120 23:00:10.712931 3944451 buffer_comparator.cc:156] Difference at 1799: 1224.8, expected 752.412
      -E0120 23:00:10.712934 3944451 buffer_comparator.cc:156] Difference at 1800: 1234.41, expected 1077.59
      -E0120 23:00:10.712937 3944451 buffer_comparator.cc:156] Difference at 1802: 688.427, expected 1537.6
      -E0120 23:00:10.712940 3944451 buffer_comparator.cc:156] Difference at 1803: 610.636, expected 1563.06
      -E0120 23:00:10.712943 3944451 buffer_comparator.cc:156] Difference at 1804: 864.306, expected 1270.2
      -2025-01-20 23:00:10.712948: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -Test Loss: 1.548569	Test Acc: 65.5000%

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 48 × AMD EPYC 7402 24-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 2
      -  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 48
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,21)]))}const d=s(c,[["render",i]]);export{u as __pageData,d as default}; diff --git a/dev/assets/tutorials_intermediate_7_RealNVP.md.BYJJE9cz.lean.js b/dev/assets/tutorials_intermediate_7_RealNVP.md.BYJJE9cz.lean.js deleted file mode 100644 index cae2a4add8..0000000000 --- a/dev/assets/tutorials_intermediate_7_RealNVP.md.BYJJE9cz.lean.js +++ /dev/null @@ -1,279 +0,0 @@ -import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Normalizing Flows for Density Estimation","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/7_RealNVP.md","filePath":"tutorials/intermediate/7_RealNVP.md","lastUpdated":null}'),e={name:"tutorials/intermediate/7_RealNVP.md"};function t(d,A,h,f,p,v){return n(),i("div",null,A[0]||(A[0]=[a(`

      Normalizing Flows for Density Estimation

      This tutorial demonstrates how to use Lux to train a RealNVP. This is based on the RealNVP implementation in MLX.

      julia
      using Lux, Reactant, Random, Statistics, Enzyme, MLUtils, ConcreteStructs, Printf,
      -      Optimisers, CairoMakie
      -
      -const xdev = reactant_device(; force=true)
      -const cdev = cpu_device()
      (::MLDataDevices.CPUDevice) (generic function with 1 method)

      Define & Load the Moons Dataset

      We define a function to generate data from the moons dataset. We use the code here from this tutorial.

      julia
      function make_moons(
      -        rng::AbstractRNG, ::Type{T}, n_samples::Int=100;
      -        noise::Union{Nothing, AbstractFloat}=nothing
      -) where {T}
      -    n_moons = n_samples ÷ 2
      -    t_min, t_max = T(0), T(π)
      -    t_inner = rand(rng, T, n_moons) * (t_max - t_min) .+ t_min
      -    t_outer = rand(rng, T, n_moons) * (t_max - t_min) .+ t_min
      -    outer_circ_x = cos.(t_outer)
      -    outer_circ_y = sin.(t_outer) .+ T(1)
      -    inner_circ_x = 1 .- cos.(t_inner)
      -    inner_circ_y = 1 .- sin.(t_inner) .- T(1)
      -
      -    data = [outer_circ_x outer_circ_y; inner_circ_x inner_circ_y]
      -    z = permutedims(data, (2, 1))
      -    noise !== nothing && (z .+= T(noise) * randn(rng, T, size(z)))
      -    return z
      -end
      make_moons (generic function with 2 methods)
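For reference, the construction in `make_moons` can be written in closed form. With `t` drawn uniformly from `[0, π]` for each arc, and elementwise Gaussian noise of scale `noise` added at the end, the two half-moons are (notation ours, read off from the code above):

```latex
x_{\text{outer}}(t) = \bigl(\cos t,\; \sin t + 1\bigr), \qquad
x_{\text{inner}}(t) = \bigl(1 - \cos t,\; -\sin t\bigr), \qquad
t \sim \mathcal{U}[0, \pi]
```

The result is returned as a `2 × n_samples` matrix (features along the first dimension), matching the convention used by the model below.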

      Let's visualize the dataset

      julia
      begin
      -    fig = Figure()
      -    ax = CairoMakie.Axis(fig[1, 1]; xlabel="x", ylabel="y")
      -
      -    z = make_moons(Random.default_rng(), Float32, 10_000; noise=0.1)
      -    scatter!(ax, z[1, :], z[2, :]; markersize=2)
      -
      -    fig
      -end

      julia
      function load_moons_dataloader(
      -        args...; batchsize::Int, noise::Union{Nothing, AbstractFloat}=nothing, kwargs...
      -)
      -    return DataLoader(
      -        make_moons(args...; noise); batchsize, shuffle=true, partial=false, kwargs...
      -    )
      -end
      load_moons_dataloader (generic function with 1 method)

      Bijectors Implementation

      julia
      abstract type AbstractBijector end
      -
      -@concrete struct AffineBijector <: AbstractBijector
      -    shift <: AbstractArray
      -    log_scale <: AbstractArray
      -end
      -
      -function AffineBijector(shift_and_log_scale::AbstractArray{T, N}) where {T, N}
      -    n = size(shift_and_log_scale, 1) ÷ 2
      -    idxs = ntuple(Returns(Colon()), N - 1)
      -    return AffineBijector(
      -        shift_and_log_scale[1:n, idxs...], shift_and_log_scale[(n + 1):end, idxs...]
      -    )
      -end
      -
      -function forward_and_log_det(bj::AffineBijector, x::AbstractArray)
      -    y = x .* exp.(bj.log_scale) .+ bj.shift
      -    return y, bj.log_scale
      -end
      -
      -function inverse_and_log_det(bj::AffineBijector, y::AbstractArray)
      -    x = (y .- bj.shift) ./ exp.(bj.log_scale)
      -    return x, -bj.log_scale
      -end
      -
      -@concrete struct MaskedCoupling <: AbstractBijector
      -    mask <: AbstractArray
      -    conditioner
      -    bijector
      -end
      -
      -function apply_mask(bj::MaskedCoupling, x::AbstractArray, fn::F) where {F}
      -    x_masked = x .* (1 .- bj.mask)
      -    bijector_params = bj.conditioner(x_masked)
      -    y, log_det = fn(bijector_params)
      -    log_det = log_det .* bj.mask
      -    y = ifelse.(bj.mask, y, x)
      -    return y, dsum(log_det; dims=Tuple(collect(1:(ndims(x) - 1))))
      -end
      -
      -function forward_and_log_det(bj::MaskedCoupling, x::AbstractArray)
      -    return apply_mask(bj, x, params -> forward_and_log_det(bj.bijector(params), x))
      -end
      -
      -function inverse_and_log_det(bj::MaskedCoupling, y::AbstractArray)
      -    return apply_mask(bj, y, params -> inverse_and_log_det(bj.bijector(params), y))
      -end
      inverse_and_log_det (generic function with 2 methods)
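The two bijectors above are the building blocks of the change-of-variables computation used later in `log_prob`. For a bijection `f` with `y = f(x)`, the density transforms as (notation ours, not from the tutorial):

```latex
\log p_X(x) \;=\; \log p_Y\bigl(f(x)\bigr) \;+\; \log \left| \det \frac{\partial f(x)}{\partial x} \right|
```

For the elementwise affine map `y = x ⊙ exp(s) + b` the Jacobian is diagonal, so the log-determinant is just the sum of the `log_scale` entries — which is why `forward_and_log_det` can return `bj.log_scale` directly and let `MaskedCoupling` mask and sum it.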

      Model Definition

      julia
      function MLP(in_dims::Int, hidden_dims::Int, out_dims::Int, n_layers::Int; activation=gelu)
      -    return Chain(
      -        Dense(in_dims => hidden_dims, activation),
      -        [Dense(hidden_dims => hidden_dims, activation) for _ in 1:(n_layers - 1)]...,
      -        Dense(hidden_dims => out_dims)
      -    )
      -end
      -
      -@concrete struct RealNVP <: AbstractLuxContainerLayer{(:conditioners,)}
      -    conditioners
      -    dist_dims::Int
      -    n_transforms::Int
      -end
      -
      -const StatefulRealNVP{M} = StatefulLuxLayer{M, <:RealNVP}
      -
      -function Lux.initialstates(rng::AbstractRNG, l::RealNVP)
      -    mask_list = [collect(1:(l.dist_dims)) .% 2 .== i % 2 for i in 1:(l.n_transforms)] .|>
      -                Vector{Bool}
      -    return (; mask_list, conditioners=Lux.initialstates(rng, l.conditioners))
      -end
      -
      -function RealNVP(; n_transforms::Int, dist_dims::Int, hidden_dims::Int, n_layers::Int)
      -    conditioners = [MLP(dist_dims, hidden_dims, 2 * dist_dims, n_layers; activation=gelu)
      -                    for _ in 1:n_transforms]
      -    conditioners = NamedTuple{ntuple(Base.Fix1(Symbol, :conditioners_), n_transforms)}(
      -        Tuple(conditioners)
      -    )
      -    return RealNVP(conditioners, dist_dims, n_transforms)
      -end
      -
      -log_prob(x::AbstractArray{T}) where {T} = -T(0.5 * log(2π)) .- T(0.5) .* abs2.(x)
      -
      -function log_prob(l::StatefulRealNVP, x::AbstractArray{T}) where {T}
      -    smodels = [StatefulLuxLayer{true}(
      -                   conditioner, l.ps.conditioners[i], l.st.conditioners[i])
      -               for (i, conditioner) in enumerate(l.model.conditioners)]
      -
      -    lprob = zeros_like(x, size(x, ndims(x)))
      -    for (mask, conditioner) in Iterators.reverse(zip(l.st.mask_list, smodels))
      -        bj = MaskedCoupling(mask, conditioner, AffineBijector)
      -        x, log_det = inverse_and_log_det(bj, x)
      -        lprob += log_det
      -    end
      -    lprob += dsum(log_prob(x); dims=Tuple(collect(1:(ndims(x) - 1))))
      -
      -    conditioners = NamedTuple{ntuple(
      -        Base.Fix1(Symbol, :conditioners_), l.model.n_transforms)
      -    }(Tuple([smodel.st for smodel in smodels]))
      -    l.st = merge(l.st, (; conditioners))
      -
      -    return lprob
      -end
      -
      -function sample(
      -        rng::AbstractRNG, ::Type{T}, d::StatefulRealNVP,
      -        nsamples::Int, nsteps::Int=length(d.model.conditioners)
      -) where {T}
      -    @assert 1 ≤ nsteps ≤ length(d.model.conditioners)
      -
      -    smodels = [StatefulLuxLayer{true}(
      -                   conditioner, d.ps.conditioners[i], d.st.conditioners[i])
      -               for (i, conditioner) in enumerate(d.model.conditioners)]
      -
      -    x = randn(rng, T, d.model.dist_dims, nsamples)
      -    for (i, (mask, conditioner)) in enumerate(zip(d.st.mask_list, smodels))
      -        x, _ = forward_and_log_det(MaskedCoupling(mask, conditioner, AffineBijector), x)
      -        i ≥ nsteps && break
      -    end
      -    return x
      -end
      sample (generic function with 2 methods)

      Helper Functions

      julia
      dsum(x; dims) = dropdims(sum(x; dims); dims)
      -
      -function loss_function(model, ps, st, x)
      -    smodel = StatefulLuxLayer{true}(model, ps, st)
      -    lprob = log_prob(smodel, x)
      -    return -mean(lprob), smodel.st, (;)
      -end
      loss_function (generic function with 1 method)
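`loss_function` above is the batch-averaged negative log-likelihood; training therefore performs maximum-likelihood estimation of the flow parameters (notation ours):

```latex
\mathcal{L}(\theta) \;=\; -\frac{1}{|B|} \sum_{x \in B} \log p_\theta(x)
```

where `log p_θ(x)` is computed by `log_prob` via the inverse passes through each masked coupling layer.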

      Training the Model

      julia
      function main(;
      -        maxiters::Int=10_000, n_train_samples::Int=100_000, batchsize::Int=128,
      -        n_transforms::Int=6, hidden_dims::Int=16, n_layers::Int=4,
      -        lr::Float64=0.0004, noise::Float64=0.06
      -)
      -    rng = Random.default_rng()
      -    Random.seed!(rng, 0)
      -
      -    dataloader = load_moons_dataloader(rng, Float32, n_train_samples; batchsize, noise) |>
      -                 xdev |> Iterators.cycle
      -
      -    model = RealNVP(; n_transforms, dist_dims=2, hidden_dims, n_layers)
      -    ps, st = Lux.setup(rng, model) |> xdev
      -    opt = Adam(lr)
      -
      -    train_state = Training.TrainState(model, ps, st, opt)
      -    @printf "Total Trainable Parameters: %d\\n" Lux.parameterlength(ps)
      -
      -    total_samples = 0
      -    start_time = time()
      -
      -    for (iter, x) in enumerate(dataloader)
      -        total_samples += size(x, ndims(x))
      -        (_, loss, _, train_state) = Training.single_train_step!(
      -            AutoEnzyme(), loss_function, x, train_state;
      -            return_gradients=Val(false)
      -        )
      -
      -        isnan(loss) && error("NaN loss encountered in iter $(iter)!")
      -
      -        if iter == 1 || iter == maxiters || iter % 1000 == 0
      -            throughput = total_samples / (time() - start_time)
      -            @printf "Iter: [%6d/%6d]\\tTraining Loss: %.6f\\t\\
      -                     Throughput: %.6f samples/s\\n" iter maxiters loss throughput
      -        end
      -
      -        iter ≥ maxiters && break
      -    end
      -
      -    return StatefulLuxLayer{true}(
      -        model, train_state.parameters, Lux.testmode(train_state.states)
      -    )
      -end
      -
      -trained_model = main()
      Total Trainable Parameters: 5592
      -2025-01-20 23:06:11.275993: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 10156251217167081480
      -E0120 23:06:11.804459 3990735 buffer_comparator.cc:156] Difference at 17: 22.1601, expected 25.2177
      -E0120 23:06:11.804695 3990735 buffer_comparator.cc:156] Difference at 18: 17.8166, expected 20.6638
      -E0120 23:06:11.804699 3990735 buffer_comparator.cc:156] Difference at 24: 20.8071, expected 23.8795
      -E0120 23:06:11.804702 3990735 buffer_comparator.cc:156] Difference at 25: 19.1691, expected 23.0753
      -E0120 23:06:11.804706 3990735 buffer_comparator.cc:156] Difference at 27: 16.8353, expected 20.2124
      -E0120 23:06:11.804710 3990735 buffer_comparator.cc:156] Difference at 31: 20.6599, expected 23.7059
      -2025-01-20 23:06:11.804736: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:06:11.821113 3990735 buffer_comparator.cc:156] Difference at 32: 0, expected 0.569339
      -E0120 23:06:11.821139 3990735 buffer_comparator.cc:156] Difference at 33: 0, expected 0.542682
      -E0120 23:06:11.821142 3990735 buffer_comparator.cc:156] Difference at 34: 0, expected 0.348735
      -E0120 23:06:11.821145 3990735 buffer_comparator.cc:156] Difference at 35: 0, expected 0.223541
      -E0120 23:06:11.821148 3990735 buffer_comparator.cc:156] Difference at 36: 0, expected 0.474634
      -E0120 23:06:11.821151 3990735 buffer_comparator.cc:156] Difference at 37: 0, expected 0.288978
      -E0120 23:06:11.821153 3990735 buffer_comparator.cc:156] Difference at 38: 0, expected 0.205903
      -E0120 23:06:11.821156 3990735 buffer_comparator.cc:156] Difference at 39: 0, expected 0.446466
      -E0120 23:06:11.821159 3990735 buffer_comparator.cc:156] Difference at 40: 0, expected 0.524228
      -E0120 23:06:11.821161 3990735 buffer_comparator.cc:156] Difference at 41: 0, expected 0.432399
      -2025-01-20 23:06:11.821168: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -Iter: [     1/ 10000]	Training Loss: 3.863690	Throughput: 0.738206 samples/s
      -Iter: [  1000/ 10000]	Training Loss: 0.839611	Throughput: 718.088204 samples/s
      -Iter: [  2000/ 10000]	Training Loss: 0.480875	Throughput: 1414.794825 samples/s
      -Iter: [  3000/ 10000]	Training Loss: 0.532076	Throughput: 2090.809874 samples/s
      -Iter: [  4000/ 10000]	Training Loss: 0.589929	Throughput: 2727.900330 samples/s
      -Iter: [  5000/ 10000]	Training Loss: 0.469730	Throughput: 3362.438248 samples/s
      -Iter: [  6000/ 10000]	Training Loss: 0.530390	Throughput: 3979.620796 samples/s
      -Iter: [  7000/ 10000]	Training Loss: 0.512984	Throughput: 4580.390459 samples/s
      -Iter: [  8000/ 10000]	Training Loss: 0.494450	Throughput: 5125.308281 samples/s
      -Iter: [  9000/ 10000]	Training Loss: 0.435445	Throughput: 5692.110374 samples/s
      -Iter: [ 10000/ 10000]	Training Loss: 0.536155	Throughput: 6244.824544 samples/s

      Visualizing the Results

      julia
      z_stages = Matrix{Float32}[]
      -for i in 1:(trained_model.model.n_transforms)
      -    z = @jit sample(Random.default_rng(), Float32, trained_model, 10_000, i)
      -    push!(z_stages, Array(z))
      -end
      -
      -begin
      -    fig = Figure(; size=(1200, 800))
      -
      -    for (idx, z) in enumerate(z_stages)
      -        i, j = (idx - 1) ÷ 3, (idx - 1) % 3
      -        ax = Axis(fig[i, j]; title="$(idx) transforms")
      -        scatter!(ax, z[1, :], z[2, :]; markersize=2)
      -    end
      -
      -    fig
      -end

      Appendix

      julia
      using InteractiveUtils
      -InteractiveUtils.versioninfo()
      -
      -if @isdefined(MLDataDevices)
      -    if @isdefined(CUDA) && MLDataDevices.functional(CUDADevice)
      -        println()
      -        CUDA.versioninfo()
      -    end
      -
      -    if @isdefined(AMDGPU) && MLDataDevices.functional(AMDGPUDevice)
      -        println()
      -        AMDGPU.versioninfo()
      -    end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      -Build Info:
      -  Official https://julialang.org/ release
      -Platform Info:
      -  OS: Linux (x86_64-linux-gnu)
      -  CPU: 48 × AMD EPYC 7402 24-Core Processor
      -  WORD_SIZE: 64
      -  LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 48 default, 0 interactive, 24 GC (on 2 virtual cores)
      -Environment:
      -  JULIA_CPU_THREADS = 2
      -  JULIA_DEPOT_PATH = /root/.cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
      -  LD_LIBRARY_PATH = /usr/local/nvidia/lib:/usr/local/nvidia/lib64
      -  JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 48
      -  JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
      -  JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,34)]))}const E=s(e,[["render",t]]);export{g as __pageData,E as default}; diff --git a/dev/assets/tutorials_intermediate_7_RealNVP.md.BYJJE9cz.js b/dev/assets/tutorials_intermediate_7_RealNVP.md.Be4ikYnY.js similarity index 77% rename from dev/assets/tutorials_intermediate_7_RealNVP.md.BYJJE9cz.js rename to dev/assets/tutorials_intermediate_7_RealNVP.md.Be4ikYnY.js index cae2a4add8..7d88f4e3ff 100644 --- a/dev/assets/tutorials_intermediate_7_RealNVP.md.BYJJE9cz.js +++ b/dev/assets/tutorials_intermediate_7_RealNVP.md.Be4ikYnY.js @@ -1,4 +1,4 @@ -import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.I-x9Gl6h.js";const g=JSON.parse('{"title":"Normalizing Flows for Density Estimation","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/7_RealNVP.md","filePath":"tutorials/intermediate/7_RealNVP.md","lastUpdated":null}'),e={name:"tutorials/intermediate/7_RealNVP.md"};function t(d,A,h,f,p,v){return n(),i("div",null,A[0]||(A[0]=[a(`

      Normalizing Flows for Density Estimation

      This tutorial demonstrates how to use Lux to train a RealNVP. This is based on the RealNVP implementation in MLX.

      julia
      using Lux, Reactant, Random, Statistics, Enzyme, MLUtils, ConcreteStructs, Printf,
      +import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.BetCMmtc.js";const r=JSON.parse('{"title":"Normalizing Flows for Density Estimation","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/7_RealNVP.md","filePath":"tutorials/intermediate/7_RealNVP.md","lastUpdated":null}'),e={name:"tutorials/intermediate/7_RealNVP.md"};function t(d,A,h,f,p,v){return n(),i("div",null,A[0]||(A[0]=[a(`

      Normalizing Flows for Density Estimation

      This tutorial demonstrates how to use Lux to train a RealNVP. This is based on the RealNVP implementation in MLX.

      julia
      using Lux, Reactant, Random, Statistics, Enzyme, MLUtils, ConcreteStructs, Printf,
             Optimisers, CairoMakie
       
       const xdev = reactant_device(; force=true)
      @@ -27,7 +27,7 @@ import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.I-x9Gl6h.js";const g
           scatter!(ax, z[1, :], z[2, :]; markersize=2)
       
           fig
      -end

      julia
      function load_moons_dataloader(
      +end

      julia
      function load_moons_dataloader(
               args...; batchsize::Int, noise::Union{Nothing, AbstractFloat}=nothing, kwargs...
       )
           return DataLoader(
      @@ -200,36 +200,36 @@ import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.I-x9Gl6h.js";const g
       end
       
       trained_model = main()
      Total Trainable Parameters: 5592
      -2025-01-20 23:06:11.275993: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 10156251217167081480
      -E0120 23:06:11.804459 3990735 buffer_comparator.cc:156] Difference at 17: 22.1601, expected 25.2177
      -E0120 23:06:11.804695 3990735 buffer_comparator.cc:156] Difference at 18: 17.8166, expected 20.6638
      -E0120 23:06:11.804699 3990735 buffer_comparator.cc:156] Difference at 24: 20.8071, expected 23.8795
      -E0120 23:06:11.804702 3990735 buffer_comparator.cc:156] Difference at 25: 19.1691, expected 23.0753
      -E0120 23:06:11.804706 3990735 buffer_comparator.cc:156] Difference at 27: 16.8353, expected 20.2124
      -E0120 23:06:11.804710 3990735 buffer_comparator.cc:156] Difference at 31: 20.6599, expected 23.7059
      -2025-01-20 23:06:11.804736: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:06:11.821113 3990735 buffer_comparator.cc:156] Difference at 32: 0, expected 0.569339
      -E0120 23:06:11.821139 3990735 buffer_comparator.cc:156] Difference at 33: 0, expected 0.542682
      -E0120 23:06:11.821142 3990735 buffer_comparator.cc:156] Difference at 34: 0, expected 0.348735
      -E0120 23:06:11.821145 3990735 buffer_comparator.cc:156] Difference at 35: 0, expected 0.223541
      -E0120 23:06:11.821148 3990735 buffer_comparator.cc:156] Difference at 36: 0, expected 0.474634
      -E0120 23:06:11.821151 3990735 buffer_comparator.cc:156] Difference at 37: 0, expected 0.288978
      -E0120 23:06:11.821153 3990735 buffer_comparator.cc:156] Difference at 38: 0, expected 0.205903
      -E0120 23:06:11.821156 3990735 buffer_comparator.cc:156] Difference at 39: 0, expected 0.446466
      -E0120 23:06:11.821159 3990735 buffer_comparator.cc:156] Difference at 40: 0, expected 0.524228
      -E0120 23:06:11.821161 3990735 buffer_comparator.cc:156] Difference at 41: 0, expected 0.432399
      -2025-01-20 23:06:11.821168: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -Iter: [     1/ 10000]	Training Loss: 3.863690	Throughput: 0.738206 samples/s
      -Iter: [  1000/ 10000]	Training Loss: 0.839611	Throughput: 718.088204 samples/s
      -Iter: [  2000/ 10000]	Training Loss: 0.480875	Throughput: 1414.794825 samples/s
      -Iter: [  3000/ 10000]	Training Loss: 0.532076	Throughput: 2090.809874 samples/s
      -Iter: [  4000/ 10000]	Training Loss: 0.589929	Throughput: 2727.900330 samples/s
      -Iter: [  5000/ 10000]	Training Loss: 0.469730	Throughput: 3362.438248 samples/s
      -Iter: [  6000/ 10000]	Training Loss: 0.530390	Throughput: 3979.620796 samples/s
      -Iter: [  7000/ 10000]	Training Loss: 0.512984	Throughput: 4580.390459 samples/s
      -Iter: [  8000/ 10000]	Training Loss: 0.494450	Throughput: 5125.308281 samples/s
      -Iter: [  9000/ 10000]	Training Loss: 0.435445	Throughput: 5692.110374 samples/s
      -Iter: [ 10000/ 10000]	Training Loss: 0.536155	Throughput: 6244.824544 samples/s

      +2025-01-24 04:51:54.316887: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 13249940196448407136
      +E0124 04:51:54.645504 1609376 buffer_comparator.cc:156] Difference at 17: 22.1601, expected 25.2177
      +E0124 04:51:54.645561 1609376 buffer_comparator.cc:156] Difference at 18: 17.8166, expected 20.6638
      +E0124 04:51:54.645564 1609376 buffer_comparator.cc:156] Difference at 24: 20.8071, expected 23.8795
      +E0124 04:51:54.645567 1609376 buffer_comparator.cc:156] Difference at 25: 19.1691, expected 23.0753
      +E0124 04:51:54.645571 1609376 buffer_comparator.cc:156] Difference at 27: 16.8353, expected 20.2124
      +E0124 04:51:54.645574 1609376 buffer_comparator.cc:156] Difference at 31: 20.6599, expected 23.7059
      +2025-01-24 04:51:54.645584: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:51:54.658324 1609376 buffer_comparator.cc:156] Difference at 32: 0, expected 0.569339
      +E0124 04:51:54.658368 1609376 buffer_comparator.cc:156] Difference at 33: 0, expected 0.542682
      +E0124 04:51:54.658371 1609376 buffer_comparator.cc:156] Difference at 34: 0, expected 0.348735
      +E0124 04:51:54.658374 1609376 buffer_comparator.cc:156] Difference at 35: 0, expected 0.223541
      +E0124 04:51:54.658377 1609376 buffer_comparator.cc:156] Difference at 36: 0, expected 0.474634
      +E0124 04:51:54.658380 1609376 buffer_comparator.cc:156] Difference at 37: 0, expected 0.288978
      +E0124 04:51:54.658382 1609376 buffer_comparator.cc:156] Difference at 38: 0, expected 0.205903
      +E0124 04:51:54.658385 1609376 buffer_comparator.cc:156] Difference at 39: 0, expected 0.446466
      +E0124 04:51:54.658388 1609376 buffer_comparator.cc:156] Difference at 40: 0, expected 0.524228
      +E0124 04:51:54.658390 1609376 buffer_comparator.cc:156] Difference at 41: 0, expected 0.432399
      +2025-01-24 04:51:54.658397: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +Iter: [     1/ 10000]	Training Loss: 3.863690	Throughput: 0.708851 samples/s
      +Iter: [  1000/ 10000]	Training Loss: 0.839611	Throughput: 692.792474 samples/s
      +Iter: [  2000/ 10000]	Training Loss: 0.480875	Throughput: 1369.441548 samples/s
      +Iter: [  3000/ 10000]	Training Loss: 0.532076	Throughput: 2017.190574 samples/s
      +Iter: [  4000/ 10000]	Training Loss: 0.589929	Throughput: 2655.728896 samples/s
      +Iter: [  5000/ 10000]	Training Loss: 0.469730	Throughput: 3285.542704 samples/s
      +Iter: [  6000/ 10000]	Training Loss: 0.530390	Throughput: 3869.442394 samples/s
      +Iter: [  7000/ 10000]	Training Loss: 0.512984	Throughput: 4467.345266 samples/s
      +Iter: [  8000/ 10000]	Training Loss: 0.494450	Throughput: 5052.929012 samples/s
      +Iter: [  9000/ 10000]	Training Loss: 0.435445	Throughput: 5621.556140 samples/s
      +Iter: [ 10000/ 10000]	Training Loss: 0.536155	Throughput: 6129.436969 samples/s

      Visualizing the Results

      julia
      z_stages = Matrix{Float32}[]
       for i in 1:(trained_model.model.n_transforms)
           z = @jit sample(Random.default_rng(), Float32, trained_model, 10_000, i)
           push!(z_stages, Array(z))
      @@ -258,8 +258,8 @@ import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.I-x9Gl6h.js";const g
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -276,4 +276,4 @@ import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.I-x9Gl6h.js";const g
         JULIA_NUM_THREADS = 48
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
      -  JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,34)]))}const E=s(e,[["render",t]]);export{g as __pageData,E as default}; + JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      `,34)]))}const E=s(e,[["render",t]]);export{r as __pageData,E as default}; diff --git a/dev/assets/tutorials_intermediate_7_RealNVP.md.Be4ikYnY.lean.js b/dev/assets/tutorials_intermediate_7_RealNVP.md.Be4ikYnY.lean.js new file mode 100644 index 0000000000..be5e75af8d --- /dev/null +++ b/dev/assets/tutorials_intermediate_7_RealNVP.md.Be4ikYnY.lean.js @@ -0,0 +1 @@ +import{_ as s,c as i,a2 as a,o as n}from"./chunks/framework.BetCMmtc.js";const r=JSON.parse('{"title":"Normalizing Flows for Density Estimation","description":"","frontmatter":{},"headers":[],"relativePath":"tutorials/intermediate/7_RealNVP.md","filePath":"tutorials/intermediate/7_RealNVP.md","lastUpdated":null}'),e={name:"tutorials/intermediate/7_RealNVP.md"};function t(d,A,h,f,p,v){return n(),i("div",null,A[0]||(A[0]=[a("",34)]))}const E=s(e,[["render",t]]);export{r as __pageData,E as default}; diff --git a/dev/hashmap.json b/dev/hashmap.json index c4ddb521f0..31825f8894 100644 --- a/dev/hashmap.json +++ b/dev/hashmap.json @@ -1 +1 @@ 
-{"api_accelerator_support_mldatadevices.md":"B7-0S35X","api_building_blocks_luxcore.md":"BGgzNXbc","api_building_blocks_weightinitializers.md":"CUbJYQgk","api_lux_autodiff.md":"Wvm0sUp0","api_lux_contrib.md":"BwHwNoK7","api_lux_distributed_utils.md":"BXHaY16P","api_lux_interop.md":"BHPjmyrL","api_lux_layers.md":"DiMdFKta","api_lux_utilities.md":"DvX6-akN","api_nn_primitives_activationfunctions.md":"DNcaJ2dy","api_nn_primitives_luxlib.md":"CPEqKhMV","api_nn_primitives_nnlib.md":"DHUiCckb","api_testing_functionality_luxtestutils.md":"DlO2GhUE","index.md":"CnfhmVsi","introduction_citation.md":"Cyg9oVHB","introduction_index.md":"hL9b0OC7","introduction_overview.md":"BvBZ09ef","introduction_resources.md":"21ZkndKX","introduction_updating_to_v1.md":"Ck5XPGhV","manual_autodiff.md":"D-CoZgun","manual_compiling_lux_models.md":"BQYERvZe","manual_debugging.md":"BbPueqSs","manual_dispatch_custom_input.md":"CcWKCtg7","manual_distributed_utils.md":"CYueR7UH","manual_exporting_to_jax.md":"BGqMRVNM","manual_freezing_model_parameters.md":"Cni-iuqC","manual_gpu_management.md":"DpZq9wPT","manual_interface.md":"ChAHHkfm","manual_migrate_from_flux.md":"nhekMfcw","manual_nested_autodiff.md":"0e9MwF0A","manual_nn_inside_gpu_kernels.md":"DI86XBbU","manual_performance_pitfalls.md":"qHBdfLha","manual_preferences.md":"rR-7V6zU","manual_weight_initializers.md":"D3ynEEqW","tutorials_advanced_1_gravitationalwaveform.md":"BuiqplW5","tutorials_beginner_1_basics.md":"BYin9je8","tutorials_beginner_2_polynomialfitting.md":"BnLeWgHX","tutorials_beginner_3_simplernn.md":"DmnqxtSU","tutorials_beginner_4_simplechains.md":"snvLiqaS","tutorials_beginner_5_optimizationintegration.md":"audP0w97","tutorials_index.md":"DjU0cWXL","tutorials_intermediate_1_neuralode.md":"Cq8Z3u1S","tutorials_intermediate_2_bayesiannn.md":"CrjRlGMZ","tutorials_intermediate_3_hypernet.md":"BEqA10sc","tutorials_intermediate_4_pinn2dpde.md":"9vdiyw9_","tutorials_intermediate_5_convolutionalvae.md":"23tMd-dw","tutorials_intermediate
_6_gcn_cora.md":"DyovN6WF","tutorials_intermediate_7_realnvp.md":"BYJJE9cz"} +{"api_accelerator_support_mldatadevices.md":"BHPxWmuW","api_building_blocks_luxcore.md":"CsmzM99Z","api_building_blocks_weightinitializers.md":"ZsEkwReB","api_lux_autodiff.md":"CvcnLNnS","api_lux_contrib.md":"DbSqHdk6","api_lux_distributed_utils.md":"106ZB8SY","api_lux_interop.md":"D74VGhe2","api_lux_layers.md":"WaBurqvX","api_lux_utilities.md":"CRauyyus","api_nn_primitives_activationfunctions.md":"Cjp_vPbj","api_nn_primitives_luxlib.md":"CY-wCYUU","api_nn_primitives_nnlib.md":"-JRexgX5","api_testing_functionality_luxtestutils.md":"Dut_312M","index.md":"d5k3UIjW","introduction_citation.md":"oktAg9dE","introduction_index.md":"A6L_dCHU","introduction_overview.md":"B53X8gve","introduction_resources.md":"CaXbRHi2","introduction_updating_to_v1.md":"DYJFRVTZ","manual_autodiff.md":"E3SgD0p5","manual_compiling_lux_models.md":"73Iw5CoK","manual_debugging.md":"BGHrv_QR","manual_dispatch_custom_input.md":"B4ERitcs","manual_distributed_utils.md":"XyQ9ZBk2","manual_exporting_to_jax.md":"BhFDko_-","manual_freezing_model_parameters.md":"C5hCUAD-","manual_gpu_management.md":"DCaTfJhy","manual_interface.md":"BHQCSZeo","manual_migrate_from_flux.md":"CKssu-OV","manual_nested_autodiff.md":"yMpJh0UD","manual_performance_pitfalls.md":"CTvklr_I","manual_preferences.md":"Ddm3pW4n","manual_weight_initializers.md":"BZq8xM8l","tutorials_advanced_1_gravitationalwaveform.md":"iiGvMAnB","tutorials_beginner_1_basics.md":"BIkM597J","tutorials_beginner_2_polynomialfitting.md":"BCdraYWF","tutorials_beginner_3_simplernn.md":"mE6TZyjC","tutorials_beginner_4_simplechains.md":"B9RS1fYd","tutorials_beginner_5_optimizationintegration.md":"DhkgwbPj","tutorials_index.md":"B-_qyAzm","tutorials_intermediate_1_neuralode.md":"BenIrmiX","tutorials_intermediate_2_bayesiannn.md":"DSd6_ExB","tutorials_intermediate_3_hypernet.md":"CRyPOfhj","tutorials_intermediate_4_pinn2dpde.md":"BGeYvkn-","tutorials_intermediate_5_convolutionalvae.md":"B
ARx5_au","tutorials_intermediate_6_gcn_cora.md":"CpGmSjYV","tutorials_intermediate_7_realnvp.md":"Be4ikYnY"} diff --git a/dev/index.html b/dev/index.html index 532992a33a..313954aec9 100644 --- a/dev/index.html +++ b/dev/index.html @@ -5,15 +5,15 @@ Lux.jl Docs - + - + - - - + + + @@ -30,13 +30,13 @@
      Skip to content

      LuxDL DocsElegant & Performant Scientific Machine Learning in JuliaLang

      A Pure Julia Deep Learning Framework designed for Scientific Machine Learning

      Lux.jl

      How to Install Lux.jl?

It's easy to install Lux.jl. Since Lux.jl is registered in the Julia General registry, you can simply run the following command in the Julia REPL:

      julia
      julia> using Pkg
       julia> Pkg.add("Lux")

If you want to use the latest unreleased version of Lux.jl, you can run the following command: (in most cases the released version will be the same as the version on GitHub)

      julia
      julia> using Pkg
      -julia> Pkg.add(url="https://github.com/LuxDL/Lux.jl")

      +julia> Pkg.add(url="https://github.com/LuxDL/Lux.jl")

      Want GPU Support?

      Install the following package(s):

      julia
      using Pkg
       Pkg.add("LuxCUDA")
       # or
       Pkg.add(["CUDA", "cuDNN"])
      julia
      using Pkg
       Pkg.add("AMDGPU")
      julia
      using Pkg
       Pkg.add("Metal")
      julia
      using Pkg
      -Pkg.add("oneAPI")

      +Pkg.add("oneAPI")

      Run the following to access a device:

      julia
      using Lux, LuxCUDA
       
       const dev = gpu_device()
      julia
      using Lux, AMDGPU
       
      @@ -45,7 +45,7 @@
       const dev = gpu_device()
      julia
      using Lux, oneAPI
       
       const dev = gpu_device()

      Want Reactant (XLA) Support?

      Install the following package:

      julia
      using Pkg;
      -Pkg.add("Reactant")

      +Pkg.add("Reactant")

      Run the following to access a device (Reactant automatically selects the best backend by default):

      julia
      using Reactant, Lux
       Reactant.set_default_backend("cpu")
       
       const dev = reactant_device()
      julia
      using Reactant, Lux
      @@ -55,7 +55,7 @@
       Reactant.set_default_backend("tpu")
       
       const dev = reactant_device()
      - + \ No newline at end of file diff --git a/dev/introduction/citation.html b/dev/introduction/citation.html index 2a8fd49970..7521cb8a69 100644 --- a/dev/introduction/citation.html +++ b/dev/introduction/citation.html @@ -5,15 +5,15 @@ Citation | Lux.jl Docs - + - + - - - + + + @@ -44,7 +44,7 @@ year = {2023}, school = {Massachusetts Institute of Technology} }
      - + \ No newline at end of file diff --git a/dev/introduction/index.html b/dev/introduction/index.html index 015de4e982..db80626ca4 100644 --- a/dev/introduction/index.html +++ b/dev/introduction/index.html @@ -5,15 +5,15 @@ Getting Started | Lux.jl Docs - + - + - - - + + + @@ -134,7 +134,7 @@ return model, ps, st end -train_model!(model, ps, st, x_data, y_data)
      2025-01-20 23:36:47.182300: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 17363501086534973873
      +train_model!(model, ps, st, x_data, y_data)
      2025-01-24 05:15:12.020926: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 310851790191412072
       Iteration: 0001 	 Loss: 2.08073235
       Iteration: 0101 	 Loss: 0.142574623
       Iteration: 0201 	 Loss: 0.0051055951
      @@ -146,7 +146,7 @@
       Iteration: 0801 	 Loss: 0.00201115524
       Iteration: 0901 	 Loss: 9.70276451e-05
       Iteration: 1000 	 Loss: 7.81012277e-05

      Training with Optimization.jl

      If you are coming from the SciML ecosystem and want to use Optimization.jl, please refer to the Optimization.jl Tutorial.

      Additional Packages

      LuxDL hosts various packages that provide additional functionality for Lux.jl. All packages mentioned in this documentation are available via the Julia General Registry.

      You can install all those packages via import Pkg; Pkg.add(<package name>).

      XLA (CPU/GPU/TPU) Support

      Lux.jl supports XLA compilation for CPU, GPU, and TPU using Reactant.jl.

      GPU Support

      GPU Support for Lux.jl requires loading additional packages:

      - + \ No newline at end of file diff --git a/dev/introduction/overview.html b/dev/introduction/overview.html index 8ce63a84a2..365a8127da 100644 --- a/dev/introduction/overview.html +++ b/dev/introduction/overview.html @@ -5,15 +5,15 @@ Why we wrote Lux? | Lux.jl Docs - + - + - - - + + + @@ -29,7 +29,7 @@
      Skip to content

      Why we wrote Lux?

Julia already has quite a few well-established Neural Network Frameworks – Flux & KNet. However, certain design elements – Coupled Model and Parameters & Internal Mutations – associated with these frameworks make them less compiler and user friendly. Making changes to address these problems in the respective frameworks would be too disruptive for users. This is where Lux comes in: a neural network framework built completely using pure functions to make it both compiler and autodiff friendly.

      Design Principles

      • Layers must be immutable – cannot store any parameter/state but rather store the information to construct them

      • Layers are pure functions

      • Layers return a Tuple containing the result and the updated state

• Given the same inputs, the outputs must be the same – yes, this must hold true even for stochastic functions. Randomness must be controlled using rngs passed in the state.

      • Easily extensible

      • Extensive Testing – All layers and features are tested across all supported AD backends across all supported hardware backends.
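
These principles are easiest to see in a minimal custom layer. The sketch below (a hypothetical `Scale` layer, not from this page, written against the public LuxCore interface) stores only construction information in the struct, creates parameters and states externally, and returns the output together with the updated state:

```julia
using LuxCore, Random

struct Scale <: LuxCore.AbstractLuxLayer
    dims::Int  # construction info only -- the parameters live elsewhere
end

# Parameters and states are created outside the (immutable) layer struct
LuxCore.initialparameters(rng::AbstractRNG, l::Scale) = (; w=rand(rng, Float32, l.dims))
LuxCore.initialstates(::AbstractRNG, ::Scale) = (; calls=0)

# The layer is a pure function: (input, params, state) -> (output, new_state)
(l::Scale)(x, ps, st) = ps.w .* x, (; calls=st.calls + 1)

rng = Random.default_rng()
layer = Scale(3)
ps, st = LuxCore.setup(rng, layer)
y, st = layer(ones(Float32, 3), ps, st)
```

Note that calling the layer never mutates `ps` or `st`; the caller threads the returned state into the next call.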

      Why use Lux over Flux?

• Neural Networks for SciML: For SciML Applications (Neural ODEs, Deep Equilibrium Models) solvers typically expect a monolithic parameter vector. Flux enables this via its destructure mechanism, but destructure comes with various edge cases and limitations. Lux forces users to make an explicit distinction between state variables and parameter variables to avoid these issues. Also, it comes batteries-included for distributed training.

• Sensible display of Custom Layers – Ever wanted to see Pytorch-like network printouts or wondered how to extend the pretty printing of Flux's layers? Lux handles all of that by default.

      • Truly immutable models - No unexpected internal mutations since all layers are implemented as pure functions. All layers are also deterministic given the parameters and state: if a layer is supposed to be stochastic (say Dropout), the state must contain a seed which is then updated after the function call.

      • Easy Parameter Manipulation – By separating parameter data and layer structures, Lux makes implementing WeightNorm, SpectralNorm, etc. downright trivial. Without this separation, it is much harder to pass such parameters around without mutations which AD systems don't like.

• Wider AD Support – Lux has extensive support for most AD systems in Julia, while Flux is mostly tied to Zygote (with some initial support for Enzyme).

      • Small Neural Networks on CPU – Lux is developed for training large neural networks. For smaller architectures, we recommend using SimpleChains.jl or even better use it in conjunction with Lux via ToSimpleChainsAdaptor.

      • Reliability – We have learned from the mistakes of the past with Flux and everything in our core framework is extensively tested, along with downstream CI to ensure that everything works as expected.

      Revising Previous Recommendation about Large Models

Previously we recommended not using Lux for very large models. But we have been making a lot of headway with Reactant.jl and it would be worthwhile to test larger models with Lux. See compiling Lux models for more information.

      - + \ No newline at end of file diff --git a/dev/introduction/resources.html b/dev/introduction/resources.html index 23e50254cd..45bd290997 100644 --- a/dev/introduction/resources.html +++ b/dev/introduction/resources.html @@ -5,15 +5,15 @@ Resources to Get Started | Lux.jl Docs - + - + - - - + + + @@ -29,7 +29,7 @@
      Skip to content

      Resources to Get Started

      • Go through the Quickstart Example.

      • Read the introductory tutorials on Julia and Lux.

      • Go through the examples sorted based on their complexity in the documentation.

      Have More Questions?

For usage-related questions, please use GitHub Discussions, which allows questions and answers to be indexed. To report bugs, use GitHub Issues or, even better, send in a Pull Request.

      - + \ No newline at end of file diff --git a/dev/introduction/updating_to_v1.html b/dev/introduction/updating_to_v1.html index 4c54b66162..171c0cbd48 100644 --- a/dev/introduction/updating_to_v1.html +++ b/dev/introduction/updating_to_v1.html @@ -5,15 +5,15 @@ Updating to Lux v1 | Lux.jl Docs - + - + - - - + + + @@ -29,7 +29,7 @@
      Skip to content

      Updating to Lux v1

      Lux v1 is a Major Release, mostly to signify the stability of the API. In this page, we list out a concrete set of changes that need to be made to your code to update to Lux v1. We also list out some new exciting features that were added as part of this release.

      LuxLib.jl

      Breaking Changes

      • Old deprecated API with keyword arguments has been removed. See the new docs in LuxLib API for more details.

      • Default for layernorm dims has been changed to exclude the batch dimension.

      New Major Features

      • Dense layers now support CUDA backend for Enzyme (starting v1.1). Wider support for other operations with Enzyme + CUDA is being actively worked on.

      LuxCore.jl

      Breaking Changes

      • AbstractExplicitLayer has been renamed to AbstractLuxLayer.

      • AbstractExplicitContainerLayer behaviour

        • This has been renamed to AbstractLuxContainerLayer.

  • Previously, AbstractExplicitContainerLayer{(:a,)} (i.e. singleton containers) would produce default initial parameters and states without wrapping them in a NamedTuple{(:a,)}. This was inconsistent with non-singleton containers, and was a source of confusion. With v1 we return (; a = <parameters>) and (; a = <states>) by default. See AbstractLuxWrapperLayer for a replacement of this functionality.

      • inputsize has been removed since it was ambiguous and not used anywhere.

      • Changes to outputsize:

        • Single argument version has been removed. See LuxCore.jl Pull Request 43 for more details on the rationale behind this change.

  • Fallback implementation has been moved to Lux.jl. (i.e. users using Lux shouldn't see a difference, but if Lux.jl isn't loaded, this function will throw an error.)

          • Internally this uses a NilArray that is able to compute sizes without actually running the computation.
• Functors and Setfield have been made into optional dependencies. Certain LuxCore functionality that relies on these packages will throw an error if they are not loaded.

      New Major Features

      • Introduction of AbstractLuxWrapperLayer. This behaves exactly like the old singleton container. For example, the old AbstractExplicitContainerLayer{(:a,)} is equivalent to AbstractLuxWrapperLayer{:a}.
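
As an illustration of the equivalence (the `MyWrapper` name is hypothetical, not from this page), a singleton container from v0.5 migrates to a wrapper layer like so:

```julia
using LuxCore

# Lux v0.5 (removed):
# struct MyWrapper{L} <: LuxCore.AbstractExplicitContainerLayer{(:layer,)}
#     layer::L
# end

# Lux v1 equivalent -- parameters/states pass through to the inner layer
# without being wrapped in a NamedTuple:
struct MyWrapper{L} <: LuxCore.AbstractLuxWrapperLayer{:layer}
    layer::L
end
```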

      WeightInitializers.jl

      This was a major release to signify the stability of the API. There were no breaking changes. We do support a wider range of RNG types, see Supported RNG Types for more details.

      MLDataDevices.jl

      This is the most aggressive change that was made. We renamed the LuxDeviceUtils.jl package to MLDataDevices.jl, to allow for non-Lux packages to use this shared device management abstraction.

      Deprecation of LuxDeviceUtils.jl

      This also marks the deprecation of the LuxDeviceUtils.jl package. We won't be making any updates to that package, including fixing any bugs. All users should switch to MLDataDevices.jl instead.

      Breaking Changes

      • Lux(___)Device objects have been renamed to (___)Device. For example, LuxCUDADevice has been renamed to CUDADevice.

      • Lux(___)Adaptor objects have been removed. The corresponding Device objects should be used directly instead.

      New Major Features

      • DeviceIterator provides a generalization of CUDA.CuIterator and works for all backends and more data types (using Functors.jl). MLUtils.DataLoader |> gdev now returns a DeviceIterator instead of being a no-op.
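
A short sketch of the new behavior, assuming MLUtils.jl is installed (on a machine with no functional GPU, `gpu_device()` returns a `CPUDevice` and the loop still works):

```julia
using Lux, MLUtils, Random

gdev = gpu_device()  # CPUDevice() when no functional GPU is present

x = rand(Float32, 4, 256)
y = rand(Float32, 1, 256)

loader = DataLoader((x, y); batchsize=32) |> gdev  # now a DeviceIterator

for (xb, yb) in loader
    # each batch is moved to the device one at a time, not all upfront
end
```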

      Lux.jl

      Breaking Changes (Removed Functionality)

• Direct reexport of NNlib has been removed. We reexport selected functionality from NNlib. Directly load NNlib if you need to use the other functions.

      • Flattening of Chain layers has been removed, and the corresponding disable_optimizations kwarg has been removed.

      • Some layers overloaded Base.keys, these have been removed. These were mostly un-documented and weren't supposed to be used outside of the Lux.jl package.

      • Training.TrainState construction with rng has been removed.

      • Older versions of Preferences have been removed.

      • disable_stacktrace_truncation! has been removed. From Julia 1.9 onwards, stacktrace truncation is enabled by default.

• Certain Experimental features were present outside the Lux.Experimental module. These have been removed; use them via Lux.Experimental instead. Run Julia with depwarn set to error and Lux v0.5 to see the deprecations.

• Lux.Experimental.@layer_map is no longer needed and has been removed. The name of the variable prevents writing generic functions and is no longer pre-pended to the KeyPath. See the docstring of Lux.Experimental.layer_map for more details.

• allow_fast_activation kwarg has been removed completely. Pass an anonymous function as the activation to prevent internal modifications to the activation function.

      Breaking Changes (Moved Functionality)

• Lux.Experimental.Training has been moved to Lux.Training. We guarantee SemVer on this new module.

      • Lux.cpu and Lux.gpu have been removed. Use cpu_device and gpu_device instead.

      • Experimental.@compact can be directly used via @compact now.

      • Experimental.StatefulLuxLayer has been moved to Lux.StatefulLuxLayer.

      • st_fixed_path kwarg has been removed from Lux.StatefulLuxLayer, instead use it as StatefulLuxLayer{st_fixed_path}(...).

      • Strings as inputs to Lux.Experimental.layer_map and Lux.Experimental.@debug_mode are removed, use Functors.KeyPath instead.

      • CrossCor has been removed. Use Conv(args...; kwargs..., cross_correlation=true) instead.
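
A minimal sketch of the device-transfer migration (the model and parameter names are illustrative; on a machine with no functional GPU, `gpu_device()` falls back to a `CPUDevice` and the code still runs):

```julia
using Lux, Random

model = Dense(2 => 3)
ps, st = Lux.setup(Random.default_rng(), model)

dev  = gpu_device()  # replaces the removed Lux.gpu
cdev = cpu_device()  # replaces the removed Lux.cpu

ps_dev = ps |> dev       # was: ps |> Lux.gpu
ps_cpu = ps_dev |> cdev  # was: ps_dev |> Lux.cpu
```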

      Breaking Changes (Changes in Defaults)

• Conv and ConvTranspose use an initialization based on the activation function, taken from Pytorch. Pytorch assumes the activation function is leakyrelu to compute the gain; we instead compute the gain based on the activation function passed in to the layer.

      • Upsample now has an align_corners keyword argument, which defaults to false. Previously this was always true.

      • Dense and Bilinear have updated default initializations to align with the defaults from Pytorch. See the documentation for more details.

      • InstanceNorm now defaults to affine=false instead of affine=true.

      • Embedding now defaults to init_weight=rand32 instead of init_weight=randn32.

      • Recurrent Cells - RNNCell, LSTMCell, and GRUCell now have different default initializations. See the documentation for more details.

      New Features

      • InstanceNorm now supports tracking statistics.

      • RNNCell and LSTMCell add bias_ih and bias_hh to the parameters to align with Pytorch. Both are controlled using init_bias and use_bias.

      • ConvTranspose allows flipkernel=true via cross_correlation=true. This makes it efficient for MIOpen.

      • ConvTranspose now has an outpad keyword argument, which is used to increase the size of the output in the desired dimensions.

      • Pooling Layers based on lpnorm have been added – LPPool, GlobalLPPool, and AdaptiveLPPool.

      - + \ No newline at end of file diff --git a/dev/manual/autodiff.html b/dev/manual/autodiff.html index 33c7fa0012..4409678033 100644 --- a/dev/manual/autodiff.html +++ b/dev/manual/autodiff.html @@ -5,15 +5,15 @@ Automatic Differentiation | Lux.jl Docs - + - + - - - + + + @@ -28,8 +28,8 @@ -
      Skip to content

      Automatic Differentiation

      Lux is not an AD package, but it composes well with most of the AD packages available in the Julia ecosystem. This document lists the current level of support for various AD packages in Lux. Additionally, we provide some convenience functions for working with AD.

      Overview

      AD PackageModeCPUGPUTPUNested 2nd Order ADSupport Class
      Reactant.jl[1] + Enzyme.jlReverse✔️✔️✔️✔️Tier I
      ChainRules.jl[2]Reverse✔️✔️✔️Tier I
      Enzyme.jlReverse✔️[3][3:1]Tier I[4]
      Zygote.jlReverse✔️✔️✔️Tier I
      ForwardDiff.jlForward✔️✔️✔️Tier I
      ReverseDiff.jlReverse✔️Tier II
      Tracker.jlReverse✔️✔️Tier II
      Mooncake.jlReverse[3:2]Tier III
      Diffractor.jlForward[3:3][3:4][3:5]Tier III

      Recommendations

• For CPU Use Cases:

        1. Use Reactant.jl + Enzyme.jl for the best performance as well as mutation-support. When available, this is the most reliable and fastest option.

        2. Use Zygote.jl for the best performance without Reactant.jl. This is the most reliable and fastest option for CPU for the time-being. (We are working on faster Enzyme support for CPU)

        3. Use Enzyme.jl, if there are mutations in the code and/or Zygote.jl fails.

        4. If Enzyme.jl fails for some reason, (open an issue and) try ReverseDiff.jl (possibly with compiled mode).

• For GPU Use Cases:

  1. Use Reactant.jl + Enzyme.jl for the best performance. This is the most reliable and fastest option, but presently only supports NVIDIA GPUs. AMD GPUs are currently not supported.

        2. Use Zygote.jl for the best performance on non-NVIDIA GPUs. This is the most reliable and fastest non-Reactant.jl option for GPU for the time-being. We are working on supporting Enzyme.jl without Reactant.jl for GPU as well.

• For TPU Use Cases:

        1. Use Reactant.jl. This is the only supported (and fastest) option.
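
For reference, taking a gradient of a Lux model with Zygote.jl (the Tier I CPU recommendation above) follows this pattern; the model architecture here is an arbitrary example:

```julia
using Lux, Random, Zygote

model = Chain(Dense(2 => 8, tanh), Dense(8 => 1))
ps, st = Lux.setup(Random.default_rng(), model)
x = randn(Float32, 2, 16)

# Lux models return (output, new_state); differentiate w.r.t. the parameters
loss(ps) = sum(abs2, first(model(x, ps, st)))
grads = only(Zygote.gradient(loss, ps))
```

The returned `grads` mirrors the structure of `ps`, so it can be fed directly to an optimizer.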

      Support Class

      1. Tier I: These packages are fully supported and have been tested extensively. Often have special rules to enhance performance. Issues for these backends take the highest priority.

      2. Tier II: These packages are supported and extensively tested but often don't have the best performance. Issues against these backends are less critical, but we fix them when possible. (Some specific edge cases, especially with AMDGPU, are known to fail here)

      3. Tier III: We don't know if these packages currently work with Lux. We'd love to add tests for these backends, but currently these are not our priority.

      Footnotes


      1. Note that Reactant.jl is not really an AD package, but a tool for compiling functions, including the use of EnzymeMLIR for AD via Enzyme.jl. We have first-class support for the usage of Reactant.jl for inference and training when using Enzyme.jl for differentiation. ↩︎

      2. Note that ChainRules.jl is not really an AD package, but we have first-class support for packages that use rrules. ↩︎

      3. This feature is supported downstream, but we don't extensively test it to ensure that it works with Lux. ↩︎ ↩︎ ↩︎ ↩︎ ↩︎ ↩︎

      4. Currently Enzyme outperforms other AD packages in terms of CPU performance. However, there are some edge cases where it might not work with Lux when not using Reactant. We are working on improving the compatibility. Please report any issues you encounter and try Reactant if something fails. ↩︎

      - +
      Skip to content

      Automatic Differentiation

      Lux is not an AD package, but it composes well with most of the AD packages available in the Julia ecosystem. This document lists the current level of support for various AD packages in Lux. Additionally, we provide some convenience functions for working with AD.

      Overview

AD Package                 | Mode    | CPU     | GPU     | TPU | Nested 2nd Order AD | Support Class
Reactant.jl[1] + Enzyme.jl | Reverse | ✔️      | ✔️      | ✔️  | ✔️                  | Tier I
ChainRules.jl[2]           | Reverse | ✔️      | ✔️      | ❌  | ✔️                  | Tier I
Enzyme.jl                  | Reverse | ✔️      | ❓[3]   | ❌  | ❓[3:1]             | Tier I[4]
Zygote.jl                  | Reverse | ✔️      | ✔️      | ❌  | ✔️                  | Tier I
ForwardDiff.jl             | Forward | ✔️      | ✔️      | ❌  | ✔️                  | Tier I
ReverseDiff.jl             | Reverse | ✔️      | ❌      | ❌  | ❌                  | Tier II
Tracker.jl                 | Reverse | ✔️      | ✔️      | ❌  | ❌                  | Tier II
Mooncake.jl                | Reverse | ❓[3:2] | ❌      | ❌  | ❌                  | Tier III
Diffractor.jl              | Forward | ❓[3:3] | ❓[3:4] | ❌  | ❓[3:5]             | Tier III

      Recommendations

• For CPU Use Cases:

  1. Use Reactant.jl + Enzyme.jl for the best performance as well as mutation support. When available, this is the most reliable and fastest option.

  2. Use Zygote.jl for the best performance without Reactant.jl. This is the most reliable and fastest option for CPU for the time being. (We are working on faster Enzyme support for CPU.)

  3. Use Enzyme.jl if there are mutations in the code and/or Zygote.jl fails.

  4. If Enzyme.jl fails for some reason, (open an issue and) try ReverseDiff.jl (possibly with compiled mode).

• For GPU Use Cases:

  1. Use Reactant.jl + Enzyme.jl for the best performance. This is the most reliable and fastest option, but it presently supports only NVIDIA GPUs; AMD GPUs are not yet supported.

  2. Use Zygote.jl for the best performance on non-NVIDIA GPUs. This is the most reliable and fastest non-Reactant.jl option for GPU for the time being. We are working on supporting Enzyme.jl without Reactant.jl on GPU as well.

• For TPU Use Cases:

  1. Use Reactant.jl. This is the only supported (and fastest) option.
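Concretely, these recommendations map to the ADTypes backend you hand to the Training API. The sketch below is illustrative, not authoritative: the model, data, and hyperparameters are made up, and it assumes the Training.single_train_step! signature.

```julia
using Lux, Optimisers, Random

# Toy problem purely for illustration.
model = Chain(Dense(2 => 16, gelu), Dense(16 => 1))
ps, st = Lux.setup(Xoshiro(0), model)
x, y = rand(Float32, 2, 32), rand(Float32, 1, 32)

tstate = Training.TrainState(model, ps, st, Adam(0.001f0))

# Pick the AD backend per the recommendations above:
vjp = AutoZygote()    # reliable CPU/GPU default without Reactant
# vjp = AutoEnzyme()  # if the code mutates arrays or Zygote fails

_, loss, _, tstate = Training.single_train_step!(vjp, MSELoss(), (x, y), tstate)
```

With Reactant.jl loaded and the data moved to a reactant_device(), the same call with AutoEnzyme() follows the Tier I path.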

      Support Class

1. Tier I: These packages are fully supported and have been tested extensively. They often have special rules to enhance performance. Issues for these backends take the highest priority.

      2. Tier II: These packages are supported and extensively tested but often don't have the best performance. Issues against these backends are less critical, but we fix them when possible. (Some specific edge cases, especially with AMDGPU, are known to fail here)

      3. Tier III: We don't know if these packages currently work with Lux. We'd love to add tests for these backends, but currently these are not our priority.

      Footnotes


1. Note that Reactant.jl is not really an AD package, but a tool for compiling functions, including the use of EnzymeMLIR for AD via Enzyme.jl. We have first-class support for the usage of Reactant.jl for inference and training when using Enzyme.jl for differentiation.

2. Note that ChainRules.jl is not really an AD package, but we have first-class support for packages that use rrules.

3. This feature is supported downstream, but we don't extensively test it to ensure that it works with Lux.

4. Currently Enzyme outperforms other AD packages in terms of CPU performance. However, there are some edge cases where it might not work with Lux when not using Reactant. We are working on improving compatibility. Please report any issues you encounter, and try Reactant if something fails.


      Compiling Lux Models using Reactant.jl

      Quoting the Reactant.jl Readme:

Reactant takes a Julia function and compiles it into MLIR, runs fancy optimizations on top of it (including using EnzymeMLIR for automatic differentiation), and creates relevant executables for CPU/GPU/TPU via XLA. It presently operates as a tracing system: compiled functions will assume the same control flow pattern as was originally taken by the objects used at compile time, and control flow (e.g. if, for) as well as any type instabilities will be removed. The benefit of this approach is that all such code immediately becomes available for advanced optimization with little developer effort.

      julia
      using Lux, Reactant, Enzyme, Random, Zygote
       using Functors, Optimisers, Printf

      Running on alternate accelerators

      Reactant.set_default_backend("gpu") sets the default backend to CUDA and Reactant.set_default_backend("tpu") sets the default backend to TPU.
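In code, this is a one-liner executed before compiling any functions (a sketch; "cpu" is also accepted if you want to switch back):

```julia
using Reactant

Reactant.set_default_backend("gpu")    # compile for CUDA GPUs
# Reactant.set_default_backend("tpu")  # compile for TPUs
```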

      Using the TrainState API

      If you are using the Training.TrainState API, skip to the bottom of this page to see how to train the model without any of this boilerplate.

      We start by defining a simple MLP model:

      julia
      model = Chain(
           Dense(2 => 32, gelu),
           Dense(32 => 32, gelu),
       y_ra = y |> xdev
       ps_ra = ps |> xdev
       st_ra = st |> xdev
nothing

      First let's run the model as we would normally:

      julia
      pred_lux, _ = model(x, ps, Lux.testmode(st))
      (Float32[0.015869793 0.010564294 … -0.4137662 0.018748894; 0.07865399 0.06953073 … -0.23402624 0.21624334], (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple()))

      To run it using XLA we need to compile the model. We can do this using the Reactant.@compile macro. Note that the inputs need to be moved to the device using reactant_device first.

      julia
      model_compiled = @compile model(x_ra, ps_ra, Lux.testmode(st_ra))
      Reactant.Compiler.Thunk{Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, Symbol("##Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(2 => 32, gelu), layer_2 = Dense(32 => 32, gelu), layer_3 = Dense(32 => 2)), nothing)_reactant#328268"), Tuple{Reactant.ConcreteRArray{Float32, 2}, @NamedTuple{layer_1::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}, layer_2::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}, layer_3::@NamedTuple{weight::Reactant.ConcreteRArray{Float32, 2}, bias::Reactant.ConcreteRArray{Float32, 1}}}, @NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}}, true}(Chain{@NamedTuple{layer_1::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Dense{typeof(gelu), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}((layer_1 = Dense(2 => 32, gelu), layer_2 = Dense(32 => 32, gelu), layer_3 = Dense(32 => 2)), nothing))

      Now we can test the difference between the results:

      julia
      pred_compiled, _ = model_compiled(x_ra, ps_ra, Lux.testmode(st_ra))
       
       pred_lux .- Array(pred_compiled)
      2×32 Matrix{Float32}:
        1.35005f-5  -2.00942f-5  5.03547f-5  …  0.000276357  -1.31615f-5
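These differences are on the order of Float32 round-off, so an approximate comparison is the right check. A hedged sketch (reusing pred_lux and pred_compiled from above; the tolerance is a judgment call, not an official recommendation):

```julia
# Compiled and eager results should agree up to floating-point noise.
max_abs_err = maximum(abs, pred_lux .- Array(pred_compiled))
@assert isapprox(pred_lux, Array(pred_compiled); atol=1f-3)
@show max_abs_err
```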
       Iter: [ 800/1000]	Loss: 0.01325738
       Iter: [ 900/1000]	Loss: 0.01003141
       Iter: [1000/1000]	Loss: 0.00775477

      Debugging Lux Models

Debugging DNNs can be very painful. The gigantic stacktraces Lux can produce make it even harder to pinpoint which particular layer errored. This page describes some useful tools that ship with Lux to help you debug your models.

      TL;DR

      Simply wrap your model with Lux.Experimental.@debug_mode!!

      Don't Forget

Remember to switch back to the non-debug model once you finish debugging; debug-mode models are much slower.

      Let us construct a model which has an obviously incorrect dimension. In this example, you will see how easy it is to pin-point the problematic layer.

      Incorrect Model Specification: Dimension Mismatch Problems

      julia
      using Lux, Random
       
       model = Chain(Dense(1 => 16, relu), Chain(Dense(16 => 3), Dense(1 => 1)), BatchNorm(1))
       
       [ Info: Input Type: Matrix{Float32} | Input Structure: (3, 2).
       [ Info: Running Layer: Dense(1 => 1) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2)!
       ┌ Error: Layer Dense(1 => 1) failed!! This layer is present at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2).
└ @ Lux.Experimental /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/src/contrib/debug.jl:103
       DimensionMismatch("A has shape (1, 1) but B has shape (3, 2)")

Now we know that model.layers.layer_2.layers.layer_2 is the problematic layer. Let us fix that layer and see what happens:

      julia
      model = Chain(Dense(1 => 16, relu),
           Chain(
               Dense(16 => 3),  
       model(x, ps, st)
      (Float32[0.9999881 -0.9999881], (layer_1 = NamedTuple(), layer_2 = (layer_1 = NamedTuple(), layer_2 = NamedTuple()), layer_3 = (running_mean = Float32[0.0026271285], running_var = Float32[0.98396176], training = Val{true}())))

      Let us define a custom backward pass to introduce some NaNs:

      julia
      function CRC.rrule(::typeof(offending_layer), x)
           y = offending_layer(x)
           function ∇offending_layer(Δ)
        problematicΔ = CRC.@thunk begin
            Δ = CRC.unthunk(Δ)
            Δ[1] = NaN
            return Δ
        end
        return NoTangent(), problematicΔ
           end
           return y, ∇offending_layer
       end

      Let us compute the gradient of the layer now:

      julia
      Zygote.gradient(ps -> sum(first(model(x, ps, st))), ps)
      ((layer_1 = (weight = Float32[0.0; 0.0; … ; 0.0; 0.0;;], bias = Float32[0.0, 0.0, 0.0, 0.0, NaN, 0.0, NaN, NaN, 0.0, 0.0, 0.0, NaN, NaN, NaN, 0.0, 0.0]), layer_2 = (layer_1 = (weight = Float32[NaN NaN … NaN NaN], bias = Float32[NaN]), layer_2 = nothing), layer_3 = (scale = Float32[0.0], bias = Float32[2.0])),)

      Oh no!! A NaN is present in the gradient of ps. Let us run the debug model and see where the NaN occurred:

      julia
      model_debug = Lux.Experimental.@debug_mode model nan_check=:both
       [ Info: Running Layer: BatchNorm(1, affine=true, track_stats=true) at location KeyPath(:model, :layers, :layer_3)!
       [ Info: Output Type: Matrix{Float32} | Output Structure: (1, 2).
       DomainError(Float32[NaN 0.0], "NaNs detected in pullback output (x)  of layer WrappedFunction(offending_layer) at location KeyPath(:model, :layers, :layer_2, :layers, :layer_2).")

And there you go: the debug layer reports that the problem is in WrappedFunction(offending_layer) at location model.layers.layer_2.layers.layer_2! Once we fix the pullback of that layer, the NaNs will be gone.

      Conclusion

      In this manual section, we have discussed tracking down errors in Lux models. We have covered tracking incorrect model specifications and NaNs in forward and backward passes. However, remember that this is an Experimental feature, and there might be edge cases that don't work correctly. If you find any such cases, please open an issue on GitHub!


      Dispatching on Custom Input Types

      Which function should participate in dispatch?

      • Defining a dispatch on (::Layer)(x::MyInputType, ps, st::NamedTuple) is inconvenient, since it requires the user to define a new method for every layer type.

      • (::AbstractLuxLayer)(x::MyInputType, ps, st::NamedTuple) doesn't work.

      • Instead, we need to define the dispatch on Lux.apply(::AbstractLuxLayer, x::MyInputType, ps, st::NamedTuple).
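A minimal sketch of the third option (MyInputType is a hypothetical wrapper; the unwrap-and-forward body is one possible choice, not the only one):

```julia
using Lux

# Hypothetical input type carrying an array plus extra metadata.
struct MyInputType{A <: AbstractArray, M}
    data::A
    metadata::M
end

# One method on Lux.apply covers every AbstractLuxLayer at once.
function Lux.apply(l::Lux.AbstractLuxLayer, x::MyInputType, ps, st::NamedTuple)
    return Lux.apply(l, x.data, ps, st)  # unwrap, then take the stock path
end
```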

      Concrete Example

Consider Neural ODEs. In these models, we often want every iteration of the neural network to take the current time as input. Here, we won't implement an entire Neural ODE model; instead, we will define a time-dependent version of Chain.

      Time-Dependent Chain Implementation

      julia
      using Lux, Random
       
       struct TDChain{L <: NamedTuple} <: Lux.AbstractLuxWrapperLayer{:layers}
           layers::L
       
       Closest candidates are:
         apply(!Matched::AbstractLuxLayer, ::Any, ::Any, ::Any)
   @ LuxCore /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:154
         apply(!Matched::AbstractLuxLayer, !Matched::Tracker.TrackedArray, ::Any, ::Any)
   @ LuxCoreArrayInterfaceTrackerExt /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/lib/LuxCore/ext/LuxCoreArrayInterfaceTrackerExt.jl:19
         apply(!Matched::AbstractLuxLayer, !Matched::ReverseDiff.TrackedArray, ::Any, ::Any)
   @ LuxCoreArrayInterfaceReverseDiffExt /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/lib/LuxCore/ext/LuxCoreArrayInterfaceReverseDiffExt.jl:20
         ...

      Writing the Correct Dispatch Rules

      • Create a Custom Layer storing the time.
      julia
      struct ArrayAndTime{A <: AbstractArray, T <: Real}
           array::A
           time::T
      @@ -90,7 +90,7 @@
       ps, st = Lux.setup(rng, model)
       
       model(xt, ps, st)[1]
      Main.ArrayAndTime{Matrix{Float32}, Float32}(Float32[0.40721768 1.2363781], 10.0f0)

      Distributed Data Parallel Training

      Tip

      For a fully functional example, see the ImageNet Training Example.

      DDP Training using Lux.DistributedUtils is a spiritual successor to FluxMPI.jl, but has some key differences.

      Guide to Integrating DistributedUtils into your code

      • Initialize the respective backend with DistributedUtils.initialize, by passing in a backend type. It is important that you pass in the type, i.e. NCCLBackend and not the object NCCLBackend().
      julia
      DistributedUtils.initialize(NCCLBackend)
      julia
      backend = DistributedUtils.get_distributed_backend(NCCLBackend)

      It is important that you use this function instead of directly constructing the backend, since there are certain internal states that need to be synchronized.

      • Next synchronize the parameters and states of the model. This is done by calling DistributedUtils.synchronize!! with the backend and the respective input.
      julia
      ps = DistributedUtils.synchronize!!(backend, ps)
       st = DistributedUtils.synchronize!!(backend, st)
      julia
      data = DistributedUtils.DistributedDataContainer(backend, data)
      • Wrap the optimizer in DistributedUtils.DistributedOptimizer to ensure that the optimizer is correctly synchronized across all processes before parameter updates. After initializing the state of the optimizer, synchronize the state across all processes.
      julia
      opt = DistributedUtils.DistributedOptimizer(backend, opt)
       opt_state = Optimisers.setup(opt, ps)
opt_state = DistributedUtils.synchronize!!(backend, opt_state)
      • Finally change all logging and serialization code to trigger on local_rank(backend) == 0. This ensures that only the master process logs and serializes the model.
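For example (a sketch; Serialization is just one choice of serializer, and the checkpoint path is made up):

```julia
using Serialization

if DistributedUtils.local_rank(backend) == 0
    @info "Epoch finished"                   # only the master process logs
    serialize("checkpoint.jls", (; ps, st))  # ... and writes checkpoints
end
```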

      Migration Guide from FluxMPI.jl

Let's compare the changes we need to make with respect to the FluxMPI.jl integration guide.

      1. FluxMPI.Init is now DistributedUtils.initialize.

      2. FluxMPI.synchronize!(x) needs to be changed to x_new = DistributedUtils.synchronize!!(backend, x).

      3. DistributedUtils.DistributedDataContainer, DistributedUtils.local_rank, and DistributedUtils.DistributedOptimizer need backend as the first input.

      And that's pretty much it!

      Removed Functionality

1. FluxMPI.allreduce_gradients no longer exists. Previously this was needed when CUDA communication was flaky; with NCCL.jl this is no longer the case.

2. FluxMPI.FluxModel has been removed. DistributedUtils no longer works with Flux.

      Key Differences

      1. FluxMPI.synchronize! is now DistributedUtils.synchronize!! to highlight the fact that some of the inputs are not updated in-place.

      2. All of the functions now require a communication backend as input.

3. We don't automatically determine if the MPI implementation is CUDA- or ROCm-aware. See GPU-aware MPI for more information.

4. Older (now non-existent) Lux.gpu implementations used to "just work" with FluxMPI.jl. We expect gpu_device to continue working as expected; however, we recommend calling gpu_device only after DistributedUtils.initialize to avoid any mismatch between the device set via DistributedUtils and the device stored in CUDADevice or AMDGPUDevice.
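In other words, prefer this ordering (a sketch assuming the NCCL backend and CUDA; ps stands for your model's parameters):

```julia
using Lux, MPI, NCCL

DistributedUtils.initialize(NCCLBackend)
backend = DistributedUtils.get_distributed_backend(NCCLBackend)

dev = gpu_device()  # query the device only after initialization
ps = ps |> dev      # lands on the GPU assigned to this rank
```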

      Known Shortcomings

1. Currently we don't run tests with CUDA- or ROCm-aware MPI, so use those features at your own risk. We are working on adding tests for them.

2. AMDGPU support is mostly experimental and causes deadlocks in certain situations; this is being investigated. If you have a minimal reproducer, please open an issue.


      Exporting Lux Models to Jax (via EnzymeJAX & Reactant)

In this manual, we will go over how to export Lux models to StableHLO and use EnzymeJAX to integrate Lux models with JAX. We assume that users are familiar with Reactant compilation of Lux models.

      julia
      using Lux, Reactant, Random
       
       const dev = reactant_device()
      (::ReactantDevice{Missing, Missing}) (generic function with 1 method)

      We simply define a Lux model and generate the stablehlo code using Reactant.@code_hlo.

      julia
      model = Chain(
           Conv((5, 5), 1 => 6, relu),
               bias6_3,
           )
       )

      Freezing Model Parameters

      Warning

      API for freezing parameters should be considered experimental at this point.

      In this manual entry, we will go over how to freeze certain parameters in a model.

      Freezing Layers of a Particular Kind

Suppose we want to freeze a particular kind of layer, say Dense in the following example. We can use Lux.Experimental.layer_map and freeze layers that are of type Dense.

      julia
      using Lux, Random, Functors
       
       rng = Xoshiro(0)
       
       
       model_frozen, ps_frozen, st_frozen = Lux.Experimental.layer_map(freeze_dense, model, ps, st)
       
model_frozen(x, ps_frozen, st_frozen)
      (Float32[0.6886741 -1.2361472], (layer_1 = (frozen_params = (weight = Float32[-0.028461456 -0.5999714 -0.3850993; -0.18860114 0.72428167 0.32322538; -0.965117 -0.4585489 -0.32623518; -0.86290836 -0.82805836 -0.7673453], bias = Float32[0.4216236, -0.4510427, -0.097253, 0.23325463]), states = NamedTuple()), layer_2 = (layer_1 = (frozen_params = (weight = Float32[-0.680748 0.1764085 0.34383082 0.6469914; -0.13819042 -0.109261915 -0.6143286 -0.21672015; -0.20881107 0.70390546 0.48137343 0.25662464; 0.38187847 0.05779423 -0.35181466 -0.096988946], bias = Float32[0.41246277, 0.4318977, -0.4305781, 0.3367505]), states = NamedTuple()), layer_2 = (rng = Random.Xoshiro(0x4fa3403dd074e603, 0x12c522b8034ae186, 0x8e0c3a65079041bb, 0x21617f7747d97206, 0x22a21880af5dc689), training = Val{true}()), layer_3 = (running_mean = Float32[0.01965834, 0.0, 0.0, 0.015937408], running_var = Float32[0.90772897, 0.9, 0.9, 0.90508], training = Val{true}())), layer_3 = (frozen_params = (weight = Float32[0.7794657 0.8337032 0.6323408 -0.18308182], bias = Float32[-0.27373654]), states = NamedTuple())))

      Freezing by Layer Name

When the function in layer_map is called, the 4th argument is the name of the layer. For example, if you want to freeze the 1st layer inside the inner Chain, the name for this would be layer_2.layer_1.

      julia
      
       function freeze_by_name(d, ps, st, name::KeyPath)
           name == KeyPath(:layer_2, :layer_1) &&
               return Lux.Experimental.freeze(d, ps, st, (:weight, :bias))
      @@ -56,7 +56,7 @@
       function freeze_dense(d::Dense, ps, st, _)
           return Lux.Experimental.freeze(d, ps, st, (:weight, :bias))
       end
      -freeze_dense(l, ps, st, _) = (l, ps, st)

      Freezing Part of the Parameters

      Instead of freezing all the parameters, we can simply specify (:weight,) to freeze only the weight parameter while training the bias parameter.

      julia
      
      +freeze_dense(l, ps, st, _) = (l, ps, st)

      Freezing Part of the Parameters

      Instead of freezing all the parameters, we can simply specify (:weight,) to freeze only the weight parameter while training the bias parameter.

      julia
      
       function freeze_by_name(d, ps, st, name::KeyPath)
           name == KeyPath(:layer_2, :layer_1) &&
               return Lux.Experimental.freeze(d, ps, st, (:weight,))
      @@ -79,7 +79,7 @@
       x = randn(rng, Float32, 3, 2)
       
       model_frozen(x, ps, st)
      (Float32[0.7429947 -1.2904677], (layer_1 = (layer_1 = NamedTuple(), layer_2 = NamedTuple()), layer_2 = (frozen_params = (layer_3 = NamedTuple(), layer_4 = (scale = Float32[1.0, 1.0, 1.0, 1.0], bias = Float32[0.0, 0.0, 0.0, 0.0])), states = (layer_3 = (rng = Random.TaskLocalRNG(), training = Val{true}()), layer_4 = (running_mean = Float32[0.0, 0.048522998, 0.0, 0.015937408], running_var = Float32[0.9, 0.9470896, 0.9, 0.90508], training = Val{true}()))), layer_3 = NamedTuple()))
      - + \ No newline at end of file diff --git a/dev/manual/gpu_management.html b/dev/manual/gpu_management.html index 75fe6f704b..94802a382e 100644 --- a/dev/manual/gpu_management.html +++ b/dev/manual/gpu_management.html @@ -5,15 +5,15 @@ GPU Management | Lux.jl Docs - + - + - - - + + + @@ -28,7 +28,7 @@ -
      Skip to content

      GPU Management

      Lux.jl can handle multiple GPU backends. Currently, the following backends are supported:

      julia
      # Important to load trigger packages
      +    
      Skip to content

      GPU Management

      Lux.jl can handle multiple GPU backends. Currently, the following backends are supported:

      julia
      # Important to load trigger packages
       using Lux, LuxCUDA #, AMDGPU, Metal, oneAPI
       
       supported_gpu_backends()
      ("CUDA", "AMDGPU", "Metal", "oneAPI")

      GPU Support via Reactant

If you are using Reactant, you can use the reactant_device function to automatically select the Reactant backend if available. Additionally, to force Reactant to use the GPU, you can run Reactant.set_default_backend("gpu") (this is done automatically by default).
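A minimal sketch of this workflow (assuming Reactant.jl is installed and functional; whether a GPU is actually selected depends on your system):

```julia
using Lux, Reactant, Random

# Forcing the GPU backend is usually unnecessary -- Reactant picks it by default
Reactant.set_default_backend("gpu")

rdev = reactant_device()  # ReactantDevice if Reactant is functional

# As with the other devices, apply it to parameters and states, not the model
model = Dense(2 => 4)
ps, st = Lux.setup(Random.default_rng(), model) |> rdev
```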

      Metal Support

      Support for Metal GPUs should be considered extremely experimental at this point.

      Automatic Backend Management (Recommended Approach)

      Automatic Backend Management is done by two simple functions: cpu_device and gpu_device.

• cpu_device: This is a simple function and just returns a CPUDevice object. For example: cdev = cpu_device() and x_cpu = randn(Float32, 3, 2).

      • gpu_device: This function performs automatic GPU device selection and returns an object.

        1. If no GPU is available, it returns a CPUDevice object.

        2. If a LocalPreferences file is present, then the backend specified in the file is used. To set a backend, use Lux.gpu_backend!(<backend_name>). (a) If the trigger package corresponding to the device is not loaded, then a warning is displayed. (b) If no LocalPreferences file is present, then the first working GPU with loaded trigger package is used.
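For instance, pinning the backend via preferences might look like the following sketch (the preference only takes effect after restarting Julia):

```julia
using Lux, LuxCUDA  # load the trigger package for the backend you want

Lux.gpu_backend!("CUDA")  # writes the preference to LocalPreferences.toml

# After restarting Julia, gpu_device() honors the preference:
gdev = gpu_device()  # CUDADevice() if CUDA is functional
cdev = cpu_device()  # always available
```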

  x_gpu = x_cpu |> gdev
        @@ -48,7 +48,7 @@
         end
         
         (x_gpu |> cdev)  x_cpu
        true
      - + \ No newline at end of file diff --git a/dev/manual/interface.html b/dev/manual/interface.html index c2023171b0..7825800118 100644 --- a/dev/manual/interface.html +++ b/dev/manual/interface.html @@ -5,15 +5,15 @@ Lux Interface | Lux.jl Docs - + - + - - - + + + @@ -28,7 +28,7 @@ -
      Skip to content

      Lux Interface

      Lux.jl vs LuxCore.jl

      If you just want to define compatibility with Lux without actually using any of the other functionality provided by Lux (like layers), it is recommended to depend on LuxCore.jl instead of Lux.jl. LuxCore.jl is a significantly lighter dependency.

Following this interface provides the ability for frameworks built on top of Lux to be cross-compatible. Additionally, any new functionality built into Lux will just work for your framework.

      @compact macro

      While writing out a custom struct and defining dispatches manually is a good way to understand the interface, it is not the most concise way. We recommend using the Lux.@compact macro to define layers which makes handling the states and parameters downright trivial.

      Layer Interface

      Singular Layer

If the layer doesn't contain any other Lux layer, then it is a Singular Layer. This means it may optionally subtype Lux.AbstractLuxLayer, but it must define all the necessary functions mentioned in the docstrings. Consider a simplified version of Dense called Linear.

First, set up the architectural details for this layer. Note that the architecture doesn't contain any mutable structures like arrays. When in doubt, remember: once constructed, a model architecture cannot change.

      Tip

For people coming from a Flux.jl background, this might seem odd. We recommend checking out the Flux to Lux migration guide first before proceeding.

      julia
      using LuxCore, Random, WeightInitializers # Importing `Lux` also gives you access to `LuxCore`
      +    
      Skip to content

      Lux Interface

      Lux.jl vs LuxCore.jl

      If you just want to define compatibility with Lux without actually using any of the other functionality provided by Lux (like layers), it is recommended to depend on LuxCore.jl instead of Lux.jl. LuxCore.jl is a significantly lighter dependency.

Following this interface provides the ability for frameworks built on top of Lux to be cross-compatible. Additionally, any new functionality built into Lux will just work for your framework.

      @compact macro

      While writing out a custom struct and defining dispatches manually is a good way to understand the interface, it is not the most concise way. We recommend using the Lux.@compact macro to define layers which makes handling the states and parameters downright trivial.

      Layer Interface

      Singular Layer

If the layer doesn't contain any other Lux layer, then it is a Singular Layer. This means it may optionally subtype Lux.AbstractLuxLayer, but it must define all the necessary functions mentioned in the docstrings. Consider a simplified version of Dense called Linear.

First, set up the architectural details for this layer. Note that the architecture doesn't contain any mutable structures like arrays. When in doubt, remember: once constructed, a model architecture cannot change.

      Tip

For people coming from a Flux.jl background, this might seem odd. We recommend checking out the Flux to Lux migration guide first before proceeding.

      julia
      using LuxCore, Random, WeightInitializers # Importing `Lux` also gives you access to `LuxCore`
       
       struct Linear{F1, F2} <: LuxCore.AbstractLuxLayer
           in_dims::Int
      @@ -119,7 +119,7 @@
       ps = DenseLayerParameters(ps_default.weight, ps_default.bias)
       
       println("Result with `DenseLayerParameters` parameters: ", first(d(x, ps, st)))
      Result with `DenseLayerParameters` parameters: Float32[0.23710957; 0.1003911; -0.57671577;;]

      The takeaway from this shouldn't be – lets define weird parameter types. Simply because you can do weird things like this doesn't mean you should, since it only leads to bugs.

      Instead this shows the flexibility you have for how your parameters can be structured.

      State Interface

States are always type constrained to be NamedTuple. The structure of the input state must match that of the output state, i.e. keys(st_in) == keys(st_out). This doesn't imply that the types of the input and output states match. To generate efficient code, we often dispatch on the state, for example, Dropout, BatchNorm, etc.
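As a sketch of this contract, consider a hypothetical layer that counts how many times it has been called (the layer name and fields are illustrative, not part of Lux):

```julia
using LuxCore, Random

struct CallCounter <: LuxCore.AbstractLuxLayer end

LuxCore.initialparameters(::AbstractRNG, ::CallCounter) = NamedTuple()
LuxCore.initialstates(::AbstractRNG, ::CallCounter) = (count=0,)

# The output state has the same keys as the input state, with updated values,
# so keys(st_in) == keys(st_out) holds on every call
(::CallCounter)(x, ps, st) = x, (count=st.count + 1,)
```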

      - + \ No newline at end of file diff --git a/dev/manual/migrate_from_flux.html b/dev/manual/migrate_from_flux.html index 5b7f16ed9a..95ad461364 100644 --- a/dev/manual/migrate_from_flux.html +++ b/dev/manual/migrate_from_flux.html @@ -5,15 +5,15 @@ Migrating from Flux to Lux | Lux.jl Docs - + - + - - - + + + @@ -28,7 +28,7 @@ -
      Skip to content

      Migrating from Flux to Lux

      For the core library layers like Dense, Conv, etc. we have intentionally kept the API very similar to Flux. In most cases, replacing using Flux with using Lux should be enough to get you started. We cover the additional changes that you will have to make in the following example.

      julia
      using Lux, Random, NNlib, Zygote
      +    
      Skip to content

      Migrating from Flux to Lux

      For the core library layers like Dense, Conv, etc. we have intentionally kept the API very similar to Flux. In most cases, replacing using Flux with using Lux should be enough to get you started. We cover the additional changes that you will have to make in the following example.

      julia
      using Lux, Random, NNlib, Zygote
       
       model = Chain(Dense(2 => 4), BatchNorm(4, relu), Dense(4 => 2))
       rng = Random.default_rng()
      @@ -48,7 +48,7 @@
       
       model(x)
       
      -gradient(model -> sum(model(x)), model)

      Implementing Custom Layers

      Flux and Lux operate under extremely different design philosophies regarding how layers should be implemented. A summary of the differences would be:

      • Flux stores everything in a single struct and relies on Functors.@functor and Flux.trainable to distinguish between trainable and non-trainable parameters.

• Lux relies on the user to define Lux.initialparameters and Lux.initialstates to distinguish between trainable parameters (called "parameters") and non-trainable parameters (called "states"). Additionally, Lux layers define the model architecture; hence, device transfer utilities like gpu_device, cpu_device, etc. cannot be applied to Lux layers. Instead, they need to be applied to the parameters and states.

      Let's work through a concrete example to demonstrate this. We will implement a very simple layer that computes A×B×x where A is not trainable and B is trainable.

      julia
      using Lux, Random, NNlib, Zygote
      +gradient(model -> sum(model(x)), model)

      Implementing Custom Layers

      Flux and Lux operate under extremely different design philosophies regarding how layers should be implemented. A summary of the differences would be:

      • Flux stores everything in a single struct and relies on Functors.@functor and Flux.trainable to distinguish between trainable and non-trainable parameters.

• Lux relies on the user to define Lux.initialparameters and Lux.initialstates to distinguish between trainable parameters (called "parameters") and non-trainable parameters (called "states"). Additionally, Lux layers define the model architecture; hence, device transfer utilities like gpu_device, cpu_device, etc. cannot be applied to Lux layers. Instead, they need to be applied to the parameters and states.

      Let's work through a concrete example to demonstrate this. We will implement a very simple layer that computes A×B×x where A is not trainable and B is trainable.

      julia
      using Lux, Random, NNlib, Zygote
       
       struct LuxLinear <: Lux.AbstractLuxLayer
           init_A
      @@ -86,7 +86,7 @@
       # Needed so that both `A` and `B` can be transferred between devices
       Flux.@functor FluxLinear
       
      -(l::FluxLinear)(x) = l.A * l.B * x

      Now let us run the model.

      julia
      rng = Random.default_rng()
      +(l::FluxLinear)(x) = l.A * l.B * x

      Now let us run the model.

      julia
      rng = Random.default_rng()
       model = LuxLinear(randn(rng, 2, 4), randn(rng, 4, 2))
       x = randn(rng, 2, 1)
       
      @@ -103,7 +103,7 @@
       model(x)
       
       gradient(model -> sum(model(x)), model)

      To reiterate some important points:

      • Don't store mutables like Arrays inside a Lux Layer.

• Parameters and States should be constructed inside the respective initial* functions.

      Certain Important Implementation Details

      Training/Inference Mode

      Flux supports a mode called :auto which automatically decides if the user is training the model or running inference. This is the default mode for Flux.BatchNorm, Flux.GroupNorm, Flux.Dropout, etc. Lux doesn't support this mode (specifically to keep code simple and do exactly what the user wants), hence our default mode is training. This can be changed using Lux.testmode.
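Switching a model's state to inference mode is a one-liner. A small sketch (model shape is arbitrary):

```julia
using Lux, Random

model = Chain(Dense(2 => 4), Dropout(0.5f0), Dense(4 => 2))
ps, st = Lux.setup(Random.default_rng(), model)

st_test = Lux.testmode(st)         # inference mode: Dropout becomes a no-op
st_train = Lux.trainmode(st_test)  # switch back to training mode
```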

      Can we still use Flux Layers?

If you have Flux loaded in your code, you can use the function FromFluxAdaptor to automatically convert your model to Lux. Note that in case a native Lux counterpart isn't available, we fall back to using Optimisers.destructure.
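A minimal sketch of the conversion (assuming both Flux and Lux are loaded; the model shape is arbitrary):

```julia
using Adapt, Flux, Lux, Random

flux_model = Flux.Chain(Flux.Dense(2 => 4, relu), Flux.Dense(4 => 1))

# Convert to a Lux layer; layers without a native Lux counterpart
# fall back to Optimisers.destructure
lux_model = adapt(FromFluxAdaptor(), flux_model)

ps, st = Lux.setup(Random.default_rng(), lux_model)
```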

      - + \ No newline at end of file diff --git a/dev/manual/nested_autodiff.html b/dev/manual/nested_autodiff.html index 9a8f41e47d..6afab1f3b0 100644 --- a/dev/manual/nested_autodiff.html +++ b/dev/manual/nested_autodiff.html @@ -5,15 +5,15 @@ Nested Automatic Differentiation | Lux.jl Docs - + - + - - - + + + @@ -28,7 +28,7 @@ -
      Skip to content

      Nested Automatic Differentiation

      Note

      This is a relatively new feature in Lux, so there might be some rough edges. If you encounter any issues, please let us know by opening an issue on the GitHub repository.

      In this manual, we will explore how to use automatic differentiation (AD) inside your layers or loss functions and have Lux automatically switch the AD backend with a faster one when needed.

      Tip

Don't want Lux to do this switching for you? You can disable it by setting the automatic_nested_ad_switching Preference to false.

      Remember that if you are using ForwardDiff inside a Zygote call, it will drop gradients (with a warning message), so it is not recommended to use this combination.

      Let's explore this using some questions that were posted on the Julia Discourse forum.

      julia
      using ADTypes, Lux, LinearAlgebra, Zygote, ForwardDiff, Random, StableRNGs
      +    
      Skip to content

      Nested Automatic Differentiation

      Note

      This is a relatively new feature in Lux, so there might be some rough edges. If you encounter any issues, please let us know by opening an issue on the GitHub repository.

      In this manual, we will explore how to use automatic differentiation (AD) inside your layers or loss functions and have Lux automatically switch the AD backend with a faster one when needed.

      Tip

Don't want Lux to do this switching for you? You can disable it by setting the automatic_nested_ad_switching Preference to false.

      Remember that if you are using ForwardDiff inside a Zygote call, it will drop gradients (with a warning message), so it is not recommended to use this combination.

      Let's explore this using some questions that were posted on the Julia Discourse forum.

      julia
      using ADTypes, Lux, LinearAlgebra, Zygote, ForwardDiff, Random, StableRNGs
       using ComponentArrays, FiniteDiff

      First let's set the stage using some minor changes that need to be made for this feature to work:

      • Switching only works if a StatefulLuxLayer is being used, with the following function calls:

        • For operations on the inputs:

          • (<some-function> ∘ <StatefulLuxLayer>)(x::AbstractArray)

          • (<StatefulLuxLayer> ∘ <some-function>)(x::AbstractArray)

          • (<StatefulLuxLayer>)(x::AbstractArray)

        • For operations on the parameters:

          • (<some-function> ∘ Base.Fix1(<StatefulLuxLayer>, x))(ps)

          • (Base.Fix1(<StatefulLuxLayer>, x) ∘ <some-function>)(ps)

          • (Base.Fix1(<StatefulLuxLayer>, x))(ps)

      • Currently we have custom routines implemented for:

      • Switching only happens for ChainRules compatible AD libraries.

      We plan to capture DifferentiationInterface, and Enzyme.autodiff calls in the future (PRs are welcome).
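The parameter-side call pattern above can be sketched as follows (model and data are arbitrary; the point is that Base.Fix1(smodel, x) is a function of ps alone, which is one of the recognized forms):

```julia
using Lux, Zygote, Random

model = Chain(Dense(2 => 4, tanh), Dense(4 => 2))
ps, st = Lux.setup(Random.default_rng(), model)
x = randn(Float32, 2, 3)

smodel = StatefulLuxLayer{true}(model, ps, st)

# `Base.Fix1(smodel, x)` closes over `x`, leaving a function of `ps` alone,
# so Lux can recognize the pattern and switch the inner AD backend
loss(ps) = sum(abs2, Base.Fix1(smodel, x)(ps))
∂ps = only(Zygote.gradient(loss, ps))
```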

      Tip

      @compact uses StatefulLuxLayers internally, so you can directly use these features inside a layer generated by @compact.

      Loss Function containing Jacobian Computation

      This problem comes from @facusapienza on Discourse. In this case, we want to add a regularization term to the neural DE based on first-order derivatives. The neural DE part is not important here and we can demonstrate this easily with a standard neural network.

      julia
      function loss_function1(model, x, ps, st, y)
           # Make it a stateful layer
           smodel = StatefulLuxLayer{true}(model, ps, st)
      @@ -46,15 +46,15 @@
       x = randn(StableRNG(0), Float32, 2, 10)
       y = randn(StableRNG(11), Float32, 2, 10)
       
      -loss_function1(model, x, ps, st, y)
      14.883664f0

So our loss function works; let's take the gradient (forward diff doesn't nest nicely here):

      julia
      _, ∂x, ∂ps, _, _ = Zygote.gradient(loss_function1, model, x, ps, st, y)
      (nothing, Float32[-1.6702257 0.9043228 … 0.16094846 -4.992662; -8.010404 0.8541596 … 3.3928175 -7.1936812], (layer_1 = (weight = Float32[-4.3707023 -4.9076533; 22.199387 1.867202; 0.47872233 -0.9734574; -0.36428708 0.31861955], bias = Float32[-1.0168695, -0.16566901, 1.0829282, 1.4810884]), layer_2 = (scale = Float32[4.2774315, 3.1984668, 6.840588, 3.7018592], bias = Float32[-2.6477456, 4.9094505, -4.987689, -0.7292344]), layer_3 = (weight = Float32[11.395306 1.9206433 9.744489 -7.6726513; 2.5979974 7.106069 -7.869632 -1.787159], bias = Float32[0.041031003, 7.928609])), nothing, Float32[0.48193252 1.4007905 … -0.19124654 -1.7181164; 1.7811481 0.6913705 … -1.5627227 1.4397957])

      Now let's verify the gradients using finite differences:

      julia
      ∂x_fd = FiniteDiff.finite_difference_gradient(x -> loss_function1(model, x, ps, st, y), x)
      +loss_function1(model, x, ps, Lux.testmode(st), y)
      11.380776f0

So our loss function works; let's take the gradient (forward diff doesn't nest nicely here):

      julia
      _, ∂x, ∂ps, _, _ = Zygote.gradient(loss_function1, model, x, ps, st, y)
      (nothing, Float32[-1.6702257 0.9043228 … 0.16094846 -4.992662; -8.010404 0.8541596 … 3.3928175 -7.1936812], (layer_1 = (weight = Float32[-4.3707023 -4.9076533; 22.199387 1.867202; 0.47872233 -0.9734574; -0.36428708 0.31861955], bias = Float32[-1.0168695, -0.16566901, 1.0829282, 1.4810884]), layer_2 = (scale = Float32[4.2774315, 3.1984668, 6.840588, 3.7018592], bias = Float32[-2.6477456, 4.9094505, -4.987689, -0.7292344]), layer_3 = (weight = Float32[11.395306 1.9206433 9.744489 -7.6726513; 2.5979974 7.106069 -7.869632 -1.787159], bias = Float32[0.041031003, 7.928609])), nothing, Float32[0.48193252 1.4007905 … -0.19124654 -1.7181164; 1.7811481 0.6913705 … -1.5627227 1.4397957])

      Now let's verify the gradients using finite differences:

      julia
      ∂x_fd = FiniteDiff.finite_difference_gradient(x -> loss_function1(model, x, ps, st, y), x)
       ∂ps_fd = FiniteDiff.finite_difference_gradient(ps -> loss_function1(model, x, ps, st, y),
           ComponentArray(ps))
       
       println("∞-norm(∂x - ∂x_fd): ", norm(∂x .- ∂x_fd, Inf))
       println("∞-norm(∂ps - ∂ps_fd): ", norm(ComponentArray(∂ps) .- ∂ps_fd, Inf))
      ┌ Warning: `training` is set to `Val{true}()` but is not being used within an autodiff call (gradient, jacobian, etc...). This will be slow. If you are using a `Lux.jl` model, set it to inference (test) mode using `LuxCore.testmode`. Reliance on this behavior is discouraged, and is not guaranteed by Semantic Versioning, and might be removed without a deprecation cycle. It is recommended to fix this issue in your code.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-3/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:314
      +└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:314
       ┌ Warning: `training` is set to `Val{true}()` but is not being used within an autodiff call (gradient, jacobian, etc...). This will be slow. If you are using a `Lux.jl` model, set it to inference (test) mode using `LuxCore.testmode`. Reliance on this behavior is discouraged, and is not guaranteed by Semantic Versioning, and might be removed without a deprecation cycle. It is recommended to fix this issue in your code.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-3/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:314
      +└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:314
       ∞-norm(∂x - ∂x_fd): 0.00046014786
       ∞-norm(∂ps - ∂ps_fd): 0.00068473816

      That's pretty good, of course you will have some error from the finite differences calculation.

      Using Batched Jacobian for Multiple Inputs

Notice that in this example the Jacobian J consists of the full matrix of derivatives of smodel with respect to the different inputs in x. In many cases, we are interested in computing the Jacobian with respect to each input individually, avoiding the unnecessary calculation of zero entries of the Jacobian. This can be achieved with batched_jacobian, which computes the Jacobian for each input separately. Using the same example from the previous section:

      julia
      model = Chain(Dense(2 => 4, tanh), Dense(4 => 2))
       ps, st = Lux.setup(StableRNG(0), model)
      @@ -72,7 +72,7 @@
           return loss_emp + loss_reg
       end
       
      -loss_function_batched(model, x, ps, st, y)
      11.380777f0

Notice that in this last example we removed BatchNorm() from the neural network. This is done so outputs corresponding to different inputs don't have an algebraic dependency due to the batch normalization happening in the neural network. We can now verify again the value of the Jacobian:

      julia
      ∂x_fd = FiniteDiff.finite_difference_gradient(x -> loss_function_batched(model, x, ps, st, y), x)
      +loss_function_batched(model, x, ps, st, y)
      11.380777f0

      Notice that in this last example we removed BatchNorm() from the neural network. This is done so outputs corresponding to different inputs don't have an algebraic dependency due to the batch normalization happening in the neural network. We can now verify again the value of the Jacobian:

      julia
      ∂x_fd = FiniteDiff.finite_difference_gradient(x -> loss_function_batched(model, x, ps, st, y), x)
       ∂ps_fd = FiniteDiff.finite_difference_gradient(ps -> loss_function_batched(model, x, ps, st, y),
           ComponentArray(ps))
       
      @@ -147,7 +147,7 @@
       ∞-norm(∂ps using vjp): 0.0
       ∞-norm(∂x using full jacobian): 7.1525574e-7
       ∞-norm(∂ps using full jacobian): 1.4305115e-6
      - + \ No newline at end of file diff --git a/dev/manual/nn_inside_gpu_kernels.html b/dev/manual/nn_inside_gpu_kernels.html deleted file mode 100644 index eac23620ef..0000000000 --- a/dev/manual/nn_inside_gpu_kernels.html +++ /dev/null @@ -1,137 +0,0 @@ - - - - - - Neural Networks Inside GPU Kernels | Lux.jl Docs - - - - - - - - - - - - - - - - - - - - - - - - -
      Skip to content

      Neural Networks Inside GPU Kernels

      In this page, we will describe how to embed neural networks inside GPU kernels. We will use KernelAbstractions.jl to do this, making it compatible with multiple GPU backends.

      Experimental Feature

      This is a relatively new and experimental feature. Expect edge cases and open issues on GitHub if you find any.

      Inference Only

Currently, this works only for inference. We will eventually test automatic differentiation using Enzyme.jl.

      Batching

In most use cases, this form of batching via embedding the neural network inside a GPU kernel is not recommended and will lead to suboptimal performance. Instead, batch the input data and let Lux handle the batching internally.

      julia
      using Lux, LuxCUDA, Random, Functors
      -using KernelAbstractions, StaticArrays

The first thing to remember is that we can't use regular high-level operations inside the kernels; instead, we will use Static Arrays. Leveraging Julia's multiple dispatch, Lux will use specialized operations that are compatible with GPU kernels.

      julia
      @kernel function nn_eval_single_batch!(output, model, input, ps, st)
      -    i = @index(Global, Linear)
      -    y, st_ = Lux.apply(model, input[i], ps, st)
      -    output[i] = y
      -end
      nn_eval_single_batch! (generic function with 4 methods)

      We define and initialize the neural network as usual, but we need to additionally convert the Arrays into SArrays.

      julia
      nn = Chain(Dense(4, 4, relu), Dense(4, 4))
      -ps, st = Lux.setup(Xoshiro(123), nn)
      -
      -to_sarray(x) = SArray{Tuple{size(x)...}}(x)
      -ps_static = fmap(to_sarray, ps)
      -st_static = fmap(to_sarray, st)
      (layer_1 = NamedTuple(), layer_2 = NamedTuple())

      First we will run it on CPU.

      Warning

Currently, due to a minor bug, we cannot call the Lux models with a vector input. As a workaround, we make them into matrices with batch size 1.

      julia
      input = [@SArray(rand(Float64, 4, 1)) for i in 1:1024]
      -output = [@SArray(zeros(Float64, 4, 1)) for i in 1:1024] # Allocate the output
      1024-element Vector{StaticArraysCore.SMatrix{4, 1, Float64, 4}}:
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      -
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]

      Now run the model using KernelAbstractions.jl

      julia
      backend = KernelAbstractions.get_backend(output)
      -cpu_kernel! = nn_eval_single_batch!(backend)
      -cpu_kernel!(output, nn, input, ps_static, st_static; ndrange=length(output))
      -KernelAbstractions.synchronize(backend)
      -output
      1024-element Vector{StaticArraysCore.SMatrix{4, 1, Float64, 4}}:
      - [2.0564903986057956; 1.1188200246206075; -1.2227837233928576; -0.8173783982243132;;]
      - [1.9721554734769875; 1.3940224213371761; -1.297959481822617; -0.7195462169922175;;]
      - [2.5680085614623662; 1.713567516238075; -1.7165512278088038; -1.009963844931984;;]
      - [1.800792614736468; 0.36222499022985155; -1.1204217935313214; -1.1836515766351254;;]
      - [1.486550215883336; 0.32839986131789933; -0.9019142280758281; -0.9452923791531558;;]
      - [2.716134755899883; 1.1617228180412864; -1.902982902377702; -1.5865265807660498;;]
      - [1.0228109822209213; 0.2525357728685884; -0.4376572711003852; -0.4500963619011972;;]
      - [2.2771862617010155; 0.5381101016248151; -1.4730743722547668; -1.488028235902512;;]
      - [3.2791573282471997; 1.3436353225087703; -2.4619778701480337; -2.1239749674027375;;]
      - [1.2290224145974982; 0.4158693023143286; -0.6370531107315014; -0.5779067839062536;;]
      -
      - [1.8674763752817416; 1.6423511984038721; -1.1477053709248992; -0.3834447782571344;;]
      - [2.091359335844565; 1.0621559246995447; -1.4763277207638008; -1.142470881033475;;]
      - [2.712979078066394; 0.42005835019799886; -1.717863343114228; -1.8601870861800127;;]
      - [0.7701346738750905; 0.2869913410456831; -0.1586047939092094; -0.10140238162746013;;]
      - [1.611584190904272; 1.2797048270773437; -0.923950547913545; -0.3558193508137715;;]
      - [2.0884834705765853; 0.862480861009647; -1.3942307655311696; -1.179584495291061;;]
      - [2.390200114697191; 0.5267549745189349; -1.657670184695808; -1.7089496198123055;;]
      - [2.1846486482317626; -0.031414255389526885; -1.3279041356366077; -1.6909446526419574;;]
      - [1.3650193059617517; 0.5210742834996898; -0.7689272356710357; -0.6642563709240284;;]

      Now we will run the same model on GPU.

      julia
      gdev = gpu_device()
      -
      -input_gpu = input |> gdev
      -output_gpu = [@SArray(zeros(Float64, 4, 1)) for i in 1:1024] |> gdev
      1024-element CUDA.CuArray{StaticArraysCore.SMatrix{4, 1, Float64, 4}, 1, CUDA.DeviceMemory}:
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      -
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      - [0.0; 0.0; 0.0; 0.0;;]
      julia
      backend = KernelAbstractions.get_backend(output_gpu)
      -gpu_kernel! = nn_eval_single_batch!(backend)
      -gpu_kernel!(output_gpu, nn, input_gpu, ps_static, st_static; ndrange=length(output_gpu))
      -KernelAbstractions.synchronize(backend)
      -output_gpu
      1024-element CUDA.CuArray{StaticArraysCore.SMatrix{4, 1, Float64, 4}, 1, CUDA.DeviceMemory}:
      - [2.0564903986057956; 1.1188200246206075; -1.2227837233928576; -0.8173783982243132;;]
      - [1.9721554734769875; 1.3940224213371761; -1.297959481822617; -0.7195462169922173;;]
      - [2.5680085614623662; 1.713567516238075; -1.7165512278088038; -1.009963844931984;;]
      - [1.800792614736468; 0.36222499022985155; -1.1204217935313214; -1.1836515766351254;;]
      - [1.486550215883336; 0.32839986131789933; -0.9019142280758281; -0.9452923791531558;;]
      - [2.716134755899883; 1.1617228180412864; -1.902982902377702; -1.5865265807660498;;]
      - [1.0228109822209213; 0.2525357728685884; -0.4376572711003852; -0.4500963619011972;;]
      - [2.2771862617010155; 0.5381101016248151; -1.4730743722547668; -1.488028235902512;;]
      - [3.2791573282471997; 1.3436353225087703; -2.4619778701480337; -2.1239749674027375;;]
      - [1.2290224145974982; 0.4158693023143286; -0.6370531107315014; -0.5779067839062536;;]
      -
      - [1.8674763752817414; 1.6423511984038721; -1.147705370924899; -0.3834447782571341;;]
      - [2.0913593358445652; 1.062155924699545; -1.4763277207638013; -1.142470881033475;;]
      - [2.712979078066394; 0.420058350197999; -1.717863343114228; -1.8601870861800127;;]
      - [0.7701346738750905; 0.2869913410456831; -0.1586047939092094; -0.10140238162746013;;]
      - [1.611584190904272; 1.2797048270773437; -0.923950547913545; -0.3558193508137715;;]
      - [2.0884834705765853; 0.862480861009647; -1.3942307655311696; -1.179584495291061;;]
      - [2.390200114697191; 0.5267549745189349; -1.657670184695808; -1.7089496198123055;;]
      - [2.1846486482317626; -0.031414255389526885; -1.3279041356366077; -1.6909446526419574;;]
      - [1.3650193059617517; 0.5210742834996898; -0.7689272356710357; -0.6642563709240284;;]
      - - - - \ No newline at end of file diff --git a/dev/manual/performance_pitfalls.html b/dev/manual/performance_pitfalls.html index ca92f53558..46c67b6b34 100644 --- a/dev/manual/performance_pitfalls.html +++ b/dev/manual/performance_pitfalls.html @@ -5,15 +5,15 @@ Performance Pitfalls & How to Catch Them | Lux.jl Docs - + - + - - - + + + @@ -28,7 +28,7 @@ -
      Skip to content

      Performance Pitfalls & How to Catch Them

Go through the following documentation for general performance tips:

      1. Official Julia Performance Tips.

      2. Recommendations for selecting AD packages.

      Spurious Type-Promotion

Lux by default uses Julia semantics for type promotion. While this means that we do the "correct" numerical thing, it can often come as a surprise to users coming from a deep learning background. For example, consider the following code:

      julia
      using Lux, Random
      +    
      Skip to content

      Performance Pitfalls & How to Catch Them

Go through the following documentation for general performance tips:

      1. Official Julia Performance Tips.

      2. Recommendations for selecting AD packages.

      Spurious Type-Promotion

Lux by default uses Julia semantics for type promotion. While this means that we do the "correct" numerical thing, it can often come as a surprise to users coming from a deep learning background. For example, consider the following code:

      julia
      using Lux, Random
       
       rng = Xoshiro(0)
       
      @@ -56,7 +56,7 @@
           # do some computation
           # ...
       end

Here, X and y are on the GPU device gdev, and the data transfer happens in the worker processes. Additionally, it behaves similarly to CuIterator from CUDA.jl and eagerly frees the data after every iteration (this is device agnostic and works on all supported GPU backends).
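The iteration pattern above can be sketched end-to-end as follows. This is a minimal sketch, assuming MLUtils.jl and MLDataDevices.jl are installed and that gpu_device() falls back to the CPU when no GPU backend is functional:

```julia
using MLDataDevices, MLUtils

gdev = gpu_device()  # returns CPUDevice() when no functional GPU is found

X = rand(Float32, 4, 128)
y = rand(Float32, 1, 128)

# Wrapping the DataLoader with the device yields an iterator that transfers
# each batch on demand and frees the device copy after every iteration.
nbatches = 0
for (x_batch, y_batch) in gdev(DataLoader((X, y); batchsize=32))
    @assert size(x_batch) == (4, 32) && size(y_batch) == (1, 32)
    global nbatches += 1
end
```

With 128 observations and batchsize=32, the loop above runs over four batches.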

      - + \ No newline at end of file diff --git a/dev/manual/preferences.html b/dev/manual/preferences.html index 1d491d05a9..b03b1db206 100644 --- a/dev/manual/preferences.html +++ b/dev/manual/preferences.html @@ -5,15 +5,15 @@ Preferences for Lux.jl | Lux.jl Docs - + - + - - - + + + @@ -28,10 +28,10 @@ -
      Skip to content

      Preferences for Lux.jl

      How to set Preferences

      PreferenceTools.jl provides an interactive way to set preferences. First run the following command:

      julia
      using PreferenceTools

      Then in the pkg mode (press ] in the REPL), run the following command:

      julia
      pkg> preference add Lux <preference-name>=<value>
      +    
      Skip to content

      Preferences for Lux.jl

      How to set Preferences

      PreferenceTools.jl provides an interactive way to set preferences. First run the following command:

      julia
      using PreferenceTools

Then in the pkg mode (press ] in the REPL), run the following commands:

      julia
      pkg> preference add Lux <preference-name>=<value>
       pkg> preference add LuxLib <preference-name>=<value>
       pkg> preference add LuxCore <preference-name>=<value>

Lux.jl relies on several preferences to decide how to run your code. Here is an exhaustive list of the preferences that Lux.jl uses.

      Nested Automatic Differentiation

      1. automatic_nested_ad_switching - Set this to false to disable automatic switching of backends for nested automatic differentiation. See the manual section on nested automatic differentiation for more details.

      GPU-Aware MPI Support

      If you are using a custom MPI build that supports CUDA or ROCM, you can use the following preferences with Preferences.jl:

      1. cuda_aware_mpi - Set this to true if your MPI build is CUDA aware.

      2. rocm_aware_mpi - Set this to true if your MPI build is ROCM aware.

      By default, both of these preferences are set to false.
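As a sketch, assuming Lux is in the active environment, these preferences can also be set non-interactively with Preferences.jl:

```julia
using Preferences, Lux

# Mark the MPI build as CUDA aware (defaults to false).
# Restart the Julia session for the change to take effect.
set_preferences!(Lux, "cuda_aware_mpi" => true; force=true)
```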

      GPU Backend Selection

1. gpu_backend - Set this to bypass the automatic backend selection and use a specific GPU backend. Valid options are "cuda", "rocm", "metal", and "oneapi". This preference needs to be set for the MLDataDevices package. It is recommended to use MLDataDevices.gpu_backend! to set this preference.
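For example, pinning the backend to CUDA (the Julia session must be restarted afterwards for the change to take effect):

```julia
using MLDataDevices

# Persists the preference in LocalPreferences.toml; pass "" to delete it.
MLDataDevices.gpu_backend!("cuda")
```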

      Automatic Eltype Conversion

      1. eltype_mismatch_handling - Preference controlling what happens when layers get different eltypes as input. See the documentation on match_eltype for more details.

      Dispatch Doctor

1. instability_check - Preference controlling the dispatch doctor. See the documentation on Lux.set_dispatch_doctor_preferences! for more details. The preference needs to be set for both the LuxCore and LuxLib packages; both default to "disable".
      • Setting the LuxCore preference sets the check at the level of LuxCore.apply. This essentially activates the dispatch doctor for all Lux layers.

• Setting the LuxLib preference sets the check at the level of Lux's functional layers, for example, fused_dense_bias_activation. These functions are supposed to be type stable for common input types and can be used to guarantee type stability.

      Disabling Loop Vectorization / Octavian

LoopVectorization.jl and Octavian.jl are optional dependencies that are used to accelerate certain CPU operations. However, these packages are tightly coupled with Julia internals and might not work with all Julia versions and systems. If these packages are loaded in any form, LuxLib will use the optimized versions of the functions. But it might be desirable to disable these packages and use the default implementations instead. This can be done by setting the disable_loop_vectorization preference to true for LuxLib.
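A sketch of setting this preference with Preferences.jl, assuming LuxLib is in the active environment:

```julia
using Preferences, LuxLib

# Fall back to the default kernels; restart Julia for this to take effect.
set_preferences!(LuxLib, "disable_loop_vectorization" => true; force=true)
```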

      - + \ No newline at end of file diff --git a/dev/manual/weight_initializers.html b/dev/manual/weight_initializers.html index 36b53115b3..9009f5cd1c 100644 --- a/dev/manual/weight_initializers.html +++ b/dev/manual/weight_initializers.html @@ -5,15 +5,15 @@ Initializing Weights | Lux.jl Docs - + - + - - - + + + @@ -28,7 +28,7 @@ -
      Skip to content

      Initializing Weights

      WeightInitializers.jl provides common weight initialization schemes for deep learning models.

      julia
      using WeightInitializers, Random
      +    
      Skip to content

      Initializing Weights

      WeightInitializers.jl provides common weight initialization schemes for deep learning models.

      julia
      using WeightInitializers, Random
       
       # Fixing rng
       rng = Random.MersenneTwister(42)
      Random.MersenneTwister(42)
      julia
      # Explicit rng call
      @@ -48,17 +48,17 @@
         0.486214   0.321506  -0.306641  0.145296   0.206476

To generate weights directly on the GPU, pass in a CUDA.RNG. For a complete list of supported RNG types, see Supported RNG Types.

      julia
      using LuxCUDA
       
       weights = kaiming_normal(CUDA.default_rng(), 2, 5)
      2×5 CUDA.CuArray{Float32, 2, CUDA.DeviceMemory}:
      - -0.550935  -0.671487  0.595443   0.612179  0.157552
      - -0.706354  -0.58597   0.0450307  0.222485  0.434273

      You can also generate Complex Numbers:

      julia
      weights = kaiming_normal(CUDA.default_rng(), ComplexF32, 2, 5)
      2×5 CUDA.CuArray{ComplexF32, 2, CUDA.DeviceMemory}:
      - 0.0939071-0.432439im    0.51712-0.177116im  …   0.859452+0.0580112im
      - 0.0271588+0.529416im  -0.471206+1.02357im      -0.631482+0.317067im

      Quick examples

The package is meant to work with deep learning libraries such as (F)Lux. All the methods take as input the chosen rng type and the dimensions of the array.

      julia
      weights = init(rng, dims...)

The rng is optional; if not specified, a default one will be used.

      julia
      weights = init(dims...)

If you need to use keyword arguments, the methods can be called with just the rng (optional) and the keyword arguments; they then return a function behaving like the two examples above.

      julia
      weights_init = init(rng; kwargs...)
      + -1.43846   -0.0514504  -0.752638   0.632952  -0.711561
      + -0.451763   0.846389   -0.053432  -0.857412  -0.351028

      You can also generate Complex Numbers:

      julia
      weights = kaiming_normal(CUDA.default_rng(), ComplexF32, 2, 5)
      2×5 CUDA.CuArray{ComplexF32, 2, CUDA.DeviceMemory}:
      +   0.180649+0.13021im   0.00332033+0.608668im  …  -0.509792+0.507333im
      + -0.0334668-0.253364im   -0.706174-0.81377im      0.0949819-0.0634295im

      Quick examples

The package is meant to work with deep learning libraries such as (F)Lux. All the methods take as input the chosen rng type and the dimensions of the array.

      julia
      weights = init(rng, dims...)

The rng is optional; if not specified, a default one will be used.

      julia
      weights = init(dims...)

If you need to use keyword arguments, the methods can be called with just the rng (optional) and the keyword arguments; they then return a function behaving like the two examples above.

      julia
      weights_init = init(rng; kwargs...)
       weights = weights_init(rng, dims...)
       
       # Or
       
       weights_init = init(; kwargs...)
       weights = weights_init(dims...)
      - + \ No newline at end of file diff --git a/dev/tutorials/advanced/1_GravitationalWaveForm.html b/dev/tutorials/advanced/1_GravitationalWaveForm.html index 6dde3b8ff4..7e2580f7f1 100644 --- a/dev/tutorials/advanced/1_GravitationalWaveForm.html +++ b/dev/tutorials/advanced/1_GravitationalWaveForm.html @@ -5,15 +5,15 @@ Training a Neural ODE to Model Gravitational Waveforms | Lux.jl Docs - + - + - - - + + + @@ -31,526 +31,641 @@
      Skip to content

      Training a Neural ODE to Model Gravitational Waveforms

This code is adapted from Astroinformatics/ScientificMachineLearning.

The code has been minimally adapted from Keith et al. (2021), which originally used Flux.jl.

      Package Imports

      julia
      using Lux, ComponentArrays, LineSearches, OrdinaryDiffEqLowOrderRK, Optimization,
             OptimizationOptimJL, Printf, Random, SciMLSensitivity
       using CairoMakie
      Precompiling Lux...
      -    491.1 ms  ✓ JLLWrappers
      -    554.4 ms  ✓ Requires
      -    552.6 ms  ✓ Compat
      -    629.1 ms  ✓ DocStringExtensions
      -    634.4 ms  ✓ CpuId
      -    820.3 ms  ✓ Static
      -    401.2 ms  ✓ Compat → CompatLinearAlgebraExt
      -    632.2 ms  ✓ Hwloc_jll
      -    661.6 ms  ✓ OpenSpecFun_jll
      -    431.8 ms  ✓ BitTwiddlingConvenienceFunctions
      -    684.5 ms  ✓ LogExpFunctions
      -    658.8 ms  ✓ Functors
      -   1631.2 ms  ✓ DispatchDoctor
      -   1087.2 ms  ✓ CPUSummary
      -    476.3 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      -   1311.3 ms  ✓ ChainRulesCore
      -   1012.2 ms  ✓ MLDataDevices
      -   1696.2 ms  ✓ StaticArrayInterface
      -    868.5 ms  ✓ PolyesterWeave
      -    545.8 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    551.7 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      -   1443.6 ms  ✓ LuxCore
      -    888.0 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      -    669.0 ms  ✓ CloseOpenIntervals
      -    744.1 ms  ✓ LayoutPointers
      -    822.5 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      -   1334.1 ms  ✓ Optimisers
      -   2495.6 ms  ✓ Hwloc
      -    595.4 ms  ✓ LuxCore → LuxCoreSetfieldExt
      -    597.0 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      -    608.0 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    628.4 ms  ✓ LuxCore → LuxCoreFunctorsExt
      -    745.5 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      -   1681.4 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      -    441.4 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    441.6 ms  ✓ Optimisers → OptimisersAdaptExt
      -   2906.4 ms  ✓ SpecialFunctions
      -   1013.3 ms  ✓ StrideArraysCore
      -    750.8 ms  ✓ Polyester
      -   1703.7 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      -   6816.8 ms  ✓ StaticArrays
      -   2749.4 ms  ✓ WeightInitializers
      -    593.3 ms  ✓ Adapt → AdaptStaticArraysExt
      -    602.7 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      -    617.4 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      -    643.8 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      -    673.6 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -   3505.1 ms  ✓ ForwardDiff
      -    931.9 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      -    832.3 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      -   3181.1 ms  ✓ KernelAbstractions
      -    622.3 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    698.8 ms  ✓ KernelAbstractions → EnzymeExt
      -   5027.9 ms  ✓ NNlib
      -    805.4 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    892.2 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5422.4 ms  ✓ LuxLib
      -   8900.0 ms  ✓ Lux
      -  58 dependencies successfully precompiled in 32 seconds. 51 already precompiled.
      +    394.7 ms  ✓ Future
      +    453.9 ms  ✓ ConcreteStructs
      +    387.9 ms  ✓ Reexport
      +    386.9 ms  ✓ SIMDTypes
      +    395.5 ms  ✓ OpenLibm_jll
      +    404.1 ms  ✓ CEnum
      +    397.8 ms  ✓ ManualMemory
      +    404.8 ms  ✓ ArgCheck
      +    495.3 ms  ✓ Requires
      +    505.2 ms  ✓ CompilerSupportLibraries_jll
      +    542.8 ms  ✓ Statistics
      +    600.6 ms  ✓ EnzymeCore
      +    626.1 ms  ✓ ADTypes
      +    337.8 ms  ✓ IfElse
      +    363.1 ms  ✓ CommonWorldInvalidations
      +    361.7 ms  ✓ FastClosures
      +    403.1 ms  ✓ StaticArraysCore
      +    472.3 ms  ✓ ConstructionBase
      +    950.9 ms  ✓ IrrationalConstants
      +    473.2 ms  ✓ NaNMath
      +    576.9 ms  ✓ Compat
      +    496.7 ms  ✓ JLLWrappers
      +    413.8 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      +    638.3 ms  ✓ DocStringExtensions
      +    644.4 ms  ✓ CpuId
      +    471.6 ms  ✓ Adapt
      +    419.7 ms  ✓ DiffResults
      +    419.8 ms  ✓ ConstructionBase → ConstructionBaseLinearAlgebraExt
      +    447.7 ms  ✓ ADTypes → ADTypesConstructionBaseExt
      +    824.8 ms  ✓ ThreadingUtilities
      +    404.9 ms  ✓ Compat → CompatLinearAlgebraExt
      +    386.2 ms  ✓ EnzymeCore → AdaptExt
      +    475.3 ms  ✓ GPUArraysCore
      +    815.3 ms  ✓ Static
      +    538.3 ms  ✓ ArrayInterface
      +    627.0 ms  ✓ Hwloc_jll
      +    609.0 ms  ✓ LogExpFunctions
      +    656.6 ms  ✓ OpenSpecFun_jll
      +   1735.7 ms  ✓ UnsafeAtomics
      +    396.1 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      +    443.8 ms  ✓ BitTwiddlingConvenienceFunctions
      +    633.2 ms  ✓ Functors
      +    398.8 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      +   2036.3 ms  ✓ MacroTools
      +    506.0 ms  ✓ Atomix
      +   1188.3 ms  ✓ ChainRulesCore
      +   1084.7 ms  ✓ CPUSummary
      +    674.2 ms  ✓ CommonSubexpressions
      +    839.1 ms  ✓ MLDataDevices
      +    412.9 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      +    441.1 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      +   1457.0 ms  ✓ StaticArrayInterface
      +    610.7 ms  ✓ PolyesterWeave
      +   1417.7 ms  ✓ Setfield
      +    697.6 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      +   1519.1 ms  ✓ DispatchDoctor
      +    488.7 ms  ✓ CloseOpenIntervals
      +    619.6 ms  ✓ LayoutPointers
      +   1127.9 ms  ✓ Optimisers
      +   2081.4 ms  ✓ Hwloc
      +   1388.1 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      +    453.7 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      +    445.2 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      +    448.7 ms  ✓ Optimisers → OptimisersAdaptExt
      +   2505.0 ms  ✓ SpecialFunctions
      +    655.4 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      +    973.5 ms  ✓ StrideArraysCore
      +    623.3 ms  ✓ DiffRules
      +   1236.1 ms  ✓ LuxCore
      +    468.5 ms  ✓ LuxCore → LuxCoreFunctorsExt
      +    467.0 ms  ✓ LuxCore → LuxCoreSetfieldExt
      +    473.9 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      +    477.7 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      +    730.6 ms  ✓ Polyester
      +    639.9 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      +   1673.2 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      +   2627.8 ms  ✓ WeightInitializers
      +   6046.2 ms  ✓ StaticArrays
      +    608.8 ms  ✓ Adapt → AdaptStaticArraysExt
      +    617.9 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      +    635.3 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      +    641.0 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      +    666.9 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      +    945.0 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      +   3331.0 ms  ✓ ForwardDiff
      +    886.6 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      +   3194.9 ms  ✓ KernelAbstractions
      +    686.5 ms  ✓ KernelAbstractions → LinearAlgebraExt
      +    732.4 ms  ✓ KernelAbstractions → EnzymeExt
      +   5372.0 ms  ✓ NNlib
      +    844.4 ms  ✓ NNlib → NNlibEnzymeCoreExt
      +    947.4 ms  ✓ NNlib → NNlibForwardDiffExt
      +   5897.7 ms  ✓ LuxLib
      +   9226.4 ms  ✓ Lux
      +  94 dependencies successfully precompiled in 33 seconds. 15 already precompiled.
       Precompiling ComponentArrays...
      -    876.0 ms  ✓ ComponentArrays
      +    899.7 ms  ✓ ComponentArrays
         1 dependency successfully precompiled in 1 seconds. 45 already precompiled.
       Precompiling MLDataDevicesComponentArraysExt...
      -    497.4 ms  ✓ MLDataDevices → MLDataDevicesComponentArraysExt
      +    527.0 ms  ✓ MLDataDevices → MLDataDevicesComponentArraysExt
         1 dependency successfully precompiled in 1 seconds. 48 already precompiled.
       Precompiling LuxComponentArraysExt...
      -    526.1 ms  ✓ ComponentArrays → ComponentArraysOptimisersExt
      -   1503.1 ms  ✓ Lux → LuxComponentArraysExt
      -   1876.2 ms  ✓ ComponentArrays → ComponentArraysKernelAbstractionsExt
      +    529.6 ms  ✓ ComponentArrays → ComponentArraysOptimisersExt
      +   1416.5 ms  ✓ Lux → LuxComponentArraysExt
      +   2037.1 ms  ✓ ComponentArrays → ComponentArraysKernelAbstractionsExt
         3 dependencies successfully precompiled in 2 seconds. 111 already precompiled.
       Precompiling LineSearches...
      -    980.1 ms  ✓ NLSolversBase
      -   1709.8 ms  ✓ LineSearches
      -  2 dependencies successfully precompiled in 3 seconds. 41 already precompiled.
      +    351.6 ms  ✓ UnPack
      +    515.1 ms  ✓ OrderedCollections
      +    588.6 ms  ✓ Serialization
      +    626.9 ms  ✓ FiniteDiff
      +    451.8 ms  ✓ Parameters
      +   1670.2 ms  ✓ Distributed
      +   1058.7 ms  ✓ NLSolversBase
      +   1793.9 ms  ✓ LineSearches
      +  8 dependencies successfully precompiled in 5 seconds. 35 already precompiled.
       Precompiling FiniteDiffStaticArraysExt...
      -    554.4 ms  ✓ FiniteDiff → FiniteDiffStaticArraysExt
      +    620.8 ms  ✓ FiniteDiff → FiniteDiffStaticArraysExt
         1 dependency successfully precompiled in 1 seconds. 21 already precompiled.
       Precompiling OrdinaryDiffEqLowOrderRK...
      -    417.5 ms  ✓ FastPower
      -    435.1 ms  ✓ MuladdMacro
      -    426.0 ms  ✓ InverseFunctions → InverseFunctionsDatesExt
      -    472.1 ms  ✓ LogExpFunctions → LogExpFunctionsInverseFunctionsExt
      -    527.1 ms  ✓ TruncatedStacktraces
      -    758.9 ms  ✓ PreallocationTools
      -    790.4 ms  ✓ FastBroadcast
      -    631.1 ms  ✓ FastPower → FastPowerForwardDiffExt
      -   1449.2 ms  ✓ RecipesBase
      -   1660.9 ms  ✓ DataStructures
      -   2067.6 ms  ✓ Accessors
      -    736.5 ms  ✓ Accessors → LinearAlgebraExt
      -   1315.0 ms  ✓ SymbolicIndexingInterface
      -   1715.7 ms  ✓ SciMLOperators
      -    495.2 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      -   1955.7 ms  ✓ RecursiveArrayTools
      -    717.5 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      -    797.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsFastBroadcastExt
      -  10992.2 ms  ✓ SciMLBase
      -   5769.1 ms  ✓ DiffEqBase
      -   4436.6 ms  ✓ OrdinaryDiffEqCore
      -   1470.8 ms  ✓ OrdinaryDiffEqCore → OrdinaryDiffEqCoreEnzymeCoreExt
      -   4097.4 ms  ✓ OrdinaryDiffEqLowOrderRK
      -  23 dependencies successfully precompiled in 33 seconds. 102 already precompiled.
      +    344.5 ms  ✓ IteratorInterfaceExtensions
      +    355.2 ms  ✓ CommonSolve
      +    357.5 ms  ✓ DataValueInterfaces
      +    371.1 ms  ✓ FastPower
      +    398.1 ms  ✓ SimpleUnPack
      +    396.4 ms  ✓ MuladdMacro
      +    391.6 ms  ✓ CompositionsBase
      +    424.7 ms  ✓ ExprTools
      +    417.3 ms  ✓ EnumX
      +    418.2 ms  ✓ DataAPI
      +    437.1 ms  ✓ SciMLStructures
      +    475.9 ms  ✓ InverseFunctions
      +    594.3 ms  ✓ TruncatedStacktraces
      +    735.8 ms  ✓ FunctionWrappers
      +    401.6 ms  ✓ TableTraits
      +    456.5 ms  ✓ RuntimeGeneratedFunctions
      +    427.2 ms  ✓ CompositionsBase → CompositionsBaseInverseFunctionsExt
      +    446.8 ms  ✓ InverseFunctions → InverseFunctionsDatesExt
      +    971.0 ms  ✓ FillArrays
      +    501.3 ms  ✓ LogExpFunctions → LogExpFunctionsInverseFunctionsExt
      +    691.1 ms  ✓ FastPower → FastPowerForwardDiffExt
      +    397.1 ms  ✓ FunctionWrappersWrappers
      +    778.9 ms  ✓ PreallocationTools
      +    763.8 ms  ✓ FastBroadcast
      +    441.1 ms  ✓ FillArrays → FillArraysStatisticsExt
      +    819.4 ms  ✓ Tables
      +   1461.8 ms  ✓ RecipesBase
      +   1709.5 ms  ✓ DataStructures
      +   2162.1 ms  ✓ Accessors
      +    884.3 ms  ✓ Accessors → LinearAlgebraExt
      +   1519.4 ms  ✓ SymbolicIndexingInterface
      +   1847.2 ms  ✓ SciMLOperators
      +    578.2 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      +   2058.8 ms  ✓ RecursiveArrayTools
      +    773.8 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      +    789.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsFastBroadcastExt
      +  11188.9 ms  ✓ MLStyle
      +   7812.3 ms  ✓ Expronicon
      +  11395.6 ms  ✓ SciMLBase
      +   5993.6 ms  ✓ DiffEqBase
      +   4619.2 ms  ✓ OrdinaryDiffEqCore
      +   1499.4 ms  ✓ OrdinaryDiffEqCore → OrdinaryDiffEqCoreEnzymeCoreExt
      +   4141.6 ms  ✓ OrdinaryDiffEqLowOrderRK
      +  43 dependencies successfully precompiled in 47 seconds. 82 already precompiled.
       Precompiling StaticArraysExt...
      -    631.9 ms  ✓ Accessors → StaticArraysExt
      +    681.6 ms  ✓ Accessors → StaticArraysExt
         1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
       Precompiling MLDataDevicesRecursiveArrayToolsExt...
      -    587.3 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
      +    623.5 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
         1 dependency successfully precompiled in 1 seconds. 47 already precompiled.
       Precompiling ComponentArraysRecursiveArrayToolsExt...
      -    674.5 ms  ✓ ComponentArrays → ComponentArraysRecursiveArrayToolsExt
      +    705.4 ms  ✓ ComponentArrays → ComponentArraysRecursiveArrayToolsExt
         1 dependency successfully precompiled in 1 seconds. 69 already precompiled.
       Precompiling ComponentArraysSciMLBaseExt...
      -    949.2 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      -   1088.6 ms  ✓ ComponentArrays → ComponentArraysSciMLBaseExt
      +    987.6 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      +   1128.0 ms  ✓ ComponentArrays → ComponentArraysSciMLBaseExt
         2 dependencies successfully precompiled in 1 seconds. 97 already precompiled.
       Precompiling DiffEqBaseChainRulesCoreExt...
          1519.9 ms  ✓ DiffEqBase → DiffEqBaseChainRulesCoreExt
         1 dependency successfully precompiled in 2 seconds. 125 already precompiled.
       Precompiling MLDataDevicesFillArraysExt...
      -    424.0 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      +    465.7 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
         1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
       Precompiling Optimization...
      -    443.0 ms  ✓ ProgressLogging
      -    521.6 ms  ✓ LoggingExtras
      -    619.6 ms  ✓ L_BFGS_B_jll
      -    834.1 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      -    845.3 ms  ✓ ProgressMeter
      -    896.2 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      -    663.1 ms  ✓ TerminalLoggers
      -    509.3 ms  ✓ LBFGSB
      -   1243.6 ms  ✓ SparseMatrixColorings
      -    455.1 ms  ✓ ConsoleProgressMonitor
      -    813.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseMatrixColoringsExt
      -   3586.2 ms  ✓ SparseConnectivityTracer
      -   2116.4 ms  ✓ OptimizationBase
      -   1946.0 ms  ✓ Optimization
      -  14 dependencies successfully precompiled in 8 seconds. 90 already precompiled.
      +    470.7 ms  ✓ SuiteSparse_jll
      +    482.2 ms  ✓ ProgressLogging
      +    533.9 ms  ✓ AbstractTrees
      +    540.6 ms  ✓ LoggingExtras
      +    647.6 ms  ✓ L_BFGS_B_jll
      +    870.8 ms  ✓ DifferentiationInterface
      +    875.8 ms  ✓ ProgressMeter
      +    432.2 ms  ✓ LeftChildRightSiblingTrees
      +    541.9 ms  ✓ LBFGSB
      +    519.7 ms  ✓ ConsoleProgressMonitor
      +    684.8 ms  ✓ TerminalLoggers
      +   3690.2 ms  ✓ SparseArrays
      +    639.5 ms  ✓ SuiteSparse
      +    646.0 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      +    675.6 ms  ✓ Statistics → SparseArraysExt
      +    691.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      +    715.7 ms  ✓ FillArrays → FillArraysSparseArraysExt
      +    818.4 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      +    876.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      +   1233.8 ms  ✓ SparseMatrixColorings
      +    884.3 ms  ✓ PDMats
      +    882.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseMatrixColoringsExt
      +    683.6 ms  ✓ FillArrays → FillArraysPDMatsExt
      +   3588.6 ms  ✓ SparseConnectivityTracer
      +   2199.6 ms  ✓ OptimizationBase
      +   2034.9 ms  ✓ Optimization
      +  26 dependencies successfully precompiled in 13 seconds. 78 already precompiled.
       Precompiling ChainRulesCoreSparseArraysExt...
      -    609.2 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      +    686.8 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
         1 dependency successfully precompiled in 1 seconds. 11 already precompiled.
       Precompiling SparseArraysExt...
      -    891.2 ms  ✓ KernelAbstractions → SparseArraysExt
      +    958.0 ms  ✓ KernelAbstractions → SparseArraysExt
         1 dependency successfully precompiled in 1 seconds. 26 already precompiled.
       Precompiling MLDataDevicesSparseArraysExt...
      -    640.6 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      +    693.5 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
         1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      +Precompiling FiniteDiffSparseArraysExt...
      +    655.8 ms  ✓ FiniteDiff → FiniteDiffSparseArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 16 already precompiled.
       Precompiling DiffEqBaseSparseArraysExt...
      -   1626.5 ms  ✓ DiffEqBase → DiffEqBaseSparseArraysExt
      +   1582.4 ms  ✓ DiffEqBase → DiffEqBaseSparseArraysExt
         1 dependency successfully precompiled in 2 seconds. 125 already precompiled.
      +Precompiling DifferentiationInterfaceFiniteDiffExt...
      +    472.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      +  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
       Precompiling DifferentiationInterfaceChainRulesCoreExt...
      -    384.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      -  1 dependency successfully precompiled in 0 seconds. 11 already precompiled.
      +    457.8 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      +  1 dependency successfully precompiled in 1 seconds. 11 already precompiled.
       Precompiling DifferentiationInterfaceStaticArraysExt...
      -    576.0 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      +    636.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
         1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
       Precompiling DifferentiationInterfaceForwardDiffExt...
      -    757.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      +    841.0 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
         1 dependency successfully precompiled in 1 seconds. 28 already precompiled.
       Precompiling SparseConnectivityTracerSpecialFunctionsExt...
      -   1150.6 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerLogExpFunctionsExt
      -   1534.0 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerSpecialFunctionsExt
      +   1218.4 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerLogExpFunctionsExt
      +   1609.2 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerSpecialFunctionsExt
         2 dependencies successfully precompiled in 2 seconds. 26 already precompiled.
       Precompiling SparseConnectivityTracerNNlibExt...
      -   1644.8 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNNlibExt
      +   1676.6 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNNlibExt
         1 dependency successfully precompiled in 2 seconds. 46 already precompiled.
       Precompiling SparseConnectivityTracerNaNMathExt...
      -   1205.7 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNaNMathExt
      +   1315.5 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNaNMathExt
         1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      +Precompiling OptimizationFiniteDiffExt...
      +    420.6 ms  ✓ OptimizationBase → OptimizationFiniteDiffExt
      +  1 dependency successfully precompiled in 1 seconds. 97 already precompiled.
       Precompiling OptimizationForwardDiffExt...
      -    602.4 ms  ✓ OptimizationBase → OptimizationForwardDiffExt
      +    678.7 ms  ✓ OptimizationBase → OptimizationForwardDiffExt
         1 dependency successfully precompiled in 1 seconds. 110 already precompiled.
       Precompiling OptimizationMLDataDevicesExt...
      -   1376.5 ms  ✓ OptimizationBase → OptimizationMLDataDevicesExt
      +   1418.5 ms  ✓ OptimizationBase → OptimizationMLDataDevicesExt
         1 dependency successfully precompiled in 2 seconds. 97 already precompiled.
       Precompiling HwlocTrees...
      -    496.8 ms  ✓ Hwloc → HwlocTrees
      +    570.4 ms  ✓ Hwloc → HwlocTrees
         1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
       Precompiling OptimizationOptimJL...
      -    477.3 ms  ✓ SortingAlgorithms
      -   2165.1 ms  ✓ StatsBase
      -   2976.7 ms  ✓ Optim
      -  12287.9 ms  ✓ OptimizationOptimJL
      -  4 dependencies successfully precompiled in 18 seconds. 136 already precompiled.
      +    372.0 ms  ✓ PtrArrays
      +    419.8 ms  ✓ StatsAPI
      +    482.7 ms  ✓ PositiveFactorizations
      +    475.3 ms  ✓ Missings
      +    542.8 ms  ✓ SortingAlgorithms
      +    496.8 ms  ✓ AliasTables
      +   2242.0 ms  ✓ StatsBase
      +   3129.7 ms  ✓ Optim
      +  11882.1 ms  ✓ OptimizationOptimJL
      +  9 dependencies successfully precompiled in 18 seconds. 131 already precompiled.
       Precompiling SciMLSensitivity...
      -    519.6 ms  ✓ StructIO
      -    536.7 ms  ✓ HashArrayMappedTries
      -    548.7 ms  ✓ PoissonRandom
      -    590.7 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      -    608.4 ms  ✓ Scratch
      -    721.9 ms  ✓ Accessors → StructArraysExt
      -    840.3 ms  ✓ Rmath_jll
      -    924.0 ms  ✓ oneTBB_jll
      -   1345.0 ms  ✓ Cassette
      -    866.8 ms  ✓ ResettableStacks
      -   1475.2 ms  ✓ KLU
      -    988.3 ms  ✓ StructArrays → StructArraysStaticArraysExt
      -    906.0 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      -   1794.3 ms  ✓ FastLapackInterface
      -    663.2 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      -   1578.8 ms  ✓ LazyArtifacts
      -   1610.7 ms  ✓ ZygoteRules
      -   1462.2 ms  ✓ QuadGK
      -   1684.4 ms  ✓ HypergeometricFunctions
      -   1297.9 ms  ✓ HostCPUFeatures
      -    561.4 ms  ✓ ScopedValues
      -   1155.1 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      -   2935.7 ms  ✓ IRTools
      -    777.8 ms  ✓ FunctionProperties
      -   3341.8 ms  ✓ TimerOutputs
      -   1411.2 ms  ✓ Rmath
      -   2171.9 ms  ✓ IntelOpenMP_jll
      -   2295.5 ms  ✓ LLVMExtra_jll
      -   2445.8 ms  ✓ Enzyme_jll
      -   5200.1 ms  ✓ Test
      -   3138.1 ms  ✓ ObjectFile
      -   4434.5 ms  ✓ SciMLJacobianOperators
      -    843.7 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      -    901.1 ms  ✓ Accessors → TestExt
      -   2624.9 ms  ✓ StatsFuns
      -   2008.0 ms  ✓ MKL_jll
      -   1606.2 ms  ✓ Sparspak
      -   1703.8 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      -    859.0 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      -   7505.6 ms  ✓ ChainRules
      -   1813.9 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -   6845.9 ms  ✓ Tracker
      -   8709.5 ms  ✓ Krylov
      -   6699.2 ms  ✓ DiffEqCallbacks
      -    919.4 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      -   1322.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      -   1538.6 ms  ✓ FastPower → FastPowerTrackerExt
      -   1616.7 ms  ✓ ArrayInterface → ArrayInterfaceTrackerExt
      -   1639.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      -   1811.3 ms  ✓ Tracker → TrackerPDMatsExt
      -   2806.7 ms  ✓ DiffEqBase → DiffEqBaseTrackerExt
      -   8958.9 ms  ✓ VectorizationBase
      -   5930.2 ms  ✓ Distributions
      -   7162.7 ms  ✓ LLVM
      -   1136.5 ms  ✓ SLEEFPirates
      -   1474.0 ms  ✓ Distributions → DistributionsTestExt
      -   1476.9 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   1903.3 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      -   2017.8 ms  ✓ DiffEqBase → DiffEqBaseDistributionsExt
      -  14531.2 ms  ✓ ArrayLayouts
      -    781.6 ms  ✓ ArrayLayouts → ArrayLayoutsSparseArraysExt
      -   2432.6 ms  ✓ LazyArrays
      -   3963.6 ms  ✓ DiffEqNoiseProcess
      -   4622.0 ms  ✓ GPUArrays
      -   1307.4 ms  ✓ LazyArrays → LazyArraysStaticArraysExt
      -  17512.8 ms  ✓ ReverseDiff
      -   3478.5 ms  ✓ FastPower → FastPowerReverseDiffExt
      -   3482.2 ms  ✓ ArrayInterface → ArrayInterfaceReverseDiffExt
      -   3654.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceReverseDiffExt
      -   4734.7 ms  ✓ PreallocationTools → PreallocationToolsReverseDiffExt
      -   4958.2 ms  ✓ DiffEqBase → DiffEqBaseReverseDiffExt
      -   5063.9 ms  ✓ DiffEqNoiseProcess → DiffEqNoiseProcessReverseDiffExt
      -  18538.1 ms  ✓ GPUCompiler
      -  21384.0 ms  ✓ LoopVectorization
      -   1165.2 ms  ✓ LoopVectorization → SpecialFunctionsExt
      -   1305.6 ms  ✓ LoopVectorization → ForwardDiffExt
      -   3945.1 ms  ✓ TriangularSolve
      -  29507.5 ms  ✓ Zygote
      -   1612.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceZygoteExt
      -   1946.3 ms  ✓ Zygote → ZygoteTrackerExt
      -   3108.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsZygoteExt
      -   3504.4 ms  ✓ SciMLBase → SciMLBaseZygoteExt
      -  16228.7 ms  ✓ RecursiveFactorization
      -   5413.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsReverseDiffExt
      -  30267.2 ms  ✓ LinearSolve
      -   2570.3 ms  ✓ LinearSolve → LinearSolveRecursiveArrayToolsExt
      -   2623.4 ms  ✓ LinearSolve → LinearSolveEnzymeExt
      -   4099.1 ms  ✓ LinearSolve → LinearSolveKernelAbstractionsExt
      - 199431.2 ms  ✓ Enzyme
      -   7082.8 ms  ✓ FastPower → FastPowerEnzymeExt
      -   7095.7 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
      -   7163.5 ms  ✓ QuadGK → QuadGKEnzymeExt
      -   7185.4 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
      -   7190.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceEnzymeExt
      -   7370.6 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
      -  17743.4 ms  ✓ Enzyme → EnzymeStaticArraysExt
      -  19363.4 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
      -  17569.3 ms  ✓ DiffEqBase → DiffEqBaseEnzymeExt
      -  29561.6 ms  ✓ SciMLSensitivity
      -  99 dependencies successfully precompiled in 279 seconds. 192 already precompiled.
      +    394.0 ms  ✓ RealDot
      +    408.0 ms  ✓ StructIO
      +    423.7 ms  ✓ HashArrayMappedTries
      +    434.1 ms  ✓ PoissonRandom
      +    452.7 ms  ✓ Scratch
      +    589.2 ms  ✓ AbstractFFTs
      +    667.6 ms  ✓ SparseInverseSubset
      +    794.5 ms  ✓ StructArrays
      +    956.2 ms  ✓ RandomNumbers
      +   1006.9 ms  ✓ Cassette
      +    988.7 ms  ✓ OffsetArrays
      +    615.9 ms  ✓ Rmath_jll
      +   1022.1 ms  ✓ KLU
      +    642.6 ms  ✓ oneTBB_jll
      +   1281.1 ms  ✓ FastLapackInterface
      +    647.2 ms  ✓ ResettableStacks
      +   1094.5 ms  ✓ ZygoteRules
      +   1066.5 ms  ✓ LazyArtifacts
      +    400.6 ms  ✓ ScopedValues
      +    825.1 ms  ✓ HostCPUFeatures
      +   1041.8 ms  ✓ QuadGK
      +   1879.3 ms  ✓ TimerOutputs
      +    436.8 ms  ✓ StructArrays → StructArraysAdaptExt
      +    495.7 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      +   1210.3 ms  ✓ HypergeometricFunctions
      +    454.3 ms  ✓ StructArrays → StructArraysLinearAlgebraExt
      +   1996.8 ms  ✓ IRTools
      +    705.1 ms  ✓ StructArrays → StructArraysSparseArraysExt
      +    706.4 ms  ✓ StructArrays → StructArraysStaticArraysExt
      +    509.7 ms  ✓ Accessors → StructArraysExt
      +    748.5 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      +    646.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      +    450.1 ms  ✓ OffsetArrays → OffsetArraysAdaptExt
      +    564.8 ms  ✓ FunctionProperties
      +    516.1 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      +    892.9 ms  ✓ Random123
      +   2065.5 ms  ✓ ObjectFile
      +    900.9 ms  ✓ Rmath
      +   3028.3 ms  ✓ Test
      +   1291.7 ms  ✓ IntelOpenMP_jll
      +   2931.8 ms  ✓ SciMLJacobianOperators
      +    627.0 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      +    673.4 ms  ✓ Accessors → TestExt
      +   1490.4 ms  ✓ Enzyme_jll
      +   1534.9 ms  ✓ LLVMExtra_jll
      +   1207.4 ms  ✓ Sparspak
      +   1373.1 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      +   1822.9 ms  ✓ StatsFuns
      +   4571.8 ms  ✓ DiffEqCallbacks
      +    720.6 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      +   6213.2 ms  ✓ Krylov
      +   5217.8 ms  ✓ Tracker
      +   1569.2 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      +   1011.1 ms  ✓ ArrayInterface → ArrayInterfaceTrackerExt
      +   1124.0 ms  ✓ FastPower → FastPowerTrackerExt
      +   1150.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      +   1261.7 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      +   1431.0 ms  ✓ Tracker → TrackerPDMatsExt
      +   5419.1 ms  ✓ ChainRules
      +   2335.4 ms  ✓ DiffEqBase → DiffEqBaseTrackerExt
      +    830.3 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      +   6854.1 ms  ✓ VectorizationBase
      +   6018.7 ms  ✓ LLVM
      +   5051.9 ms  ✓ Distributions
      +   1027.1 ms  ✓ SLEEFPirates
      +   7721.2 ms  ✓ MKL_jll
      +   1457.8 ms  ✓ Distributions → DistributionsTestExt
      +   1477.1 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      +  11969.9 ms  ✓ ArrayLayouts
      +   1780.2 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      +   1891.1 ms  ✓ DiffEqBase → DiffEqBaseDistributionsExt
      +    837.5 ms  ✓ ArrayLayouts → ArrayLayoutsSparseArraysExt
      +   2433.5 ms  ✓ LazyArrays
      +  14546.2 ms  ✓ ReverseDiff
      +   3749.3 ms  ✓ DiffEqNoiseProcess
      +   1328.2 ms  ✓ LazyArrays → LazyArraysStaticArraysExt
      +   4547.8 ms  ✓ GPUArrays
      +   3313.2 ms  ✓ ArrayInterface → ArrayInterfaceReverseDiffExt
      +   3335.4 ms  ✓ FastPower → FastPowerReverseDiffExt
      +   3490.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceReverseDiffExt
      +   3471.4 ms  ✓ PreallocationTools → PreallocationToolsReverseDiffExt
      +   4714.6 ms  ✓ DiffEqBase → DiffEqBaseReverseDiffExt
      +   4762.3 ms  ✓ DiffEqNoiseProcess → DiffEqNoiseProcessReverseDiffExt
      +  15955.5 ms  ✓ GPUCompiler
      +  17383.9 ms  ✓ LoopVectorization
      +   1133.7 ms  ✓ LoopVectorization → SpecialFunctionsExt
      +   1267.9 ms  ✓ LoopVectorization → ForwardDiffExt
      +   3215.7 ms  ✓ TriangularSolve
      +  24597.6 ms  ✓ Zygote
      +   1551.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceZygoteExt
      +   1863.5 ms  ✓ Zygote → ZygoteTrackerExt
      +   3198.0 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsZygoteExt
      +   3505.5 ms  ✓ SciMLBase → SciMLBaseZygoteExt
      +  15101.7 ms  ✓ RecursiveFactorization
      +   5352.2 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsReverseDiffExt
      +  28296.0 ms  ✓ LinearSolve
      +   2527.9 ms  ✓ LinearSolve → LinearSolveRecursiveArrayToolsExt
      +   2583.6 ms  ✓ LinearSolve → LinearSolveEnzymeExt
      +   4248.9 ms  ✓ LinearSolve → LinearSolveKernelAbstractionsExt
      + 188388.4 ms  ✓ Enzyme
      +   5528.8 ms  ✓ FastPower → FastPowerEnzymeExt
      +   5571.1 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
      +   5609.8 ms  ✓ QuadGK → QuadGKEnzymeExt
      +   5643.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceEnzymeExt
      +   5681.8 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
      +   5846.7 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
      +   7804.7 ms  ✓ Enzyme → EnzymeStaticArraysExt
      +  10642.1 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
      +   7971.6 ms  ✓ DiffEqBase → DiffEqBaseEnzymeExt
      +  21510.4 ms  ✓ SciMLSensitivity
      +  110 dependencies successfully precompiled in 247 seconds. 181 already precompiled.
      +  1 dependency had output during precompilation:
      +┌ MKL_jll
      +│   Downloading artifact: IntelOpenMP
      +
       Precompiling LuxLibSLEEFPiratesExt...
      -   2421.1 ms  ✓ LuxLib → LuxLibSLEEFPiratesExt
      +   2441.5 ms  ✓ LuxLib → LuxLibSLEEFPiratesExt
         1 dependency successfully precompiled in 3 seconds. 97 already precompiled.
       Precompiling LuxLibLoopVectorizationExt...
      -   4597.0 ms  ✓ LuxLib → LuxLibLoopVectorizationExt
      -  1 dependency successfully precompiled in 5 seconds. 105 already precompiled.
      +   4016.1 ms  ✓ LuxLib → LuxLibLoopVectorizationExt
      +  1 dependency successfully precompiled in 4 seconds. 105 already precompiled.
       Precompiling LuxLibEnzymeExt...
      -   1326.9 ms  ✓ LuxLib → LuxLibEnzymeExt
      +   1328.1 ms  ✓ LuxLib → LuxLibEnzymeExt
         1 dependency successfully precompiled in 2 seconds. 130 already precompiled.
       Precompiling LuxEnzymeExt...
      -   7532.3 ms  ✓ Lux → LuxEnzymeExt
      -  1 dependency successfully precompiled in 8 seconds. 146 already precompiled.
      +   6675.8 ms  ✓ Lux → LuxEnzymeExt
      +  1 dependency successfully precompiled in 7 seconds. 146 already precompiled.
       Precompiling OptimizationEnzymeExt...
      -  20489.3 ms  ✓ OptimizationBase → OptimizationEnzymeExt
      -  1 dependency successfully precompiled in 21 seconds. 109 already precompiled.
      +  13182.2 ms  ✓ OptimizationBase → OptimizationEnzymeExt
      +  1 dependency successfully precompiled in 14 seconds. 109 already precompiled.
       Precompiling MLDataDevicesTrackerExt...
      -   1148.9 ms  ✓ MLDataDevices → MLDataDevicesTrackerExt
      +   1206.5 ms  ✓ MLDataDevices → MLDataDevicesTrackerExt
         1 dependency successfully precompiled in 1 seconds. 59 already precompiled.
       Precompiling LuxLibTrackerExt...
      -   1074.6 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      -   3236.9 ms  ✓ LuxLib → LuxLibTrackerExt
      -  2 dependencies successfully precompiled in 3 seconds. 100 already precompiled.
      +   1108.5 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      +   3350.4 ms  ✓ LuxLib → LuxLibTrackerExt
      +  2 dependencies successfully precompiled in 4 seconds. 100 already precompiled.
       Precompiling LuxTrackerExt...
      -   2038.3 ms  ✓ Lux → LuxTrackerExt
      +   2125.0 ms  ✓ Lux → LuxTrackerExt
         1 dependency successfully precompiled in 2 seconds. 114 already precompiled.
       Precompiling ComponentArraysTrackerExt...
      -   1155.5 ms  ✓ ComponentArrays → ComponentArraysTrackerExt
      +   1189.0 ms  ✓ ComponentArrays → ComponentArraysTrackerExt
         1 dependency successfully precompiled in 1 seconds. 70 already precompiled.
       Precompiling MLDataDevicesReverseDiffExt...
      -   3498.5 ms  ✓ MLDataDevices → MLDataDevicesReverseDiffExt
      +   3431.4 ms  ✓ MLDataDevices → MLDataDevicesReverseDiffExt
         1 dependency successfully precompiled in 4 seconds. 49 already precompiled.
       Precompiling LuxLibReverseDiffExt...
      -   3387.4 ms  ✓ LuxCore → LuxCoreArrayInterfaceReverseDiffExt
      -   4268.7 ms  ✓ LuxLib → LuxLibReverseDiffExt
      +   3363.2 ms  ✓ LuxCore → LuxCoreArrayInterfaceReverseDiffExt
      +   4162.8 ms  ✓ LuxLib → LuxLibReverseDiffExt
         2 dependencies successfully precompiled in 4 seconds. 98 already precompiled.
       Precompiling ComponentArraysReverseDiffExt...
      -   3584.6 ms  ✓ ComponentArrays → ComponentArraysReverseDiffExt
      +   3465.8 ms  ✓ ComponentArrays → ComponentArraysReverseDiffExt
         1 dependency successfully precompiled in 4 seconds. 57 already precompiled.
       Precompiling OptimizationReverseDiffExt...
      -   3383.0 ms  ✓ OptimizationBase → OptimizationReverseDiffExt
      +   3342.4 ms  ✓ OptimizationBase → OptimizationReverseDiffExt
         1 dependency successfully precompiled in 4 seconds. 130 already precompiled.
       Precompiling LuxReverseDiffExt...
      -   4398.6 ms  ✓ Lux → LuxReverseDiffExt
      +   4344.1 ms  ✓ Lux → LuxReverseDiffExt
         1 dependency successfully precompiled in 5 seconds. 115 already precompiled.
       Precompiling MLDataDevicesChainRulesExt...
      -    793.9 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      +    872.8 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
         1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
       Precompiling MLDataDevicesZygoteExt...
      -   1552.7 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
      -   1579.9 ms  ✓ MLDataDevices → MLDataDevicesGPUArraysExt
      +   1590.9 ms  ✓ MLDataDevices → MLDataDevicesGPUArraysExt
      +   1614.8 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
         2 dependencies successfully precompiled in 2 seconds. 108 already precompiled.
       Precompiling LuxZygoteExt...
      -   1645.8 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      -   2696.5 ms  ✓ Lux → LuxZygoteExt
      +   1677.6 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      +   2840.9 ms  ✓ Lux → LuxZygoteExt
         2 dependencies successfully precompiled in 3 seconds. 165 already precompiled.
       Precompiling ComponentArraysZygoteExt...
      -   1549.5 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      -   1826.0 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
      +   1599.8 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      +   1823.6 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
         2 dependencies successfully precompiled in 2 seconds. 116 already precompiled.
       Precompiling OptimizationZygoteExt...
      -   2164.3 ms  ✓ OptimizationBase → OptimizationZygoteExt
      -  1 dependency successfully precompiled in 2 seconds. 160 already precompiled.
      +   2207.7 ms  ✓ OptimizationBase → OptimizationZygoteExt
      +  1 dependency successfully precompiled in 3 seconds. 160 already precompiled.
       Precompiling CairoMakie...
      -    532.3 ms  ✓ RangeArrays
      -    515.1 ms  ✓ PolygonOps
      -    516.4 ms  ✓ IndirectArrays
      -    524.0 ms  ✓ LaTeXStrings
      -    565.6 ms  ✓ GeoFormatTypes
      -    571.2 ms  ✓ Contour
      -    591.2 ms  ✓ TensorCore
      -    628.0 ms  ✓ TriplotBase
      -    639.0 ms  ✓ StableRNGs
      -    691.0 ms  ✓ Extents
      -    703.5 ms  ✓ Observables
      -    742.1 ms  ✓ IntervalSets
      -    744.9 ms  ✓ RoundingEmulator
      -    834.5 ms  ✓ IterTools
      -    457.4 ms  ✓ CRC32c
      -    526.0 ms  ✓ Ratios
      -    550.6 ms  ✓ LazyModules
      -    601.2 ms  ✓ PCRE2_jll
      -   1160.8 ms  ✓ Grisu
      -    615.2 ms  ✓ Inflate
      -    599.2 ms  ✓ MappedArrays
      -    545.5 ms  ✓ RelocatableFolders
      -    780.8 ms  ✓ TranscodingStreams
      -   1577.5 ms  ✓ Format
      -    984.0 ms  ✓ SharedArrays
      -    876.0 ms  ✓ OpenSSL_jll
      -    805.3 ms  ✓ Graphite2_jll
      -    836.5 ms  ✓ LLVMOpenMP_jll
      -    815.4 ms  ✓ Bzip2_jll
      -    876.1 ms  ✓ Libmount_jll
      -    831.4 ms  ✓ libfdk_aac_jll
      -    924.3 ms  ✓ Xorg_libXau_jll
      -    839.4 ms  ✓ Imath_jll
      -    872.4 ms  ✓ libpng_jll
      -    817.1 ms  ✓ Giflib_jll
      -    989.0 ms  ✓ LAME_jll
      -   1592.0 ms  ✓ SimpleTraits
      -    855.9 ms  ✓ LERC_jll
      -    850.5 ms  ✓ EarCut_jll
      -    833.9 ms  ✓ CRlibm_jll
      -    929.8 ms  ✓ JpegTurbo_jll
      -    839.7 ms  ✓ Ogg_jll
      -    847.5 ms  ✓ x265_jll
      -    930.1 ms  ✓ XZ_jll
      -    838.0 ms  ✓ Xorg_libXdmcp_jll
      -    854.5 ms  ✓ x264_jll
      -    875.3 ms  ✓ libaom_jll
      -    885.1 ms  ✓ Zstd_jll
      -    861.8 ms  ✓ Expat_jll
      -   2356.2 ms  ✓ UnicodeFun
      -    838.8 ms  ✓ LZO_jll
      -    841.3 ms  ✓ Opus_jll
      -    717.8 ms  ✓ Xorg_xtrans_jll
      -    838.3 ms  ✓ Libffi_jll
      -    872.3 ms  ✓ Libiconv_jll
      -    818.6 ms  ✓ Libgpg_error_jll
      -    755.5 ms  ✓ Xorg_libpthread_stubs_jll
      -    856.0 ms  ✓ isoband_jll
      -    867.2 ms  ✓ FFTW_jll
      -    845.3 ms  ✓ FriBidi_jll
      -    829.6 ms  ✓ Libuuid_jll
      -    532.0 ms  ✓ IntervalSets → IntervalSetsRandomExt
      -    524.7 ms  ✓ IntervalSets → IntervalSetsStatisticsExt
      -    527.5 ms  ✓ ConstructionBase → ConstructionBaseIntervalSetsExt
      -    547.6 ms  ✓ Ratios → RatiosFixedPointNumbersExt
      -    572.0 ms  ✓ Showoff
      -    656.4 ms  ✓ MosaicViews
      -   1510.9 ms  ✓ FilePathsBase
      -    855.5 ms  ✓ Pixman_jll
      -   1453.4 ms  ✓ GeoInterface
      -    887.6 ms  ✓ FreeType2_jll
      -   1007.6 ms  ✓ OpenEXR_jll
      -   1835.2 ms  ✓ ColorBrewer
      -    912.5 ms  ✓ libsixel_jll
      -    943.6 ms  ✓ libvorbis_jll
      -    935.4 ms  ✓ Libtiff_jll
      -    645.9 ms  ✓ Isoband
      -    962.7 ms  ✓ XML2_jll
      -    879.5 ms  ✓ Libgcrypt_jll
      -    844.9 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      -   1120.0 ms  ✓ AxisArrays
      -   2999.8 ms  ✓ ColorVectorSpace
      -   1064.5 ms  ✓ FilePaths
      -   1131.8 ms  ✓ Fontconfig_jll
      -   1151.7 ms  ✓ Gettext_jll
      -   2947.0 ms  ✓ Interpolations
      -   1440.1 ms  ✓ FreeType
      -   1035.2 ms  ✓ XSLT_jll
      -   1822.2 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -   1024.1 ms  ✓ ColorVectorSpace → SpecialFunctionsExt
      -   3759.5 ms  ✓ IntervalArithmetic
      -   1256.5 ms  ✓ Glib_jll
      -   5306.1 ms  ✓ PkgVersion
      -   5392.6 ms  ✓ FileIO
      -    840.7 ms  ✓ IntervalArithmetic → IntervalArithmeticIntervalSetsExt
      -   1719.0 ms  ✓ Xorg_libxcb_jll
      -    658.4 ms  ✓ Xorg_libX11_jll
      -    647.1 ms  ✓ Xorg_libXext_jll
      -    817.7 ms  ✓ Xorg_libXrender_jll
      -   1689.9 ms  ✓ QOI
      -   4094.7 ms  ✓ ColorSchemes
      -    912.6 ms  ✓ Libglvnd_jll
      -   2345.9 ms  ✓ OpenEXR
      -   1076.1 ms  ✓ Cairo_jll
      -   1550.8 ms  ✓ libwebp_jll
      -   1399.6 ms  ✓ HarfBuzz_jll
      -   7925.6 ms  ✓ FFTW
      -   7438.4 ms  ✓ GeometryBasics
      -   6105.2 ms  ✓ ExactPredicates
      -  10882.4 ms  ✓ SIMD
      -   1349.5 ms  ✓ libass_jll
      -   1414.4 ms  ✓ Pango_jll
      -   1736.2 ms  ✓ Packing
      -   2325.6 ms  ✓ ShaderAbstractions
      -   1367.2 ms  ✓ FFMPEG_jll
      -   2916.6 ms  ✓ FreeTypeAbstraction
      -   1742.4 ms  ✓ Cairo
      -   3073.0 ms  ✓ KernelDensity
      -   5911.6 ms  ✓ MakieCore
      -   6151.3 ms  ✓ DelaunayTriangulation
      -   7962.2 ms  ✓ GridLayoutBase
      -  11568.1 ms  ✓ PlotUtils
      -  22893.7 ms  ✓ Unitful
      -    586.3 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      -    590.6 ms  ✓ Unitful → InverseFunctionsUnitfulExt
      -   1425.8 ms  ✓ Interpolations → InterpolationsUnitfulExt
      -  12104.4 ms  ✓ Automa
      -  21254.1 ms  ✓ ImageCore
      -   2100.7 ms  ✓ ImageBase
      -   2629.4 ms  ✓ WebP
      -   3426.7 ms  ✓ PNGFiles
      -   3607.1 ms  ✓ JpegTurbo
      -   2191.5 ms  ✓ ImageAxes
      -   4724.6 ms  ✓ Sixel
      -   1163.4 ms  ✓ ImageMetadata
      -   1999.8 ms  ✓ Netpbm
      -  12307.5 ms  ✓ MathTeXEngine
      -  49319.1 ms  ✓ TiffImages
      -   1176.4 ms  ✓ ImageIO
      - 112829.4 ms  ✓ Makie
      -  74538.7 ms  ✓ CairoMakie
      -  141 dependencies successfully precompiled in 252 seconds. 129 already precompiled.
      +    422.2 ms  ✓ RangeArrays
      +    407.9 ms  ✓ PolygonOps
      +    431.3 ms  ✓ IndirectArrays
      +    443.4 ms  ✓ LaTeXStrings
      +    460.9 ms  ✓ GeoFormatTypes
      +    470.0 ms  ✓ Contour
      +    488.3 ms  ✓ TensorCore
      +    484.1 ms  ✓ TriplotBase
      +    487.2 ms  ✓ StableRNGs
      +    528.4 ms  ✓ PaddedViews
      +    530.4 ms  ✓ Observables
      +    534.1 ms  ✓ IntervalSets
      +    543.6 ms  ✓ RoundingEmulator
      +    570.3 ms  ✓ IterTools
      +    442.2 ms  ✓ PCRE2_jll
      +    842.7 ms  ✓ Grisu
      +    378.6 ms  ✓ CRC32c
      +    479.5 ms  ✓ Extents
      +    424.6 ms  ✓ Ratios
      +    462.0 ms  ✓ LazyModules
      +    450.4 ms  ✓ MappedArrays
      +    496.9 ms  ✓ Inflate
      +    477.9 ms  ✓ StackViews
      +    593.6 ms  ✓ TranscodingStreams
      +   1121.1 ms  ✓ Format
      +    714.7 ms  ✓ SharedArrays
      +    732.3 ms  ✓ WoodburyMatrices
      +    423.0 ms  ✓ SignedDistanceFields
      +    452.4 ms  ✓ RelocatableFolders
      +    657.8 ms  ✓ Graphite2_jll
      +    692.3 ms  ✓ OpenSSL_jll
      +    645.5 ms  ✓ Libmount_jll
      +    625.1 ms  ✓ LLVMOpenMP_jll
      +    640.5 ms  ✓ Bzip2_jll
      +    650.6 ms  ✓ Xorg_libXau_jll
      +    670.7 ms  ✓ libpng_jll
      +    641.6 ms  ✓ libfdk_aac_jll
      +    640.5 ms  ✓ Giflib_jll
      +    650.4 ms  ✓ Imath_jll
      +    655.9 ms  ✓ LAME_jll
      +   1585.4 ms  ✓ AdaptivePredicates
      +   1267.9 ms  ✓ SimpleTraits
      +    660.2 ms  ✓ LERC_jll
      +    656.0 ms  ✓ EarCut_jll
      +    665.3 ms  ✓ CRlibm_jll
      +    700.3 ms  ✓ JpegTurbo_jll
      +    644.3 ms  ✓ Ogg_jll
      +    707.1 ms  ✓ XZ_jll
      +   1599.0 ms  ✓ UnicodeFun
      +    678.5 ms  ✓ x265_jll
      +   1977.8 ms  ✓ FixedPointNumbers
      +    654.1 ms  ✓ Xorg_libXdmcp_jll
      +    673.5 ms  ✓ x264_jll
      +    612.0 ms  ✓ Expat_jll
      +    695.7 ms  ✓ libaom_jll
      +    690.1 ms  ✓ Zstd_jll
      +    663.8 ms  ✓ LZO_jll
      +    587.7 ms  ✓ Xorg_xtrans_jll
      +    668.9 ms  ✓ Opus_jll
      +    683.0 ms  ✓ Libiconv_jll
      +    653.9 ms  ✓ Libffi_jll
      +    659.2 ms  ✓ Libgpg_error_jll
      +    660.2 ms  ✓ isoband_jll
      +    572.1 ms  ✓ Xorg_libpthread_stubs_jll
      +    666.0 ms  ✓ FFTW_jll
      +    675.0 ms  ✓ FriBidi_jll
      +    650.2 ms  ✓ Libuuid_jll
      +    449.0 ms  ✓ IntervalSets → IntervalSetsRandomExt
      +    449.8 ms  ✓ IntervalSets → IntervalSetsStatisticsExt
      +    452.8 ms  ✓ ConstructionBase → ConstructionBaseIntervalSetsExt
      +    473.6 ms  ✓ Showoff
      +    508.9 ms  ✓ MosaicViews
      +   1018.6 ms  ✓ FilePathsBase
      +    683.2 ms  ✓ Pixman_jll
      +    743.4 ms  ✓ AxisAlgorithms
      +    695.2 ms  ✓ FreeType2_jll
      +    491.3 ms  ✓ Ratios → RatiosFixedPointNumbersExt
      +    703.4 ms  ✓ libsixel_jll
      +    776.6 ms  ✓ OpenEXR_jll
      +    721.6 ms  ✓ libvorbis_jll
      +   1081.8 ms  ✓ GeoInterface
      +    711.5 ms  ✓ Libtiff_jll
      +    508.4 ms  ✓ Isoband
      +    731.3 ms  ✓ XML2_jll
      +    679.4 ms  ✓ Libgcrypt_jll
      +    582.2 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      +    859.6 ms  ✓ AxisArrays
      +    833.7 ms  ✓ FilePaths
      +   1494.1 ms  ✓ ColorTypes
      +    860.5 ms  ✓ Fontconfig_jll
      +   2502.7 ms  ✓ PkgVersion
      +    744.9 ms  ✓ Gettext_jll
      +   1040.5 ms  ✓ FreeType
      +   1289.6 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      +    751.0 ms  ✓ XSLT_jll
      +    560.8 ms  ✓ ColorTypes → StyledStringsExt
      +   2542.1 ms  ✓ IntervalArithmetic
      +    889.8 ms  ✓ Glib_jll
      +   3468.8 ms  ✓ FileIO
      +   2083.5 ms  ✓ Interpolations
      +    544.9 ms  ✓ IntervalArithmetic → IntervalArithmeticIntervalSetsExt
      +   1177.4 ms  ✓ Xorg_libxcb_jll
      +   1912.1 ms  ✓ ColorVectorSpace
      +    689.0 ms  ✓ Xorg_libX11_jll
      +    838.1 ms  ✓ ColorVectorSpace → SpecialFunctionsExt
      +   1505.2 ms  ✓ QOI
      +    671.4 ms  ✓ Xorg_libXrender_jll
      +    668.6 ms  ✓ Xorg_libXext_jll
      +   4807.9 ms  ✓ FFTW
      +    895.2 ms  ✓ Libglvnd_jll
      +    923.1 ms  ✓ Cairo_jll
      +   4002.3 ms  ✓ Colors
      +   6750.4 ms  ✓ SIMD
      +    647.7 ms  ✓ Graphics
      +    661.9 ms  ✓ Animations
      +   3850.6 ms  ✓ ExactPredicates
      +    903.6 ms  ✓ libwebp_jll
      +    880.6 ms  ✓ HarfBuzz_jll
      +    851.4 ms  ✓ ColorBrewer
      +    784.9 ms  ✓ libass_jll
      +   1817.0 ms  ✓ KernelDensity
      +    850.6 ms  ✓ Pango_jll
      +   1619.6 ms  ✓ OpenEXR
      +    987.5 ms  ✓ FFMPEG_jll
      +   1358.5 ms  ✓ Cairo
      +   3537.3 ms  ✓ ColorSchemes
      +   9513.6 ms  ✓ GeometryBasics
      +   5259.9 ms  ✓ DelaunayTriangulation
      +   1207.8 ms  ✓ Packing
      +   1326.0 ms  ✓ ShaderAbstractions
      +   2112.2 ms  ✓ FreeTypeAbstraction
      +  15568.6 ms  ✓ Unitful
      +    601.0 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      +    612.5 ms  ✓ Unitful → InverseFunctionsUnitfulExt
      +   1287.0 ms  ✓ Interpolations → InterpolationsUnitfulExt
      +   8314.0 ms  ✓ Automa
      +   3920.4 ms  ✓ MakieCore
      +   5349.4 ms  ✓ GridLayoutBase
      +   8321.6 ms  ✓ PlotUtils
      +  14392.5 ms  ✓ ImageCore
      +   1985.2 ms  ✓ ImageBase
      +   2375.0 ms  ✓ WebP
      +   3096.9 ms  ✓ PNGFiles
      +   3252.9 ms  ✓ JpegTurbo
      +   3341.5 ms  ✓ Sixel
      +   8991.4 ms  ✓ MathTeXEngine
      +   2184.2 ms  ✓ ImageAxes
      +   1273.7 ms  ✓ ImageMetadata
      +   1918.1 ms  ✓ Netpbm
      +  43256.2 ms  ✓ TiffImages
      +   1321.2 ms  ✓ ImageIO
      + 106541.5 ms  ✓ Makie
      +  82478.8 ms  ✓ CairoMakie
      +  153 dependencies successfully precompiled in 243 seconds. 118 already precompiled.
       Precompiling SparseMatrixColoringsColorsExt...
      -    869.7 ms  ✓ SparseMatrixColorings → SparseMatrixColoringsColorsExt
      +    959.5 ms  ✓ SparseMatrixColorings → SparseMatrixColoringsColorsExt
         1 dependency successfully precompiled in 1 seconds. 29 already precompiled.
       Precompiling ZygoteColorsExt...
      -   1732.9 ms  ✓ Zygote → ZygoteColorsExt
      +   1822.0 ms  ✓ Zygote → ZygoteColorsExt
         1 dependency successfully precompiled in 2 seconds. 105 already precompiled.
       Precompiling IntervalSetsExt...
      -    784.1 ms  ✓ Accessors → IntervalSetsExt
      +   1039.0 ms  ✓ Accessors → IntervalSetsExt
         1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
       Precompiling IntervalSetsRecipesBaseExt...
      -    515.5 ms  ✓ IntervalSets → IntervalSetsRecipesBaseExt
      +    637.6 ms  ✓ IntervalSets → IntervalSetsRecipesBaseExt
         1 dependency successfully precompiled in 1 seconds. 9 already precompiled.
       Precompiling UnitfulExt...
      -    585.0 ms  ✓ Accessors → UnitfulExt
      +    656.6 ms  ✓ Accessors → UnitfulExt
         1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
       Precompiling DiffEqBaseUnitfulExt...
      -   1538.2 ms  ✓ DiffEqBase → DiffEqBaseUnitfulExt
      +   1543.3 ms  ✓ DiffEqBase → DiffEqBaseUnitfulExt
         1 dependency successfully precompiled in 2 seconds. 123 already precompiled.
       Precompiling NNlibFFTWExt...
      -    860.7 ms  ✓ NNlib → NNlibFFTWExt
      +    932.7 ms  ✓ NNlib → NNlibFFTWExt
         1 dependency successfully precompiled in 1 seconds. 54 already precompiled.
       Precompiling IntervalArithmeticForwardDiffExt...
      -    452.2 ms  ✓ IntervalArithmetic → IntervalArithmeticDiffRulesExt
      -    642.3 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
      +    553.9 ms  ✓ IntervalArithmetic → IntervalArithmeticDiffRulesExt
      +    745.8 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
         2 dependencies successfully precompiled in 1 seconds. 42 already precompiled.
       Precompiling IntervalArithmeticRecipesBaseExt...
      -    756.5 ms  ✓ IntervalArithmetic → IntervalArithmeticRecipesBaseExt
      +    891.2 ms  ✓ IntervalArithmetic → IntervalArithmeticRecipesBaseExt
         1 dependency successfully precompiled in 1 seconds. 31 already precompiled.
       Precompiling SciMLBaseMakieExt...
      -   9269.6 ms  ✓ SciMLBase → SciMLBaseMakieExt
      -  1 dependency successfully precompiled in 10 seconds. 303 already precompiled.

      +   8282.2 ms  ✓ SciMLBase → SciMLBaseMakieExt
      +  1 dependency successfully precompiled in 9 seconds. 304 already precompiled.

      Define some Utility Functions

      Tip

This section can be skipped. It defines functions to simulate the model; from a scientific machine learning perspective, however, it isn't especially relevant.

We need a very crude 2-body path. Assume the 1-body motion corresponds to the Newtonian 2-body relative position vector r = r₁ − r₂ and use Newtonian formulas to get r₁, r₂ (e.g. Theoretical Mechanics of Particles and Continua 4.3)
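For reference, the split implemented by one2two follows from the center-of-mass relations (taking the center of mass at the origin); a short derivation:

```latex
% Relative coordinate and center-of-mass condition:
r = r_1 - r_2, \qquad m_1 r_1 + m_2 r_2 = 0, \qquad M = m_1 + m_2
% Solving these two linear equations for r_1 and r_2:
\implies \quad r_1 = \frac{m_2}{M}\, r, \qquad r_2 = -\frac{m_1}{M}\, r
```

These are exactly the expressions computed from `path` in the function below.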

      julia
      function one2two(path, m₁, m₂)
           M = m₁ + m₂
           r₁ = m₂ / M .* path
           r₂ = -m₁ / M .* path
      @@ -701,11 +816,11 @@
           axislegend(ax, [[l, s]], ["Waveform Data"])
       
           fig
      -end

      +end

Defining a Neural Network Model

Next, we define the neural network model that takes one input (time) and has two outputs. We'll make a function ODE_model that takes the initial conditions, neural network parameters, and a time as inputs and returns the derivatives.

Using globals is generally discouraged, but in case you do use them, make sure to mark them as const.
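
To illustrate why (a minimal hypothetical sketch, not from the tutorial): a `const` global has a fixed type, so functions referencing it compile to specialized code, while a non-const global forces dynamic lookups on every access.

```julia
# `const` fixes the global's type; `scale_all` can then be compiled
# against a known Float64 instead of a type-unstable global binding.
const SCALE = 2.0
scale_all(x) = SCALE .* x

scale_all([1.0, 2.0, 3.0])   # [2.0, 4.0, 6.0]
```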

We will deviate from the standard neural network initialization and use WeightInitializers.jl instead.

      julia
      const nn = Chain(Base.Fix1(fast_activation, cos),
           Dense(1 => 32, cos; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32),
           Dense(32 => 32, cos; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32),
           Dense(32 => 2; init_weight=truncated_normal(; std=1e-4), init_bias=zeros32))
ps, st = Lux.setup(Random.default_rng(), nn)
      ((layer_1 = NamedTuple(), layer_2 = (weight = Float32[-3.4446337f-5; 0.00010394415; 1.0407738f-5; -6.452793f-5; 4.626418f-5; -1.9925024f-5; -9.464696f-5; 0.00011828133; -0.0001320208; -2.9316854f-5; 0.0002301521; 0.00014383887; -0.000110991656; 0.0001545633; -8.774544f-5; 5.4517077f-5; 1.6845874f-5; -7.0865055f-5; -2.4955572f-5; -0.00011097498; 5.276532f-5; 5.01855f-5; -0.00018031502; 4.723036f-6; 0.00010012918; -2.184352f-5; 0.00017082684; 2.1053067f-5; 2.5283302f-5; 3.8338072f-5; -1.7683138f-5; -6.7184796f-5;;], bias = Float32[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]), layer_3 = (weight = Float32[-4.2053907f-5 4.3386925f-5 -3.131377f-5 -6.60632f-5 -3.9905488f-5 -8.437089f-6 7.8690624f-5 -0.00021180407 3.879939f-5 3.3297678f-5 -5.6983095f-5 6.549813f-5 -4.6091885f-5 0.00023636897 -9.8516684f-5 0.00014633292 5.217717f-5 -5.8151585f-5 2.6223264f-5 0.00020843332 0.00017904844 0.00014736797 7.529488f-5 -6.913635f-6 0.00014460477 -1.672923f-5 7.8827085f-5 6.582538f-5 0.00020819326 5.399849f-6 -0.00014811082 -0.00018036878; 9.777582f-5 -4.8093298f-5 1.485744f-5 -9.0358415f-5 1.924479f-5 3.6809688f-5 0.00014026738 -5.393586f-5 -1.7883678f-5 -0.00014276597 9.19728f-5 -0.00023419409 0.00011673116 -1.031767f-5 8.732497f-5 3.536482f-5 -0.0001028683 -3.4174761f-6 0.0002528165 -4.212384f-5 -2.2573255f-5 0.00022161243 7.783719f-5 9.593876f-6 -0.00014780935 -4.7141395f-5 -1.852038f-5 -1.7244594f-5 0.00010680739 -6.692015f-5 7.6503886f-5 6.023557f-5; -6.1795204f-6 -1.6390075f-5 -0.00013397221 0.00017982828 -2.661647f-5 0.00012682185 -3.583649f-5 -4.9254697f-5 -1.5946554f-5 0.00015366152 5.8358964f-5 -2.492977f-5 2.3328961f-5 0.00010656902 -6.600941f-5 5.650403f-5 -0.00017425556 8.164926f-5 -7.088771f-5 2.4597326f-5 -5.5395718f-5 0.00011095441 3.942657f-5 -1.2046675f-5 8.3262465f-5 -0.00027824836 5.448429f-5 -6.66772f-5 -4.5188946f-5 -4.6224945f-5 
-9.3539435f-5 -0.00015698108; -4.262534f-5 0.00010919985 7.178612f-5 4.447823f-5 0.000120952864 7.292193f-5 2.883672f-5 -3.7112368f-5 8.072319f-5 -9.920693f-5 0.00012860054 -2.1822912f-5 0.0001404181 -4.395641f-5 7.241428f-5 4.3160548f-5 -0.00019252152 0.00019974002 -7.287874f-5 2.611762f-5 -0.00010422342 0.00013939405 -5.4051612f-5 -3.291926f-5 -0.00024160316 1.90957f-5 0.0001583958 0.00018530748 -0.00017939544 -2.2220735f-5 -0.00012709879 -0.00016358988; 8.364984f-6 -6.221388f-5 -1.9876286f-5 -0.0001026845 -5.8145673f-5 -7.792514f-5 0.00012760228 -0.00016144144 0.000165219 -7.008369f-5 6.7995854f-5 7.501381f-5 -6.8149544f-5 6.0617996f-5 -0.00012804705 -2.4683293f-6 5.5424996f-5 3.3211414f-5 9.4243966f-5 0.00012305002 0.00013420079 -3.9417897f-5 8.604986f-5 0.00012585316 -0.0001318502 1.9441803f-5 -6.224895f-5 0.00011242075 -9.341854f-5 -3.9140647f-5 0.00016922234 -1.3684947f-5; 0.00012998351 0.00014052702 2.1539017f-5 -0.00019176696 -4.541933f-5 -9.383414f-5 9.159751f-5 -5.3487583f-9 2.4755604f-5 8.439207f-5 -0.0001435018 -1.0001287f-5 5.3363196f-6 -8.822155f-5 -0.00024739647 -1.9378853f-5 -0.00021555215 -7.5717224f-5 -0.000108057386 -6.8901034f-5 -3.297576f-5 0.00012801001 0.00010306175 -0.00011335675 7.893225f-5 3.4183755f-5 6.409453f-5 2.8854318f-5 7.307147f-5 -2.2687107f-6 4.64625f-5 0.000112042806; 3.4868877f-5 4.411143f-5 5.9462025f-5 3.8976737f-5 -8.417206f-6 -3.9721705f-5 -1.27392295f-5 0.00014519217 -0.00012774303 3.1241507f-5 6.61726f-5 3.7206657f-5 -0.00010260069 0.00017139669 -9.503973f-5 -0.00011928401 -0.00011322229 0.00012576614 -1.4027162f-5 3.3692395f-5 0.00024676963 -5.91134f-5 5.9847254f-5 -5.7607062f-6 0.00013581832 -0.00017533799 1.6051288f-5 -2.7718395f-5 6.2843574f-5 0.00015492218 6.3895364f-5 -1.9570874f-5; 4.300263f-5 -3.759355f-5 0.00018775418 8.594025f-6 -3.54082f-5 -0.00011055921 -0.00020306083 -0.00015270218 -0.00014485297 3.8106497f-5 -6.783617f-5 -9.451611f-5 3.7598531f-6 0.00013164424 1.3968381f-5 0.00013892472 0.00015254486 
0.00015852142 5.2956682f-5 -5.0795556f-5 6.262729f-5 -9.733962f-5 0.000107884654 6.811489f-5 2.2295862f-6 2.1777894f-6 5.725284f-5 5.3181444f-5 0.0001883264 0.00011238701 1.2314043f-5 -4.316324f-5; 1.20511f-5 -8.1092134f-5 4.6308156f-5 0.000119998535 2.4881458f-5 6.384344f-5 3.1215386f-5 -0.00010765867 1.1283094f-5 0.0001406692 0.00011210508 -0.00021599686 2.718951f-5 -5.5339577f-5 -9.8123564f-5 0.00023663025 -2.7639313f-5 -0.00010749312 -2.789301f-5 6.23893f-5 -0.00010606541 0.00012061757 -9.2707014f-5 -0.000105748826 -6.368685f-5 2.9856978f-5 8.330723f-5 4.174993f-6 8.206204f-5 -0.00014328213 0.00014588062 6.408158f-5; 0.000120730605 7.0018046f-5 0.00013248438 -7.533903f-7 -1.1465964f-5 -7.088935f-5 -1.1740324f-5 1.5477656f-6 -8.777102f-5 -0.000113697504 2.9643372f-5 4.524983f-5 -0.00016962066 -9.1323294f-5 3.8243717f-5 0.00012625684 8.521389f-5 -0.00010420004 4.760375f-5 -0.0001815075 1.0406746f-5 9.0288624f-5 -4.1407417f-5 -0.00010618409 -0.00013016822 9.881702f-5 2.1220458f-5 3.136785f-5 0.00016577568 -4.3345535f-6 -0.00012734522 7.391863f-5; 0.00013549681 4.478237f-5 3.712353f-6 -0.00011569401 -7.741665f-5 -2.2039007f-5 3.0153607f-5 0.00027885364 6.2282474f-5 1.6433369f-5 5.2078617f-6 -4.7000918f-5 -4.681582f-5 -6.339667f-5 -1.0516141f-5 -9.503237f-5 -4.2336775f-5 4.9513845f-5 0.00013973672 0.000126315 -4.4466865f-6 0.00011439561 -7.2713294f-5 -4.5347075f-5 1.0357716f-5 -9.1666f-5 -7.803404f-5 -9.524844f-5 9.770333f-5 -1.4143775f-5 -0.00014762214 -1.2561137f-5; -8.769051f-5 -1.9699017f-5 0.00013252866 4.734714f-5 4.9325303f-5 -0.0001566244 2.5678337f-5 3.595747f-5 -0.00013119343 -0.0001315169 -2.5327217f-5 0.00025388983 -0.00010227109 -1.8038461f-5 1.2503847f-7 2.5749885f-5 6.381259f-5 2.0629448f-6 -9.804977f-5 4.0389128f-5 -1.1470597f-5 6.467271f-6 8.575985f-6 -6.3617685f-5 5.7763566f-5 -3.609813f-5 0.00013409156 -5.992536f-5 -8.75678f-5 -0.00015813061 -1.4344347f-5 0.00010358078; -4.5211193f-5 -8.982089f-5 -9.538837f-5 -3.780156f-5 -5.5938493f-5 
1.5488362f-5 1.812237f-5 -4.2632495f-5 0.00012876061 0.00012243056 -0.00017869765 -9.8027405f-5 5.991346f-5 5.9273414f-5 2.3993003f-5 -7.3010124f-5 6.414427f-5 1.2411233f-6 -5.5126857f-5 -1.1964921f-6 4.2523938f-5 5.6468507f-5 -8.10907f-5 -2.4996172f-5 -3.19213f-5 0.00012697169 -0.00020041059 -2.0566074f-5 6.253308f-7 7.127307f-5 6.3227635f-5 1.4143306f-5; 5.7190555f-5 5.2213036f-5 0.000110101304 -3.0514471f-5 2.1582293f-5 -4.849053f-5 6.5118846f-5 9.516466f-5 -3.110828f-5 6.826293f-5 -5.3967105f-5 0.00012646428 9.023525f-5 4.9806326f-6 -9.399997f-5 -4.8966704f-5 -1.910033f-5 -6.380421f-5 6.0419025f-5 3.608888f-5 -2.6495205f-5 1.6526057f-5 -2.20853f-5 -7.925446f-5 -0.00016420889 5.1522966f-6 5.9229023f-5 -8.79329f-5 7.976851f-5 -5.341451f-6 4.5772078f-5 0.00010451297; 8.830987f-5 0.00013208737 -6.5613574f-5 7.393352f-5 8.9889574f-5 -0.00011885097 1.1837484f-5 -0.00010001018 0.00013785157 -8.159959f-5 -0.00011262807 0.0001376406 2.6677524f-5 0.00014831097 3.827098f-5 -9.3074916f-5 -2.5761932f-5 -0.00016038121 4.318028f-5 -1.9102656f-5 -0.00014921126 0.000115171286 -8.063735f-5 1.1720706f-5 -0.00022359965 -6.249485f-5 4.6211785f-6 6.785435f-5 4.7423113f-5 -0.00013142708 1.7368713f-5 -6.1622326f-5; 0.00010093819 -2.0439998f-5 -3.0679537f-5 -6.0320803f-5 -5.984598f-5 -3.1480556f-6 -2.1526082f-6 3.2503962f-5 -1.4426741f-5 -4.334f-5 4.984604f-5 0.00015791321 0.00024633258 0.0001353566 9.062983f-5 -5.7063633f-5 5.6170902f-5 0.00012129234 -0.0001340508 7.869632f-5 -1.0902069f-5 -0.00018631369 -0.00032723838 -0.00015855787 -4.975448f-5 -4.43156f-5 -2.5760326f-5 -8.0857084f-5 8.673312f-5 -0.00025890366 -0.00015582202 -3.5932255f-5; 8.574478f-5 7.658151f-5 7.7282035f-5 0.000102018974 3.833785f-5 -1.5135049f-5 -2.4218472f-5 -4.685639f-5 -8.222127f-5 4.6414843f-7 7.137507f-5 6.276277f-5 -0.0001706207 -0.00023203099 4.7585643f-5 4.725823f-5 4.0834977f-5 6.72172f-5 -4.836937f-5 -0.00011875496 -9.7569085f-5 -5.2672967f-5 1.989273f-5 -0.00010841036 -5.591725f-5 1.5667161f-5 
-0.00013297427 -2.483655f-5 1.0715118f-5 -8.723244f-5 0.00012441004 0.00011524323; 2.2642746f-5 -0.00015272667 -1.4285558f-6 0.0001678838 -1.94947f-5 -0.00019399663 -0.00021315523 -0.00020095686 -4.553039f-5 -7.425489f-5 -8.793752f-5 0.00011267003 1.7942952f-5 -0.00023747998 4.44874f-5 -4.9098144f-5 -3.0323032f-5 5.286811f-6 -0.00013527626 -0.00015747514 -9.665816f-5 4.0802694f-5 -7.731109f-6 -2.1128842f-5 1.92424f-5 3.2279077f-5 -2.6483664f-5 -1.851474f-5 3.1695192f-5 0.00012824128 -0.00010954416 -6.127681f-5; 0.00016396995 -1.7849494f-5 -0.00012519558 -0.00011603789 5.5768614f-5 1.9930843f-5 -3.0362457f-6 4.6149562f-5 -0.0001846106 5.4274f-5 5.7559217f-5 -2.5676858f-5 -2.5646903f-5 -0.00016830383 0.00013469068 4.334508f-5 -0.00019906907 5.6317964f-5 -0.00019370418 -9.5345895f-6 -6.809125f-5 -1.0433996f-5 1.514967f-6 6.4279543f-6 1.0697642f-5 7.3860414f-5 -0.00020137579 5.3130923f-5 -0.0001328794 0.0001391857 -2.1289025f-5 -5.7824684f-5; 1.0808964f-5 0.00010476551 -0.000117880474 0.000117220006 1.9460107f-5 0.00012584326 0.00018816844 7.8853016f-5 -8.625334f-5 3.9968258f-5 -5.603305f-6 -9.644718f-5 6.119011f-5 -0.00010118461 -0.00020499196 -0.00010931679 7.1884504f-5 9.989259f-5 0.00012033464 -0.00013321554 -7.3677445f-5 0.0004115561 -5.7697f-6 1.3060458f-5 1.9810932f-5 4.3456563f-5 -3.0828727f-5 1.5608524f-7 3.5201556f-5 -9.530237f-5 7.480124f-5 3.8174334f-5; 1.8344706f-5 -2.252781f-5 0.00012561919 -8.813143f-5 -5.2716914f-5 -7.3666626f-5 -1.6210448f-5 -0.00012577978 5.61369f-5 5.7660477f-6 9.220057f-5 0.00012333161 -0.00012610656 -6.308743f-5 -0.00014293577 -9.992773f-6 -0.00013108953 -5.112995f-5 -8.967487f-5 -2.3644614f-5 -0.00011522683 0.00010042471 -4.9962324f-5 -8.47632f-5 -0.000110084686 -0.0002513219 -0.00018713776 0.0001360887 -8.912003f-5 -6.544659f-5 4.6231955f-5 -4.6639077f-5; -2.197381f-6 -0.00013310836 -4.532192f-5 2.9993416f-5 1.8022136f-5 0.00018153591 -9.603031f-5 -5.413648f-5 4.2869993f-5 -1.7720167f-5 -0.00014343837 0.00015344907 -0.00018855205 
-2.7983097f-5 5.3430977f-6 8.0071106f-5 -0.0001849432 2.559237f-5 -0.00013660316 -0.00015405445 1.15452995f-5 0.0001793029 0.00028733205 1.5900598f-5 0.00010199915 -8.186377f-5 8.4180174f-5 0.00015379698 -3.5117813f-5 2.8186989f-5 -4.3052518f-5 1.9419773f-5; 0.00013047179 4.817894f-5 4.0732393f-5 -0.00014995539 0.00014710434 -3.67937f-5 -1.0262098f-5 0.00010308393 1.1467258f-5 -3.3329135f-5 8.338237f-6 -3.0274683f-5 -4.1485044f-5 9.424386f-6 -3.7485945f-5 -0.00012771913 0.00024751297 -5.3930344f-5 3.390676f-5 -0.00015537383 7.248057f-5 -3.891489f-5 1.9789954f-5 7.569962f-5 2.4107821f-5 -4.2702188f-5 -4.8002308f-5 7.070319f-5 -0.0001293727 -4.302074f-5 8.705469f-5 8.415172f-5; -0.00016218047 -8.946247f-5 -2.5866031f-5 -1.4645909f-5 8.638298f-5 1.6371056f-5 8.870913f-6 -0.00012135565 -1.7863642f-5 -5.6675755f-5 3.62417f-5 0.00015631424 8.173435f-5 -7.158864f-5 0.00013528141 0.00010242883 -0.0001272267 0.000109258086 6.720632f-6 0.00011892835 3.4085126f-5 6.105397f-5 -0.00011939068 0.00012109862 9.136028f-5 -3.0875275f-5 2.0644777f-5 -3.479381f-5 -5.334923f-5 -0.00019021974 -1.6745504f-5 1.3943094f-5; 0.00013323785 -0.00017627855 -9.576856f-5 -2.5931658f-5 -6.515243f-5 7.440905f-5 -3.0850835f-5 -5.1631956f-5 0.0002543031 0.00012153343 4.9543978f-5 -0.00010156559 3.5479672f-5 4.177699f-6 -0.00020312569 0.00017418849 -0.00013027941 0.00011281384 0.00019560126 4.458472f-5 9.118938f-5 -8.003967f-5 7.2361865f-5 3.296879f-5 -5.325159f-5 -2.0788357f-5 -7.615318f-5 -0.00010978453 -5.7122175f-5 -6.846378f-6 1.4307137f-5 5.8990106f-5; -8.06886f-5 4.4959514f-5 -0.00021278443 -3.2261687f-5 9.764534f-5 -8.423573f-5 0.0001125497 -1.4034379f-6 3.0579333f-5 -3.161048f-5 5.7838985f-5 0.0001788286 0.00011405091 -0.000102145015 4.392712f-5 -0.00014379322 5.8394504f-5 -0.00016583968 1.0214576f-5 1.25842525f-5 -6.653386f-5 0.00017337779 -0.00017892485 -6.4587985f-5 1.6112404f-5 0.00017418951 2.5661135f-5 9.1543625f-5 0.00010427124 -4.1046784f-5 2.4212042f-5 2.0537844f-5; 0.00013231862 
-0.00016531718 -3.903797f-5 0.0001162607 -0.00010705715 -7.781918f-5 -0.00018513748 4.8409518f-5 0.00011539138 9.923989f-6 -6.161437f-5 5.9833288f-5 1.0861265f-5 -7.660697f-5 0.00016022213 0.00017887045 -9.2765746f-5 -0.00014601879 0.00012357271 -0.00015698257 -2.4156461f-5 4.0419607f-5 1.0189255f-5 -6.369725f-5 -0.000120019686 6.6423318f-6 5.8290443f-5 2.9998882f-5 -2.6212781f-5 -3.004527f-6 -6.9801877f-7 3.105117f-5; 0.00020986063 0.00012419584 -0.00020825498 -0.00010945621 -9.314001f-5 8.488033f-5 0.00014737055 9.028464f-5 -4.44054f-5 3.640222f-5 5.627555f-5 -9.140203f-5 -4.810568f-6 -5.4950953f-5 -0.00018010757 8.046163f-5 1.3352965f-5 -0.0001818743 9.104892f-5 5.82443f-6 9.931345f-5 2.152354f-7 6.684913f-5 -8.968921f-5 -2.0080251f-5 -0.00012493068 0.00013471753 5.068596f-5 0.00012188146 -0.000106949046 0.00016678589 3.056754f-6; -3.5848538f-5 -0.000112701135 -0.00012224929 0.00015687355 -8.6629385f-5 -0.0001651372 9.708392f-5 4.934294f-5 -8.735572f-5 -3.124433f-5 -7.913138f-5 0.00010581072 -0.00018598446 -7.34899f-5 -7.338275f-5 -2.5021853f-5 0.00014257689 -5.907165f-5 1.5547153f-5 -7.50885f-5 -5.0319333f-5 -0.00019138037 -0.00016423872 6.0671366f-5 2.242492f-5 0.000106785905 -2.6482256f-5 0.00018647601 -0.0001572809 8.017354f-5 -9.49135f-5 0.00013588318; 4.2787244f-5 -9.1943675f-6 6.915148f-5 -4.7917685f-5 -3.960024f-5 -4.1653864f-5 -1.4677493f-6 0.00011214721 2.8500568f-5 3.611232f-5 -4.3273107f-5 -5.8955353f-5 -1.2186722f-5 0.00026288297 3.117711f-5 -2.2876113f-5 6.70718f-5 -8.035303f-6 -3.3185188f-5 3.947568f-5 -4.163493f-5 -0.00013158418 9.082587f-5 4.1714437f-5 -4.3842498f-5 0.00021099804 1.51678905f-5 -1.5583524f-5 -4.9487164f-5 -3.5895315f-5 0.0001567502 -6.5042135f-5; 0.00011516511 0.00018854189 0.0001457063 1.7245823f-6 -3.5729056f-5 -5.6654986f-5 7.7222874f-5 -0.00013445372 -5.4964265f-5 3.152649f-5 3.8416994f-5 -5.3257634f-5 -0.00010185206 -8.960718f-5 7.329333f-5 -7.270124f-5 5.957602f-6 5.323332f-5 0.00012447026 8.431008f-5 -5.629363f-5 
-4.5606484f-5 8.826203f-5 2.4222798f-5 -0.00018978622 3.625876f-5 6.363001f-5 9.625782f-5 -3.218031f-5 -1.1284627f-6 6.951898f-5 -4.9665214f-5; 1.531859f-5 2.3127225f-6 -5.1700008f-5 0.00015922752 -0.00016962092 -4.6529734f-5 -4.5478373f-5 0.00012428919 2.8450175f-5 0.00018228179 6.1834864f-5 -2.695066f-5 -0.00020513852 8.121125f-5 0.00013299647 0.00011844542 -6.963209f-5 -9.7503f-5 0.00014458733 0.0001557746 -5.863989f-5 -5.3325788f-5 2.6907826f-5 5.709038f-5 8.540226f-5 7.1194113f-6 -0.00018346768 -1.1921413f-5 6.308156f-5 -5.4122836f-7 -1.6140875f-5 -0.0001383969], bias = Float32[0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]), layer_4 = (weight = Float32[5.1893516f-5 3.7079637f-5 0.00017634654 3.0440993f-5 2.5401716f-6 4.476671f-5 2.6396589f-5 -8.747219f-6 5.344276f-5 0.00016784511 -0.00010095129 -3.5999135f-5 8.177836f-5 0.00011939605 0.00014665237 0.00013870627 1.8158957f-5 3.6122958f-5 6.211551f-5 -0.00024395707 -0.00012368748 0.00015478948 9.400265f-5 -0.00014458633 8.1357575f-5 -3.682935f-5 5.6512847f-5 0.0001463257 -4.6914683f-5 1.6573602f-5 -0.000104994106 8.469015f-5; -4.6238537f-7 -0.00015144216 -1.2149378f-5 -0.00022305499 1.599908f-5 5.0801737f-5 -0.00017307313 -0.00010968822 -0.00021031218 -6.3836946f-5 0.00012531393 -6.862671f-5 -4.3170407f-5 1.3885872f-5 2.8827359f-5 0.00010167457 -0.00017535916 0.00022037337 0.00020127723 -0.00011085212 8.650909f-5 0.00017946545 -3.855777f-5 7.952302f-5 -1.6478636f-5 -8.0736776f-5 -0.000106744425 6.852565f-5 -0.00012882442 -1.2838407f-5 0.00014893794 -1.1102453f-5], bias = Float32[0.0, 0.0])), (layer_1 = NamedTuple(), layer_2 = NamedTuple(), layer_3 = NamedTuple(), layer_4 = NamedTuple()))

      Like most deep learning frameworks, Lux defaults to Float32; in this case, however, we need Float64 precision.

      julia
      const params = ComponentArray(ps |> f64)
       
       const nn_model = StatefulLuxLayer{true}(nn, nothing, st)
      StatefulLuxLayer{true}(
           Chain(
      @@ -750,13 +865,13 @@
               ["Waveform Data", "Waveform Neural Net (Untrained)"]; position=:lb)
       
           fig
      -end

      Setting Up for Training the Neural Network

      Next, we define the objective (loss) function to be minimized when training the neural differential equations.

      julia
      const mseloss = MSELoss()
      +end

      Setting Up for Training the Neural Network

      Next, we define the objective (loss) function to be minimized when training the neural differential equations.

      julia
      const mseloss = MSELoss()
       
       function loss(θ)
           pred = Array(solve(prob_nn, RK4(); u0, p=θ, saveat=tsteps, dt, adaptive=false))
           pred_waveform = first(compute_waveform(dt_data, pred, mass_ratio, ode_model_params))
           return mseloss(pred_waveform, waveform)
      -end
      loss (generic function with 1 method)

      Warm up the loss function

      julia
      loss(params)
      0.00074655426662429

      Now let us define a callback function to store the loss over time.

      julia
      const losses = Float64[]
      +end
      loss (generic function with 1 method)
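      Although the objective wraps an ODE solve, the loss itself is a plain mean squared error between the predicted and reference waveforms. As a standalone sketch (with made-up arrays; on two vectors, `MSELoss()` reduces to the mean of squared element-wise differences):

```julia
using Statistics

# Hedged sketch: the behavior of MSELoss() on two waveform vectors.
mse(ŷ, y) = mean(abs2, ŷ .- y)

pred_waveform = [0.10, 0.20, 0.30]  # hypothetical prediction
waveform      = [0.10, 0.25, 0.28]  # hypothetical data

mse(pred_waveform, waveform)  # small positive number; 0.0 for a perfect fit
```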

      Warm up the loss function

      julia
      loss(params)
      0.0007518903236338871

      Now let us define a callback function to store the loss over time.

      julia
      const losses = Float64[]
       
       function callback(θ, l)
           push!(losses, l)
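      For context, an Optimization.jl callback receives the current state and loss value once per iteration and must return a `Bool`: returning `false` lets the optimization continue, while `true` halts it early. A minimal sketch of this pattern, assuming that callback contract:

```julia
# Hedged sketch of the Optimization.jl callback contract.
const losses = Float64[]

function callback(state, l)
    push!(losses, l)   # record the loss trajectory for later plotting
    return false       # `false` ⇒ continue optimizing; `true` would halt early
end
```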
      @@ -768,7 +883,7 @@
       res = Optimization.solve(
           optprob, BFGS(; initial_stepnorm=0.01, linesearch=LineSearches.BackTracking());
           callback, maxiters=1000)
      retcode: Success
      -u: ComponentVector{Float64}(layer_1 = Float64[], layer_2 = (weight = [0.00012904078175776253; -0.00011254487617403629; -3.194286909998489e-5; 9.532207332072905e-5; 5.749990668843518e-5; -0.00010688291513358794; -0.00010909062984874433; 3.826104511965575e-5; -1.2864484233419668e-5; -8.822358358875405e-5; 3.54601397702874e-5; -1.8142538465319903e-5; -0.00010211646440444609; -4.560515662882762e-6; -0.00021668768022198756; 0.0001458104816263927; -0.000112148016341931; 6.223739910629945e-5; 0.0002477158559482912; -4.460703712539674e-5; 4.5975793909755746e-5; -0.0001358188455924383; -8.202443859769488e-5; -2.2094563973928504e-5; 2.0224993932037384e-5; -0.00024750977172510844; 8.990632522912101e-6; -9.19615631573604e-5; 1.3154642147125585e-5; 5.571318979492865e-5; -2.2546999389286456e-5; -1.0075345926442328e-5;;], bias = [3.584434140166735e-17, -3.7790832398519353e-17, -4.035584463386469e-17, 1.4173640883628515e-16, 2.977832855146648e-17, -2.5063776117056426e-17, 1.0454082635012388e-16, 7.768131173337056e-17, -1.4275992282234933e-17, -1.0056975360155791e-16, 7.444415986842068e-18, -7.541368570010827e-18, 6.403532485188505e-17, -6.0094632419040616e-18, -1.1801867351985504e-16, 1.6551064034667553e-16, -3.54106732759271e-17, 8.7793657233099e-17, 3.732260928460576e-16, 9.636248802390752e-18, -9.422930643430968e-18, 5.92744761801707e-20, -3.654575479460169e-17, -2.4849142209393e-17, 2.650561506556804e-17, -6.031525195587328e-17, -3.049668143568285e-18, -2.677769293814691e-17, 3.2262887057476924e-18, -1.4834358480483478e-17, -2.198454328000425e-17, -1.1483009379054432e-17]), layer_3 = (weight = [0.00016184421098402112 0.00010803140309250678 9.677311014927782e-5 -0.00014462121920117096 8.76454868059012e-5 -4.604495200949848e-5 -6.289623719097284e-5 1.4203156629426456e-5 3.739779628192939e-5 -3.872813989330821e-5 -2.4568695275612873e-5 -1.5919569827766042e-5 -0.0001338799849098273 0.00013050133553877422 -1.7212312863981315e-5 0.00015190980763250898 3.674171226991112e-5 
5.399085045171508e-5 7.551747207908346e-6 4.132384116490144e-5 0.0001245440724825835 -0.00012439271825754832 0.00032224463199342885 -9.023275209512656e-5 8.030222366123042e-6 -4.2297604645573056e-5 -1.2624278619860427e-5 3.8175974500674233e-5 0.00010345557517683067 -3.282782211938267e-5 -0.00013041991781830882 3.6781479016262425e-5; -0.00010449867604768573 6.933300992238576e-5 -0.00025979521651754906 -4.150714212449048e-5 0.00015308497913644838 0.00015436650810295014 4.068345387929354e-5 6.19483931571915e-5 0.0001609140841522923 4.1903755662471326e-5 8.615725036940715e-5 -0.0001501849377760926 -1.8756228566078988e-5 0.0001342673736856988 3.438520306951789e-5 3.751999847588415e-5 -8.24602139039425e-6 8.076808261076978e-6 -7.841605602300225e-5 9.43856599391177e-5 3.103769868027601e-5 1.6321725159625427e-5 -0.00011407547335841485 -9.840091424996789e-5 -9.079246775971852e-6 -0.00012266928879546438 -0.00015015776935036124 -4.883344208940647e-5 6.306695522505033e-5 -8.952660337093097e-5 -6.778347360671323e-5 0.00013526839993425815; 3.5417522044284086e-5 -6.405782431583927e-5 5.020034342551356e-5 -2.1875199932126688e-5 9.658514074062714e-5 1.608665235353539e-5 0.00017726282141184352 5.080980673910808e-5 -1.2770628641464566e-5 0.0002584676306602902 -6.891154288068659e-5 -1.8794859611534308e-5 1.850642303425243e-5 3.420808690380042e-5 5.754521163426771e-5 -0.0001948726227831649 -4.1474936265601624e-5 -7.719732337618453e-5 0.00022728855157284713 7.3225227710919e-5 4.146494149126811e-5 -0.00011304630123531435 -0.00012147761543501332 -7.157685712199914e-5 -0.00019784389104215682 7.3078429656656455e-6 -8.12767854634326e-5 -5.8827756843400645e-5 3.20443262487065e-5 -7.681684900062376e-5 -4.208456328824419e-5 -1.7747005299770105e-5; -4.1025844575904385e-5 6.294133240722606e-5 4.189168384483676e-6 -8.066327532155177e-5 4.1981361254741375e-5 -6.740309722821748e-5 2.4914649520232076e-5 -0.00011881545756345831 -0.0001054756513899967 1.9227306301632845e-5 8.64329454038995e-6 
0.00010994218505352448 -1.40190733423352e-5 -7.514463186952786e-6 7.736711891037259e-5 -0.00010223158565216602 0.0001427783198318679 -6.638572917886143e-5 -0.00014546464913156064 -7.752653028135617e-6 -0.00011267308686960662 3.0433457913626107e-5 9.601584210891391e-5 -0.00011903363442847782 -0.0001552761470070882 0.00011697976774439031 -0.0001319178928957165 0.000102387274500376 5.188520194628911e-5 2.141174704725275e-5 -0.00010448997741200304 1.1471450170075303e-5; 8.033831614914848e-6 6.58762349086803e-5 2.5317138603176017e-5 0.00017256773414937995 -4.956921632019073e-5 -1.3792036701308052e-5 2.4065914000128018e-5 9.781800170113297e-5 -4.2802190113243786e-5 -2.0800394598880278e-5 -7.520916708362308e-5 0.00018738551286897687 0.00010471301927846019 7.513753214535398e-5 -4.8927280407803166e-5 0.00014860643411901837 -8.296660391697759e-5 7.298651896799662e-5 -8.980246068299549e-5 0.00023513551266315306 -7.58017106253865e-6 -5.8422634286951925e-5 1.2398322065749257e-5 6.965247146236758e-5 2.4523875505300814e-5 7.617487891816275e-7 -5.2143551987492544e-5 2.6069607225691798e-5 -1.228135872569555e-5 -7.004580559059229e-5 -8.407384459477803e-5 -6.839248930000781e-6; 7.312111023020203e-5 -4.37629542520897e-5 3.065457616139575e-5 2.261373350302043e-5 5.742419210400781e-5 5.5076018665202704e-5 -1.3882741828639766e-5 -9.998722876844519e-6 7.64720088536703e-5 2.628371742866767e-5 -3.444180862607065e-5 3.925051617807157e-5 -7.635199387186866e-5 7.141189319918315e-5 0.00010658812874134512 -1.3486587764421316e-5 -2.323948425329717e-6 -0.00015052314217064189 0.00014461648873656717 2.219133687867678e-5 -0.00012743169742498184 -7.240314249975422e-5 7.515333612354784e-5 -3.7678969480112144e-5 0.00011640474161504216 -8.719276326338101e-5 -0.00011620705578570802 -2.276518631879414e-5 3.5285188399938945e-5 -5.405210055505506e-5 4.82273089201041e-5 -5.916923788936427e-5; -6.719126521181299e-5 -5.771018676147833e-5 -3.750569283999701e-5 9.183191888462663e-5 -2.3900014080365e-5 
0.00014002856061290895 5.298480265503702e-5 2.9858484244530953e-5 -0.00014034815229512965 0.00021043877369686858 2.2348026154421175e-5 -9.916997931702271e-5 -5.660744990143062e-5 9.209584697112226e-5 -0.00011463618410322375 -5.019283330847153e-6 1.720709098174664e-5 -9.0288567103354e-5 5.906372699849947e-5 5.650011609270983e-5 5.1515473946554614e-5 -6.18384086786765e-5 -1.869286041744173e-5 -0.00019577764886468858 0.00011163986947815657 2.6238638942811507e-5 2.514570825462949e-5 -0.00020383631034325903 -6.0033476425243785e-5 7.3421308674941e-5 -0.00012018774703880979 0.00019175104408953125; -4.4825145084669266e-5 -1.8101329476251788e-5 -0.0001516227532532436 -9.591978254079201e-5 -0.00030787125091351374 0.00017293241013490452 -0.0001444444098864041 0.00018572575551174637 -4.677475611729583e-5 0.00020110630044896678 0.0001163059676061016 0.00012441315244600152 -1.9472796554823704e-5 0.00017059183643381996 -0.00011531400763243772 -4.787714373136293e-5 -3.391370067887998e-5 -0.0001733818391925073 -0.0001570579517986553 0.00013806564289793466 9.760862645875768e-5 -4.874804311597348e-5 -5.2112140326560686e-5 -0.00017633298215273782 -4.573044155891073e-5 -9.739897017180204e-5 -1.0599498852892281e-5 2.2023888743800232e-5 -6.398595381811298e-5 6.1022601369267256e-5 0.00010338270273009304 -6.827957644177645e-5; 0.00018907153329861083 -2.758470932962243e-5 6.101885932662914e-5 -3.550026545484949e-5 -4.956374564857722e-5 -2.0560465288091306e-5 3.0567135893997024e-5 -6.013817497363848e-5 2.9693534027106658e-6 -3.7177792052481816e-5 -5.445645607027763e-5 0.00012090178401491052 -2.655534686413504e-5 -9.828774150877818e-5 0.00012045840898578325 0.0001302307767612189 -4.382322601250955e-6 -7.90651088822658e-6 -0.0001759151224693883 -6.14550832840387e-5 -9.715382717035302e-5 -0.00011321728621327554 -0.0001902274840692754 -1.527058117738606e-6 -3.714613799888262e-5 -1.4192197856860511e-6 4.071570341989771e-5 -7.115444220054804e-6 -5.224470584601832e-6 6.468931437053212e-5 
0.00015458567699466283 6.828191302998365e-5; 9.238470677249613e-5 -0.00010029477259646522 -0.0003239590848124499 5.579873625268897e-5 0.0002548092919142067 0.00010384682111638278 6.894083646787928e-5 9.168109256737795e-5 -0.00014577488461778537 5.929734409168132e-5 0.00029607197514000284 0.00013800825702222952 2.519354832205625e-5 0.00011150925577116165 -5.873707475527475e-5 -7.113390590799612e-5 -5.1084149777091826e-5 3.1498674911780836e-5 8.180201559755303e-5 -1.0872212146714704e-5 6.779924599417398e-5 6.325645820200876e-5 2.281052486527042e-5 -2.5364738280550114e-5 -7.60931549620726e-5 6.995052838194472e-5 -0.00020458228816547466 -0.00014076288487100295 -5.1784580750754484e-5 -6.737999199227543e-5 -9.743617254112895e-5 -2.6103024242692998e-5; 2.994869224486138e-5 -6.738536867571874e-6 -3.87276655966161e-5 -6.075948604206879e-6 7.081672571066655e-5 -2.7500846627020414e-5 -7.967143996219695e-5 -0.00014263122524434918 0.0001551919825577893 4.323967786971796e-6 0.00013839253857431705 6.447511746946438e-5 -0.0001228075003747053 0.0001502353091927029 -0.00010829750921409024 -0.0001213918391295802 -0.00010294706829890184 -6.7720349216853204e-6 -4.841792782832408e-5 -1.781962637414239e-6 -3.3881222989384034e-5 5.578872205579631e-6 3.942482486300257e-5 5.576786621522814e-5 -3.4013881886585616e-5 -8.56431967259246e-5 3.2643698522401245e-6 -1.0635604339854782e-5 -9.24900620158046e-5 6.411097761874896e-5 -0.00017150653242433716 3.373168998214119e-5; -5.78123350160414e-5 0.00016638852086633133 -0.00022763764589309998 -2.2378437805872703e-5 6.214773149711706e-5 0.00016687043664294915 -0.00019428026151128424 -2.5980846275178036e-5 -1.5563364812823254e-5 8.464019030814682e-5 -9.949024986692077e-5 1.2501638329749527e-6 -0.00014455265643757148 -2.9475934369674194e-5 6.066380357968293e-5 -0.00010613065258350538 5.125055323315705e-6 7.563868841405652e-5 5.979991458023563e-5 4.571522860804267e-5 1.86434166798617e-5 -7.33691181985879e-5 -4.440494266436019e-6 -2.5847765372435807e-5 
-1.2085667166519694e-5 -3.695886933111921e-5 0.00019028710214204783 -0.00010879812050827156 0.00013453081321220542 2.3690677536234e-5 0.00019774817466985131 -0.00010464021632198549; -0.00014413404597055185 -0.00013636132962621653 3.6047473710807704e-5 7.149533300416954e-6 1.9160008830096583e-5 0.00010817386327924342 -4.005381152002951e-5 1.874470808341595e-5 0.00031020346477542545 -3.8560912346856114e-5 4.490448236263789e-5 3.480523127744434e-5 -0.0001624933825069277 -0.00010392377180598205 5.8277528748447354e-5 9.077814001737008e-5 5.367254243263146e-5 -4.883981549044766e-6 9.078624543414058e-5 8.613980434982519e-5 -1.8302858282384107e-5 -8.899405247850415e-5 3.539008366440695e-5 7.949689003892607e-6 5.5809698548712676e-5 -4.590970593475195e-5 6.699935738377742e-5 -0.0001409013088987373 -0.00020880220427263064 -2.0523686092035774e-5 5.018859928621433e-5 0.00019194803871594836; -5.0836640431256135e-5 -7.538980323372986e-5 0.00012213365219284077 0.00015550967757805854 -0.00012977615303609452 -5.158650426094909e-5 -9.445300135778785e-5 0.00012671904799279023 5.2324022116207556e-5 0.00023098065387673432 -8.698153148654638e-5 0.00018369659185625887 -1.2696110635469216e-5 -2.2369258851732236e-6 3.0694488240208985e-5 -0.00011975622807880166 0.0001247906863943803 -1.4960184008690696e-5 3.87502902674283e-5 0.00023631081669807597 -0.00011999955792927964 -1.5374121369268324e-6 8.959246290830727e-5 -3.424867943178601e-5 1.3676376264610214e-5 -0.00011143940107219109 -0.00010046586728205614 6.015020947304269e-5 -2.4393227496349207e-5 3.272974093195133e-5 8.029209377926103e-5 -4.091005509619314e-5; -2.0077208612465763e-5 4.205642078642352e-5 0.0002203426335448267 0.00020521891535947 -0.00015877615835295628 -0.0001763010589663154 -0.00016422321670933018 0.00010166785766825587 9.380446834013477e-5 6.944013118965557e-5 7.570313726075517e-5 -0.00020894228539207352 -0.00016987823648622463 -6.765709287963186e-5 -0.00010386997048291692 5.2174847657902704e-5 2.339459540718326e-5 
-6.166371924580691e-5 0.00017520478984558427 -3.222756670100149e-5 0.00016375791863294776 -3.2101947292805683e-5 0.00013332401402050718 6.26565230753145e-5 0.00017636272939609954 9.217783342798653e-5 -3.0515602995979942e-5 4.447797589574892e-5 -4.3379819837970135e-5 3.675198938919608e-5 7.922617776541621e-5 5.5747520285184625e-6; -2.685894639342157e-5 0.0001781980100810144 4.69559273648144e-5 -9.470961946988632e-5 -0.0001646288436580844 -2.3182670583428215e-5 -4.631952260187872e-6 8.540530700940394e-5 -8.155610691517851e-5 -0.00014111447660014205 7.040009234571584e-5 -0.00032174655631877743 0.00013005707308422635 0.00011168235517345108 8.33655504999339e-5 6.067131119436574e-5 -7.807485951105246e-5 -7.818412256654567e-5 -6.0909078337657313e-5 0.0001418841130822501 0.0001140118839708692 0.00024713810137711416 -2.690377356828372e-5 0.00010221995046833276 8.415895002200944e-5 -0.00015950913063477265 4.0067944888674725e-6 -8.733353740297418e-5 -8.763341599604373e-5 5.5929635100818784e-5 -7.809476653108627e-5 -8.700957539020366e-5; 4.786932413068825e-6 -6.208362415278059e-5 -3.9442386560467134e-5 0.00014633141595326291 0.00012983844985640914 3.785779094827512e-5 3.610460166741359e-5 -5.059065611064305e-5 -0.00010864825403629797 -7.380968113508274e-5 -3.915425500096644e-5 2.2093268261896655e-5 4.4380043027140755e-5 -0.00020987168923099534 -2.2882814528585292e-5 -9.160647700850692e-5 9.35278271712624e-6 5.504214049809485e-6 2.5486969525076378e-5 2.770063415439979e-5 7.04589159336426e-5 -1.7808627266741597e-6 -8.943694651903882e-5 2.804817754579744e-5 -5.7030751714110095e-5 8.365784514893617e-6 5.739902325812697e-6 3.801416946729513e-5 -4.0826899431040455e-5 1.3989509488540305e-5 -0.00016164971261830032 -0.0003113416411456186; 0.000102106123100063 -0.00014016335115625707 -9.471053125195037e-5 -0.00034049777821961217 -0.00021730853985387548 -0.00021877435427482487 -1.7805197188460757e-6 5.131708313969156e-5 -6.843685185627884e-5 2.57806788398924e-6 -7.676785970430642e-5 
3.0356342554694465e-5 4.429319016311954e-5 -0.00013344438264936093 8.771185604533769e-5 -4.408363600700849e-5 4.933374076758323e-5 -0.00012167637249545816 0.00014828101631398684 3.6815048238749006e-5 5.1664037178524215e-5 3.541777333852939e-5 1.1030585133668373e-5 -2.014099698895911e-5 -5.131294381895894e-5 4.591231677906073e-6 -0.00010240284098869066 0.0003480696592447589 4.954930192583369e-5 9.322812331050906e-5 0.00013963988532453915 -1.446085060193778e-5; 4.622790451149549e-5 0.00017598764546901608 -9.250365970902723e-5 -0.00012786824796594428 -9.314770564915953e-5 -0.00010438933664945397 0.00012465363992132786 -6.00707058939334e-6 -4.2567278307533954e-5 3.35998288738646e-5 -7.608266738489823e-5 -5.8175724427119784e-5 9.88592060258278e-5 8.666327497675332e-5 -0.00015342065440732578 9.066951980577018e-7 -0.00017334775608278201 -7.168600370192953e-6 7.075255815069164e-6 7.88713719321926e-5 -8.685444614630656e-5 3.5428822719140895e-5 0.00011280686025029057 -0.0001193008951867435 2.41430866827519e-5 -8.293404376840209e-5 1.4106842662012454e-5 -0.00012959851434234343 -9.485527831376266e-5 -0.00019885768548366046 -1.7951882913192419e-7 6.090154287275188e-5; 3.8075657294035454e-5 -0.00014175455732583236 -0.00010340492742215 -5.7892051763693166e-5 -4.655654999622882e-5 8.02469948485016e-5 -4.984069898454611e-5 0.00010194839699580599 0.00012222706566718716 -2.441568910398035e-5 9.291891176873161e-5 7.914746611289046e-6 -6.017135098772205e-5 0.0001888233672889447 4.911679564428883e-5 5.396034969635737e-5 -0.0002572190715636609 -5.734145424856716e-6 4.893062571680914e-5 -0.00013650592888551523 -3.4422551898376484e-5 3.034135879720712e-6 6.10831120229798e-5 -7.605628349664316e-5 0.00013981320073982073 9.759410016157477e-5 8.666510250806197e-5 -2.3674731046403816e-5 0.0001224543374792238 -0.00010652114730872946 -1.3364947399156407e-5 0.00012327676807218543; -3.345835663754017e-5 4.095091920202965e-5 -0.00010974786725387859 5.973815844367526e-5 -4.2660292974010864e-5 
-9.30017487637572e-5 8.91551897718699e-5 -0.00010395918813561029 -1.2042423958752527e-5 0.00015913601221722784 -6.577646869960186e-5 -1.2962267800736885e-5 -1.3436992223920287e-6 -2.4246759648355458e-5 -4.7679081189237525e-5 8.946577355117967e-6 0.0002195028717603664 6.022836153601422e-5 4.847099247013986e-5 -1.8124768911780975e-5 4.925494368529956e-6 0.0001624573995566141 -9.93526192990018e-5 -6.088138267774102e-5 -0.00013101725941783716 0.00015148486257272604 -9.982552744009511e-5 2.4380538735154734e-5 9.962192808182691e-6 -9.380313728730091e-5 -8.448989698923161e-5 -9.91685375713648e-5; 9.804364651804558e-5 -0.00013735471168170116 6.38697105170153e-5 6.1175306997164e-5 3.706671248086988e-5 1.743404516323532e-5 4.745594685677983e-5 -1.5213735990850587e-5 3.608851454932399e-5 -8.671320995186048e-5 -7.337214468729621e-6 -2.0154193974899372e-5 -5.847420345512655e-5 8.098568776053526e-5 2.223290118869231e-5 -2.9422212377400927e-5 -3.1217473064180395e-5 -8.40423456234346e-6 -0.00013870699206002676 5.3564175119523264e-5 3.997832516337966e-5 8.018772621303625e-5 -0.00020783417733023503 -3.927642194308173e-5 2.2973533647150512e-5 -0.00013395053849656757 0.00018567254426320238 -3.7418320629378204e-5 -5.72097632225374e-7 5.572545103509233e-5 -0.00011531279471806082 -7.445595713202687e-5; 2.5638073752387038e-5 -3.386199004623343e-5 3.410635709273246e-5 -8.304737256864081e-5 0.0001064900208844284 -7.128141423478875e-5 -1.8211219681205216e-5 0.00015260976961557342 0.0001266979890756742 -0.00010786557667912822 -5.931977629685243e-5 0.00011134139659092425 -0.00010548676782079127 0.00018248076249123237 9.047835210777135e-5 -5.4774398263794144e-5 6.062348275420957e-5 0.0002852504187144746 6.466835493892637e-5 -5.293565478005743e-6 -0.00010272087074141474 0.0001012625654807624 1.4142030708658745e-5 6.679186499653968e-5 -0.0001888192708024324 -5.32562151379396e-5 1.695020201126724e-5 -7.910490053714717e-6 -2.109746655647504e-5 -9.280295470986105e-5 2.0247411343498385e-5 
2.3503622473563574e-5; 0.00012267545209250727 -2.3302614846771015e-5 0.0001845576753279514 -3.6030802204195574e-5 -8.969030203080347e-5 5.398002957362856e-6 -6.873341573382579e-6 -0.00010259472945254828 2.565181422905006e-5 -0.00015345501195189177 -0.00013484661938638023 7.593563170156848e-5 -1.713277741292608e-5 -4.6390291405346965e-6 9.142588352768827e-5 -2.321811005605096e-5 5.415817760730499e-5 -5.62900804485342e-5 -0.00015189987787526965 3.1804566045959986e-6 -3.7886873525721716e-5 -2.6533643887533557e-5 0.00012959120614904347 8.168061141840587e-5 0.00016691740713066828 1.2252102368062537e-5 -0.00022118821105469543 -5.3078414549973e-5 4.358265431065218e-6 0.00017463043151873554 -1.0993808229258895e-5 5.75512549570616e-5; 3.159141636274673e-5 -0.00011022104297012485 -6.817752580105708e-5 -4.0171815609258854e-5 4.775350101112189e-5 -6.654445166646065e-5 8.293169802299342e-6 2.4024173061425155e-5 2.1412348022512357e-5 -0.00018230733837413447 4.823277439518023e-6 2.139397441054727e-5 8.545282037516485e-5 -0.00010729954953718456 -3.568299359469238e-5 7.19854593887767e-5 -6.826832247611537e-5 -8.483017006370395e-5 -4.113994725340338e-5 9.732307228545356e-5 0.00013572206228558613 -2.7222343953739493e-6 -4.743237365915371e-5 -0.0001458835307596573 9.53589711832492e-5 4.081624649988187e-5 3.868265560298705e-5 0.000101424093035117 -7.21248710499469e-5 -7.761311860236556e-5 -0.00018438571566663171 -5.836592970014069e-5; 1.9434475735285212e-5 -3.54102412068817e-5 5.494447400514461e-5 0.0001329010774958846 0.00010804757043905769 -9.77762319501544e-5 6.776036757462998e-5 6.780445987778172e-5 6.958511953661725e-5 -6.413187652407419e-5 9.143889478827302e-5 3.0034529164089138e-5 -4.9840822566072335e-5 9.829268678976743e-5 4.441438436574161e-5 3.23076019873696e-5 0.00010130146261024565 -8.869375914682902e-6 6.878185381595422e-5 0.00011770217684051357 -9.962753388145616e-5 -0.00012705672265776708 -0.00023942263937834753 6.482771282271026e-5 0.00019372796673514434 
1.375632477959769e-5 -0.0001117432620497369 5.267292003798961e-5 -0.00019889632671884686 2.109820893628288e-5 0.00014671273679659245 0.00010655527880834928; 0.00015326878107489213 0.00017225651296658113 -0.00019746958517596915 0.00010164548345247386 -1.9320526037594694e-5 3.127462602324607e-5 -1.0652513113287296e-5 0.00012335951666220181 1.761864311167285e-5 0.0001873437349485446 4.645353598509099e-5 -3.5530449923320618e-6 0.00014254723552485873 -2.6303644547697315e-5 4.005661591405254e-5 -4.95041504410511e-5 3.262371589417253e-5 2.9048661387528156e-5 0.000116947986468257 -0.00014776770788838128 3.320205993706372e-5 7.4480484697138e-5 -2.2321945864940837e-5 2.1713142784319098e-5 1.9327640044975404e-5 0.00010350166027532305 4.666042938806584e-6 0.00012013815191213221 -1.4263631813337436e-5 -3.614279288111362e-5 -4.778083988010678e-5 -0.00010844721785641021; 0.00012510945583593439 1.6366220221551666e-5 2.6071794706096284e-5 -0.0002292761733439514 2.5292097631191662e-5 -0.00011106545076805646 -4.215142383746798e-6 -0.00014278436225473204 0.00012849907710582722 -1.0538062782561645e-5 0.0001832126485499808 -5.0894328193482666e-6 5.690608435210316e-5 -2.0125346567199934e-5 -1.955617393532209e-6 0.00020969985547375172 9.217756484157493e-6 -5.208354429092567e-5 -1.997851592357001e-5 -0.00010448388872069576 -8.920205403827503e-6 -3.207223432006019e-5 -0.00010738169163565525 6.59365081777121e-5 0.00014089126983557595 -6.935870857312072e-5 0.00011378178663684763 2.73976579013286e-5 -0.00014477276505536627 -3.005213017209725e-5 5.180299689194884e-5 7.691116246527768e-5; 5.538993233130405e-5 1.2013121395275781e-5 -0.00013654782864252595 8.461134953903325e-5 3.0126981673657063e-5 -0.0001866333089674399 0.00010116513536558117 -0.00020897041153150129 7.815681115969442e-5 -0.00022899765948751778 5.957330964518243e-5 -2.343077479965612e-5 -2.1783259708183335e-5 -0.00012992579044296065 -2.6510675433865744e-6 -4.1947217450797755e-5 0.00017164245460661758 -0.00014682859668029686 
-6.270823845621428e-5 -2.4218189446592156e-6 5.806723811284637e-7 -0.0001496133966975353 0.00014899951444020555 6.167399502371825e-5 6.43462287007217e-5 2.156421274780055e-5 -7.845862031901837e-5 -9.390322278706127e-5 -0.0001074596487706212 9.94298194745899e-5 3.5058451121565134e-5 -0.00010716452865383237; -0.0003191377717522412 -5.37367159641918e-6 -2.4668771315294324e-5 0.0002046598284286004 -0.0001522184710703305 4.4142137172354915e-5 3.4260509979401333e-5 0.00018585492679984496 -1.5316129887754148e-5 -2.7753702765142216e-5 -7.027586516788974e-5 -0.0001221923111225926 0.00014801379174594998 5.563968906779954e-5 0.00010767002558703952 -4.5898740521908645e-5 9.620453143622839e-6 9.126605172776816e-5 -6.428196038915285e-5 4.424748576265019e-5 -0.00010782323343311245 8.663491198229877e-5 -0.00018453389466604935 9.671578036057946e-5 -0.00011947830617292601 -1.0612961799879316e-5 -7.871953952520206e-6 0.00016270977093507758 -0.00011838625769461324 8.132542743249991e-5 2.7964141823655893e-5 -2.177110664353059e-5; 0.0001377084021509263 8.159131000316127e-6 0.00014750205937833923 -9.148348416838996e-5 -2.3457184027037318e-5 -0.00012808509435908975 -3.121308927130042e-5 -9.638574792244055e-5 2.1943916551562877e-5 -0.00010252862065736459 4.3880926939336946e-5 -2.7056998617454448e-5 4.104674499776066e-5 -0.00011791483356556496 0.00010887833687539623 -3.353196242878853e-5 4.6401841313083126e-6 7.904667782658295e-5 -3.227416399802949e-5 4.67979201925031e-5 -2.8824552457635503e-5 -0.00024034676115898234 7.269563266835809e-5 2.1848854346344296e-5 -0.00017884592906976674 0.0001393872275070107 0.00012041697246549082 3.826126467847036e-5 -4.833069936112578e-6 -0.00011074135053841844 -0.0001416534165333453 6.224205336128725e-5; 5.576822008828056e-5 0.00025591577145627056 -8.700639005161747e-5 -6.975165302043561e-5 -0.00010677312234526383 0.00014461752875598408 9.682826061856272e-5 -7.23531934932611e-5 0.00013398421146797074 -0.00022309657793136115 5.101374922326844e-5 
-0.00010017867457697105 -0.00013455416915582251 -7.145881095545308e-5 9.8864837534567e-5 4.95414482861056e-5 -2.2009391207179958e-5 0.00016627398003714284 1.939828548297046e-5 -8.495580507376246e-6 7.701317779203884e-6 -0.00017832730514106884 2.259041543532805e-5 -6.144958893179827e-5 -0.00019205013955121797 3.2863756539376833e-5 2.5918049184321998e-5 2.8908595093011135e-5 2.7365002504150038e-5 -0.00010480302585398664 4.503860913922144e-5 3.899405641953519e-5], bias = [2.7138095716372747e-9, -1.6498382031809333e-10, -4.608275953969919e-10, -1.9291751862516236e-9, 2.7705720728361014e-9, 3.264485710362863e-10, -4.305414876830443e-10, -2.259400710954987e-9, -5.408371575175171e-10, 7.742756136062776e-10, -1.7777348153795833e-9, 1.5844289897328725e-11, 8.990327688345358e-10, 2.1052335604828207e-9, 2.4371523286227764e-9, -3.6367908381110006e-10, -2.9491734226378475e-9, -1.5301196637787933e-9, -3.159095204734098e-9, 1.193193396044492e-9, -8.514409789703838e-10, -1.0173824693803676e-9, 2.0883858638065282e-9, 1.7282071930744482e-10, -2.428688513980065e-9, 2.9031531937412355e-9, 3.6879709296454276e-9, 6.888843398620999e-10, -2.987999344558813e-9, -1.554363854785273e-10, -1.261098632735822e-9, -5.543848307821684e-10]), layer_4 = (weight = [-0.0007628348698585942 -0.0007968502568538448 -0.0004606481010041707 -0.0006998163852451662 -0.0005239929431246982 -0.000846439777128357 -0.0006807522492192497 -0.0005732331186485315 -0.0007675145653677455 -0.0007083392471198734 -0.0007827243794498699 -0.0006779767132701048 -0.0007039347168316969 -0.0007936663818308618 -0.0007674425274839696 -0.0007663684670847433 -0.0008493740251560428 -0.0006601656832833226 -0.0007198111732472897 -0.0006680203486089178 -0.0005237149715754186 -0.0006873017693977915 -0.0004855881439717459 -0.0006008070585930647 -0.0006401482151674277 -0.0006370621519656112 -0.000804500632374427 -0.0007416157470687397 -0.000705937786320906 -0.0007342948457122147 -0.0006815718149522211 -0.0007291652117940308; 
0.000175943447597476 8.215208935135869e-5 0.0001945096995201535 0.00035162594564089306 0.00027705075610947075 0.00028535946776719076 0.0001426472594428061 0.00041967105742638807 0.0002461702184744789 0.0002888079448299872 0.00031030029206790394 0.00011229382078070643 0.0003426428411950763 0.00032742233636123233 0.00031949468363295594 8.398950082268685e-5 5.457763855494437e-5 0.0003234436096119211 0.00023157069141912902 0.0002875879872335591 0.0001593343377194908 0.00023875543193984836 0.0003288542043758386 0.0003476422657305056 0.00024444716656784975 6.748340284060588e-5 0.0003001225345201752 0.00016772500603453095 0.00023656456266660993 0.00024418851308823634 0.00012948764830723452 0.00014831039840870855], bias = [-0.0007096161531146074, 0.00022516783217494898]))

      Visualizing the Results

      Let us now plot the loss over time.

      julia
      begin
      +u: ComponentVector{Float64}(layer_1 = Float64[], layer_2 = (weight = [-3.444633694017027e-5; 0.0001039441485771382; 1.0407738045610902e-5; -6.452792877103415e-5; 4.626418012775144e-5; -1.9925024389491433e-5; -9.46469590415659e-5; 0.00011828132846848188; -0.0001320207957178097; -2.9316854124754463e-5; 0.0002301521017214964; 0.00014383887173639653; -0.00011099165567425477; 0.0001545632985649061; -8.774543675833383e-5; 5.451707693279354e-5; 1.684587368797963e-5; -7.086505502225862e-5; -2.4955572371183748e-5; -0.00011097497917925826; 5.276532101549441e-5; 5.018549927622104e-5; -0.00018031502258926727; 4.723035999626726e-6; 0.00010012918210118536; -2.184351978938409e-5; 0.00017082683916638447; 2.1053067030159004e-5; 2.5283301511037967e-5; 3.833807204501072e-5; -1.7683138139520778e-5; -6.71847956255333e-5;;], bias = [6.9355129382313795e-18, 1.688039926016194e-16, 4.315277203181784e-18, 5.559977517444896e-17, 1.52001508315218e-17, -3.5540923167334566e-17, -1.6651759850291906e-16, 1.1573865574959218e-16, -2.6674011431911173e-17, -2.0666490256237582e-17, 1.3796747490796324e-16, -3.693878051564067e-16, -5.380525838099007e-17, 4.959567142444426e-16, 7.516426352893122e-17, 3.5831402597941594e-17, 2.1393543540180377e-17, -4.532685340028104e-17, -8.180383189070195e-17, -2.1535290509728552e-16, 1.3723640324634174e-16, 5.611635649463381e-17, -4.301449155436491e-16, 6.064214923865348e-18, 4.886005136988138e-17, -1.8002180440631902e-17, 3.57115207947323e-16, -8.016960934672413e-18, 5.0422985921064145e-17, -3.1155996263835813e-18, -2.8990184509333085e-17, 3.127250447859456e-17]), layer_3 = (weight = [-4.205001286278172e-5 4.339081889086414e-5 -3.1309877241035206e-5 -6.605930510955933e-5 -3.990159356431851e-5 -8.433194207789489e-6 7.869451829649755e-5 -0.00021180017764961743 3.880328395936734e-5 3.330157224740251e-5 -5.697920104542528e-5 6.550202116087691e-5 -4.608799107770618e-5 0.00023637286531921677 -9.851279000101203e-5 0.0001463368188608353 5.218106335564008e-5 
⋮ (trained parameter values elided for brevity: the remaining rows of this weight matrix and its bias vector, followed by the layer_4 weight matrix and bias)

      Visualizing the Results

Let us now plot the loss over time:

      julia
      begin
           fig = Figure()
           ax = CairoMakie.Axis(fig[1, 1]; xlabel="Iteration", ylabel="Loss")
       
      @@ -776,7 +891,7 @@
           scatter!(ax, 1:length(losses), losses; marker=:circle, markersize=12, strokewidth=2)
       
           fig
      +end

Finally, let us visualize the results:

      julia
      prob_nn = ODEProblem(ODE_model, u0, tspan, res.u)
       soln_nn = Array(solve(prob_nn, RK4(); u0, p=res.u, saveat=tsteps, dt, adaptive=false))
       waveform_nn_trained = first(compute_waveform(
           dt_data, soln_nn, mass_ratio, ode_model_params))
      @@ -802,7 +917,7 @@
               position=:lb)
       
           fig
      +end

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
       
       if @isdefined(MLDataDevices)
      @@ -815,8 +930,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -824,16 +939,16 @@
         CPU: 128 × AMD EPYC 7502 32-Core Processor
         WORD_SIZE: 64
         LLVM: libLLVM-16.0.6 (ORCJIT, znver2)
      -Threads: 16 default, 0 interactive, 8 GC (on 16 virtual cores)
      +Threads: 128 default, 0 interactive, 64 GC (on 128 virtual cores)
       Environment:
      -  JULIA_CPU_THREADS = 16
      +  JULIA_CPU_THREADS = 128
         JULIA_DEPOT_PATH = /cache/julia-buildkite-plugin/depots/01872db4-8c79-43af-ab7d-12abac4f24f6
         JULIA_PKG_SERVER = 
      -  JULIA_NUM_THREADS = 16
      +  JULIA_NUM_THREADS = 128
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
         JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

- 
+ 
\ No newline at end of file
diff --git a/dev/tutorials/beginner/1_Basics.html b/dev/tutorials/beginner/1_Basics.html
index 7bb245cc81..04bd2a2a37 100644
--- a/dev/tutorials/beginner/1_Basics.html
+++ b/dev/tutorials/beginner/1_Basics.html
@@ -5,15 +5,15 @@
 Julia & Lux for the Uninitiated | Lux.jl Docs
- 
+ 
- 
+ 
- 
- 
- 
+ 
+ 
+ 
@@ -28,7 +28,102 @@
-
      +    

      Julia & Lux for the Uninitiated

      This is a quick intro to Lux loosely based on:

      1. PyTorch's tutorial.

2. Flux's tutorial (the link to which has now been lost to the abyss).

      3. Jax's tutorial.

It introduces basic Julia programming, as well as Zygote, a source-to-source automatic differentiation (AD) framework in Julia. We'll use these tools to build a very simple neural network. Let's start by importing Lux.jl:

      julia
      using Lux, Random
      Precompiling Lux...
      +    588.7 ms  ✓ ConcreteStructs
      +    557.1 ms  ✓ Future
      +    542.0 ms  ✓ SIMDTypes
      +    554.5 ms  ✓ Reexport
      +    578.7 ms  ✓ CEnum
      +    603.2 ms  ✓ ManualMemory
      +    634.8 ms  ✓ OpenLibm_jll
      +    655.7 ms  ✓ ArgCheck
      +    772.8 ms  ✓ CompilerSupportLibraries_jll
      +    774.2 ms  ✓ Requires
      +    837.0 ms  ✓ Statistics
      +    919.0 ms  ✓ EnzymeCore
      +    978.0 ms  ✓ ADTypes
      +    514.9 ms  ✓ IfElse
      +    537.2 ms  ✓ CommonWorldInvalidations
      +    532.9 ms  ✓ FastClosures
      +    593.8 ms  ✓ StaticArraysCore
      +    674.2 ms  ✓ ConstructionBase
      +   1416.1 ms  ✓ IrrationalConstants
      +    852.3 ms  ✓ Compat
      +    762.3 ms  ✓ JLLWrappers
      +    612.2 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      +    749.7 ms  ✓ NaNMath
      +    652.1 ms  ✓ Adapt
      +   1012.4 ms  ✓ CpuId
      +   1007.3 ms  ✓ DocStringExtensions
      +    614.9 ms  ✓ DiffResults
      +    585.7 ms  ✓ ConstructionBase → ConstructionBaseLinearAlgebraExt
      +    664.1 ms  ✓ ADTypes → ADTypesConstructionBaseExt
      +   1279.3 ms  ✓ ThreadingUtilities
      +    589.5 ms  ✓ Compat → CompatLinearAlgebraExt
      +    543.6 ms  ✓ EnzymeCore → AdaptExt
      +    578.2 ms  ✓ GPUArraysCore
      +   1086.6 ms  ✓ Static
      +    696.2 ms  ✓ ArrayInterface
      +    913.3 ms  ✓ Hwloc_jll
      +    952.7 ms  ✓ OpenSpecFun_jll
      +    891.9 ms  ✓ LogExpFunctions
      +   2737.1 ms  ✓ UnsafeAtomics
      +    616.6 ms  ✓ BitTwiddlingConvenienceFunctions
      +    524.7 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      +    526.1 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      +    820.5 ms  ✓ Functors
      +   3060.7 ms  ✓ MacroTools
      +    623.6 ms  ✓ Atomix
      +   1626.6 ms  ✓ CPUSummary
      +   1755.2 ms  ✓ ChainRulesCore
      +    958.7 ms  ✓ MLDataDevices
      +    933.9 ms  ✓ CommonSubexpressions
      +   1882.9 ms  ✓ StaticArrayInterface
      +    600.4 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      +    609.1 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      +    936.4 ms  ✓ PolyesterWeave
      +    900.5 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      +    661.5 ms  ✓ CloseOpenIntervals
      +   1817.0 ms  ✓ Setfield
      +    810.8 ms  ✓ LayoutPointers
      +   2126.8 ms  ✓ DispatchDoctor
      +   1388.2 ms  ✓ Optimisers
      +   2862.1 ms  ✓ Hwloc
      +   1666.2 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      +    482.1 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      +    492.2 ms  ✓ Optimisers → OptimisersAdaptExt
      +    555.6 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      +    742.3 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      +   3385.6 ms  ✓ SpecialFunctions
      +   1098.7 ms  ✓ StrideArraysCore
      +    654.6 ms  ✓ DiffRules
      +   1405.9 ms  ✓ LuxCore
      +    818.2 ms  ✓ Polyester
      +    502.0 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      +    518.2 ms  ✓ LuxCore → LuxCoreFunctorsExt
      +    583.9 ms  ✓ LuxCore → LuxCoreSetfieldExt
      +    646.2 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      +    764.0 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      +   1915.1 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      +   3077.6 ms  ✓ WeightInitializers
      +   7932.7 ms  ✓ StaticArrays
      +    630.8 ms  ✓ Adapt → AdaptStaticArraysExt
      +    641.1 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      +    666.2 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      +    686.3 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      +    713.7 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      +   1037.4 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      +   3930.4 ms  ✓ ForwardDiff
      +    875.2 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      +   3408.4 ms  ✓ KernelAbstractions
      +    670.5 ms  ✓ KernelAbstractions → LinearAlgebraExt
      +    727.6 ms  ✓ KernelAbstractions → EnzymeExt
      +   5502.1 ms  ✓ NNlib
      +    845.7 ms  ✓ NNlib → NNlibEnzymeCoreExt
      +    929.2 ms  ✓ NNlib → NNlibForwardDiffExt
      +   5954.0 ms  ✓ LuxLib
      +  10166.3 ms  ✓ Lux
      +  94 dependencies successfully precompiled in 37 seconds. 15 already precompiled.

Now let us control the randomness in our code using a proper pseudorandom number generator (PRNG):

      julia
      rng = Random.default_rng()
       Random.seed!(rng, 0)
      Random.TaskLocalRNG()
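The point of seeding is reproducibility: re-seeding the generator with the same value replays exactly the same stream of numbers. A minimal sketch of this, using only the Random standard library:

```julia
using Random

rng = Random.default_rng()

Random.seed!(rng, 0)
a = rand(rng, 3)          # first draw after seeding

Random.seed!(rng, 0)      # reset to the same seed
b = rand(rng, 3)          # replays the identical draw

@assert a == b            # same seed, same stream
```

This is why tutorials (and tests) seed their RNGs up front: every run produces the same "random" numbers.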

      Arrays

      The starting point for all of our models is the Array (sometimes referred to as a Tensor in other frameworks). This is really just a list of numbers, which might be arranged into a shape like a square. Let's write down an array with three elements.

      julia
      x = [1, 2, 3]
      3-element Vector{Int64}:
        1
        2
      @@ -119,7 +214,85 @@
           println("Iteration $i ", rand(rng, 10))
       end
      Iteration 1 [0.4552384158732863, 0.5476424498276177, 0.7733535276924052, 0.9405848223512736, 0.02964765308691042, 0.74694291453392, 0.7468008914093891, 0.9766699015845924, 0.08694684883050086, 0.35149138733595564]
       Iteration 2 [0.018743665453639813, 0.8601828553599953, 0.6556360448565952, 0.7746656838366666, 0.7817315740767116, 0.5553797706980106, 0.1261990389976131, 0.4488101521328277, 0.624383955429775, 0.05657739601024536]
      -Iteration 3 [0.19597391412112541, 0.6830945313415872, 0.6776220912718907, 0.6456416023530093, 0.6340362477836592, 0.5595843665394066, 0.5675557670686644, 0.34351700231383653, 0.7237308297251812, 0.3691778381831775]

      Automatic Differentiation

      Julia has quite a few (maybe too many) AD tools. For the purpose of this tutorial, we will use:

      1. ForwardDiff.jl – For Jacobian-Vector Product (JVP)

      2. Zygote.jl – For Vector-Jacobian Product (VJP)

      Slight Detour: We have had several questions about whether we will consider any other AD system for the reverse-diff backend. For now we will stick to Zygote.jl; once we have tested Lux extensively with Enzyme.jl, we will make the switch.

      Even though, theoretically, a VJP (Vector-Jacobian product - reverse autodiff) and a JVP (Jacobian-Vector product - forward-mode autodiff) are similar—they compute a product of a Jacobian and a vector—they differ by the computational complexity of the operation. In short, when you have a large number of parameters (hence a wide matrix), a JVP is less efficient computationally than a VJP, and, conversely, a JVP is more efficient when the Jacobian matrix is a tall matrix.

      julia
      using ComponentArrays, ForwardDiff, Zygote

      Gradients

      For our first example, consider a simple function computing f(x) = ½ xᵀx, whose gradient is ∇f(x) = x.

      julia
      f(x) = x' * x / 2
      +Iteration 3 [0.19597391412112541, 0.6830945313415872, 0.6776220912718907, 0.6456416023530093, 0.6340362477836592, 0.5595843665394066, 0.5675557670686644, 0.34351700231383653, 0.7237308297251812, 0.3691778381831775]

      Automatic Differentiation

      Julia has quite a few (maybe too many) AD tools. For the purpose of this tutorial, we will use:

      1. ForwardDiff.jl – For Jacobian-Vector Product (JVP)

      2. Zygote.jl – For Vector-Jacobian Product (VJP)

      Slight Detour: We have had several questions about whether we will consider any other AD system for the reverse-diff backend. For now we will stick to Zygote.jl; once we have tested Lux extensively with Enzyme.jl, we will make the switch.

      Even though, theoretically, a VJP (Vector-Jacobian product - reverse autodiff) and a JVP (Jacobian-Vector product - forward-mode autodiff) are similar—they compute a product of a Jacobian and a vector—they differ by the computational complexity of the operation. In short, when you have a large number of parameters (hence a wide matrix), a JVP is less efficient computationally than a VJP, and, conversely, a JVP is more efficient when the Jacobian matrix is a tall matrix.

      julia
      using ComponentArrays, ForwardDiff, Zygote
      Precompiling ComponentArrays...
      +    985.3 ms  ✓ ComponentArrays
      +  1 dependency successfully precompiled in 1 seconds. 45 already precompiled.
      +Precompiling MLDataDevicesComponentArraysExt...
      +    570.3 ms  ✓ MLDataDevices → MLDataDevicesComponentArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.
      +Precompiling LuxComponentArraysExt...
      +    573.6 ms  ✓ ComponentArrays → ComponentArraysOptimisersExt
      +   1664.0 ms  ✓ Lux → LuxComponentArraysExt
      +   2431.9 ms  ✓ ComponentArrays → ComponentArraysKernelAbstractionsExt
      +  3 dependencies successfully precompiled in 3 seconds. 111 already precompiled.
      +Precompiling Zygote...
      +    451.4 ms  ✓ DataValueInterfaces
      +    494.4 ms  ✓ IteratorInterfaceExtensions
      +    565.1 ms  ✓ RealDot
      +    581.1 ms  ✓ DataAPI
      +    649.9 ms  ✓ OrderedCollections
      +    651.1 ms  ✓ SuiteSparse_jll
      +    615.1 ms  ✓ HashArrayMappedTries
      +    634.7 ms  ✓ Zlib_jll
      +    810.0 ms  ✓ AbstractFFTs
      +    833.0 ms  ✓ Serialization
      +    495.5 ms  ✓ TableTraits
      +    471.4 ms  ✓ ScopedValues
      +   1251.5 ms  ✓ FillArrays
      +    602.5 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      +   1413.6 ms  ✓ ZygoteRules
      +    510.4 ms  ✓ FillArrays → FillArraysStatisticsExt
      +   1297.5 ms  ✓ LazyArtifacts
      +    986.1 ms  ✓ Tables
      +   2402.0 ms  ✓ IRTools
      +    967.7 ms  ✓ StructArrays
      +   2223.1 ms  ✓ Distributed
      +    479.3 ms  ✓ StructArrays → StructArraysAdaptExt
      +    538.1 ms  ✓ StructArrays → StructArraysLinearAlgebraExt
      +   1696.3 ms  ✓ LLVMExtra_jll
      +    829.4 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      +    830.8 ms  ✓ StructArrays → StructArraysStaticArraysExt
      +   4402.7 ms  ✓ SparseArrays
      +    712.3 ms  ✓ SuiteSparse
      +    782.0 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      +    785.0 ms  ✓ Statistics → SparseArraysExt
      +    794.0 ms  ✓ StructArrays → StructArraysSparseArraysExt
      +    836.5 ms  ✓ FillArrays → FillArraysSparseArraysExt
      +   1140.7 ms  ✓ KernelAbstractions → SparseArraysExt
      +    719.6 ms  ✓ SparseInverseSubset
      +   7074.3 ms  ✓ LLVM
      +   2035.0 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      +   6428.2 ms  ✓ ChainRules
      +   5380.2 ms  ✓ GPUArrays
      +  30733.7 ms  ✓ Zygote
      +  39 dependencies successfully precompiled in 49 seconds. 63 already precompiled.
      +Precompiling ArrayInterfaceSparseArraysExt...
      +    701.6 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 7 already precompiled.
      +Precompiling MLDataDevicesSparseArraysExt...
      +    761.5 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      +Precompiling ArrayInterfaceChainRulesExt...
      +    880.9 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      +  1 dependency successfully precompiled in 1 seconds. 39 already precompiled.
      +Precompiling MLDataDevicesChainRulesExt...
      +    955.2 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      +  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      +Precompiling MLDataDevicesFillArraysExt...
      +    505.2 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      +Precompiling MLDataDevicesZygoteExt...
      +   1901.7 ms  ✓ MLDataDevices → MLDataDevicesGPUArraysExt
      +   1906.4 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
      +  2 dependencies successfully precompiled in 2 seconds. 108 already precompiled.
      +Precompiling LuxZygoteExt...
      +   2066.3 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      +   3444.9 ms  ✓ Lux → LuxZygoteExt
      +  2 dependencies successfully precompiled in 4 seconds. 165 already precompiled.
      +Precompiling ComponentArraysZygoteExt...
      +   1944.2 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      +   2246.4 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
      +  2 dependencies successfully precompiled in 2 seconds. 116 already precompiled.
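      Before moving on, here is a minimal sketch (not part of the tutorial; the function `g` and the vectors `x` and `v` are made up for illustration) of how the two modes are invoked: a JVP via ForwardDiff by differentiating along the line x + t·v, and a VJP via Zygote's pullback.

      ```julia
      using ForwardDiff, Zygote

      g(x) = [x[1]^2 + x[2], sin(x[2])]   # toy vector-valued function
      x = [1.0, 2.0]
      v = [1.0, 0.0]

      # JVP: J(x) * v, computed as the derivative of t ↦ g(x + t*v) at t = 0
      jvp = ForwardDiff.derivative(t -> g(x .+ t .* v), 0.0)   # ≈ [2.0, 0.0]

      # VJP: wᵀ J(x), computed via reverse mode
      y, pullback = Zygote.pullback(g, x)
      vjp = only(pullback([1.0, 0.0]))                         # ≈ [2.0, 1.0]
      ```

      Note that the JVP touches one column combination of the Jacobian per call, while the VJP touches one row combination, which is the source of the efficiency trade-off described above.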

      Gradients

      For our first example, consider a simple function computing f(x) = ½ xᵀx, whose gradient is ∇f(x) = x.

      julia
      f(x) = x' * x / 2
       ∇f(x) = x  # `∇` can be typed as `\nabla<TAB>`
       v = randn(rng, Float32, 4)
      4-element Vector{Float32}:
        -0.4051151
      @@ -192,8 +365,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -210,7 +383,7 @@
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
         JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      - + \ No newline at end of file diff --git a/dev/tutorials/beginner/2_PolynomialFitting.html b/dev/tutorials/beginner/2_PolynomialFitting.html index 366c973ad0..3b8e43e6d0 100644 --- a/dev/tutorials/beginner/2_PolynomialFitting.html +++ b/dev/tutorials/beginner/2_PolynomialFitting.html @@ -5,15 +5,15 @@ Fitting a Polynomial using MLP | Lux.jl Docs - + - + - - - + + + @@ -29,162 +29,48 @@
      Skip to content

      Fitting a Polynomial using MLP

      In this tutorial, we will fit a MultiLayer Perceptron (MLP) to data generated from a polynomial.

      Package Imports

      julia
      using Lux, ADTypes, Optimisers, Printf, Random, Reactant, Statistics, CairoMakie
      Precompiling Lux...
      -    529.0 ms  ✓ Requires
      -    530.5 ms  ✓ Compat
      -    386.7 ms  ✓ Compat → CompatLinearAlgebraExt
      -   1151.1 ms  ✓ LuxCore
      -   1200.9 ms  ✓ ChainRulesCore
      -    610.8 ms  ✓ Functors
      -    437.8 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    462.7 ms  ✓ LuxCore → LuxCoreSetfieldExt
      -   1521.2 ms  ✓ StaticArrayInterface
      -    399.8 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      -    642.7 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      -   4009.4 ms  ✓ KernelAbstractions
      -    646.5 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      -   1360.1 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      -    415.8 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    621.8 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      -   1701.0 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      -    962.1 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      -    451.1 ms  ✓ LuxCore → LuxCoreFunctorsExt
      -    788.0 ms  ✓ MLDataDevices
      -    492.9 ms  ✓ CloseOpenIntervals
      -    646.9 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -   1097.0 ms  ✓ Optimisers
      -    623.9 ms  ✓ LayoutPointers
      -    673.7 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    729.6 ms  ✓ KernelAbstractions → EnzymeExt
      -    472.8 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      -    652.4 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      -    430.9 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    414.9 ms  ✓ Optimisers → OptimisersAdaptExt
      -    941.3 ms  ✓ StrideArraysCore
      -    776.8 ms  ✓ Polyester
      -   5261.6 ms  ✓ NNlib
      -    829.6 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    990.3 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5674.2 ms  ✓ LuxLib
      -   9238.1 ms  ✓ Lux
      -  37 dependencies successfully precompiled in 30 seconds. 72 already precompiled.
      -Precompiling Reactant...
      -    990.3 ms  ✓ LazyArtifacts
      -   1394.9 ms  ✓ Enzyme_jll
      -   1420.0 ms  ✓ LLVMExtra_jll
      -   2244.4 ms  ✓ Reactant_jll
      -   6311.8 ms  ✓ LLVM
      -  26586.6 ms  ✓ GPUCompiler
      - 223529.4 ms  ✓ Enzyme
      -   6414.6 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
      -Info Given Reactant was explicitly requested, output will be shown live 
      -2025-01-20 22:36:53.782407: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 5524933572539585731
      -  64426.5 ms  ✓ Reactant
      -  9 dependencies successfully precompiled in 330 seconds. 52 already precompiled.
      -  1 dependency had output during precompilation:
      -┌ Reactant
      -│  [Output was shown above]
      -
      -Precompiling ChainRulesCoreSparseArraysExt...
      -    637.5 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 11 already precompiled.
      -Precompiling SparseArraysExt...
      -    903.9 ms  ✓ KernelAbstractions → SparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 26 already precompiled.
      -Precompiling MLDataDevicesSparseArraysExt...
      -    678.9 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
      -Precompiling UnsafeAtomicsLLVM...
      -   1762.2 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      -  1 dependency successfully precompiled in 2 seconds. 30 already precompiled.
      +    901.4 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      +   6123.7 ms  ✓ LuxLib
      +   9384.6 ms  ✓ Lux
      +  3 dependencies successfully precompiled in 17 seconds. 106 already precompiled.
       Precompiling LuxLibEnzymeExt...
      -   6930.7 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
      -   6478.8 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
      -   1422.9 ms  ✓ LuxLib → LuxLibEnzymeExt
      -  17255.9 ms  ✓ Enzyme → EnzymeStaticArraysExt
      -  18958.9 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
      -  5 dependencies successfully precompiled in 19 seconds. 126 already precompiled.
      +   1293.8 ms  ✓ LuxLib → LuxLibEnzymeExt
      +  1 dependency successfully precompiled in 2 seconds. 130 already precompiled.
       Precompiling LuxEnzymeExt...
      -   7693.1 ms  ✓ Lux → LuxEnzymeExt
      -  1 dependency successfully precompiled in 8 seconds. 146 already precompiled.
      -Precompiling LuxCoreReactantExt...
      -  20495.7 ms  ✓ LuxCore → LuxCoreReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 66 already precompiled.
      -Precompiling MLDataDevicesReactantExt...
      -  20685.0 ms  ✓ MLDataDevices → MLDataDevicesReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 63 already precompiled.
      +   6868.0 ms  ✓ Lux → LuxEnzymeExt
      +  1 dependency successfully precompiled in 7 seconds. 146 already precompiled.
       Precompiling LuxLibReactantExt...
      -  20460.8 ms  ✓ Reactant → ReactantStatisticsExt
      -  20780.8 ms  ✓ Reactant → ReactantNNlibExt
      -  21412.0 ms  ✓ LuxLib → LuxLibReactantExt
      -  20497.7 ms  ✓ Reactant → ReactantSpecialFunctionsExt
      -  20385.4 ms  ✓ Reactant → ReactantArrayInterfaceExt
      -  5 dependencies successfully precompiled in 42 seconds. 139 already precompiled.
      -Precompiling WeightInitializersReactantExt...
      -  20824.7 ms  ✓ WeightInitializers → WeightInitializersReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 77 already precompiled.
      +  12883.5 ms  ✓ LuxLib → LuxLibReactantExt
      +  1 dependency successfully precompiled in 13 seconds. 143 already precompiled.
       Precompiling LuxReactantExt...
      -   9886.8 ms  ✓ Lux → LuxReactantExt
      -  1 dependency successfully precompiled in 10 seconds. 161 already precompiled.
      +   8417.6 ms  ✓ Lux → LuxReactantExt
      +  1 dependency successfully precompiled in 9 seconds. 161 already precompiled.
       Precompiling CairoMakie...
      -    439.5 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      -   1626.9 ms  ✓ DataStructures
      -   3537.7 ms  ✓ Test
      -    749.1 ms  ✓ FilePaths
      -   3961.4 ms  ✓ PkgVersion
      -   1185.8 ms  ✓ IntelOpenMP_jll
      -   4150.2 ms  ✓ FileIO
      -   1960.0 ms  ✓ Interpolations
      -   1181.2 ms  ✓ ColorBrewer
      -    513.0 ms  ✓ SortingAlgorithms
      -   1497.7 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -    968.4 ms  ✓ QuadGK
      -    586.4 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      -   1294.5 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      -   1182.8 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -   1948.3 ms  ✓ Netpbm
      -   3206.6 ms  ✓ JpegTurbo
      -   8126.4 ms  ✓ MKL_jll
      -   1508.5 ms  ✓ QOI
      -   4115.3 ms  ✓ Sixel
      -   1493.5 ms  ✓ OpenEXR
      -  13826.7 ms  ✓ MathTeXEngine
      -   2539.9 ms  ✓ WebP
      -   1188.2 ms  ✓ Interpolations → InterpolationsUnitfulExt
      -   2240.7 ms  ✓ StatsBase
      -   4921.5 ms  ✓ Distributions
      -   1405.9 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   1413.8 ms  ✓ Distributions → DistributionsTestExt
      -   1760.2 ms  ✓ KernelDensity
      -  60596.8 ms  ✓ TiffImages
      -   1229.7 ms  ✓ ImageIO
      - 153721.8 ms  ✓ Makie
      -  88146.2 ms  ✓ CairoMakie
      -  33 dependencies successfully precompiled in 325 seconds. 238 already precompiled.
      -  1 dependency had output during precompilation:
      -┌ MKL_jll
      -│   Downloading artifact: IntelOpenMP
      -
      +    600.9 ms  ✓ Graphics
      +   1224.7 ms  ✓ HypergeometricFunctions
      +   1225.4 ms  ✓ IntelOpenMP_jll
      +   1421.3 ms  ✓ Cairo
      +   1268.6 ms  ✓ MKL_jll
      +   1918.0 ms  ✓ StatsFuns
      +    701.8 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      +   1650.1 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      +   5239.5 ms  ✓ Distributions
      +   1476.6 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      +   1532.2 ms  ✓ Distributions → DistributionsTestExt
      +   9676.7 ms  ✓ FFTW
      +   1771.5 ms  ✓ KernelDensity
      + 153295.0 ms  ✓ Makie
      +  96578.4 ms  ✓ CairoMakie
      +  15 dependencies successfully precompiled in 265 seconds. 256 already precompiled.
       Precompiling StructArraysGPUArraysCoreExt...
      -    716.4 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      +    721.7 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
         1 dependency successfully precompiled in 1 seconds. 34 already precompiled.
      -Precompiling StaticArrayInterfaceOffsetArraysExt...
      -    455.2 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
      -Precompiling ReactantOffsetArraysExt...
      -  20631.6 ms  ✓ Reactant → ReactantOffsetArraysExt
      -  1 dependency successfully precompiled in 21 seconds. 63 already precompiled.
      -Precompiling QuadGKEnzymeExt...
      -   6457.9 ms  ✓ QuadGK → QuadGKEnzymeExt
      -  1 dependency successfully precompiled in 7 seconds. 49 already precompiled.
      -Precompiling MLDataDevicesFillArraysExt...
      -    414.6 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
      -Precompiling ReactantAbstractFFTsExt...
      -  20623.8 ms  ✓ Reactant → ReactantAbstractFFTsExt
      -  1 dependency successfully precompiled in 21 seconds. 62 already precompiled.
       Precompiling NNlibFFTWExt...
      -    899.9 ms  ✓ NNlib → NNlibFFTWExt
      -  1 dependency successfully precompiled in 1 seconds. 54 already precompiled.

      Dataset

      Generate 128 datapoints from the polynomial y = x² - 2x.

      julia
      function generate_data(rng::AbstractRNG)
      +    923.3 ms  ✓ NNlib → NNlibFFTWExt
      +  1 dependency successfully precompiled in 1 seconds. 54 already precompiled.
      +Precompiling IntervalArithmeticForwardDiffExt...
      +    712.0 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
      +  1 dependency successfully precompiled in 1 seconds. 43 already precompiled.

      Dataset

      Generate 128 datapoints from the polynomial y = x² - 2x.
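      As a quick aside, the coefficient tuple `(0, -2, 1)` passed to `evalpoly` in the code below lists coefficients with the constant term first, i.e. 0 - 2x + x²; a sanity check:

      ```julia
      # evalpoly(x, coeffs) evaluates coeffs[1] + coeffs[2]*x + coeffs[3]*x^2 + ...
      evalpoly(3.0f0, (0, -2, 1))  # 3^2 - 2*3 = 3.0f0
      ```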

      julia
      function generate_data(rng::AbstractRNG)
           x = reshape(collect(range(-2.0f0, 2.0f0, 128)), (1, 128))
           y = evalpoly.(x, ((0, -2, 1),)) .+ randn(rng, Float32, (1, 128)) .* 0.1f0
           return (x, y)
      @@ -266,8 +152,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -285,7 +171,7 @@
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
         JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      - + \ No newline at end of file diff --git a/dev/tutorials/beginner/3_SimpleRNN.html b/dev/tutorials/beginner/3_SimpleRNN.html index a46fc47b17..83ca3c0752 100644 --- a/dev/tutorials/beginner/3_SimpleRNN.html +++ b/dev/tutorials/beginner/3_SimpleRNN.html @@ -5,15 +5,15 @@ Training a Simple LSTM | Lux.jl Docs - + - + - - - + + + @@ -29,98 +29,24 @@
      Skip to content

      Training a Simple LSTM

      In this tutorial we will go over using a recurrent neural network to classify clockwise and anticlockwise spirals. By the end of this tutorial you will be able to:

      1. Create custom Lux models.

      2. Become familiar with the Lux recurrent neural network API.

      3. Train models using Optimisers.jl and Zygote.jl.

      Package Imports

      julia
      using ADTypes, Lux, JLD2, MLUtils, Optimisers, Printf, Reactant, Random
      Precompiling Lux...
      -    332.2 ms  ✓ SIMDTypes
      -    380.9 ms  ✓ ManualMemory
      -    324.8 ms  ✓ FastClosures
      -    544.2 ms  ✓ EnzymeCore
      -    533.0 ms  ✓ ArrayInterface
      -   1791.7 ms  ✓ UnsafeAtomics
      -    827.1 ms  ✓ ThreadingUtilities
      -    514.5 ms  ✓ EnzymeCore → AdaptExt
      -    378.5 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      -    434.8 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      -    443.2 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    435.4 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    371.2 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      -    410.5 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    367.8 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      -    497.5 ms  ✓ Atomix
      -    651.3 ms  ✓ PolyesterWeave
      -   1495.3 ms  ✓ StaticArrayInterface
      -    505.8 ms  ✓ CloseOpenIntervals
      -    657.3 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -    590.9 ms  ✓ LayoutPointers
      -    961.8 ms  ✓ StrideArraysCore
      -    789.0 ms  ✓ Polyester
      -   3904.2 ms  ✓ KernelAbstractions
      -    670.4 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    781.2 ms  ✓ KernelAbstractions → EnzymeExt
      -   5249.4 ms  ✓ NNlib
      -    859.7 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    989.9 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5584.1 ms  ✓ LuxLib
      -   9258.3 ms  ✓ Lux
      -  31 dependencies successfully precompiled in 30 seconds. 78 already precompiled.
      -Precompiling MLUtils...
      -    331.5 ms  ✓ PtrArrays
      -    450.2 ms  ✓ AliasTables
      -    941.8 ms  ✓ KernelAbstractions → SparseArraysExt
      -   2234.7 ms  ✓ StatsBase
      -   6135.5 ms  ✓ MLUtils
      -  5 dependencies successfully precompiled in 10 seconds. 93 already precompiled.
      -Precompiling ArrayInterfaceSparseArraysExt...
      -    651.6 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 7 already precompiled.
      -Precompiling MLDataDevicesMLUtilsExt...
      -   1676.4 ms  ✓ MLDataDevices → MLDataDevicesMLUtilsExt
      -  1 dependency successfully precompiled in 2 seconds. 102 already precompiled.
      +   5868.9 ms  ✓ LuxLib
      +   9340.6 ms  ✓ Lux
      +  2 dependencies successfully precompiled in 15 seconds. 107 already precompiled.
       Precompiling LuxMLUtilsExt...
      -   2335.3 ms  ✓ Lux → LuxMLUtilsExt
      -  1 dependency successfully precompiled in 3 seconds. 167 already precompiled.
      -Precompiling Reactant...
      -   2459.2 ms  ✓ Reactant_jll
      - 224139.2 ms  ✓ Enzyme
      -   6428.0 ms  ✓ Enzyme → EnzymeGPUArraysCoreExt
      -Info Given Reactant was explicitly requested, output will be shown live 
      -2025-01-20 22:37:27.775820: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 12032411168801079817
      -  65476.8 ms  ✓ Reactant
      -  4 dependencies successfully precompiled in 297 seconds. 57 already precompiled.
      -  1 dependency had output during precompilation:
      -┌ Reactant
      -│  [Output was shown above]
      -
      -Precompiling UnsafeAtomicsLLVM...
      -   1774.4 ms  ✓ UnsafeAtomics → UnsafeAtomicsLLVM
      -  1 dependency successfully precompiled in 2 seconds. 30 already precompiled.
      +   2224.8 ms  ✓ Lux → LuxMLUtilsExt
      +  1 dependency successfully precompiled in 2 seconds. 167 already precompiled.
       Precompiling LuxLibEnzymeExt...
      -   6628.4 ms  ✓ Enzyme → EnzymeSpecialFunctionsExt
      -   6521.8 ms  ✓ Enzyme → EnzymeLogExpFunctionsExt
      -   1380.4 ms  ✓ LuxLib → LuxLibEnzymeExt
      -  17170.7 ms  ✓ Enzyme → EnzymeStaticArraysExt
      -  19038.5 ms  ✓ Enzyme → EnzymeChainRulesCoreExt
      -  5 dependencies successfully precompiled in 20 seconds. 126 already precompiled.
      +   1341.8 ms  ✓ LuxLib → LuxLibEnzymeExt
      +  1 dependency successfully precompiled in 2 seconds. 130 already precompiled.
       Precompiling LuxEnzymeExt...
      -   7795.4 ms  ✓ Lux → LuxEnzymeExt
      -  1 dependency successfully precompiled in 8 seconds. 146 already precompiled.
      -Precompiling LuxCoreReactantExt...
      -  20461.9 ms  ✓ LuxCore → LuxCoreReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 66 already precompiled.
      -Precompiling MLDataDevicesReactantExt...
      -  20508.9 ms  ✓ MLDataDevices → MLDataDevicesReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 63 already precompiled.
      +   6784.3 ms  ✓ Lux → LuxEnzymeExt
      +  1 dependency successfully precompiled in 7 seconds. 146 already precompiled.
       Precompiling LuxLibReactantExt...
      -  20586.0 ms  ✓ Reactant → ReactantStatisticsExt
      -  21062.9 ms  ✓ Reactant → ReactantNNlibExt
      -  21496.8 ms  ✓ LuxLib → LuxLibReactantExt
      -  20521.6 ms  ✓ Reactant → ReactantSpecialFunctionsExt
      -  20457.3 ms  ✓ Reactant → ReactantArrayInterfaceExt
      -  5 dependencies successfully precompiled in 42 seconds. 139 already precompiled.
      -Precompiling WeightInitializersReactantExt...
      -  20610.5 ms  ✓ WeightInitializers → WeightInitializersReactantExt
      -  1 dependency successfully precompiled in 21 seconds. 77 already precompiled.
      +  12962.1 ms  ✓ LuxLib → LuxLibReactantExt
      +  1 dependency successfully precompiled in 13 seconds. 143 already precompiled.
       Precompiling LuxReactantExt...
      -   9920.4 ms  ✓ Lux → LuxReactantExt
      -  1 dependency successfully precompiled in 10 seconds. 161 already precompiled.

      Dataset

      We will use MLUtils to generate 500 (noisy) clockwise and 500 (noisy) anticlockwise spirals. Using this data we will create an MLUtils.DataLoader. Our dataloader will give us sequences of size 2 × seq_len × batch_size, and we need to predict a binary value indicating whether the sequence is clockwise or anticlockwise.

      julia
      function get_dataloaders(; dataset_size=1000, sequence_length=50)
      +   8587.6 ms  ✓ Lux → LuxReactantExt
      +  1 dependency successfully precompiled in 9 seconds. 161 already precompiled.

      Dataset

      We will use MLUtils to generate 500 (noisy) clockwise and 500 (noisy) anticlockwise spirals. Using this data we will create an MLUtils.DataLoader. Our dataloader will give us sequences of size 2 × seq_len × batch_size, and we need to predict a binary value indicating whether the sequence is clockwise or anticlockwise.

      julia
      function get_dataloaders(; dataset_size=1000, sequence_length=50)
           # Create the spirals
           data = [MLUtils.Datasets.make_spiral(sequence_length) for _ in 1:dataset_size]
           # Get the labels
      @@ -239,109 +165,109 @@
       end
       
       ps_trained, st_trained = main(SpiralClassifier)
      ┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
      -└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-5/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      -2025-01-20 22:46:45.587352: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 15513715445056030835
      -Epoch [  1]: Loss 0.67334
      -Validation:	Loss 0.61507	Accuracy 1.00000
      -Epoch [  2]: Loss 0.57744
      -Validation:	Loss 0.52876	Accuracy 1.00000
      -Epoch [  3]: Loss 0.49384
      -Validation:	Loss 0.43874	Accuracy 1.00000
      -Epoch [  4]: Loss 0.39211
      -Validation:	Loss 0.31860	Accuracy 1.00000
      -Epoch [  5]: Loss 0.27281
      -Validation:	Loss 0.21501	Accuracy 1.00000
      -Epoch [  6]: Loss 0.18506
      -Validation:	Loss 0.14783	Accuracy 1.00000
      -Epoch [  7]: Loss 0.12820
      -Validation:	Loss 0.10396	Accuracy 1.00000
      -Epoch [  8]: Loss 0.09103
      -Validation:	Loss 0.07621	Accuracy 1.00000
      -Epoch [  9]: Loss 0.06807
      -Validation:	Loss 0.05899	Accuracy 1.00000
      -Epoch [ 10]: Loss 0.05358
      -Validation:	Loss 0.04783	Accuracy 1.00000
      -Epoch [ 11]: Loss 0.04407
      -Validation:	Loss 0.04008	Accuracy 1.00000
      -Epoch [ 12]: Loss 0.03731
      -Validation:	Loss 0.03443	Accuracy 1.00000
      -Epoch [ 13]: Loss 0.03229
      -Validation:	Loss 0.03012	Accuracy 1.00000
      -Epoch [ 14]: Loss 0.02840
      -Validation:	Loss 0.02670	Accuracy 1.00000
      -Epoch [ 15]: Loss 0.02528
      -Validation:	Loss 0.02391	Accuracy 1.00000
      -Epoch [ 16]: Loss 0.02279
      -Validation:	Loss 0.02157	Accuracy 1.00000
      -Epoch [ 17]: Loss 0.02055
      -Validation:	Loss 0.01954	Accuracy 1.00000
      -Epoch [ 18]: Loss 0.01861
      -Validation:	Loss 0.01774	Accuracy 1.00000
      -Epoch [ 19]: Loss 0.01699
      -Validation:	Loss 0.01614	Accuracy 1.00000
      -Epoch [ 20]: Loss 0.01549
      -Validation:	Loss 0.01472	Accuracy 1.00000
      -Epoch [ 21]: Loss 0.01413
      -Validation:	Loss 0.01346	Accuracy 1.00000
      -Epoch [ 22]: Loss 0.01294
      -Validation:	Loss 0.01233	Accuracy 1.00000
      -Epoch [ 23]: Loss 0.01186
      -Validation:	Loss 0.01129	Accuracy 1.00000
      -Epoch [ 24]: Loss 0.01086
      -Validation:	Loss 0.01033	Accuracy 1.00000
      -Epoch [ 25]: Loss 0.00996
      -Validation:	Loss 0.00945	Accuracy 1.00000

      We can also train the compact model with the exact same code!

      julia
      ps_trained2, st_trained2 = main(SpiralClassifierCompact)
      ┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
      -└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-5/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      -Epoch [  1]: Loss 0.90661
      -Validation:	Loss 0.81372	Accuracy 0.46094
      -Epoch [  2]: Loss 0.71494
      -Validation:	Loss 0.63222	Accuracy 0.46094
      -Epoch [  3]: Loss 0.56538
      -Validation:	Loss 0.50507	Accuracy 1.00000
      -Epoch [  4]: Loss 0.45855
      -Validation:	Loss 0.40424	Accuracy 1.00000
      -Epoch [  5]: Loss 0.37246
      -Validation:	Loss 0.33160	Accuracy 1.00000
      -Epoch [  6]: Loss 0.31438
      -Validation:	Loss 0.28433	Accuracy 1.00000
      -Epoch [  7]: Loss 0.27284
      -Validation:	Loss 0.24691	Accuracy 1.00000
      -Epoch [  8]: Loss 0.23912
      -Validation:	Loss 0.21678	Accuracy 1.00000
      -Epoch [  9]: Loss 0.21161
      -Validation:	Loss 0.19076	Accuracy 1.00000
      -Epoch [ 10]: Loss 0.18482
      -Validation:	Loss 0.16652	Accuracy 1.00000
      -Epoch [ 11]: Loss 0.16043
      -Validation:	Loss 0.14203	Accuracy 1.00000
      -Epoch [ 12]: Loss 0.13420
      -Validation:	Loss 0.11569	Accuracy 1.00000
      -Epoch [ 13]: Loss 0.10542
      -Validation:	Loss 0.09126	Accuracy 1.00000
      -Epoch [ 14]: Loss 0.08311
      -Validation:	Loss 0.07461	Accuracy 1.00000
      -Epoch [ 15]: Loss 0.06884
      -Validation:	Loss 0.06360	Accuracy 1.00000
      -Epoch [ 16]: Loss 0.05892
      -Validation:	Loss 0.05550	Accuracy 1.00000
      -Epoch [ 17]: Loss 0.05177
      -Validation:	Loss 0.04936	Accuracy 1.00000
      -Epoch [ 18]: Loss 0.04635
      -Validation:	Loss 0.04460	Accuracy 1.00000
      -Epoch [ 19]: Loss 0.04199
      -Validation:	Loss 0.04063	Accuracy 1.00000
      -Epoch [ 20]: Loss 0.03849
      -Validation:	Loss 0.03719	Accuracy 1.00000
      -Epoch [ 21]: Loss 0.03532
      -Validation:	Loss 0.03413	Accuracy 1.00000
      -Epoch [ 22]: Loss 0.03245
      -Validation:	Loss 0.03123	Accuracy 1.00000
      -Epoch [ 23]: Loss 0.02946
      -Validation:	Loss 0.02835	Accuracy 1.00000
      -Epoch [ 24]: Loss 0.02653
      -Validation:	Loss 0.02533	Accuracy 1.00000
      -Epoch [ 25]: Loss 0.02337
      -Validation:	Loss 0.02250	Accuracy 1.00000

      Saving the Model

We can save the model using JLD2 (or any other serialization library of your choice). Note that we transfer the model to CPU before saving. Additionally, we recommend that you don't save the model struct and only save the parameters and states.

      julia
      @save "trained_model.jld2" ps_trained st_trained
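To make the CPU-transfer recommendation above concrete, here is a minimal sketch (an illustration, not part of the generated tutorial output — it assumes `ps_trained`/`st_trained` may live on a GPU device and that `cpu_device` from MLDataDevices is in scope):

```julia
using JLD2, MLDataDevices

# Move parameters and states to the CPU so the saved file does not
# reference GPU arrays (which cannot be loaded without that GPU backend).
cdev = cpu_device()
ps_cpu = ps_trained |> cdev
st_cpu = st_trained |> cdev

# Save only the parameters and states, not the model struct itself.
@save "trained_model.jld2" ps_cpu st_cpu
```

Saving only `ps`/`st` keeps the file robust to refactors of the model code: the struct is rebuilt from source when loading.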

      Let's try loading the model

      julia
      @load "trained_model.jld2" ps_trained st_trained
      2-element Vector{Symbol}:
      +└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-6/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      +2025-01-24 04:36:23.325653: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 3095950067347489316
      +Epoch [  1]: Loss 0.74550
      +Validation:	Loss 0.70002	Accuracy 0.35938
      +Epoch [  2]: Loss 0.66010
      +Validation:	Loss 0.63599	Accuracy 1.00000
      +Epoch [  3]: Loss 0.59519
      +Validation:	Loss 0.57492	Accuracy 1.00000
      +Epoch [  4]: Loss 0.52772
      +Validation:	Loss 0.50520	Accuracy 1.00000
      +Epoch [  5]: Loss 0.45341
      +Validation:	Loss 0.43112	Accuracy 1.00000
      +Epoch [  6]: Loss 0.38036
      +Validation:	Loss 0.36367	Accuracy 1.00000
      +Epoch [  7]: Loss 0.31730
      +Validation:	Loss 0.30521	Accuracy 1.00000
      +Epoch [  8]: Loss 0.25966
      +Validation:	Loss 0.25263	Accuracy 1.00000
      +Epoch [  9]: Loss 0.20685
      +Validation:	Loss 0.20167	Accuracy 1.00000
      +Epoch [ 10]: Loss 0.15741
      +Validation:	Loss 0.16060	Accuracy 1.00000
      +Epoch [ 11]: Loss 0.12361
      +Validation:	Loss 0.12724	Accuracy 1.00000
      +Epoch [ 12]: Loss 0.09638
      +Validation:	Loss 0.09984	Accuracy 1.00000
      +Epoch [ 13]: Loss 0.07572
      +Validation:	Loss 0.07530	Accuracy 1.00000
      +Epoch [ 14]: Loss 0.05634
      +Validation:	Loss 0.05227	Accuracy 1.00000
      +Epoch [ 15]: Loss 0.03886
      +Validation:	Loss 0.03394	Accuracy 1.00000
      +Epoch [ 16]: Loss 0.02640
      +Validation:	Loss 0.02209	Accuracy 1.00000
      +Epoch [ 17]: Loss 0.01860
      +Validation:	Loss 0.01601	Accuracy 1.00000
      +Epoch [ 18]: Loss 0.01482
      +Validation:	Loss 0.01309	Accuracy 1.00000
      +Epoch [ 19]: Loss 0.01229
      +Validation:	Loss 0.01084	Accuracy 1.00000
      +Epoch [ 20]: Loss 0.01058
      +Validation:	Loss 0.00959	Accuracy 1.00000
      +Epoch [ 21]: Loss 0.00944
      +Validation:	Loss 0.00863	Accuracy 1.00000
      +Epoch [ 22]: Loss 0.00851
      +Validation:	Loss 0.00785	Accuracy 1.00000
      +Epoch [ 23]: Loss 0.00778
      +Validation:	Loss 0.00722	Accuracy 1.00000
      +Epoch [ 24]: Loss 0.00715
      +Validation:	Loss 0.00664	Accuracy 1.00000
      +Epoch [ 25]: Loss 0.00659
      +Validation:	Loss 0.00610	Accuracy 1.00000

      We can also train the compact model with the exact same code!

      julia
      ps_trained2, st_trained2 = main(SpiralClassifierCompact)
      ┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
      +└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-6/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      +Epoch [  1]: Loss 0.81240
      +Validation:	Loss 0.76507	Accuracy 0.45312
      +Epoch [  2]: Loss 0.71654
      +Validation:	Loss 0.67404	Accuracy 0.45312
      +Epoch [  3]: Loss 0.63550
      +Validation:	Loss 0.59338	Accuracy 1.00000
      +Epoch [  4]: Loss 0.56080
      +Validation:	Loss 0.51291	Accuracy 1.00000
      +Epoch [  5]: Loss 0.48194
      +Validation:	Loss 0.42384	Accuracy 1.00000
      +Epoch [  6]: Loss 0.39456
      +Validation:	Loss 0.32968	Accuracy 1.00000
      +Epoch [  7]: Loss 0.30701
      +Validation:	Loss 0.24736	Accuracy 1.00000
      +Epoch [  8]: Loss 0.22990
      +Validation:	Loss 0.17724	Accuracy 1.00000
      +Epoch [  9]: Loss 0.16442
      +Validation:	Loss 0.12688	Accuracy 1.00000
      +Epoch [ 10]: Loss 0.12165
      +Validation:	Loss 0.09625	Accuracy 1.00000
      +Epoch [ 11]: Loss 0.09444
      +Validation:	Loss 0.07708	Accuracy 1.00000
      +Epoch [ 12]: Loss 0.07802
      +Validation:	Loss 0.06419	Accuracy 1.00000
      +Epoch [ 13]: Loss 0.06459
      +Validation:	Loss 0.05486	Accuracy 1.00000
      +Epoch [ 14]: Loss 0.05622
      +Validation:	Loss 0.04781	Accuracy 1.00000
      +Epoch [ 15]: Loss 0.04940
      +Validation:	Loss 0.04230	Accuracy 1.00000
      +Epoch [ 16]: Loss 0.04372
      +Validation:	Loss 0.03781	Accuracy 1.00000
      +Epoch [ 17]: Loss 0.03940
      +Validation:	Loss 0.03394	Accuracy 1.00000
      +Epoch [ 18]: Loss 0.03520
      +Validation:	Loss 0.03044	Accuracy 1.00000
      +Epoch [ 19]: Loss 0.03168
      +Validation:	Loss 0.02703	Accuracy 1.00000
      +Epoch [ 20]: Loss 0.02804
      +Validation:	Loss 0.02359	Accuracy 1.00000
      +Epoch [ 21]: Loss 0.02441
      +Validation:	Loss 0.02034	Accuracy 1.00000
      +Epoch [ 22]: Loss 0.02077
      +Validation:	Loss 0.01761	Accuracy 1.00000
      +Epoch [ 23]: Loss 0.01803
      +Validation:	Loss 0.01546	Accuracy 1.00000
      +Epoch [ 24]: Loss 0.01596
      +Validation:	Loss 0.01381	Accuracy 1.00000
      +Epoch [ 25]: Loss 0.01431
      +Validation:	Loss 0.01252	Accuracy 1.00000

      Saving the Model

We can save the model using JLD2 (or any other serialization library of your choice). Note that we transfer the model to CPU before saving. Additionally, we recommend that you don't save the model struct and only save the parameters and states.

      julia
      @save "trained_model.jld2" ps_trained st_trained

      Let's try loading the model

      julia
      @load "trained_model.jld2" ps_trained st_trained
      2-element Vector{Symbol}:
        :ps_trained
        :st_trained
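Because only the parameters and states were saved, the model itself must be reconstructed from code before the restored values can be used — a hypothetical sketch, assuming the `SpiralClassifier(2, 8, 1)` constructor from earlier in this tutorial and an input batch `x` are in scope:

```julia
using JLD2, Lux

@load "trained_model.jld2" ps_trained st_trained

# Rebuild the model from its definition, then run inference with the
# restored parameters and states (testmode disables training-only behavior).
model = SpiralClassifier(2, 8, 1)
y_pred, _ = model(x, ps_trained, Lux.testmode(st_trained))
```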

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
      @@ -356,8 +282,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -375,7 +301,7 @@
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
         JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      - + \ No newline at end of file diff --git a/dev/tutorials/beginner/4_SimpleChains.html b/dev/tutorials/beginner/4_SimpleChains.html index e7915cb403..83580562b1 100644 --- a/dev/tutorials/beginner/4_SimpleChains.html +++ b/dev/tutorials/beginner/4_SimpleChains.html @@ -5,15 +5,15 @@ MNIST Classification with SimpleChains | Lux.jl Docs - + - + - - - + + + @@ -32,7 +32,7 @@ using MLDatasets: MNIST using SimpleChains: SimpleChains -Reactant.set_default_backend("cpu")
      Reactant.XLA.Client(Ptr{Nothing} @0x000000000705d7b0)

      Loading MNIST

      julia
      function loadmnist(batchsize, train_split)
      +Reactant.set_default_backend("cpu")
      Reactant.XLA.Client(Ptr{Nothing} @0x00000000086f4710)

      Loading MNIST

      julia
      function loadmnist(batchsize, train_split)
           # Load MNIST
           N = parse(Bool, get(ENV, "CI", "false")) ? 1500 : nothing
           dataset = MNIST(; split=:train)
      @@ -144,26 +144,26 @@
           end
       
           return tr_acc, te_acc
      -end
      train (generic function with 2 methods)

      Finally Training the Model

      First we will train the Lux model

      julia
      tr_acc, te_acc = train(lux_model, reactant_device())
      2025-01-20 22:46:34.664977: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 14589407905380107767
      -[ 1/10] 	 Time 480.95s 	 Training Accuracy: 15.23% 	 Test Accuracy: 17.19%
      -[ 2/10] 	 Time 0.59s 	 Training Accuracy: 33.75% 	 Test Accuracy: 32.03%
      -[ 3/10] 	 Time 0.60s 	 Training Accuracy: 47.27% 	 Test Accuracy: 46.09%
      -[ 4/10] 	 Time 0.60s 	 Training Accuracy: 54.69% 	 Test Accuracy: 53.91%
      -[ 5/10] 	 Time 0.59s 	 Training Accuracy: 64.38% 	 Test Accuracy: 61.72%
      -[ 6/10] 	 Time 0.58s 	 Training Accuracy: 73.75% 	 Test Accuracy: 70.31%
      -[ 7/10] 	 Time 0.57s 	 Training Accuracy: 78.20% 	 Test Accuracy: 75.78%
      -[ 8/10] 	 Time 0.59s 	 Training Accuracy: 80.00% 	 Test Accuracy: 81.25%
      -[ 9/10] 	 Time 0.60s 	 Training Accuracy: 81.95% 	 Test Accuracy: 84.38%
      -[10/10] 	 Time 0.59s 	 Training Accuracy: 83.75% 	 Test Accuracy: 85.16%

      Now we will train the SimpleChains model

      julia
      tr_acc, te_acc = train(simple_chains_model)
      [ 1/10] 	 Time 869.66s 	 Training Accuracy: 28.36% 	 Test Accuracy: 28.12%
      -[ 2/10] 	 Time 13.19s 	 Training Accuracy: 36.48% 	 Test Accuracy: 32.81%
      -[ 3/10] 	 Time 13.20s 	 Training Accuracy: 46.64% 	 Test Accuracy: 44.53%
      -[ 4/10] 	 Time 13.21s 	 Training Accuracy: 59.53% 	 Test Accuracy: 52.34%
      -[ 5/10] 	 Time 13.19s 	 Training Accuracy: 71.09% 	 Test Accuracy: 65.62%
      -[ 6/10] 	 Time 13.19s 	 Training Accuracy: 75.23% 	 Test Accuracy: 67.19%
      -[ 7/10] 	 Time 13.19s 	 Training Accuracy: 79.53% 	 Test Accuracy: 70.31%
      -[ 8/10] 	 Time 13.19s 	 Training Accuracy: 80.62% 	 Test Accuracy: 74.22%
      -[ 9/10] 	 Time 13.22s 	 Training Accuracy: 84.22% 	 Test Accuracy: 81.25%
      -[10/10] 	 Time 13.20s 	 Training Accuracy: 85.00% 	 Test Accuracy: 82.03%

On my local machine we see a 3-4x speedup when using SimpleChains.jl. The server this documentation is built on is not well suited to CPU benchmarking, so the speedup may be less pronounced here, and there may even be regressions.

      Appendix

      julia
      using InteractiveUtils
      +end
      train (generic function with 2 methods)

      Finally Training the Model

      First we will train the Lux model

      julia
      tr_acc, te_acc = train(lux_model, reactant_device())
      2025-01-24 04:33:37.063340: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 2039916195886688483
      +[ 1/10] 	 Time 498.21s 	 Training Accuracy: 18.98% 	 Test Accuracy: 14.06%
      +[ 2/10] 	 Time 0.53s 	 Training Accuracy: 18.98% 	 Test Accuracy: 17.19%
      +[ 3/10] 	 Time 0.55s 	 Training Accuracy: 34.06% 	 Test Accuracy: 31.25%
      +[ 4/10] 	 Time 0.57s 	 Training Accuracy: 46.88% 	 Test Accuracy: 35.94%
      +[ 5/10] 	 Time 0.56s 	 Training Accuracy: 56.41% 	 Test Accuracy: 41.41%
      +[ 6/10] 	 Time 0.55s 	 Training Accuracy: 64.14% 	 Test Accuracy: 48.44%
      +[ 7/10] 	 Time 0.57s 	 Training Accuracy: 69.69% 	 Test Accuracy: 62.50%
      +[ 8/10] 	 Time 0.58s 	 Training Accuracy: 75.08% 	 Test Accuracy: 69.53%
      +[ 9/10] 	 Time 0.59s 	 Training Accuracy: 78.44% 	 Test Accuracy: 75.78%
      +[10/10] 	 Time 0.56s 	 Training Accuracy: 82.19% 	 Test Accuracy: 77.34%

      Now we will train the SimpleChains model

      julia
      tr_acc, te_acc = train(simple_chains_model)
      [ 1/10] 	 Time 877.38s 	 Training Accuracy: 33.67% 	 Test Accuracy: 31.25%
      +[ 2/10] 	 Time 12.46s 	 Training Accuracy: 54.61% 	 Test Accuracy: 50.78%
      +[ 3/10] 	 Time 12.47s 	 Training Accuracy: 63.83% 	 Test Accuracy: 60.16%
      +[ 4/10] 	 Time 12.47s 	 Training Accuracy: 70.31% 	 Test Accuracy: 60.16%
      +[ 5/10] 	 Time 12.48s 	 Training Accuracy: 76.09% 	 Test Accuracy: 69.53%
      +[ 6/10] 	 Time 12.45s 	 Training Accuracy: 80.31% 	 Test Accuracy: 78.12%
      +[ 7/10] 	 Time 12.47s 	 Training Accuracy: 82.97% 	 Test Accuracy: 80.47%
      +[ 8/10] 	 Time 12.47s 	 Training Accuracy: 85.23% 	 Test Accuracy: 81.25%
      +[ 9/10] 	 Time 12.46s 	 Training Accuracy: 87.42% 	 Test Accuracy: 82.03%
      +[10/10] 	 Time 12.46s 	 Training Accuracy: 88.67% 	 Test Accuracy: 82.81%

On my local machine we see a 3-4x speedup when using SimpleChains.jl. The server this documentation is built on is not well suited to CPU benchmarking, so the speedup may be less pronounced here, and there may even be regressions.

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
       
       if @isdefined(MLDataDevices)
      @@ -176,8 +176,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -195,7 +195,7 @@
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
         JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      - + \ No newline at end of file diff --git a/dev/tutorials/beginner/5_OptimizationIntegration.html b/dev/tutorials/beginner/5_OptimizationIntegration.html index 7cd2ef8740..bd514e54fe 100644 --- a/dev/tutorials/beginner/5_OptimizationIntegration.html +++ b/dev/tutorials/beginner/5_OptimizationIntegration.html @@ -5,15 +5,15 @@ Training Lux Models using Optimization.jl | Lux.jl Docs - + - + - - - + + + @@ -112,31 +112,31 @@ return StatefulLuxLayer{true}(model, res.u, smodel.st) end -trained_model = train_model(dataloader)
      Iteration:     1, Loss: 2.565469e-01
      -Iteration:    26, Loss: 2.280865e-01
      -Iteration:    51, Loss: 4.032271e-01
      -Iteration:    76, Loss: 1.888162e-01
      -Iteration:     1, Loss: 1.996490e-01
      -Iteration:     1, Loss: 2.400779e-02
      -Iteration:    26, Loss: 4.619161e-02
      -Iteration:    51, Loss: 2.679822e-02
      -Iteration:    76, Loss: 2.340804e-02
      -Iteration:   101, Loss: 2.250691e-02
      -Iteration:   126, Loss: 2.183978e-02
      -Iteration:   151, Loss: 2.115316e-02
      -Iteration:   176, Loss: 2.031358e-02
      -Iteration:   201, Loss: 1.912147e-02
      -Iteration:   226, Loss: 1.786558e-02
      -Iteration:   251, Loss: 1.672475e-02
      -Iteration:   276, Loss: 2.473499e-02
      -Iteration:   301, Loss: 1.778203e-02
      -Iteration:   326, Loss: 1.589140e-02
      -Iteration:   351, Loss: 1.458615e-02
      -Iteration:   376, Loss: 1.336883e-02
      -Iteration:   401, Loss: 1.203184e-02
      -Iteration:   426, Loss: 2.416629e-02
      -Iteration:   451, Loss: 1.476158e-02
      -Iteration:   476, Loss: 1.223332e-02

      Plotting the results

      julia
      dudt(u, p, t) = trained_model(u, p)
      +trained_model = train_model(dataloader)
      Iteration:     1, Loss: 2.275711e-01
      +Iteration:    26, Loss: 7.558612e-02
      +Iteration:    51, Loss: 1.401070e-01
      +Iteration:    76, Loss: 2.368127e-01
      +Iteration:     1, Loss: 3.022343e-01
      +Iteration:     1, Loss: 2.887229e-02
      +Iteration:    26, Loss: 3.702169e-02
      +Iteration:    51, Loss: 3.170453e-02
      +Iteration:    76, Loss: 2.796158e-02
      +Iteration:   101, Loss: 2.620041e-02
      +Iteration:   126, Loss: 2.488948e-02
      +Iteration:   151, Loss: 2.385878e-02
      +Iteration:   176, Loss: 2.295489e-02
      +Iteration:   201, Loss: 2.205519e-02
      +Iteration:   226, Loss: 2.108170e-02
      +Iteration:   251, Loss: 1.997719e-02
      +Iteration:   276, Loss: 1.870240e-02
      +Iteration:   301, Loss: 2.226003e-02
      +Iteration:   326, Loss: 1.756263e-02
      +Iteration:   351, Loss: 1.611929e-02
      +Iteration:   376, Loss: 1.807818e-02
      +Iteration:   401, Loss: 1.606911e-02
      +Iteration:   426, Loss: 1.444251e-02
      +Iteration:   451, Loss: 1.293867e-02
      +Iteration:   476, Loss: 4.631519e-02

      Plotting the results

      julia
      dudt(u, p, t) = trained_model(u, p)
       prob = ODEProblem(dudt, gdev(u0), (tspan[1], tspan[2]), trained_model.ps)
       sol = solve(prob, Tsit5(); saveat=t)
       pred = convert(AbstractArray, sol) |> cdev
      @@ -150,7 +150,7 @@
           lines!(ax, t, pred[2, :]; label=L"\hat{u}_2(t)", color=:red, linewidth=4)
           axislegend(ax; position=:lt)
           fig
      -end

      Appendix

      julia
      using InteractiveUtils
      +end

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
       
       if @isdefined(MLDataDevices)
      @@ -163,8 +163,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -181,7 +181,7 @@
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
         JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      - + \ No newline at end of file diff --git a/dev/tutorials/index.html b/dev/tutorials/index.html index b938428b4f..cccad93c6f 100644 --- a/dev/tutorials/index.html +++ b/dev/tutorials/index.html @@ -5,15 +5,15 @@ Tutorials | Lux.jl Docs - + - + - - - + + + @@ -29,7 +29,7 @@
      Skip to content

      Tutorials

      Beginner Tutorials

      Intermediate Tutorials

      Advanced Tutorials

      Larger Models

      WARNING

These models are part of the Lux examples; however, they are larger models that cannot be run on CI and aren't frequently tested. If you find a bug in one of these models, please open an issue or PR to fix it.

      Selected 3rd Party Tutorials

      WARNING

      These tutorials are developed by the community and may not be up-to-date with the latest version of Lux.jl. Please refer to the official documentation for the most up-to-date information.

Please open an issue (ideally both at Lux.jl and at the downstream linked package) if any of them are non-functional, and we will try to get them updated.

      TIP

      If you found an amazing tutorial showcasing Lux.jl online, or wrote one yourself, please open an issue or PR to add it to the list!

      - + \ No newline at end of file diff --git a/dev/tutorials/intermediate/1_NeuralODE.html b/dev/tutorials/intermediate/1_NeuralODE.html index e7516ce80e..507a00bcb1 100644 --- a/dev/tutorials/intermediate/1_NeuralODE.html +++ b/dev/tutorials/intermediate/1_NeuralODE.html @@ -5,15 +5,15 @@ MNIST Classification using Neural ODEs | Lux.jl Docs - + - + - - - + + + @@ -33,154 +33,256 @@ using MLDatasets: MNIST using MLUtils: DataLoader, splitobs -CUDA.allowscalar(false)
      Precompiling SciMLSensitivity...
      -    459.9 ms  ✓ EnumX
      -    405.0 ms  ✓ Parameters
      -    416.4 ms  ✓ RuntimeGeneratedFunctions
      -    856.1 ms  ✓ DifferentiationInterface
      -    950.2 ms  ✓ KLU
      -    901.9 ms  ✓ PDMats
      -    373.5 ms  ✓ SciMLStructures
      -    496.2 ms  ✓ TruncatedStacktraces
      -   1156.2 ms  ✓ Sparspak
      -   6366.1 ms  ✓ Krylov
      -   1855.0 ms  ✓ SciMLOperators
      -    595.0 ms  ✓ ResettableStacks
      -   1013.6 ms  ✓ QuadGK
      -    493.1 ms  ✓ FunctionProperties
      -   1255.9 ms  ✓ HypergeometricFunctions
      -    805.0 ms  ✓ FastPower → FastPowerForwardDiffExt
      -    759.7 ms  ✓ PreallocationTools
      -   1064.1 ms  ✓ NLSolversBase
      -   1103.1 ms  ✓ FastPower → FastPowerTrackerExt
      -    773.9 ms  ✓ FastBroadcast
      -  11893.3 ms  ✓ ArrayLayouts
      -   3497.7 ms  ✓ FastPower → FastPowerReverseDiffExt
      -   1467.7 ms  ✓ SymbolicIndexingInterface
      -   3983.2 ms  ✓ TriangularSolve
      -    606.4 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      -    423.7 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      -   3680.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceReverseDiffExt
      -    423.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      -   1163.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      -    836.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      -   7196.5 ms  ✓ FastPower → FastPowerEnzymeExt
      -    660.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      -   1736.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceZygoteExt
      -    674.2 ms  ✓ FillArrays → FillArraysPDMatsExt
      -    578.1 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      -   1437.5 ms  ✓ Tracker → TrackerPDMatsExt
      -    828.0 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      -   1865.3 ms  ✓ StatsFuns
      -   7263.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceEnzymeExt
      -   3452.8 ms  ✓ PreallocationTools → PreallocationToolsReverseDiffExt
      -    778.5 ms  ✓ ArrayLayouts → ArrayLayoutsSparseArraysExt
      -   6674.2 ms  ✓ QuadGK → QuadGKEnzymeExt
      -   1848.5 ms  ✓ LineSearches
      -    684.6 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      -   2217.5 ms  ✓ RecursiveArrayTools
      -   1593.9 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -   2524.6 ms  ✓ LazyArrays
      -   5109.7 ms  ✓ Distributions
      -    924.7 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsFastBroadcastExt
      -   3276.1 ms  ✓ Optim
      -    661.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      -    934.5 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      -   1268.1 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      -    789.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      -   3277.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsZygoteExt
      -   1295.5 ms  ✓ LazyArrays → LazyArraysStaticArraysExt
      -   1439.2 ms  ✓ Distributions → DistributionsTestExt
      -  16123.6 ms  ✓ RecursiveFactorization
      -   1410.3 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   5446.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsReverseDiffExt
      -  12207.4 ms  ✓ SciMLBase
      -   1132.2 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      -   2826.2 ms  ✓ SciMLJacobianOperators
      -   3556.3 ms  ✓ SciMLBase → SciMLBaseZygoteExt
      -   6279.1 ms  ✓ DiffEqBase
      -   1580.8 ms  ✓ DiffEqBase → DiffEqBaseChainRulesCoreExt
      -   2457.9 ms  ✓ DiffEqBase → DiffEqBaseTrackerExt
      -   1993.1 ms  ✓ DiffEqBase → DiffEqBaseDistributionsExt
      -   1730.4 ms  ✓ DiffEqBase → DiffEqBaseSparseArraysExt
      -   4864.7 ms  ✓ DiffEqBase → DiffEqBaseReverseDiffExt
      -   4526.5 ms  ✓ DiffEqCallbacks
      -   3928.1 ms  ✓ DiffEqNoiseProcess
      -   5087.3 ms  ✓ DiffEqNoiseProcess → DiffEqNoiseProcessReverseDiffExt
      -  18101.4 ms  ✓ DiffEqBase → DiffEqBaseEnzymeExt
      -  35745.3 ms  ✓ LinearSolve
      -   2708.0 ms  ✓ LinearSolve → LinearSolveRecursiveArrayToolsExt
      -   2747.1 ms  ✓ LinearSolve → LinearSolveEnzymeExt
      -   4383.7 ms  ✓ LinearSolve → LinearSolveKernelAbstractionsExt
      -  30389.4 ms  ✓ SciMLSensitivity
      -  79 dependencies successfully precompiled in 125 seconds. 212 already precompiled.
      +CUDA.allowscalar(false)
      Precompiling Lux...
      +   5857.2 ms  ✓ LuxLib
      +   9238.5 ms  ✓ Lux
      +  2 dependencies successfully precompiled in 16 seconds. 107 already precompiled.
      +Precompiling LuxComponentArraysExt...
      +   1552.9 ms  ✓ Lux → LuxComponentArraysExt
      +  1 dependency successfully precompiled in 2 seconds. 113 already precompiled.
      +Precompiling SciMLSensitivity...
      +    385.5 ms  ✓ MuladdMacro
      +    526.0 ms  ✓ PositiveFactorizations
      +    328.0 ms  ✓ CommonSolve
      +    402.5 ms  ✓ PoissonRandom
      +    935.9 ms  ✓ Cassette
      +    428.7 ms  ✓ RuntimeGeneratedFunctions
      +    344.0 ms  ✓ FastPower
      +    434.7 ms  ✓ Parameters
      +    362.3 ms  ✓ FunctionWrappersWrappers
      +    873.3 ms  ✓ DifferentiationInterface
      +   1244.0 ms  ✓ FastLapackInterface
      +    995.4 ms  ✓ KLU
      +    387.8 ms  ✓ SciMLStructures
      +    504.2 ms  ✓ TruncatedStacktraces
      +   1183.1 ms  ✓ Sparspak
      +   1883.5 ms  ✓ SciMLOperators
      +   6120.6 ms  ✓ Krylov
      +   5447.4 ms  ✓ ChainRules
      +  11841.4 ms  ✓ ArrayLayouts
      +    597.8 ms  ✓ ResettableStacks
      +    743.8 ms  ✓ PreallocationTools
      +   1024.7 ms  ✓ NLSolversBase
      +    713.8 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      +    727.0 ms  ✓ FastBroadcast
      +   1333.3 ms  ✓ Tracker → TrackerPDMatsExt
      +    525.4 ms  ✓ FunctionProperties
      +   8341.0 ms  ✓ Expronicon
      +   1083.1 ms  ✓ FastPower → FastPowerTrackerExt
      +   1507.4 ms  ✓ SymbolicIndexingInterface
      +   3459.6 ms  ✓ TriangularSolve
      +    642.5 ms  ✓ FastPower → FastPowerForwardDiffExt
      +   3357.1 ms  ✓ FastPower → FastPowerReverseDiffExt
      +    598.7 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      +    426.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      +   3519.7 ms  ✓ DifferentiationInterface → DifferentiationInterfaceReverseDiffExt
      +   1109.0 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      +   5732.7 ms  ✓ FastPower → FastPowerEnzymeExt
      +    411.8 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      +    651.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      +    797.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      +    568.2 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      +    824.9 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      +    799.5 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      +    796.6 ms  ✓ ArrayLayouts → ArrayLayoutsSparseArraysExt
      +   1782.5 ms  ✓ LineSearches
      +   3351.8 ms  ✓ PreallocationTools → PreallocationToolsReverseDiffExt
      +   5880.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceEnzymeExt
      +   2130.5 ms  ✓ RecursiveArrayTools
      +   2520.3 ms  ✓ LazyArrays
      +   3391.5 ms  ✓ Optim
      +    805.3 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsFastBroadcastExt
      +    627.5 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      +    886.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      +   1205.0 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      +    771.3 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      +  14634.6 ms  ✓ RecursiveFactorization
      +   1312.4 ms  ✓ LazyArrays → LazyArraysStaticArraysExt
      +  12143.8 ms  ✓ SciMLBase
      +   1101.2 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      +   2787.0 ms  ✓ SciMLJacobianOperators
      +   6309.2 ms  ✓ DiffEqBase
      +  34179.3 ms  ✓ Zygote
      +   1580.3 ms  ✓ DiffEqBase → DiffEqBaseChainRulesCoreExt
      +   2393.5 ms  ✓ DiffEqBase → DiffEqBaseTrackerExt
      +   1907.2 ms  ✓ DiffEqBase → DiffEqBaseDistributionsExt
      +   4967.0 ms  ✓ DiffEqBase → DiffEqBaseReverseDiffExt
      +   1645.7 ms  ✓ DiffEqBase → DiffEqBaseSparseArraysExt
      +   1934.6 ms  ✓ Zygote → ZygoteTrackerExt
      +   1601.0 ms  ✓ DifferentiationInterface → DifferentiationInterfaceZygoteExt
      +   4504.4 ms  ✓ DiffEqCallbacks
      +   3313.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsZygoteExt
      +   3574.1 ms  ✓ SciMLBase → SciMLBaseZygoteExt
      +   3803.9 ms  ✓ DiffEqNoiseProcess
      +   8492.1 ms  ✓ DiffEqBase → DiffEqBaseEnzymeExt
      +   5526.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsReverseDiffExt
      +   4875.9 ms  ✓ DiffEqNoiseProcess → DiffEqNoiseProcessReverseDiffExt
      +  33232.0 ms  ✓ LinearSolve
      +   2533.3 ms  ✓ LinearSolve → LinearSolveRecursiveArrayToolsExt
      +   2584.2 ms  ✓ LinearSolve → LinearSolveEnzymeExt
      +   4250.5 ms  ✓ LinearSolve → LinearSolveKernelAbstractionsExt
      +  21623.0 ms  ✓ SciMLSensitivity
      +  81 dependencies successfully precompiled in 116 seconds. 210 already precompiled.
       Precompiling MLDataDevicesRecursiveArrayToolsExt...
      -    637.4 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
      +    605.5 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
         1 dependency successfully precompiled in 1 seconds. 47 already precompiled.
       Precompiling ComponentArraysRecursiveArrayToolsExt...
      -    726.9 ms  ✓ ComponentArrays → ComponentArraysRecursiveArrayToolsExt
      +    697.3 ms  ✓ ComponentArrays → ComponentArraysRecursiveArrayToolsExt
         1 dependency successfully precompiled in 1 seconds. 69 already precompiled.
       Precompiling ComponentArraysSciMLBaseExt...
      -   1158.7 ms  ✓ ComponentArrays → ComponentArraysSciMLBaseExt
      +   1117.1 ms  ✓ ComponentArrays → ComponentArraysSciMLBaseExt
         1 dependency successfully precompiled in 1 seconds. 98 already precompiled.
      +Precompiling LuxLibSLEEFPiratesExt...
      +   2436.5 ms  ✓ LuxLib → LuxLibSLEEFPiratesExt
      +  1 dependency successfully precompiled in 3 seconds. 97 already precompiled.
      +Precompiling LuxLibLoopVectorizationExt...
      +   3957.6 ms  ✓ LuxLib → LuxLibLoopVectorizationExt
      +  1 dependency successfully precompiled in 4 seconds. 105 already precompiled.
      +Precompiling LuxLibEnzymeExt...
      +   1322.0 ms  ✓ LuxLib → LuxLibEnzymeExt
      +  1 dependency successfully precompiled in 2 seconds. 130 already precompiled.
      +Precompiling LuxEnzymeExt...
      +   6746.2 ms  ✓ Lux → LuxEnzymeExt
      +  1 dependency successfully precompiled in 7 seconds. 146 already precompiled.
      +Precompiling LuxLibTrackerExt...
      +   1136.8 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      +   3346.0 ms  ✓ LuxLib → LuxLibTrackerExt
      +  2 dependencies successfully precompiled in 4 seconds. 100 already precompiled.
      +Precompiling LuxTrackerExt...
      +   2194.9 ms  ✓ Lux → LuxTrackerExt
      +  1 dependency successfully precompiled in 2 seconds. 114 already precompiled.
      +Precompiling LuxLibReverseDiffExt...
      +   4156.4 ms  ✓ LuxLib → LuxLibReverseDiffExt
      +  1 dependency successfully precompiled in 4 seconds. 99 already precompiled.
      +Precompiling LuxReverseDiffExt...
      +   4350.1 ms  ✓ Lux → LuxReverseDiffExt
      +  1 dependency successfully precompiled in 5 seconds. 115 already precompiled.
      +Precompiling MLDataDevicesChainRulesExt...
      +    824.9 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      +  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      +Precompiling MLDataDevicesZygoteExt...
      +   1592.5 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
      +  1 dependency successfully precompiled in 2 seconds. 109 already precompiled.
      +Precompiling LuxZygoteExt...
      +   1749.0 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      +   3082.2 ms  ✓ Lux → LuxZygoteExt
      +  2 dependencies successfully precompiled in 3 seconds. 165 already precompiled.
      +Precompiling ComponentArraysZygoteExt...
      +   1576.4 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      +  1 dependency successfully precompiled in 2 seconds. 117 already precompiled.
       Precompiling LuxCUDA...
      -   5348.9 ms  ✓ LuxCUDA
      -  1 dependency successfully precompiled in 6 seconds. 101 already precompiled.
      +  45492.8 ms  ✓ CUDA
      +   5210.1 ms  ✓ Atomix → AtomixCUDAExt
      +   7985.1 ms  ✓ cuDNN
      +   5372.0 ms  ✓ LuxCUDA
      +  4 dependencies successfully precompiled in 64 seconds. 98 already precompiled.
      +Precompiling EnzymeBFloat16sExt...
      +   6039.1 ms  ✓ Enzyme → EnzymeBFloat16sExt
      +  1 dependency successfully precompiled in 6 seconds. 47 already precompiled.
      +Precompiling ZygoteColorsExt...
      +   1863.8 ms  ✓ Zygote → ZygoteColorsExt
      +  1 dependency successfully precompiled in 2 seconds. 105 already precompiled.
      +Precompiling ArrayInterfaceCUDAExt...
      +   4981.7 ms  ✓ ArrayInterface → ArrayInterfaceCUDAExt
      +  1 dependency successfully precompiled in 5 seconds. 103 already precompiled.
      +Precompiling NNlibCUDAExt...
      +   4965.1 ms  ✓ CUDA → ChainRulesCoreExt
      +   5450.7 ms  ✓ NNlib → NNlibCUDAExt
      +  2 dependencies successfully precompiled in 6 seconds. 104 already precompiled.
      +Precompiling MLDataDevicesCUDAExt...
      +   5160.7 ms  ✓ MLDataDevices → MLDataDevicesCUDAExt
      +  1 dependency successfully precompiled in 6 seconds. 106 already precompiled.
      +Precompiling LuxLibCUDAExt...
      +   5047.4 ms  ✓ CUDA → SpecialFunctionsExt
      +   5202.7 ms  ✓ CUDA → EnzymeCoreExt
      +   5519.2 ms  ✓ LuxLib → LuxLibCUDAExt
      +  3 dependencies successfully precompiled in 6 seconds. 169 already precompiled.
       Precompiling DiffEqBaseCUDAExt...
      -   5925.6 ms  ✓ DiffEqBase → DiffEqBaseCUDAExt
      +   5613.2 ms  ✓ DiffEqBase → DiffEqBaseCUDAExt
         1 dependency successfully precompiled in 6 seconds. 186 already precompiled.
       Precompiling LinearSolveCUDAExt...
      -   7171.0 ms  ✓ LinearSolve → LinearSolveCUDAExt
      -  1 dependency successfully precompiled in 8 seconds. 189 already precompiled.
      +   6913.8 ms  ✓ LinearSolve → LinearSolveCUDAExt
      +  1 dependency successfully precompiled in 7 seconds. 189 already precompiled.
      +Precompiling WeightInitializersCUDAExt...
      +   4884.7 ms  ✓ WeightInitializers → WeightInitializersCUDAExt
      +  1 dependency successfully precompiled in 5 seconds. 111 already precompiled.
      +Precompiling NNlibCUDACUDNNExt...
      +   5201.0 ms  ✓ NNlib → NNlibCUDACUDNNExt
      +  1 dependency successfully precompiled in 6 seconds. 108 already precompiled.
      +Precompiling MLDataDevicescuDNNExt...
      +   4865.1 ms  ✓ MLDataDevices → MLDataDevicescuDNNExt
      +  1 dependency successfully precompiled in 5 seconds. 109 already precompiled.
      +Precompiling LuxLibcuDNNExt...
      +   5822.7 ms  ✓ LuxLib → LuxLibcuDNNExt
      +  1 dependency successfully precompiled in 6 seconds. 176 already precompiled.
       Precompiling OrdinaryDiffEqTsit5...
      -   4612.7 ms  ✓ OrdinaryDiffEqCore
      -   1516.0 ms  ✓ OrdinaryDiffEqCore → OrdinaryDiffEqCoreEnzymeCoreExt
      -   7501.0 ms  ✓ OrdinaryDiffEqTsit5
      -  3 dependencies successfully precompiled in 14 seconds. 122 already precompiled.
      +    357.0 ms  ✓ SimpleUnPack
      +   4989.3 ms  ✓ OrdinaryDiffEqCore
      +   1538.3 ms  ✓ OrdinaryDiffEqCore → OrdinaryDiffEqCoreEnzymeCoreExt
      +   7349.3 ms  ✓ OrdinaryDiffEqTsit5
      +  4 dependencies successfully precompiled in 14 seconds. 121 already precompiled.
      +Precompiling OneHotArrays...
      +    975.6 ms  ✓ OneHotArrays
      +  1 dependency successfully precompiled in 1 seconds. 28 already precompiled.
      +Precompiling MLDataDevicesOneHotArraysExt...
      +    730.6 ms  ✓ MLDataDevices → MLDataDevicesOneHotArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 35 already precompiled.
       Precompiling MLDatasets...
      -    359.3 ms  ✓ Glob
      -    400.3 ms  ✓ WorkerUtilities
      -    428.1 ms  ✓ BufferedStreams
      -    324.8 ms  ✓ SimpleBufferStream
      -    302.1 ms  ✓ PackageExtensionCompat
      -    563.8 ms  ✓ URIs
      -    340.6 ms  ✓ BitFlags
      -    622.7 ms  ✓ GZip
      -    686.1 ms  ✓ ConcurrentUtilities
      -    611.1 ms  ✓ ZipFile
      -    589.6 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      -    634.9 ms  ✓ Accessors → UnitfulExt
      -    335.9 ms  ✓ InternedStrings
      -    478.0 ms  ✓ ExceptionUnwrapping
      -   2085.3 ms  ✓ ColorVectorSpace
      -   1449.8 ms  ✓ MPICH_jll
      -    858.5 ms  ✓ WeakRefStrings
      -    576.8 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      -   2333.7 ms  ✓ AtomsBase
      -   1187.5 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -    458.7 ms  ✓ StridedViews
      -   2056.4 ms  ✓ OpenSSL
      -   1493.2 ms  ✓ NPZ
      -  11070.4 ms  ✓ JSON3
      -   3396.8 ms  ✓ ColorSchemes
      -   1577.2 ms  ✓ HDF5_jll
      -  19765.7 ms  ✓ ImageCore
      -   2412.2 ms  ✓ Chemfiles
      -   2510.7 ms  ✓ Pickle
      -  19761.2 ms  ✓ CSV
      -  34760.8 ms  ✓ JLD2
      -   2154.8 ms  ✓ ImageBase
      -   2026.5 ms  ✓ ImageShow
      -   7672.4 ms  ✓ HDF5
      -   2433.0 ms  ✓ MAT
      -  19388.6 ms  ✓ HTTP
      -   1917.2 ms  ✓ FileIO → HTTPExt
      -   3092.3 ms  ✓ DataDeps
      -   9295.4 ms  ✓ MLDatasets
      -  39 dependencies successfully precompiled in 67 seconds. 160 already precompiled.
      +    381.4 ms  ✓ Glob
      +    427.4 ms  ✓ WorkerUtilities
      +    486.3 ms  ✓ BufferedStreams
      +    372.1 ms  ✓ SimpleBufferStream
      +    621.4 ms  ✓ URIs
      +    504.6 ms  ✓ CodecZlib
      +    366.5 ms  ✓ PackageExtensionCompat
      +    386.4 ms  ✓ BitFlags
      +    698.9 ms  ✓ GZip
      +    733.9 ms  ✓ ConcurrentUtilities
      +    630.0 ms  ✓ ZipFile
      +    827.1 ms  ✓ StructTypes
      +   1055.6 ms  ✓ MbedTLS
      +    590.9 ms  ✓ MPIPreferences
      +    369.7 ms  ✓ InternedStrings
      +    516.6 ms  ✓ ExceptionUnwrapping
      +   2347.6 ms  ✓ PeriodicTable
      +   2960.5 ms  ✓ UnitfulAtomic
      +    628.9 ms  ✓ Chemfiles_jll
      +    511.7 ms  ✓ MicrosoftMPI_jll
      +    664.1 ms  ✓ libaec_jll
      +    573.5 ms  ✓ StringEncodings
      +    781.0 ms  ✓ WeakRefStrings
      +   1457.6 ms  ✓ Transducers → TransducersDataFramesExt
      +   1964.6 ms  ✓ ImageShow
      +   1669.9 ms  ✓ BangBang → BangBangDataFramesExt
      +    470.1 ms  ✓ StridedViews
      +   1537.2 ms  ✓ NPZ
      +   1928.1 ms  ✓ OpenSSL
      +   1157.1 ms  ✓ OpenMPI_jll
      +   1512.1 ms  ✓ MPICH_jll
      +   6401.7 ms  ✓ MLUtils
      +   1176.0 ms  ✓ MPItrampoline_jll
      +   2302.3 ms  ✓ AtomsBase
      +   2416.3 ms  ✓ Pickle
      +   9985.4 ms  ✓ JSON3
      +   1541.5 ms  ✓ HDF5_jll
      +   2277.5 ms  ✓ Chemfiles
      +   7572.7 ms  ✓ HDF5
      +  17252.6 ms  ✓ CSV
      +   2666.7 ms  ✓ MAT
      +  19080.0 ms  ✓ HTTP
      +   2096.1 ms  ✓ FileIO → HTTPExt
      +   3299.8 ms  ✓ DataDeps
      +   9534.9 ms  ✓ MLDatasets
      +  45 dependencies successfully precompiled in 51 seconds. 154 already precompiled.
       Precompiling TransducersLazyArraysExt...
      -   1239.6 ms  ✓ Transducers → TransducersLazyArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.

      Loading MNIST

      julia
      function loadmnist(batchsize, train_split)
      +   1227.9 ms  ✓ Transducers → TransducersLazyArraysExt
      +  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.
      +Precompiling MLDataDevicesMLUtilsExt...
      +   1745.8 ms  ✓ MLDataDevices → MLDataDevicesMLUtilsExt
      +  1 dependency successfully precompiled in 2 seconds. 102 already precompiled.
      +Precompiling LuxMLUtilsExt...
      +   2170.9 ms  ✓ Lux → LuxMLUtilsExt
      +  1 dependency successfully precompiled in 3 seconds. 167 already precompiled.

      Loading MNIST

      julia
      function loadmnist(batchsize, train_split)
           # Load MNIST: Only 1500 for demonstration purposes
           N = parse(Bool, get(ENV, "CI", "false")) ? 1500 : nothing
           dataset = MNIST(; split=:train)
      @@ -289,47 +391,47 @@
           end
       end
       
      -train(NeuralODECompact)
      [1/9]	Time 141.6985s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.6474s	Training Accuracy: 58.22222%	Test Accuracy: 57.33333%
      -[3/9]	Time 0.7605s	Training Accuracy: 67.85185%	Test Accuracy: 70.66667%
      -[4/9]	Time 0.5718s	Training Accuracy: 74.29630%	Test Accuracy: 74.66667%
      -[5/9]	Time 0.6067s	Training Accuracy: 76.29630%	Test Accuracy: 76.00000%
      -[6/9]	Time 0.5708s	Training Accuracy: 78.74074%	Test Accuracy: 80.00000%
      -[7/9]	Time 0.5745s	Training Accuracy: 82.22222%	Test Accuracy: 81.33333%
      -[8/9]	Time 0.5886s	Training Accuracy: 83.62963%	Test Accuracy: 83.33333%
      -[9/9]	Time 0.5742s	Training Accuracy: 85.18519%	Test Accuracy: 82.66667%
      julia
      train(NeuralODE)
      [1/9]	Time 35.3338s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.5750s	Training Accuracy: 57.18519%	Test Accuracy: 57.33333%
      -[3/9]	Time 0.7617s	Training Accuracy: 68.37037%	Test Accuracy: 68.00000%
      -[4/9]	Time 0.5737s	Training Accuracy: 73.77778%	Test Accuracy: 75.33333%
      -[5/9]	Time 0.5843s	Training Accuracy: 76.14815%	Test Accuracy: 77.33333%
      -[6/9]	Time 0.8359s	Training Accuracy: 79.48148%	Test Accuracy: 80.66667%
      -[7/9]	Time 0.5807s	Training Accuracy: 81.25926%	Test Accuracy: 80.66667%
      -[8/9]	Time 0.5803s	Training Accuracy: 83.40741%	Test Accuracy: 82.66667%
      -[9/9]	Time 0.5896s	Training Accuracy: 84.81481%	Test Accuracy: 82.00000%

We can also change the sensealg and train the model! GaussAdjoint allows you to use an arbitrary parameter structure, not just a flat vector (ComponentArray).

      julia
      train(NeuralODE; sensealg=GaussAdjoint(; autojacvec=ZygoteVJP()), use_named_tuple=true)
      [1/9]	Time 42.9532s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.5744s	Training Accuracy: 58.44444%	Test Accuracy: 58.00000%
      -[3/9]	Time 0.5544s	Training Accuracy: 66.96296%	Test Accuracy: 68.00000%
      -[4/9]	Time 0.5532s	Training Accuracy: 72.44444%	Test Accuracy: 73.33333%
      -[5/9]	Time 0.7753s	Training Accuracy: 76.37037%	Test Accuracy: 76.00000%
      -[6/9]	Time 0.5458s	Training Accuracy: 78.81481%	Test Accuracy: 79.33333%
      -[7/9]	Time 0.5535s	Training Accuracy: 80.51852%	Test Accuracy: 81.33333%
      -[8/9]	Time 0.5598s	Training Accuracy: 82.74074%	Test Accuracy: 83.33333%
      -[9/9]	Time 0.5679s	Training Accuracy: 85.25926%	Test Accuracy: 82.66667%

But remember, some AD backends like ReverseDiff are not GPU compatible, so we run those on the CPU. For a model of this size, you will notice that training on the CPU is significantly slower than on the GPU.

      julia
      train(NeuralODE; sensealg=InterpolatingAdjoint(; autojacvec=ReverseDiffVJP()), cpu=true)
      [1/9]	Time 109.8302s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 17.2264s	Training Accuracy: 58.74074%	Test Accuracy: 56.66667%
      -[3/9]	Time 17.9930s	Training Accuracy: 69.92593%	Test Accuracy: 71.33333%
      -[4/9]	Time 15.8127s	Training Accuracy: 72.81481%	Test Accuracy: 74.00000%
      -[5/9]	Time 14.3418s	Training Accuracy: 76.37037%	Test Accuracy: 78.66667%
      -[6/9]	Time 17.5343s	Training Accuracy: 79.03704%	Test Accuracy: 80.66667%
      -[7/9]	Time 16.2169s	Training Accuracy: 81.62963%	Test Accuracy: 80.66667%
      -[8/9]	Time 17.0311s	Training Accuracy: 83.33333%	Test Accuracy: 80.00000%
      -[9/9]	Time 15.2291s	Training Accuracy: 85.40741%	Test Accuracy: 82.00000%

      For completeness, let's also test out discrete sensitivities!

      julia
      train(NeuralODE; sensealg=ReverseDiffAdjoint(), cpu=true)
      [1/9]	Time 54.7833s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 28.2044s	Training Accuracy: 58.66667%	Test Accuracy: 57.33333%
      -[3/9]	Time 28.1614s	Training Accuracy: 69.70370%	Test Accuracy: 71.33333%
      -[4/9]	Time 28.1398s	Training Accuracy: 72.74074%	Test Accuracy: 74.00000%
      -[5/9]	Time 24.6311s	Training Accuracy: 76.14815%	Test Accuracy: 78.66667%
      -[6/9]	Time 26.7294s	Training Accuracy: 79.03704%	Test Accuracy: 80.66667%
      -[7/9]	Time 23.7177s	Training Accuracy: 81.55556%	Test Accuracy: 80.66667%
      -[8/9]	Time 23.6944s	Training Accuracy: 83.40741%	Test Accuracy: 80.00000%
      -[9/9]	Time 23.0376s	Training Accuracy: 85.25926%	Test Accuracy: 81.33333%

      Alternate Implementation using Stateful Layer

Starting with v0.5.5, Lux provides StatefulLuxLayer, which can be used to avoid the boxing of st. Using the @compact API avoids this problem entirely.

      julia
      struct StatefulNeuralODE{M <: Lux.AbstractLuxLayer, So, T, K} <:
      +train(NeuralODECompact)
      [1/9]	Time 145.6055s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 0.6923s	Training Accuracy: 58.22222%	Test Accuracy: 57.33333%
      +[3/9]	Time 0.5880s	Training Accuracy: 67.85185%	Test Accuracy: 70.66667%
      +[4/9]	Time 0.7365s	Training Accuracy: 74.29630%	Test Accuracy: 74.66667%
      +[5/9]	Time 0.5768s	Training Accuracy: 76.29630%	Test Accuracy: 76.00000%
      +[6/9]	Time 0.7812s	Training Accuracy: 78.74074%	Test Accuracy: 80.00000%
      +[7/9]	Time 0.5771s	Training Accuracy: 82.22222%	Test Accuracy: 81.33333%
      +[8/9]	Time 0.5847s	Training Accuracy: 83.62963%	Test Accuracy: 83.33333%
      +[9/9]	Time 0.5776s	Training Accuracy: 85.18519%	Test Accuracy: 82.66667%
      julia
      train(NeuralODE)
      [1/9]	Time 34.0779s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 0.6949s	Training Accuracy: 57.18519%	Test Accuracy: 57.33333%
      +[3/9]	Time 0.5889s	Training Accuracy: 68.37037%	Test Accuracy: 68.00000%
      +[4/9]	Time 0.7887s	Training Accuracy: 73.77778%	Test Accuracy: 75.33333%
      +[5/9]	Time 0.5735s	Training Accuracy: 76.14815%	Test Accuracy: 77.33333%
      +[6/9]	Time 0.5772s	Training Accuracy: 79.48148%	Test Accuracy: 80.66667%
      +[7/9]	Time 0.8112s	Training Accuracy: 81.25926%	Test Accuracy: 80.66667%
      +[8/9]	Time 0.5766s	Training Accuracy: 83.40741%	Test Accuracy: 82.66667%
      +[9/9]	Time 0.5746s	Training Accuracy: 84.81481%	Test Accuracy: 82.00000%

We can also change the sensealg and train the model! GaussAdjoint allows you to use an arbitrary parameter structure, not just a flat vector (ComponentArray).

      julia
      train(NeuralODE; sensealg=GaussAdjoint(; autojacvec=ZygoteVJP()), use_named_tuple=true)
      [1/9]	Time 42.2203s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 0.5489s	Training Accuracy: 58.44444%	Test Accuracy: 58.00000%
      +[3/9]	Time 0.7084s	Training Accuracy: 66.96296%	Test Accuracy: 68.00000%
      +[4/9]	Time 0.5476s	Training Accuracy: 72.44444%	Test Accuracy: 73.33333%
      +[5/9]	Time 0.7615s	Training Accuracy: 76.37037%	Test Accuracy: 76.00000%
      +[6/9]	Time 0.5601s	Training Accuracy: 78.81481%	Test Accuracy: 79.33333%
      +[7/9]	Time 0.5529s	Training Accuracy: 80.51852%	Test Accuracy: 81.33333%
      +[8/9]	Time 0.7865s	Training Accuracy: 82.74074%	Test Accuracy: 83.33333%
      +[9/9]	Time 0.5436s	Training Accuracy: 85.25926%	Test Accuracy: 82.66667%

But remember, some AD backends like ReverseDiff are not GPU compatible, so we run those on the CPU. For a model of this size, you will notice that training on the CPU is significantly slower than on the GPU.

      julia
      train(NeuralODE; sensealg=InterpolatingAdjoint(; autojacvec=ReverseDiffVJP()), cpu=true)
      [1/9]	Time 98.5688s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 15.5218s	Training Accuracy: 58.74074%	Test Accuracy: 56.66667%
      +[3/9]	Time 17.8079s	Training Accuracy: 69.92593%	Test Accuracy: 71.33333%
      +[4/9]	Time 4.9542s	Training Accuracy: 72.81481%	Test Accuracy: 74.00000%
      +[5/9]	Time 4.8191s	Training Accuracy: 76.37037%	Test Accuracy: 78.66667%
      +[6/9]	Time 5.0913s	Training Accuracy: 79.03704%	Test Accuracy: 80.66667%
      +[7/9]	Time 6.6715s	Training Accuracy: 81.62963%	Test Accuracy: 80.66667%
      +[8/9]	Time 15.3470s	Training Accuracy: 83.33333%	Test Accuracy: 80.00000%
      +[9/9]	Time 14.9399s	Training Accuracy: 85.40741%	Test Accuracy: 82.00000%

      For completeness, let's also test out discrete sensitivities!

      julia
      train(NeuralODE; sensealg=ReverseDiffAdjoint(), cpu=true)
      [1/9]	Time 53.4534s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 28.3519s	Training Accuracy: 58.66667%	Test Accuracy: 57.33333%
      +[3/9]	Time 27.1995s	Training Accuracy: 69.70370%	Test Accuracy: 71.33333%
      +[4/9]	Time 27.9635s	Training Accuracy: 72.74074%	Test Accuracy: 74.00000%
      +[5/9]	Time 28.7721s	Training Accuracy: 76.14815%	Test Accuracy: 78.66667%
      +[6/9]	Time 28.9540s	Training Accuracy: 79.03704%	Test Accuracy: 80.66667%
      +[7/9]	Time 29.1487s	Training Accuracy: 81.55556%	Test Accuracy: 80.66667%
      +[8/9]	Time 27.9752s	Training Accuracy: 83.40741%	Test Accuracy: 80.00000%
      +[9/9]	Time 26.8059s	Training Accuracy: 85.25926%	Test Accuracy: 81.33333%

      Alternate Implementation using Stateful Layer

Starting with v0.5.5, Lux provides StatefulLuxLayer, which can be used to avoid the boxing of st. Using the @compact API avoids this problem entirely.

      julia
      struct StatefulNeuralODE{M <: Lux.AbstractLuxLayer, So, T, K} <:
              Lux.AbstractLuxWrapperLayer{:model}
           model::M
           solver::So
      @@ -347,20 +449,20 @@
           dudt(u, p, t) = st_model(u, p)
           prob = ODEProblem{false}(ODEFunction{false}(dudt), x, n.tspan, ps)
           return solve(prob, n.solver; n.kwargs...), st_model.st
      -end

      Train the new Stateful Neural ODE

      julia
      train(StatefulNeuralODE)
      [1/9]	Time 38.7067s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      -[2/9]	Time 0.5533s	Training Accuracy: 58.22222%	Test Accuracy: 55.33333%
      -[3/9]	Time 0.8202s	Training Accuracy: 68.29630%	Test Accuracy: 68.66667%
      -[4/9]	Time 0.5436s	Training Accuracy: 73.11111%	Test Accuracy: 76.00000%
      -[5/9]	Time 0.5437s	Training Accuracy: 75.92593%	Test Accuracy: 76.66667%
      -[6/9]	Time 0.5719s	Training Accuracy: 78.96296%	Test Accuracy: 80.66667%
      -[7/9]	Time 0.8741s	Training Accuracy: 80.81481%	Test Accuracy: 81.33333%
      -[8/9]	Time 0.5530s	Training Accuracy: 83.25926%	Test Accuracy: 82.66667%
      -[9/9]	Time 0.5426s	Training Accuracy: 84.59259%	Test Accuracy: 82.00000%

We might not see a significant difference in training time, but let us investigate the type stability of the layers.

      Type Stability

      julia
      model, ps, st = create_model(NeuralODE)
      +end

      Train the new Stateful Neural ODE

      julia
      train(StatefulNeuralODE)
      [1/9]	Time 37.5871s	Training Accuracy: 37.48148%	Test Accuracy: 40.00000%
      +[2/9]	Time 0.8049s	Training Accuracy: 58.22222%	Test Accuracy: 55.33333%
      +[3/9]	Time 0.5681s	Training Accuracy: 68.29630%	Test Accuracy: 68.66667%
      +[4/9]	Time 0.5527s	Training Accuracy: 73.11111%	Test Accuracy: 76.00000%
      +[5/9]	Time 0.8499s	Training Accuracy: 75.92593%	Test Accuracy: 76.66667%
      +[6/9]	Time 0.5449s	Training Accuracy: 78.96296%	Test Accuracy: 80.66667%
      +[7/9]	Time 0.5404s	Training Accuracy: 80.81481%	Test Accuracy: 81.33333%
      +[8/9]	Time 0.5494s	Training Accuracy: 83.25926%	Test Accuracy: 82.66667%
      +[9/9]	Time 0.8664s	Training Accuracy: 84.59259%	Test Accuracy: 82.00000%

We might not see a significant difference in training time, but let us investigate the type stability of the layers.

      Type Stability

      julia
      model, ps, st = create_model(NeuralODE)
       
       model_stateful, ps_stateful, st_stateful = create_model(StatefulNeuralODE)
       
       x = gpu_device()(ones(Float32, 28, 28, 1, 3));

      NeuralODE is not type stable due to the boxing of st

      julia
      @code_warntype model(x, ps, st)
      MethodInstance for (::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".NeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing})(::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}, ::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)))}}}, ::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, layer_4::@NamedTuple{}, layer_5::@NamedTuple{}})
      -  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/src/layers/containers.jl:480
      +  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/src/layers/containers.jl:480
       Arguments
         c::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".NeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}
         x::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}
      @@ -371,7 +473,7 @@
       │   %2 = Base.getproperty(c, :layers)::@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".NeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}
       │   %3 = (%1)(%2, x, ps, st)::TUPLE{CUDA.CUARRAY{FLOAT32, 2, CUDA.DEVICEMEMORY}, NAMEDTUPLE{(:LAYER_1, :LAYER_2, :LAYER_3, :LAYER_4, :LAYER_5), <:TUPLE{@NAMEDTUPLE{}, @NAMEDTUPLE{}, ANY, @NAMEDTUPLE{}, @NAMEDTUPLE{}}}}
       └──      return %3

      We avoid the problem entirely by using StatefulNeuralODE

      julia
      @code_warntype model_stateful(x, ps_stateful, st_stateful)
      MethodInstance for (::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".StatefulNeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing})(::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}, ::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)))}}}, ::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, layer_4::@NamedTuple{}, 
layer_5::@NamedTuple{}})
      -  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/src/layers/containers.jl:480
      +  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/src/layers/containers.jl:480
       Arguments
         c::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Main.var"##230".StatefulNeuralODE{Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}, OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, Tuple{Float32, Float32}, Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}
         x::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}
      @@ -384,7 +486,7 @@
       └──      return %3

Note that we still recommend using this layer internally and not exposing it as the default API to users.
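To make the wrapper pattern concrete, here is a minimal, self-contained sketch of using StatefulLuxLayer directly on a plain layer — the layer sizes and variable names here are illustrative, not taken from this tutorial:

```julia
using Lux, Random

# Illustrative only: wrap a layer so its state travels inside the wrapper
# instead of being closed over (and boxed) by the calling function.
model = Dense(2 => 3, tanh)
ps, st = Lux.setup(Random.default_rng(), model)

smodel = StatefulLuxLayer{true}(model, ps, st)
y = smodel(randn(Float32, 2, 4))   # returns only the output; state lives in smodel.st
```

Because the call returns only the output array (the updated state is stored in the wrapper), closures like `dudt(u, p, t) = st_model(u, p)` above stay type stable.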

      Finally checking the compact model

      julia
      model_compact, ps_compact, st_compact = create_model(NeuralODECompact)
       
       @code_warntype model_compact(x, ps_compact, st_compact)
      MethodInstance for (::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.CompactLuxLayer{:₋₋₋no_special_dispatch₋₋₋, Main.var"##230".var"#2#3", Nothing, @NamedTuple{model::Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}}, Lux.CompactMacroImpl.ValueStorage{@NamedTuple{}, @NamedTuple{solver::Returns{OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}}, tspan::Returns{Tuple{Float32, Float32}}}}, Tuple{Tuple{Symbol}, Tuple{Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing})(::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}, ::ComponentArrays.ComponentVector{Float32, CUDA.CuArray{Float32, 1, CUDA.DeviceMemory}, Tuple{ComponentArrays.Axis{(layer_1 = 1:0, layer_2 = ViewAxis(1:15700, Axis(weight = ViewAxis(1:15680, ShapedAxis((20, 784))), bias = 15681:15700)), layer_3 = ViewAxis(15701:16240, Axis(model = ViewAxis(1:540, Axis(layer_1 = ViewAxis(1:210, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 20))), bias = 201:210)), layer_2 = ViewAxis(211:320, Axis(weight = ViewAxis(1:100, ShapedAxis((10, 10))), bias = 101:110)), layer_3 = ViewAxis(321:540, Axis(weight = ViewAxis(1:200, ShapedAxis((20, 10))), bias = 201:220)))),)), layer_4 = 16241:16240, layer_5 = ViewAxis(16241:16450, Axis(weight = ViewAxis(1:200, ShapedAxis((10, 
20))), bias = 201:210)))}}}, ::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{model::@NamedTuple{layer_1::@NamedTuple{}, layer_2::@NamedTuple{}, layer_3::@NamedTuple{}}, solver::OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}, tspan::Tuple{Float32, Float32}, ₋₋₋kwargs₋₋₋::Lux.CompactMacroImpl.KwargsStorage{@NamedTuple{kwargs::Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::@NamedTuple{}, layer_5::@NamedTuple{}})
      -  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/src/layers/containers.jl:480
      +  from (c::Lux.Chain)(x, ps, st::NamedTuple) @ Lux /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/src/layers/containers.jl:480
       Arguments
         c::Lux.Chain{@NamedTuple{layer_1::Lux.FlattenLayer{Nothing}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.CompactLuxLayer{:₋₋₋no_special_dispatch₋₋₋, Main.var"##230".var"#2#3", Nothing, @NamedTuple{model::Lux.Chain{@NamedTuple{layer_1::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_2::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}, layer_3::Lux.Dense{typeof(tanh), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}}, Lux.CompactMacroImpl.ValueStorage{@NamedTuple{}, @NamedTuple{solver::Returns{OrdinaryDiffEqTsit5.Tsit5{typeof(OrdinaryDiffEqCore.trivial_limiter!), typeof(OrdinaryDiffEqCore.trivial_limiter!), Static.False}}, tspan::Returns{Tuple{Float32, Float32}}}}, Tuple{Tuple{Symbol}, Tuple{Base.Pairs{Symbol, Any, NTuple{5, Symbol}, @NamedTuple{save_everystep::Bool, reltol::Float32, abstol::Float32, save_start::Bool, sensealg::SciMLSensitivity.InterpolatingAdjoint{0, true, Val{:central}, SciMLSensitivity.ZygoteVJP}}}}}}, layer_4::Lux.WrappedFunction{Base.Fix1{typeof(Main.var"##230".diffeqsol_to_array), Int64}}, layer_5::Lux.Dense{typeof(identity), Int64, Int64, Nothing, Nothing, Static.True}}, Nothing}
         x::CUDA.CuArray{Float32, 4, CUDA.DeviceMemory}
      @@ -407,8 +509,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      -Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      +Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -446,15 +548,15 @@
       - CUDA_Runtime_jll: 0.15.5+0
       
       Toolchain:
      -- Julia: 1.11.2
      +- Julia: 1.11.3
       - LLVM: 16.0.6
       
       Environment:
       - JULIA_CUDA_HARD_MEMORY_LIMIT: 100%
       
       1 device:
      -  0: NVIDIA A100-PCIE-40GB MIG 1g.5gb (sm_80, 3.920 GiB / 4.750 GiB available)
      +  0: NVIDIA A100-PCIE-40GB MIG 1g.5gb (sm_80, 3.982 GiB / 4.750 GiB available)

      This page was generated using Literate.jl.
\ No newline at end of file
diff --git a/dev/tutorials/intermediate/2_BayesianNN.html b/dev/tutorials/intermediate/2_BayesianNN.html
index d1dc68d9f6..c25d095d32 100644
--- a/dev/tutorials/intermediate/2_BayesianNN.html
+++ b/dev/tutorials/intermediate/2_BayesianNN.html
@@ -5,15 +5,15 @@
 Bayesian Neural Network | Lux.jl Docs
@@ -34,493 +34,523 @@
 # Sampling progress
 Turing.setprogress!(true);
      Precompiling Lux...
      -    303.5 ms  ✓ Reexport
      -    395.0 ms  ✓ ConcreteStructs
      -    305.7 ms  ✓ SIMDTypes
      -    328.3 ms  ✓ IfElse
      -    340.6 ms  ✓ Future
      -    362.3 ms  ✓ CEnum
      -    366.1 ms  ✓ OpenLibm_jll
      -    366.1 ms  ✓ ArgCheck
      -    374.4 ms  ✓ ManualMemory
      -    448.0 ms  ✓ CompilerSupportLibraries_jll
      -    500.9 ms  ✓ Statistics
      -    538.3 ms  ✓ EnzymeCore
      -    558.0 ms  ✓ ADTypes
      -    444.0 ms  ✓ FastClosures
      -    441.7 ms  ✓ StaticArraysCore
      -    416.6 ms  ✓ NaNMath
      -    473.7 ms  ✓ ConstructionBase
      -    469.3 ms  ✓ CommonWorldInvalidations
      -    470.6 ms  ✓ Adapt
      -    858.7 ms  ✓ IrrationalConstants
      -    592.6 ms  ✓ OpenSpecFun_jll
      -    354.8 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      -    766.0 ms  ✓ ThreadingUtilities
      -    549.5 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      -    357.2 ms  ✓ ADTypes → ADTypesConstructionBaseExt
      -    359.7 ms  ✓ ConstructionBase → ConstructionBaseLinearAlgebraExt
      -    358.6 ms  ✓ EnzymeCore → AdaptExt
      -    376.0 ms  ✓ DiffResults
      -    436.2 ms  ✓ GPUArraysCore
      -    500.7 ms  ✓ ArrayInterface
      -    559.6 ms  ✓ LogExpFunctions
      -    750.7 ms  ✓ Static
      -    356.5 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      -    343.5 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      -   1718.2 ms  ✓ UnsafeAtomics
      -    384.4 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      -    583.8 ms  ✓ Functors
      -    381.4 ms  ✓ BitTwiddlingConvenienceFunctions
      -   1978.8 ms  ✓ MacroTools
      -    452.3 ms  ✓ Atomix
      -   2017.7 ms  ✓ Hwloc
      -    943.0 ms  ✓ CPUSummary
      -    766.9 ms  ✓ MLDataDevices
      -    597.3 ms  ✓ CommonSubexpressions
      -   1285.2 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      -   1066.4 ms  ✓ Optimisers
      -   1492.6 ms  ✓ StaticArrayInterface
      -    609.5 ms  ✓ PolyesterWeave
      -    594.2 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      -    397.3 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      -    395.3 ms  ✓ Optimisers → OptimisersAdaptExt
      -   1318.7 ms  ✓ Setfield
      -   1465.1 ms  ✓ DispatchDoctor
      -    445.8 ms  ✓ CloseOpenIntervals
      -    568.8 ms  ✓ LayoutPointers
      -    387.2 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      -   2415.8 ms  ✓ SpecialFunctions
      -    582.8 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      -    573.6 ms  ✓ DiffRules
      -    885.0 ms  ✓ StrideArraysCore
      -   1111.1 ms  ✓ LuxCore
      -    416.3 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      -    417.8 ms  ✓ LuxCore → LuxCoreFunctorsExt
      -    422.7 ms  ✓ LuxCore → LuxCoreSetfieldExt
      -    441.5 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      -    545.9 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      -    711.4 ms  ✓ Polyester
      -   1776.5 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      -   2539.2 ms  ✓ WeightInitializers
      -   5980.7 ms  ✓ StaticArrays
      -    877.2 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      -    572.5 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      -    575.4 ms  ✓ Adapt → AdaptStaticArraysExt
      -    587.3 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      -    597.1 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      -    644.7 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      -   3333.4 ms  ✓ ForwardDiff
      -    838.1 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      -   3101.5 ms  ✓ KernelAbstractions
      -    644.4 ms  ✓ KernelAbstractions → LinearAlgebraExt
      -    709.2 ms  ✓ KernelAbstractions → EnzymeExt
      -   5063.5 ms  ✓ NNlib
      -    803.9 ms  ✓ NNlib → NNlibEnzymeCoreExt
      -    907.5 ms  ✓ NNlib → NNlibForwardDiffExt
      -   5642.3 ms  ✓ LuxLib
      -   8885.5 ms  ✓ Lux
      -  86 dependencies successfully precompiled in 32 seconds. 23 already precompiled.
      +    322.6 ms  ✓ Reexport
      +    321.7 ms  ✓ SIMDTypes
      +    384.9 ms  ✓ ConcreteStructs
      +    349.2 ms  ✓ Future
      +    366.3 ms  ✓ CEnum
      +    373.7 ms  ✓ OpenLibm_jll
      +    373.7 ms  ✓ ManualMemory
      +    379.1 ms  ✓ ArgCheck
      +    452.5 ms  ✓ CompilerSupportLibraries_jll
      +    458.5 ms  ✓ Requires
      +    516.7 ms  ✓ Statistics
      +    537.8 ms  ✓ EnzymeCore
      +    598.6 ms  ✓ ADTypes
      +    324.3 ms  ✓ IfElse
      +    322.4 ms  ✓ CommonWorldInvalidations
      +    339.3 ms  ✓ FastClosures
      +    380.8 ms  ✓ StaticArraysCore
      +    430.2 ms  ✓ ConstructionBase
      +    856.6 ms  ✓ IrrationalConstants
      +    441.3 ms  ✓ NaNMath
      +    541.8 ms  ✓ Compat
      +    474.3 ms  ✓ JLLWrappers
      +    407.1 ms  ✓ Adapt
      +    368.9 ms  ✓ ADTypes → ADTypesEnzymeCoreExt
      +    616.7 ms  ✓ CpuId
      +    618.5 ms  ✓ DocStringExtensions
      +    368.6 ms  ✓ ConstructionBase → ConstructionBaseLinearAlgebraExt
      +    366.7 ms  ✓ ADTypes → ADTypesConstructionBaseExt
      +    388.7 ms  ✓ DiffResults
      +    791.0 ms  ✓ ThreadingUtilities
      +    385.5 ms  ✓ EnzymeCore → AdaptExt
      +    440.3 ms  ✓ Compat → CompatLinearAlgebraExt
      +    762.0 ms  ✓ Static
      +    460.6 ms  ✓ GPUArraysCore
      +    523.4 ms  ✓ ArrayInterface
      +    575.8 ms  ✓ Hwloc_jll
      +    606.6 ms  ✓ OpenSpecFun_jll
      +    577.7 ms  ✓ LogExpFunctions
      +   1705.6 ms  ✓ UnsafeAtomics
      +    351.9 ms  ✓ ArrayInterface → ArrayInterfaceGPUArraysCoreExt
      +    400.8 ms  ✓ ArrayInterface → ArrayInterfaceStaticArraysCoreExt
      +    483.0 ms  ✓ BitTwiddlingConvenienceFunctions
      +   1944.0 ms  ✓ MacroTools
      +    678.4 ms  ✓ Functors
      +    466.5 ms  ✓ Atomix
      +   1120.8 ms  ✓ CPUSummary
      +   1230.2 ms  ✓ ChainRulesCore
      +    648.4 ms  ✓ CommonSubexpressions
      +    778.8 ms  ✓ MLDataDevices
      +   1497.7 ms  ✓ StaticArrayInterface
      +    388.8 ms  ✓ ADTypes → ADTypesChainRulesCoreExt
      +    388.6 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesCoreExt
      +    595.1 ms  ✓ PolyesterWeave
      +   1335.2 ms  ✓ Setfield
      +    629.1 ms  ✓ MLDataDevices → MLDataDevicesChainRulesCoreExt
      +    473.3 ms  ✓ CloseOpenIntervals
      +   1579.7 ms  ✓ DispatchDoctor
      +    579.2 ms  ✓ LayoutPointers
      +   2021.7 ms  ✓ Hwloc
      +   1092.3 ms  ✓ Optimisers
      +   1282.9 ms  ✓ LogExpFunctions → LogExpFunctionsChainRulesCoreExt
      +    409.8 ms  ✓ DispatchDoctor → DispatchDoctorEnzymeCoreExt
      +   2408.6 ms  ✓ SpecialFunctions
      +    402.8 ms  ✓ Optimisers → OptimisersEnzymeCoreExt
      +    402.7 ms  ✓ Optimisers → OptimisersAdaptExt
      +    606.5 ms  ✓ DispatchDoctor → DispatchDoctorChainRulesCoreExt
      +    966.4 ms  ✓ StrideArraysCore
      +    591.9 ms  ✓ DiffRules
      +   1152.0 ms  ✓ LuxCore
      +    427.9 ms  ✓ LuxCore → LuxCoreEnzymeCoreExt
      +    430.9 ms  ✓ LuxCore → LuxCoreFunctorsExt
      +    441.5 ms  ✓ LuxCore → LuxCoreSetfieldExt
      +    444.7 ms  ✓ LuxCore → LuxCoreMLDataDevicesExt
      +    682.8 ms  ✓ Polyester
      +    582.6 ms  ✓ LuxCore → LuxCoreChainRulesCoreExt
      +   1634.6 ms  ✓ SpecialFunctions → SpecialFunctionsChainRulesCoreExt
      +   2545.3 ms  ✓ WeightInitializers
      +   6198.5 ms  ✓ StaticArrays
      +    913.3 ms  ✓ WeightInitializers → WeightInitializersChainRulesCoreExt
      +    570.4 ms  ✓ Adapt → AdaptStaticArraysExt
      +    602.1 ms  ✓ ConstructionBase → ConstructionBaseStaticArraysExt
      +    605.5 ms  ✓ StaticArrays → StaticArraysChainRulesCoreExt
      +    620.0 ms  ✓ StaticArrays → StaticArraysStatisticsExt
      +    625.7 ms  ✓ StaticArrayInterface → StaticArrayInterfaceStaticArraysExt
      +   3320.3 ms  ✓ ForwardDiff
      +    845.8 ms  ✓ ForwardDiff → ForwardDiffStaticArraysExt
      +   3112.0 ms  ✓ KernelAbstractions
      +    641.7 ms  ✓ KernelAbstractions → LinearAlgebraExt
      +    693.2 ms  ✓ KernelAbstractions → EnzymeExt
      +   5189.7 ms  ✓ NNlib
      +    804.7 ms  ✓ NNlib → NNlibEnzymeCoreExt
      +    893.4 ms  ✓ NNlib → NNlibForwardDiffExt
      +   5788.3 ms  ✓ LuxLib
      +   9017.6 ms  ✓ Lux
      +  94 dependencies successfully precompiled in 32 seconds. 15 already precompiled.
       Precompiling Turing...
      -    298.3 ms  ✓ IteratorInterfaceExtensions
      -    308.6 ms  ✓ NaturalSort
      -    310.6 ms  ✓ UnPack
      -    323.4 ms  ✓ SimpleUnPack
      -    344.6 ms  ✓ RangeArrays
      -    360.8 ms  ✓ LaTeXStrings
      -    369.6 ms  ✓ ScientificTypesBase
      -    368.0 ms  ✓ ExprTools
      -    366.3 ms  ✓ StatsAPI
      -    416.3 ms  ✓ ChangesOfVariables
      -    444.3 ms  ✓ PositiveFactorizations
      -    544.6 ms  ✓ AbstractFFTs
      -    300.6 ms  ✓ CommonSolve
      -    287.3 ms  ✓ DataValueInterfaces
      -    669.0 ms  ✓ FunctionWrappers
      -    400.3 ms  ✓ InverseFunctions
      -    327.2 ms  ✓ EnumX
      -    410.4 ms  ✓ SuiteSparse_jll
      -    334.9 ms  ✓ RealDot
      -    829.8 ms  ✓ Combinatorics
      -    809.3 ms  ✓ InitialValues
      -    502.5 ms  ✓ IterTools
      -    480.4 ms  ✓ OrderedCollections
      -    553.1 ms  ✓ Serialization
      -    363.5 ms  ✓ Zlib_jll
      -    951.8 ms  ✓ OffsetArrays
      -    337.4 ms  ✓ CompositionsBase
      -    312.6 ms  ✓ PtrArrays
      -    331.7 ms  ✓ DefineSingletons
      -    452.6 ms  ✓ IntervalSets
      -    347.9 ms  ✓ Ratios
      -    511.0 ms  ✓ AbstractTrees
      -    343.3 ms  ✓ InvertedIndices
      -    343.8 ms  ✓ DataAPI
      -    434.8 ms  ✓ DelimitedFiles
      -    949.3 ms  ✓ FillArrays
      -    907.8 ms  ✓ RandomNumbers
      -    447.0 ms  ✓ LRUCache
      -    425.8 ms  ✓ ProgressLogging
      -    390.4 ms  ✓ MappedArrays
      -    376.3 ms  ✓ SciMLStructures
      -    490.4 ms  ✓ LoggingExtras
      -    546.5 ms  ✓ Rmath_jll
      -    324.1 ms  ✓ TableTraits
      -    584.1 ms  ✓ FiniteDiff
      -    597.0 ms  ✓ oneTBB_jll
      -    838.2 ms  ✓ DifferentiationInterface
      -    741.9 ms  ✓ LogDensityProblems
      -    386.1 ms  ✓ StatisticalTraits
      -   1060.4 ms  ✓ Crayons
      -    455.3 ms  ✓ LogExpFunctions → LogExpFunctionsChangesOfVariablesExt
      -   1024.5 ms  ✓ Baselet
      -    345.4 ms  ✓ FunctionWrappersWrappers
      -    396.5 ms  ✓ InverseFunctions → InverseFunctionsDatesExt
      -    433.1 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      -    445.7 ms  ✓ LogExpFunctions → LogExpFunctionsInverseFunctionsExt
      -    394.0 ms  ✓ ChangesOfVariables → ChangesOfVariablesInverseFunctionsExt
      -    412.2 ms  ✓ Parameters
      -   1048.7 ms  ✓ ZygoteRules
      -   1122.5 ms  ✓ HypergeometricFunctions
      -    381.4 ms  ✓ RuntimeGeneratedFunctions
      -   1441.9 ms  ✓ RecipesBase
      -    396.5 ms  ✓ OffsetArrays → OffsetArraysAdaptExt
      -    379.8 ms  ✓ CompositionsBase → CompositionsBaseInverseFunctionsExt
      -    569.5 ms  ✓ FFTW_jll
      -    595.5 ms  ✓ L_BFGS_B_jll
      -    443.5 ms  ✓ AliasTables
      -    356.3 ms  ✓ IntervalSets → IntervalSetsRandomExt
      -    370.3 ms  ✓ IntervalSets → IntervalSetsStatisticsExt
      -   1735.1 ms  ✓ StringManipulation
      -    352.6 ms  ✓ ConstructionBase → ConstructionBaseIntervalSetsExt
      -    377.2 ms  ✓ LeftChildRightSiblingTrees
      -    389.3 ms  ✓ FillArrays → FillArraysStatisticsExt
      -    421.2 ms  ✓ Missings
      -    360.9 ms  ✓ LRUCache → SerializationExt
      -    640.7 ms  ✓ Libtask
      -    412.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      -    400.4 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      -   1255.9 ms  ✓ IntelOpenMP_jll
      -    813.0 ms  ✓ Random123
      -   1668.3 ms  ✓ DataStructures
      -    788.9 ms  ✓ Rmath
      -    589.3 ms  ✓ FiniteDiff → FiniteDiffStaticArraysExt
      -    602.9 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      -   1700.0 ms  ✓ Distributed
      -    776.0 ms  ✓ Tables
      -    504.2 ms  ✓ LogDensityProblemsAD
      -    807.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      -    499.3 ms  ✓ LBFGSB
      -    553.2 ms  ✓ IntervalSets → IntervalSetsRecipesBaseExt
      -    486.9 ms  ✓ SortingAlgorithms
      -    728.6 ms  ✓ MLJModelInterface
      -    648.4 ms  ✓ TerminalLoggers
      -    728.5 ms  ✓ AxisArrays
      -    641.1 ms  ✓ SharedArrays
      -    968.9 ms  ✓ QuadGK
      -    504.4 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADADTypesExt
      -    770.8 ms  ✓ StructArrays
      -    881.7 ms  ✓ ProgressMeter
      -   1283.9 ms  ✓ MKL_jll
      -   2882.4 ms  ✓ Test
      -    703.4 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADForwardDiffExt
      -   1045.4 ms  ✓ NLSolversBase
      -    483.0 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADDifferentiationInterfaceExt
      -    390.7 ms  ✓ StructArrays → StructArraysAdaptExt
      -    419.5 ms  ✓ StructArrays → StructArraysLinearAlgebraExt
      -    327.9 ms  ✓ InplaceOps
      -    455.6 ms  ✓ ConsoleProgressMonitor
      -   1712.3 ms  ✓ StatsFuns
      -    672.5 ms  ✓ StructArrays → StructArraysStaticArraysExt
      -    713.4 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      -   2139.0 ms  ✓ Accessors
      -    575.4 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      -   3812.8 ms  ✓ SparseArrays
      -    934.4 ms  ✓ ChangesOfVariables → ChangesOfVariablesTestExt
      -   1121.7 ms  ✓ SplittablesBase
      -    461.9 ms  ✓ Accessors → StructArraysExt
      -    660.4 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      -    616.3 ms  ✓ Accessors → TestExt
      -   5051.1 ms  ✓ Tracker
      -    653.3 ms  ✓ Accessors → StaticArraysExt
      -    778.3 ms  ✓ Accessors → LinearAlgebraExt
      -    631.3 ms  ✓ DensityInterface
      -    858.3 ms  ✓ Accessors → IntervalSetsExt
      -   1436.4 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      -    628.2 ms  ✓ Statistics → SparseArraysExt
      -    692.8 ms  ✓ WoodburyMatrices
      -    608.9 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      -    628.3 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      -    583.8 ms  ✓ SuiteSparse
      -   1557.7 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      -    615.5 ms  ✓ FiniteDiff → FiniteDiffSparseArraysExt
      -   1840.5 ms  ✓ LineSearches
      -    620.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      -    675.1 ms  ✓ FillArrays → FillArraysSparseArraysExt
      -    958.8 ms  ✓ KernelAbstractions → SparseArraysExt
      -    644.1 ms  ✓ StructArrays → StructArraysSparseArraysExt
      -    743.7 ms  ✓ BangBang
      -   1216.1 ms  ✓ SparseMatrixColorings
      -    649.1 ms  ✓ AxisAlgorithms
      -   1122.4 ms  ✓ ArrayInterface → ArrayInterfaceTrackerExt
      -    588.2 ms  ✓ SparseInverseSubset
      -   1185.1 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADTrackerExt
      -   1205.2 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      -    853.1 ms  ✓ PDMats
      -   1108.3 ms  ✓ NamedArrays
      -    497.9 ms  ✓ BangBang → BangBangChainRulesCoreExt
      -    483.8 ms  ✓ BangBang → BangBangStructArraysExt
      -    471.7 ms  ✓ BangBang → BangBangTablesExt
      -    674.3 ms  ✓ BangBang → BangBangStaticArraysExt
      -   1377.9 ms  ✓ SymbolicIndexingInterface
      -    845.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseMatrixColoringsExt
      -    897.5 ms  ✓ MicroCollections
      -   1776.8 ms  ✓ SciMLOperators
      -    647.4 ms  ✓ FillArrays → FillArraysPDMatsExt
      -    571.1 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      -   2401.9 ms  ✓ StatsBase
      -   4735.1 ms  ✓ FFTW
      -    954.6 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      -   1528.7 ms  ✓ Tracker → TrackerPDMatsExt
      -   3196.9 ms  ✓ Roots
      -   2120.6 ms  ✓ Interpolations
      -    906.3 ms  ✓ NNlib → NNlibFFTWExt
      -    567.6 ms  ✓ Roots → RootsChainRulesCoreExt
      -   2205.4 ms  ✓ RecursiveArrayTools
      -    680.0 ms  ✓ Roots → RootsForwardDiffExt
      -    585.2 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      -   3676.4 ms  ✓ SparseConnectivityTracer
      -    727.6 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      -    932.7 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      -   2782.9 ms  ✓ Transducers
      -   1341.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      -    673.8 ms  ✓ Transducers → TransducersAdaptExt
      -   1343.1 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerLogExpFunctionsExt
      -   3231.5 ms  ✓ Optim
      -  11966.7 ms  ✓ MLStyle
      -   1474.5 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNaNMathExt
      -   1765.0 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerSpecialFunctionsExt
      -   1829.5 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNNlibExt
      -   1928.3 ms  ✓ AbstractMCMC
      -   5670.3 ms  ✓ ChainRules
      -   4938.6 ms  ✓ Distributions
      -    774.8 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      -   1201.7 ms  ✓ SSMProblems
      -   1409.6 ms  ✓ AbstractPPL
      -   1221.7 ms  ✓ Distributions → DistributionsDensityInterfaceExt
      -   1357.6 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      -   1363.2 ms  ✓ Distributions → DistributionsTestExt
      -   1479.1 ms  ✓ MCMCDiagnosticTools
      -   2732.1 ms  ✓ AdvancedHMC
      -   1771.3 ms  ✓ EllipticalSliceSampling
      -   1915.6 ms  ✓ KernelDensity
      -   1931.6 ms  ✓ AdvancedPS
      -   1937.9 ms  ✓ AdvancedMH
      -  13717.5 ms  ✓ PrettyTables
      -   3259.4 ms  ✓ Bijectors
      -   1550.6 ms  ✓ AdvancedMH → AdvancedMHStructArraysExt
      -   1635.4 ms  ✓ AdvancedPS → AdvancedPSLibtaskExt
      -   1704.5 ms  ✓ AdvancedMH → AdvancedMHForwardDiffExt
      -   3759.9 ms  ✓ DistributionsAD
      -   7528.4 ms  ✓ Expronicon
      -   1318.1 ms  ✓ Bijectors → BijectorsForwardDiffExt
      -   2942.8 ms  ✓ MCMCChains
      -   1396.7 ms  ✓ DistributionsAD → DistributionsADForwardDiffExt
      -   1492.0 ms  ✓ Bijectors → BijectorsDistributionsADExt
      -   2460.9 ms  ✓ Bijectors → BijectorsTrackerExt
      -   2743.4 ms  ✓ DistributionsAD → DistributionsADTrackerExt
      -   2173.5 ms  ✓ AdvancedHMC → AdvancedHMCMCMCChainsExt
      -   2180.1 ms  ✓ AdvancedMH → AdvancedMHMCMCChainsExt
      -   1979.3 ms  ✓ AdvancedVI
      -   8087.6 ms  ✓ DynamicPPL
      -   1818.1 ms  ✓ DynamicPPL → DynamicPPLForwardDiffExt
      -   1908.3 ms  ✓ DynamicPPL → DynamicPPLChainRulesCoreExt
      -   2408.4 ms  ✓ DynamicPPL → DynamicPPLMCMCChainsExt
      -   2658.7 ms  ✓ DynamicPPL → DynamicPPLZygoteRulesExt
      -  11009.7 ms  ✓ SciMLBase
      -    958.1 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      -   2133.2 ms  ✓ OptimizationBase
      -    351.6 ms  ✓ OptimizationBase → OptimizationFiniteDiffExt
      -    617.7 ms  ✓ OptimizationBase → OptimizationForwardDiffExt
      -   1968.5 ms  ✓ Optimization
      -  11867.2 ms  ✓ OptimizationOptimJL
      -   5158.5 ms  ✓ Turing
      -   4136.3 ms  ✓ Turing → TuringOptimExt
      +    323.7 ms  ✓ NaturalSort
      +    332.3 ms  ✓ IteratorInterfaceExtensions
      +    349.1 ms  ✓ SimpleUnPack
      +    361.0 ms  ✓ ScientificTypesBase
      +    351.8 ms  ✓ UnPack
      +    368.4 ms  ✓ RangeArrays
      +    366.8 ms  ✓ LaTeXStrings
      +    384.8 ms  ✓ ExprTools
      +    389.6 ms  ✓ StatsAPI
      +    433.4 ms  ✓ ChangesOfVariables
      +    477.6 ms  ✓ PositiveFactorizations
      +    539.3 ms  ✓ AbstractFFTs
      +    693.6 ms  ✓ FunctionWrappers
      +    345.7 ms  ✓ CommonSolve
      +    325.9 ms  ✓ DataValueInterfaces
      +    428.7 ms  ✓ InverseFunctions
      +    358.5 ms  ✓ EnumX
      +    443.6 ms  ✓ SuiteSparse_jll
      +    845.5 ms  ✓ Combinatorics
      +    351.0 ms  ✓ RealDot
      +    828.6 ms  ✓ InitialValues
      +    491.3 ms  ✓ OrderedCollections
      +    521.2 ms  ✓ IterTools
      +    582.6 ms  ✓ Serialization
      +    953.8 ms  ✓ OffsetArrays
      +    346.8 ms  ✓ CompositionsBase
      +    516.2 ms  ✓ AbstractTrees
      +    360.1 ms  ✓ PtrArrays
      +    353.0 ms  ✓ DefineSingletons
      +    372.6 ms  ✓ Ratios
      +    473.7 ms  ✓ IntervalSets
      +    380.0 ms  ✓ InvertedIndices
      +    365.9 ms  ✓ DataAPI
      +    917.5 ms  ✓ FillArrays
      +    919.2 ms  ✓ RandomNumbers
      +    482.9 ms  ✓ DelimitedFiles
      +    464.6 ms  ✓ LRUCache
      +    457.1 ms  ✓ ProgressLogging
      +    410.9 ms  ✓ MappedArrays
      +    397.1 ms  ✓ SciMLStructures
      +    509.2 ms  ✓ LoggingExtras
      +    580.4 ms  ✓ Rmath_jll
      +    623.4 ms  ✓ FiniteDiff
      +    589.0 ms  ✓ FFTW_jll
      +    618.9 ms  ✓ oneTBB_jll
      +    612.6 ms  ✓ L_BFGS_B_jll
      +    856.8 ms  ✓ DifferentiationInterface
      +    815.2 ms  ✓ LogDensityProblems
      +   1069.2 ms  ✓ Crayons
      +   1044.1 ms  ✓ Baselet
      +    355.0 ms  ✓ TableTraits
      +    406.6 ms  ✓ StatisticalTraits
      +    381.0 ms  ✓ FunctionWrappersWrappers
      +    473.9 ms  ✓ LogExpFunctions → LogExpFunctionsChangesOfVariablesExt
      +    448.5 ms  ✓ AbstractFFTs → AbstractFFTsChainRulesCoreExt
      +    432.7 ms  ✓ InverseFunctions → InverseFunctionsDatesExt
      +   1014.3 ms  ✓ ZygoteRules
      +    400.3 ms  ✓ ChangesOfVariables → ChangesOfVariablesInverseFunctionsExt
      +   1033.4 ms  ✓ LazyArtifacts
      +    474.8 ms  ✓ LogExpFunctions → LogExpFunctionsInverseFunctionsExt
      +    441.1 ms  ✓ Parameters
      +   1137.2 ms  ✓ HypergeometricFunctions
      +   1429.7 ms  ✓ RecipesBase
      +    413.1 ms  ✓ RuntimeGeneratedFunctions
      +    392.3 ms  ✓ OffsetArrays → OffsetArraysAdaptExt
      +    383.1 ms  ✓ CompositionsBase → CompositionsBaseInverseFunctionsExt
      +    398.9 ms  ✓ LeftChildRightSiblingTrees
      +    390.7 ms  ✓ IntervalSets → IntervalSetsRandomExt
      +    477.4 ms  ✓ AliasTables
      +    392.4 ms  ✓ IntervalSets → IntervalSetsStatisticsExt
      +    424.6 ms  ✓ ConstructionBase → ConstructionBaseIntervalSetsExt
      +   1790.9 ms  ✓ StringManipulation
      +    413.3 ms  ✓ FillArrays → FillArraysStatisticsExt
      +    456.9 ms  ✓ Missings
      +    454.1 ms  ✓ LRUCache → SerializationExt
      +    646.0 ms  ✓ Libtask
      +    519.1 ms  ✓ LBFGSB
      +    582.8 ms  ✓ FiniteDiff → FiniteDiffStaticArraysExt
      +    447.5 ms  ✓ DifferentiationInterface → DifferentiationInterfaceFiniteDiffExt
      +    807.8 ms  ✓ Random123
      +    413.4 ms  ✓ DifferentiationInterface → DifferentiationInterfaceChainRulesCoreExt
      +    800.3 ms  ✓ Rmath
      +    612.7 ms  ✓ DifferentiationInterface → DifferentiationInterfaceStaticArraysExt
      +    522.6 ms  ✓ LogDensityProblemsAD
      +   1673.0 ms  ✓ DataStructures
      +    815.6 ms  ✓ DifferentiationInterface → DifferentiationInterfaceForwardDiffExt
      +   1805.1 ms  ✓ Distributed
      +    602.7 ms  ✓ IntervalSets → IntervalSetsRecipesBaseExt
      +    743.7 ms  ✓ MLJModelInterface
      +    666.8 ms  ✓ TerminalLoggers
      +    799.2 ms  ✓ Tables
      +    520.4 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADADTypesExt
      +    754.2 ms  ✓ AxisArrays
      +    545.8 ms  ✓ SortingAlgorithms
      +    733.9 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADForwardDiffExt
      +   1235.0 ms  ✓ IntelOpenMP_jll
      +    691.2 ms  ✓ SharedArrays
      +    508.8 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADDifferentiationInterfaceExt
      +    961.8 ms  ✓ QuadGK
      +    759.4 ms  ✓ StructArrays
      +   2736.2 ms  ✓ Test
      +    857.1 ms  ✓ ProgressMeter
      +   1107.4 ms  ✓ NLSolversBase
      +    394.2 ms  ✓ StructArrays → StructArraysAdaptExt
      +    409.2 ms  ✓ StructArrays → StructArraysLinearAlgebraExt
      +    375.8 ms  ✓ InplaceOps
      +   1753.8 ms  ✓ StatsFuns
      +    655.5 ms  ✓ StructArrays → StructArraysStaticArraysExt
      +    685.7 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      +   2186.3 ms  ✓ Accessors
      +    498.5 ms  ✓ ConsoleProgressMonitor
      +    657.4 ms  ✓ InverseFunctions → InverseFunctionsTestExt
      +   3741.7 ms  ✓ SparseArrays
      +    979.9 ms  ✓ ChangesOfVariables → ChangesOfVariablesTestExt
      +    669.2 ms  ✓ StatsFuns → StatsFunsInverseFunctionsExt
      +    487.7 ms  ✓ Accessors → StructArraysExt
      +   1288.5 ms  ✓ SplittablesBase
      +    648.8 ms  ✓ Accessors → TestExt
      +   1396.0 ms  ✓ AbstractFFTs → AbstractFFTsTestExt
      +    859.7 ms  ✓ Accessors → LinearAlgebraExt
      +    684.1 ms  ✓ Accessors → StaticArraysExt
      +    668.3 ms  ✓ DensityInterface
      +   5133.3 ms  ✓ Tracker
      +    693.4 ms  ✓ WoodburyMatrices
      +    647.7 ms  ✓ Statistics → SparseArraysExt
      +    622.9 ms  ✓ ArrayInterface → ArrayInterfaceSparseArraysExt
      +   1021.0 ms  ✓ Accessors → IntervalSetsExt
      +    652.3 ms  ✓ ChainRulesCore → ChainRulesCoreSparseArraysExt
      +   1621.0 ms  ✓ StatsFuns → StatsFunsChainRulesCoreExt
      +    627.0 ms  ✓ SuiteSparse
      +   1834.5 ms  ✓ LineSearches
      +    700.0 ms  ✓ FillArrays → FillArraysSparseArraysExt
      +    928.2 ms  ✓ KernelAbstractions → SparseArraysExt
      +    642.8 ms  ✓ FiniteDiff → FiniteDiffSparseArraysExt
      +    638.6 ms  ✓ StructArrays → StructArraysSparseArraysExt
      +    684.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseArraysExt
      +    767.6 ms  ✓ BangBang
      +    608.3 ms  ✓ SparseInverseSubset
      +    714.9 ms  ✓ AxisAlgorithms
      +   1336.5 ms  ✓ SparseMatrixColorings
      +   1115.3 ms  ✓ ArrayInterface → ArrayInterfaceTrackerExt
      +    850.3 ms  ✓ PDMats
      +   1138.3 ms  ✓ DifferentiationInterface → DifferentiationInterfaceTrackerExt
      +   1133.7 ms  ✓ LogDensityProblemsAD → LogDensityProblemsADTrackerExt
      +    512.3 ms  ✓ BangBang → BangBangChainRulesCoreExt
      +   1218.7 ms  ✓ NamedArrays
      +    519.8 ms  ✓ BangBang → BangBangStructArraysExt
      +   1616.7 ms  ✓ SymbolicIndexingInterface
      +    683.3 ms  ✓ BangBang → BangBangStaticArraysExt
      +    500.4 ms  ✓ BangBang → BangBangTablesExt
      +   1973.8 ms  ✓ SciMLOperators
      +    667.6 ms  ✓ FillArrays → FillArraysPDMatsExt
      +   1088.1 ms  ✓ MicroCollections
      +    872.1 ms  ✓ DifferentiationInterface → DifferentiationInterfaceSparseMatrixColoringsExt
      +    543.3 ms  ✓ SciMLOperators → SciMLOperatorsStaticArraysCoreExt
      +   2217.7 ms  ✓ StatsBase
      +    886.2 ms  ✓ SciMLOperators → SciMLOperatorsSparseArraysExt
      +   3188.6 ms  ✓ Roots
      +   1377.8 ms  ✓ Tracker → TrackerPDMatsExt
      +   1992.7 ms  ✓ Interpolations
      +    499.7 ms  ✓ Roots → RootsChainRulesCoreExt
      +    690.9 ms  ✓ Roots → RootsForwardDiffExt
      +   2088.6 ms  ✓ RecursiveArrayTools
      +   3626.3 ms  ✓ SparseConnectivityTracer
      +    603.1 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsStructArraysExt
      +    733.4 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsForwardDiffExt
      +    865.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsSparseArraysExt
      +   1198.9 ms  ✓ RecursiveArrayTools → RecursiveArrayToolsTrackerExt
      +   2810.6 ms  ✓ Transducers
      +  11623.4 ms  ✓ MLStyle
      +   7835.0 ms  ✓ MKL_jll
      +   1383.8 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerLogExpFunctionsExt
      +   3155.6 ms  ✓ Optim
      +   1498.4 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNaNMathExt
      +    658.4 ms  ✓ Transducers → TransducersAdaptExt
      +   1759.5 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerNNlibExt
      +   1808.7 ms  ✓ SparseConnectivityTracer → SparseConnectivityTracerSpecialFunctionsExt
      +   5606.1 ms  ✓ ChainRules
      +   1781.7 ms  ✓ AbstractMCMC
      +    781.4 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      +   5230.2 ms  ✓ Distributions
      +   1224.6 ms  ✓ SSMProblems
      +   1407.6 ms  ✓ AbstractPPL
      +   1375.9 ms  ✓ Distributions → DistributionsDensityInterfaceExt
      +   1392.6 ms  ✓ Distributions → DistributionsTestExt
      +   1395.5 ms  ✓ Distributions → DistributionsChainRulesCoreExt
      +   1506.9 ms  ✓ MCMCDiagnosticTools
      +   2658.7 ms  ✓ AdvancedHMC
      +   4645.9 ms  ✓ FFTW
      +   1686.5 ms  ✓ EllipticalSliceSampling
      +    859.4 ms  ✓ NNlib → NNlibFFTWExt
      +   1955.9 ms  ✓ AdvancedPS
      +   2000.2 ms  ✓ AdvancedMH
      +  14142.0 ms  ✓ PrettyTables
      +   1706.5 ms  ✓ KernelDensity
      +   3358.5 ms  ✓ Bijectors
      +   1567.2 ms  ✓ AdvancedMH → AdvancedMHStructArraysExt
      +   1682.5 ms  ✓ AdvancedMH → AdvancedMHForwardDiffExt
      +   1742.1 ms  ✓ AdvancedPS → AdvancedPSLibtaskExt
      +   3811.4 ms  ✓ DistributionsAD
      +   7849.5 ms  ✓ Expronicon
      +   1330.0 ms  ✓ Bijectors → BijectorsForwardDiffExt
      +   1407.4 ms  ✓ DistributionsAD → DistributionsADForwardDiffExt
      +   1470.6 ms  ✓ Bijectors → BijectorsDistributionsADExt
      +   3068.4 ms  ✓ MCMCChains
      +   2687.8 ms  ✓ Bijectors → BijectorsTrackerExt
      +   3164.1 ms  ✓ DistributionsAD → DistributionsADTrackerExt
      +   2180.3 ms  ✓ AdvancedHMC → AdvancedHMCMCMCChainsExt
      +   2178.8 ms  ✓ AdvancedMH → AdvancedMHMCMCChainsExt
      +   1978.5 ms  ✓ AdvancedVI
      +   8158.9 ms  ✓ DynamicPPL
      +   1826.0 ms  ✓ DynamicPPL → DynamicPPLForwardDiffExt
      +   1944.0 ms  ✓ DynamicPPL → DynamicPPLChainRulesCoreExt
      +   2415.6 ms  ✓ DynamicPPL → DynamicPPLMCMCChainsExt
      +   2798.5 ms  ✓ DynamicPPL → DynamicPPLZygoteRulesExt
      +  11139.8 ms  ✓ SciMLBase
      +   1045.6 ms  ✓ SciMLBase → SciMLBaseChainRulesCoreExt
      +   2173.9 ms  ✓ OptimizationBase
      +    373.1 ms  ✓ OptimizationBase → OptimizationFiniteDiffExt
      +    627.9 ms  ✓ OptimizationBase → OptimizationForwardDiffExt
      +   2020.0 ms  ✓ Optimization
      +  11893.1 ms  ✓ OptimizationOptimJL
      +   5188.7 ms  ✓ Turing
      +   4143.7 ms  ✓ Turing → TuringOptimExt
         224 dependencies successfully precompiled in 57 seconds. 82 already precompiled.
      +  1 dependency had output during precompilation:
      +┌ MKL_jll
      +│   Downloading artifact: IntelOpenMP
      +
       Precompiling MLDataDevicesSparseArraysExt...
      -    650.3 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
      +    664.4 ms  ✓ MLDataDevices → MLDataDevicesSparseArraysExt
         1 dependency successfully precompiled in 1 seconds. 17 already precompiled.
       Precompiling MLDataDevicesFillArraysExt...
      -    428.0 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
      +    445.6 ms  ✓ MLDataDevices → MLDataDevicesFillArraysExt
         1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
       Precompiling MLDataDevicesChainRulesExt...
      -    806.1 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      +    833.7 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
         1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
       Precompiling BijectorsEnzymeCoreExt...
      -   1281.2 ms  ✓ Bijectors → BijectorsEnzymeCoreExt
      +   1303.3 ms  ✓ Bijectors → BijectorsEnzymeCoreExt
         1 dependency successfully precompiled in 1 seconds. 79 already precompiled.
       Precompiling HwlocTrees...
      -    500.2 ms  ✓ Hwloc → HwlocTrees
      +    553.5 ms  ✓ Hwloc → HwlocTrees
         1 dependency successfully precompiled in 1 seconds. 10 already precompiled.
       Precompiling StaticArrayInterfaceOffsetArraysExt...
      -    437.5 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
      +    487.0 ms  ✓ StaticArrayInterface → StaticArrayInterfaceOffsetArraysExt
         1 dependency successfully precompiled in 1 seconds. 18 already precompiled.
       Precompiling MLDataDevicesTrackerExt...
      -   1157.5 ms  ✓ MLDataDevices → MLDataDevicesTrackerExt
      +   1172.3 ms  ✓ MLDataDevices → MLDataDevicesTrackerExt
         1 dependency successfully precompiled in 1 seconds. 59 already precompiled.
       Precompiling LuxLibTrackerExt...
      -   1079.1 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      -   3236.3 ms  ✓ LuxLib → LuxLibTrackerExt
      -  2 dependencies successfully precompiled in 3 seconds. 100 already precompiled.
      +   1078.5 ms  ✓ LuxCore → LuxCoreArrayInterfaceTrackerExt
      +   3289.7 ms  ✓ LuxLib → LuxLibTrackerExt
      +  2 dependencies successfully precompiled in 4 seconds. 100 already precompiled.
       Precompiling LuxTrackerExt...
      -   2027.4 ms  ✓ Lux → LuxTrackerExt
      +   2082.8 ms  ✓ Lux → LuxTrackerExt
         1 dependency successfully precompiled in 2 seconds. 114 already precompiled.
       Precompiling DynamicPPLEnzymeCoreExt...
      -   1751.3 ms  ✓ DynamicPPL → DynamicPPLEnzymeCoreExt
      -  1 dependency successfully precompiled in 2 seconds. 128 already precompiled.
      +   1749.6 ms  ✓ DynamicPPL → DynamicPPLEnzymeCoreExt
      +  1 dependency successfully precompiled in 2 seconds. 144 already precompiled.
       Precompiling MLDataDevicesRecursiveArrayToolsExt...
      -    580.6 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
      +    604.1 ms  ✓ MLDataDevices → MLDataDevicesRecursiveArrayToolsExt
         1 dependency successfully precompiled in 1 seconds. 47 already precompiled.
       Precompiling OptimizationMLDataDevicesExt...
      -   1366.7 ms  ✓ OptimizationBase → OptimizationMLDataDevicesExt
      +   1386.8 ms  ✓ OptimizationBase → OptimizationMLDataDevicesExt
         1 dependency successfully precompiled in 2 seconds. 97 already precompiled.
       Precompiling CairoMakie...
      -    346.1 ms  ✓ SignedDistanceFields
      -    424.0 ms  ✓ PaddedViews
      -    389.4 ms  ✓ StackViews
      -    394.2 ms  ✓ Scratch
      -    401.6 ms  ✓ Showoff
      -    411.6 ms  ✓ Extents
      -    561.8 ms  ✓ Xorg_libXau_jll
      -    568.1 ms  ✓ Graphite2_jll
      -    569.3 ms  ✓ LLVMOpenMP_jll
      -    574.2 ms  ✓ Libmount_jll
      -    597.0 ms  ✓ OpenSSL_jll
      -    597.2 ms  ✓ Bzip2_jll
      -    568.4 ms  ✓ libpng_jll
      -    548.2 ms  ✓ libfdk_aac_jll
      -    549.5 ms  ✓ Imath_jll
      -    554.2 ms  ✓ Giflib_jll
      -    567.1 ms  ✓ LERC_jll
      -    590.7 ms  ✓ LAME_jll
      -   1043.7 ms  ✓ SimpleTraits
      -    567.2 ms  ✓ EarCut_jll
      -    576.2 ms  ✓ CRlibm_jll
      -    574.5 ms  ✓ Ogg_jll
      -    576.5 ms  ✓ x265_jll
      -    606.5 ms  ✓ XZ_jll
      -    629.7 ms  ✓ JpegTurbo_jll
      -    565.6 ms  ✓ Xorg_libXdmcp_jll
      -    578.7 ms  ✓ x264_jll
      -    593.2 ms  ✓ libaom_jll
      -    561.6 ms  ✓ LZO_jll
      -    583.0 ms  ✓ Expat_jll
      -    587.3 ms  ✓ Zstd_jll
      -    489.3 ms  ✓ Xorg_xtrans_jll
      -   1577.5 ms  ✓ UnicodeFun
      -    561.9 ms  ✓ Opus_jll
      -    493.8 ms  ✓ Xorg_libpthread_stubs_jll
      -    550.5 ms  ✓ Libffi_jll
      -    555.9 ms  ✓ Libgpg_error_jll
      -    600.9 ms  ✓ Libiconv_jll
      -    575.0 ms  ✓ isoband_jll
      -   1892.0 ms  ✓ FixedPointNumbers
      -    376.6 ms  ✓ RelocatableFolders
      -    443.9 ms  ✓ MosaicViews
      -    568.0 ms  ✓ Libuuid_jll
      -    586.2 ms  ✓ FriBidi_jll
      -    612.7 ms  ✓ Pixman_jll
      -    607.5 ms  ✓ FreeType2_jll
      -    643.2 ms  ✓ libvorbis_jll
      -    707.4 ms  ✓ OpenEXR_jll
      -    419.2 ms  ✓ Isoband
      -    610.5 ms  ✓ libsixel_jll
      -   1008.9 ms  ✓ FilePathsBase
      -    646.3 ms  ✓ Libtiff_jll
      -    611.6 ms  ✓ Libgcrypt_jll
      -    386.5 ms  ✓ Ratios → RatiosFixedPointNumbersExt
      -   1102.2 ms  ✓ GeoInterface
      -    706.5 ms  ✓ XML2_jll
      -    496.2 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      -    751.5 ms  ✓ Fontconfig_jll
      -    722.6 ms  ✓ FilePaths
      -    974.0 ms  ✓ FreeType
      -    648.4 ms  ✓ XSLT_jll
      -    737.1 ms  ✓ Gettext_jll
      -   1178.2 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      -   2515.9 ms  ✓ IntervalArithmetic
      -    756.7 ms  ✓ Glib_jll
      -   2178.2 ms  ✓ ColorTypes
      -   1105.7 ms  ✓ Xorg_libxcb_jll
      -    485.8 ms  ✓ IntervalArithmetic → IntervalArithmeticIntervalSetsExt
      -   3187.4 ms  ✓ PkgVersion
      -   3356.3 ms  ✓ FileIO
      -    619.0 ms  ✓ Xorg_libX11_jll
      -    582.1 ms  ✓ Xorg_libXrender_jll
      -    583.6 ms  ✓ Xorg_libXext_jll
      -   1733.1 ms  ✓ ColorVectorSpace
      -   1460.0 ms  ✓ QOI
      -    713.2 ms  ✓ Libglvnd_jll
      -    746.8 ms  ✓ Cairo_jll
      -    694.6 ms  ✓ ColorVectorSpace → SpecialFunctionsExt
      -    726.9 ms  ✓ HarfBuzz_jll
      -    764.7 ms  ✓ libwebp_jll
      -   4594.7 ms  ✓ GeometryBasics
      -   3577.6 ms  ✓ ExactPredicates
      -    701.1 ms  ✓ libass_jll
      -    739.5 ms  ✓ Pango_jll
      -   6456.8 ms  ✓ SIMD
      -   1037.6 ms  ✓ Packing
      -   3970.8 ms  ✓ Colors
      -   1281.3 ms  ✓ ShaderAbstractions
      -    912.6 ms  ✓ FFMPEG_jll
      -    548.3 ms  ✓ Graphics
      -    568.5 ms  ✓ Animations
      -   1174.1 ms  ✓ ColorBrewer
      -   1529.2 ms  ✓ OpenEXR
      -   1819.0 ms  ✓ FreeTypeAbstraction
      -   1418.2 ms  ✓ Cairo
      -   3473.9 ms  ✓ MakieCore
      -   3316.1 ms  ✓ ColorSchemes
      -   4876.0 ms  ✓ GridLayoutBase
      -   5146.7 ms  ✓ DelaunayTriangulation
      -  15078.1 ms  ✓ Unitful
      -    543.7 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      -    547.9 ms  ✓ Unitful → InverseFunctionsUnitfulExt
      -   8162.9 ms  ✓ Automa
      -   1205.2 ms  ✓ Interpolations → InterpolationsUnitfulExt
      -   7868.3 ms  ✓ PlotUtils
      -  13993.4 ms  ✓ ImageCore
      -   1903.7 ms  ✓ ImageBase
      -   2553.6 ms  ✓ WebP
      -   3189.4 ms  ✓ PNGFiles
      -   3444.6 ms  ✓ JpegTurbo
      -   1909.7 ms  ✓ ImageAxes
      -   4182.7 ms  ✓ Sixel
      -  10561.3 ms  ✓ MathTeXEngine
      -   1120.1 ms  ✓ ImageMetadata
      -   1885.7 ms  ✓ Netpbm
      -  43255.8 ms  ✓ TiffImages
      -   1183.9 ms  ✓ ImageIO
      - 106062.6 ms  ✓ Makie
      -  72508.2 ms  ✓ CairoMakie
      -  119 dependencies successfully precompiled in 231 seconds. 151 already precompiled.
      +    407.2 ms  ✓ IndirectArrays
      +    404.9 ms  ✓ PolygonOps
      +    435.0 ms  ✓ GeoFormatTypes
      +    443.2 ms  ✓ Contour
      +    433.4 ms  ✓ PCRE2_jll
      +    450.0 ms  ✓ TriplotBase
      +    461.2 ms  ✓ TensorCore
      +    456.2 ms  ✓ StableRNGs
      +    479.2 ms  ✓ PaddedViews
      +    482.8 ms  ✓ Observables
      +    498.0 ms  ✓ RoundingEmulator
      +    504.4 ms  ✓ Extents
      +    559.1 ms  ✓ TranscodingStreams
      +    337.5 ms  ✓ CRC32c
      +    409.0 ms  ✓ LazyModules
      +    803.0 ms  ✓ Grisu
      +    480.6 ms  ✓ Inflate
      +    391.8 ms  ✓ SignedDistanceFields
      +    442.8 ms  ✓ StackViews
      +    428.5 ms  ✓ Scratch
      +   1082.8 ms  ✓ Format
      +    610.3 ms  ✓ Graphite2_jll
      +    651.3 ms  ✓ OpenSSL_jll
      +    610.9 ms  ✓ Libmount_jll
      +    610.2 ms  ✓ LLVMOpenMP_jll
      +    595.8 ms  ✓ Bzip2_jll
      +    643.6 ms  ✓ Xorg_libXau_jll
      +    596.9 ms  ✓ libfdk_aac_jll
      +    606.9 ms  ✓ Giflib_jll
      +    609.1 ms  ✓ Imath_jll
      +    621.3 ms  ✓ libpng_jll
      +   1524.2 ms  ✓ AdaptivePredicates
      +   1207.4 ms  ✓ SimpleTraits
      +    629.3 ms  ✓ LAME_jll
      +    617.5 ms  ✓ LERC_jll
      +    594.2 ms  ✓ EarCut_jll
      +    614.4 ms  ✓ CRlibm_jll
      +    651.0 ms  ✓ XZ_jll
      +    655.7 ms  ✓ JpegTurbo_jll
      +   1551.4 ms  ✓ UnicodeFun
      +    601.0 ms  ✓ Ogg_jll
      +    599.2 ms  ✓ Xorg_libXdmcp_jll
      +    608.4 ms  ✓ x264_jll
      +    633.7 ms  ✓ x265_jll
      +    624.4 ms  ✓ libaom_jll
      +    635.1 ms  ✓ Zstd_jll
      +    525.6 ms  ✓ Xorg_xtrans_jll
      +    634.8 ms  ✓ Expat_jll
      +    617.3 ms  ✓ LZO_jll
      +    619.8 ms  ✓ Opus_jll
      +   1947.9 ms  ✓ FixedPointNumbers
      +    682.9 ms  ✓ Libiconv_jll
      +    563.0 ms  ✓ Xorg_libpthread_stubs_jll
      +    621.4 ms  ✓ Libgpg_error_jll
      +    624.1 ms  ✓ Libffi_jll
      +    638.6 ms  ✓ isoband_jll
      +    628.1 ms  ✓ FriBidi_jll
      +    616.1 ms  ✓ Libuuid_jll
      +    430.8 ms  ✓ Showoff
      +    433.5 ms  ✓ RelocatableFolders
      +    475.4 ms  ✓ MosaicViews
      +   1033.6 ms  ✓ FilePathsBase
      +    679.7 ms  ✓ Pixman_jll
      +    430.4 ms  ✓ Ratios → RatiosFixedPointNumbersExt
      +    677.5 ms  ✓ FreeType2_jll
      +    656.0 ms  ✓ libsixel_jll
      +   1057.3 ms  ✓ GeoInterface
      +    687.2 ms  ✓ libvorbis_jll
      +    688.1 ms  ✓ Libtiff_jll
      +    755.9 ms  ✓ OpenEXR_jll
      +    698.1 ms  ✓ XML2_jll
      +    481.9 ms  ✓ Isoband
      +    535.5 ms  ✓ FilePathsBase → FilePathsBaseMmapExt
      +    654.2 ms  ✓ Libgcrypt_jll
      +    788.3 ms  ✓ FilePaths
      +    798.4 ms  ✓ Fontconfig_jll
      +   1445.8 ms  ✓ ColorTypes
      +    697.9 ms  ✓ Gettext_jll
      +    970.1 ms  ✓ FreeType
      +    683.1 ms  ✓ XSLT_jll
      +   1237.3 ms  ✓ FilePathsBase → FilePathsBaseTestExt
      +   2382.1 ms  ✓ PkgVersion
      +    503.4 ms  ✓ ColorTypes → StyledStringsExt
      +    804.0 ms  ✓ Glib_jll
      +   2588.5 ms  ✓ IntervalArithmetic
      +   1088.3 ms  ✓ Xorg_libxcb_jll
      +   3343.9 ms  ✓ FileIO
      +    500.6 ms  ✓ IntervalArithmetic → IntervalArithmeticIntervalSetsExt
      +   1831.6 ms  ✓ ColorVectorSpace
      +    652.9 ms  ✓ Xorg_libX11_jll
      +    749.8 ms  ✓ ColorVectorSpace → SpecialFunctionsExt
      +    633.6 ms  ✓ Xorg_libXext_jll
      +    637.6 ms  ✓ Xorg_libXrender_jll
      +   1451.9 ms  ✓ QOI
      +    759.2 ms  ✓ Libglvnd_jll
      +    783.0 ms  ✓ Cairo_jll
      +   3878.0 ms  ✓ Colors
      +    800.0 ms  ✓ HarfBuzz_jll
      +    835.2 ms  ✓ libwebp_jll
      +   6475.9 ms  ✓ SIMD
      +    595.2 ms  ✓ Graphics
      +    612.5 ms  ✓ Animations
      +    777.9 ms  ✓ ColorBrewer
      +   3643.0 ms  ✓ ExactPredicates
      +    802.8 ms  ✓ libass_jll
      +    799.5 ms  ✓ Pango_jll
      +   1547.5 ms  ✓ OpenEXR
      +    940.9 ms  ✓ FFMPEG_jll
      +   1308.4 ms  ✓ Cairo
      +   3468.3 ms  ✓ ColorSchemes
      +   9182.5 ms  ✓ GeometryBasics
      +   1036.5 ms  ✓ Packing
      +   1303.3 ms  ✓ ShaderAbstractions
      +   5206.5 ms  ✓ DelaunayTriangulation
      +   1930.0 ms  ✓ FreeTypeAbstraction
      +  15163.9 ms  ✓ Unitful
      +    600.9 ms  ✓ Unitful → InverseFunctionsUnitfulExt
      +    633.4 ms  ✓ Unitful → ConstructionBaseUnitfulExt
      +   3821.1 ms  ✓ MakieCore
      +   8219.0 ms  ✓ Automa
      +   1247.3 ms  ✓ Interpolations → InterpolationsUnitfulExt
      +   5098.3 ms  ✓ GridLayoutBase
      +   8204.5 ms  ✓ PlotUtils
      +  14493.9 ms  ✓ ImageCore
      +   1871.6 ms  ✓ ImageBase
      +   2308.9 ms  ✓ WebP
      +   8720.9 ms  ✓ MathTeXEngine
      +   3040.8 ms  ✓ PNGFiles
      +   3186.9 ms  ✓ JpegTurbo
      +   3478.8 ms  ✓ Sixel
      +   2074.0 ms  ✓ ImageAxes
      +   1102.7 ms  ✓ ImageMetadata
      +   1851.4 ms  ✓ Netpbm
      +  42547.7 ms  ✓ TiffImages
      +   1195.4 ms  ✓ ImageIO
      + 104912.3 ms  ✓ Makie
      +  81735.0 ms  ✓ CairoMakie
      +  137 dependencies successfully precompiled in 239 seconds. 134 already precompiled.
       Precompiling SparseMatrixColoringsColorsExt...
      -    874.9 ms  ✓ SparseMatrixColorings → SparseMatrixColoringsColorsExt
      +    919.4 ms  ✓ SparseMatrixColorings → SparseMatrixColoringsColorsExt
         1 dependency successfully precompiled in 1 seconds. 29 already precompiled.
       Precompiling UnitfulExt...
      -    582.9 ms  ✓ Accessors → UnitfulExt
      +    621.7 ms  ✓ Accessors → UnitfulExt
         1 dependency successfully precompiled in 1 seconds. 15 already precompiled.
       Precompiling IntervalArithmeticForwardDiffExt...
      -    455.6 ms  ✓ IntervalArithmetic → IntervalArithmeticDiffRulesExt
      -    640.7 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
      +    515.6 ms  ✓ IntervalArithmetic → IntervalArithmeticDiffRulesExt
      +    698.1 ms  ✓ IntervalArithmetic → IntervalArithmeticForwardDiffExt
         2 dependencies successfully precompiled in 1 seconds. 42 already precompiled.
       Precompiling IntervalArithmeticRecipesBaseExt...
      -    763.1 ms  ✓ IntervalArithmetic → IntervalArithmeticRecipesBaseExt
      +    850.1 ms  ✓ IntervalArithmetic → IntervalArithmeticRecipesBaseExt
         1 dependency successfully precompiled in 1 seconds. 31 already precompiled.
       Precompiling SciMLBaseMakieExt...
      -   9177.8 ms  ✓ SciMLBase → SciMLBaseMakieExt
      -  1 dependency successfully precompiled in 10 seconds. 303 already precompiled.
      +   7984.5 ms  ✓ SciMLBase → SciMLBaseMakieExt
      +  1 dependency successfully precompiled in 9 seconds. 304 already precompiled.
       [ Info: [Turing]: progress logging is enabled globally
       [ Info: [AdvancedVI]: global PROGRESS is set as true

      Generating data

      Our goal here is to use a Bayesian neural network to classify points in an artificial dataset. The code below generates data points arranged in a box-like pattern and displays a graph of the dataset we'll be working with.

      julia
      # Number of points to generate
       N = 80
      @@ -564,7 +594,7 @@
           return fig
       end
       
      -plot_data()

      +plot_data()

      Building the Neural Network

The next step is to define a feedforward neural network whose parameters we express as distributions, rather than the single point estimates of a traditional neural network. For this we will use Dense to define linear layers and compose them via Chain, both of which are neural network primitives from Lux. The network nn we create will have two hidden layers with tanh activations and one output layer with sigmoid activation, as shown below.

nn is a callable instance that takes data, parameters, and the current state as inputs and outputs predictions. We will define distributions on the neural network parameters.

      julia
      # Construct a neural network using Lux
       nn = Chain(Dense(2 => 3, tanh), Dense(3 => 2, tanh), Dense(2 => 1, sigmoid))
       
       # Initialize the model weights and state
      @@ -603,8 +633,8 @@
       Iterations        = 1:1:5000
       Number of chains  = 1
       Samples per chain = 5000
      -Wall duration     = 30.89 seconds
      -Compute duration  = 30.89 seconds
      +Wall duration     = 33.37 seconds
      +Compute duration  = 33.37 seconds
       parameters        = parameters[1], parameters[2], parameters[3], parameters[4], parameters[5], parameters[6], parameters[7], parameters[8], parameters[9], parameters[10], parameters[11], parameters[12], parameters[13], parameters[14], parameters[15], parameters[16], parameters[17], parameters[18], parameters[19], parameters[20]
       internals         = lp, n_steps, is_accept, acceptance_rate, log_density, hamiltonian_energy, hamiltonian_energy_error, numerical_error, step_size, nom_step_size
       
      @@ -612,26 +642,26 @@
             parameters      mean       std      mcse   ess_bulk   ess_tail      rhat   ess_per_sec
                 Symbol   Float64   Float64   Float64    Float64    Float64   Float64       Float64
       
      -   parameters[1]    5.8536    2.5579    0.6449    16.6210    21.2567    1.2169        0.5381
      -   parameters[2]    0.1106    0.3642    0.0467    76.1493    35.8893    1.0235        2.4651
      -   parameters[3]    4.1685    2.2970    0.6025    15.9725    62.4537    1.0480        0.5171
      -   parameters[4]    1.0580    1.9179    0.4441    22.3066    51.3818    1.0513        0.7221
      -   parameters[5]    4.7925    2.0622    0.5484    15.4001    28.2539    1.1175        0.4985
      -   parameters[6]    0.7155    1.3734    0.2603    28.7492    59.2257    1.0269        0.9307
      -   parameters[7]    0.4981    2.7530    0.7495    14.5593    22.0260    1.2506        0.4713
      -   parameters[8]    0.4568    1.1324    0.2031    31.9424    38.7102    1.0447        1.0340
      -   parameters[9]   -1.0215    2.6186    0.7268    14.2896    22.8493    1.2278        0.4626
      -  parameters[10]    2.1324    1.6319    0.4231    15.0454    43.2111    1.3708        0.4870
      -  parameters[11]   -2.0262    1.8130    0.4727    15.0003    23.5212    1.2630        0.4856
      -  parameters[12]   -4.5525    1.9168    0.4399    18.6812    29.9668    1.0581        0.6047
      -  parameters[13]    3.7207    1.3736    0.2889    22.9673    55.7445    1.0128        0.7435
      -  parameters[14]    2.5799    1.7626    0.4405    17.7089    38.8364    1.1358        0.5733
      -  parameters[15]   -1.3181    1.9554    0.5213    14.6312    22.0160    1.1793        0.4736
      -  parameters[16]   -2.9322    1.2308    0.2334    28.3970   130.8667    1.0216        0.9193
      -  parameters[17]   -2.4957    2.7976    0.7745    16.2068    20.1562    1.0692        0.5246
      -  parameters[18]   -5.0880    1.1401    0.1828    39.8971    52.4786    1.1085        1.2915
      -  parameters[19]   -4.7674    2.0627    0.5354    21.4562    18.3886    1.0764        0.6946
      -  parameters[20]   -4.7466    1.2214    0.2043    38.5170    32.7162    1.0004        1.2469
      +   parameters[1]    5.8536    2.5579    0.6449    16.6210    21.2567    1.2169        0.4980
      +   parameters[2]    0.1106    0.3642    0.0467    76.1493    35.8893    1.0235        2.2817
      +   parameters[3]    4.1685    2.2970    0.6025    15.9725    62.4537    1.0480        0.4786
      +   parameters[4]    1.0580    1.9179    0.4441    22.3066    51.3818    1.0513        0.6684
      +   parameters[5]    4.7925    2.0622    0.5484    15.4001    28.2539    1.1175        0.4614
      +   parameters[6]    0.7155    1.3734    0.2603    28.7492    59.2257    1.0269        0.8614
      +   parameters[7]    0.4981    2.7530    0.7495    14.5593    22.0260    1.2506        0.4362
      +   parameters[8]    0.4568    1.1324    0.2031    31.9424    38.7102    1.0447        0.9571
      +   parameters[9]   -1.0215    2.6186    0.7268    14.2896    22.8493    1.2278        0.4282
      +  parameters[10]    2.1324    1.6319    0.4231    15.0454    43.2111    1.3708        0.4508
      +  parameters[11]   -2.0262    1.8130    0.4727    15.0003    23.5212    1.2630        0.4495
      +  parameters[12]   -4.5525    1.9168    0.4399    18.6812    29.9668    1.0581        0.5598
      +  parameters[13]    3.7207    1.3736    0.2889    22.9673    55.7445    1.0128        0.6882
      +  parameters[14]    2.5799    1.7626    0.4405    17.7089    38.8364    1.1358        0.5306
      +  parameters[15]   -1.3181    1.9554    0.5213    14.6312    22.0160    1.1793        0.4384
      +  parameters[16]   -2.9322    1.2308    0.2334    28.3970   130.8667    1.0216        0.8509
      +  parameters[17]   -2.4957    2.7976    0.7745    16.2068    20.1562    1.0692        0.4856
      +  parameters[18]   -5.0880    1.1401    0.1828    39.8971    52.4786    1.1085        1.1955
      +  parameters[19]   -4.7674    2.0627    0.5354    21.4562    18.3886    1.0764        0.6429
      +  parameters[20]   -4.7466    1.2214    0.2043    38.5170    32.7162    1.0004        1.1541
       
       Quantiles
             parameters      2.5%     25.0%     50.0%     75.0%     97.5%
      @@ -674,7 +704,7 @@
       x2_range = collect(range(-6; stop=6, length=25))
       Z = [nn_forward([x1, x2], θ[i, :])[1] for x1 in x1_range, x2 in x2_range]
       contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
      -fig

      +fig

      The contour plot above shows that the MAP method is not too bad at classifying our data. Now we can visualize our predictions.

$$p(\tilde{x} \mid X, \alpha) = \int_{\theta} p(\tilde{x} \mid \theta)\, p(\theta \mid X, \alpha) \approx \sum_{\theta \sim p(\theta \mid X, \alpha)} f_{\theta}(\tilde{x})$$

The nn_predict function returns the average predicted value from networks parameterized by weights drawn from the MCMC chain.

      julia
      # Return the average predicted value across multiple weights.
       nn_predict(x, θ, num) = mean([first(nn_forward(x, view(θ, i, :))) for i in 1:10:num])
      nn_predict (generic function with 1 method)
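The averaging performed by nn_predict is simply a Monte Carlo estimate of the posterior predictive. A self-contained sketch of the same idea, using a hypothetical stand-in model and random draws in place of the tutorial's nn_forward and MCMC samples θ:

```julia
using Statistics

# Stand-in model and fake "posterior draws" purely for illustration;
# in the tutorial these roles are played by nn_forward and the rows of θ.
f(x, θ) = tanh(θ[1] * x[1] + θ[2] * x[2])
draws = [randn(2) for _ in 1:500]

# Average the model output over every 10th draw, mirroring 1:10:num above.
predict(x, draws) = mean(f(x, θ) for θ in draws[1:10:end])

predict([0.5, -0.5], draws)   # a value in (-1, 1)
```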

Next, we use the nn_predict function to predict the value at a sample of points where the x1 and x2 coordinates range between -6 and 6. As we can see below, we still have a satisfactory fit to our data, and, more importantly, we can now see much more clearly where the neural network is uncertain about its predictions: the regions between cluster boundaries.

      Plot the average prediction.

      julia
      fig = plot_data()
       
       n_end = 1500
      @@ -682,7 +712,7 @@
       x2_range = collect(range(-6; stop=6, length=25))
       Z = [nn_predict([x1, x2], θ, n_end)[1] for x1 in x1_range, x2 in x2_range]
       contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
      -fig

      +fig

Suppose we are interested in how the predictive power of our Bayesian neural network evolved between samples. In that case, the following animation shows the contour plot generated from the network weights in samples 1 to 5,000.

      julia
      fig = plot_data()
       Z = [first(nn_forward([x1, x2], θ[1, :])) for x1 in x1_range, x2 in x2_range]
       c = contour!(x1_range, x2_range, Z; linewidth=3, colormap=:seaborn_bright)
       record(fig, "results.gif", 1:250:size(θ, 1)) do i
      @@ -690,7 +720,7 @@
           Z = [first(nn_forward([x1, x2], θ[i, :])) for x1 in x1_range, x2 in x2_range]
           c[3] = Z
           return fig
-end
      +end
      "results.gif"

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
       
       if @isdefined(MLDataDevices)
      @@ -703,8 +733,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -721,7 +751,7 @@
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
         JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      - + \ No newline at end of file diff --git a/dev/tutorials/intermediate/3_HyperNet.html b/dev/tutorials/intermediate/3_HyperNet.html index 1ff5944139..3a8a1dd9c5 100644 --- a/dev/tutorials/intermediate/3_HyperNet.html +++ b/dev/tutorials/intermediate/3_HyperNet.html @@ -5,15 +5,15 @@ Training a HyperNetwork on MNIST and FashionMNIST | Lux.jl Docs - + - + - - - + + + @@ -31,156 +31,76 @@
      Skip to content

      Training a HyperNetwork on MNIST and FashionMNIST

      Package Imports

      julia
      using Lux, ComponentArrays, LuxCUDA, MLDatasets, MLUtils, OneHotArrays, Optimisers,
             Printf, Random, Zygote
       
      -CUDA.allowscalar(false)
      Precompiling ComponentArrays...
      -   1001.1 ms  ✓ ComponentArrays
      -  1 dependency successfully precompiled in 1 seconds. 45 already precompiled.
      -Precompiling MLDataDevicesComponentArraysExt...
      -    661.5 ms  ✓ MLDataDevices → MLDataDevicesComponentArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 48 already precompiled.
      -Precompiling LuxComponentArraysExt...
      -    533.0 ms  ✓ ComponentArrays → ComponentArraysOptimisersExt
      -   1563.8 ms  ✓ Lux → LuxComponentArraysExt
      -   1970.8 ms  ✓ ComponentArrays → ComponentArraysKernelAbstractionsExt
      -  3 dependencies successfully precompiled in 2 seconds. 111 already precompiled.
      +CUDA.allowscalar(false)
      Precompiling LuxComponentArraysExt...
      +   1590.7 ms  ✓ Lux → LuxComponentArraysExt
      +  1 dependency successfully precompiled in 2 seconds. 113 already precompiled.
       Precompiling LuxCUDA...
      -   1325.0 ms  ✓ LLVM → BFloat16sExt
      -   2729.2 ms  ✓ CUDA_Runtime_jll
      -   2014.3 ms  ✓ CUDNN_jll
      -   4589.2 ms  ✓ GPUArrays
      -  45598.6 ms  ✓ DataFrames
      -  51987.5 ms  ✓ CUDA
      -   5027.8 ms  ✓ Atomix → AtomixCUDAExt
      -   8136.7 ms  ✓ cuDNN
      -   5323.2 ms  ✓ LuxCUDA
      -  9 dependencies successfully precompiled in 117 seconds. 93 already precompiled.
      -Precompiling MLDataDevicesGPUArraysExt...
      -   1645.2 ms  ✓ MLDataDevices → MLDataDevicesGPUArraysExt
      -  1 dependency successfully precompiled in 2 seconds. 57 already precompiled.
      -Precompiling WeightInitializersGPUArraysExt...
      -   1718.4 ms  ✓ WeightInitializers → WeightInitializersGPUArraysExt
      -  1 dependency successfully precompiled in 2 seconds. 60 already precompiled.
      +  45981.1 ms  ✓ CUDA
      +   4883.1 ms  ✓ Atomix → AtomixCUDAExt
      +   8068.2 ms  ✓ cuDNN
      +   5238.1 ms  ✓ LuxCUDA
      +  4 dependencies successfully precompiled in 64 seconds. 98 already precompiled.
       Precompiling ComponentArraysGPUArraysExt...
      -   1879.8 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
      +   1844.3 ms  ✓ ComponentArrays → ComponentArraysGPUArraysExt
         1 dependency successfully precompiled in 2 seconds. 84 already precompiled.
      -Precompiling ParsersExt...
      -    486.4 ms  ✓ InlineStrings → ParsersExt
      -  1 dependency successfully precompiled in 1 seconds. 9 already precompiled.
       Precompiling ArrayInterfaceCUDAExt...
      -   4955.2 ms  ✓ ArrayInterface → ArrayInterfaceCUDAExt
      +   4736.3 ms  ✓ ArrayInterface → ArrayInterfaceCUDAExt
         1 dependency successfully precompiled in 5 seconds. 103 already precompiled.
       Precompiling NNlibCUDAExt...
      -   4974.6 ms  ✓ CUDA → ChainRulesCoreExt
      -   5317.6 ms  ✓ NNlib → NNlibCUDAExt
      +   4930.1 ms  ✓ CUDA → ChainRulesCoreExt
      +   5731.7 ms  ✓ NNlib → NNlibCUDAExt
         2 dependencies successfully precompiled in 6 seconds. 104 already precompiled.
       Precompiling MLDataDevicesCUDAExt...
      -   4994.5 ms  ✓ MLDataDevices → MLDataDevicesCUDAExt
      +   4751.9 ms  ✓ MLDataDevices → MLDataDevicesCUDAExt
         1 dependency successfully precompiled in 5 seconds. 106 already precompiled.
       Precompiling LuxLibCUDAExt...
      -   5181.0 ms  ✓ CUDA → EnzymeCoreExt
      -   5284.5 ms  ✓ CUDA → SpecialFunctionsExt
      -   5833.5 ms  ✓ LuxLib → LuxLibCUDAExt
      +   5070.0 ms  ✓ CUDA → EnzymeCoreExt
      +   5159.6 ms  ✓ CUDA → SpecialFunctionsExt
      +   5610.4 ms  ✓ LuxLib → LuxLibCUDAExt
         3 dependencies successfully precompiled in 6 seconds. 169 already precompiled.
       Precompiling WeightInitializersCUDAExt...
      -   5018.4 ms  ✓ WeightInitializers → WeightInitializersCUDAExt
      +   4901.3 ms  ✓ WeightInitializers → WeightInitializersCUDAExt
         1 dependency successfully precompiled in 5 seconds. 111 already precompiled.
       Precompiling NNlibCUDACUDNNExt...
      -   5394.3 ms  ✓ NNlib → NNlibCUDACUDNNExt
      +   5615.2 ms  ✓ NNlib → NNlibCUDACUDNNExt
         1 dependency successfully precompiled in 6 seconds. 108 already precompiled.
       Precompiling MLDataDevicescuDNNExt...
      -   5182.8 ms  ✓ MLDataDevices → MLDataDevicescuDNNExt
      -  1 dependency successfully precompiled in 6 seconds. 109 already precompiled.
      +   4950.2 ms  ✓ MLDataDevices → MLDataDevicescuDNNExt
      +  1 dependency successfully precompiled in 5 seconds. 109 already precompiled.
       Precompiling LuxLibcuDNNExt...
      -   5880.1 ms  ✓ LuxLib → LuxLibcuDNNExt
      +   5761.4 ms  ✓ LuxLib → LuxLibcuDNNExt
         1 dependency successfully precompiled in 6 seconds. 176 already precompiled.
      -Precompiling MLDatasets...
      -    371.9 ms  ✓ ContextVariablesX
      -    497.1 ms  ✓ LoggingExtras
      -    818.8 ms  ✓ StructTypes
      -    555.4 ms  ✓ BangBang → BangBangChainRulesCoreExt
      -    532.6 ms  ✓ ExceptionUnwrapping
      -    630.5 ms  ✓ Accessors → TestExt
      -   1245.3 ms  ✓ SplittablesBase
      -   1345.1 ms  ✓ OpenMPI_jll
      -   1434.5 ms  ✓ MPICH_jll
      -    762.2 ms  ✓ WeakRefStrings
      -   2204.6 ms  ✓ AtomsBase
      -   1248.2 ms  ✓ MPItrampoline_jll
      -   1965.3 ms  ✓ ImageShow
      -   1508.6 ms  ✓ NPZ
      -   2277.0 ms  ✓ Pickle
      -   1654.9 ms  ✓ BangBang → BangBangDataFramesExt
      -    561.1 ms  ✓ FLoopsBase
      -  11194.3 ms  ✓ JSON3
      -   2865.5 ms  ✓ Transducers
      -  18568.5 ms  ✓ HTTP
      -   2237.4 ms  ✓ Chemfiles
      -   1487.7 ms  ✓ HDF5_jll
      -   1513.8 ms  ✓ Transducers → TransducersDataFramesExt
      -    703.0 ms  ✓ Transducers → TransducersAdaptExt
      -   5329.5 ms  ✓ FLoops
      -  33710.2 ms  ✓ JLD2
      -   1859.5 ms  ✓ FileIO → HTTPExt
      -  19361.0 ms  ✓ CSV
      -   3128.6 ms  ✓ DataDeps
      -   6299.5 ms  ✓ MLUtils
      -   7534.5 ms  ✓ HDF5
      -   2337.4 ms  ✓ MAT
      -   8931.5 ms  ✓ MLDatasets
      -  33 dependencies successfully precompiled in 59 seconds. 166 already precompiled.
      -Precompiling MLDataDevicesMLUtilsExt...
      -   1604.5 ms  ✓ MLDataDevices → MLDataDevicesMLUtilsExt
      -  1 dependency successfully precompiled in 2 seconds. 102 already precompiled.
       Precompiling LuxMLUtilsExt...
      -   2242.9 ms  ✓ Lux → LuxMLUtilsExt
      -  1 dependency successfully precompiled in 3 seconds. 167 already precompiled.
      -Precompiling OneHotArrays...
      -    958.0 ms  ✓ OneHotArrays
      -  1 dependency successfully precompiled in 1 seconds. 28 already precompiled.
      -Precompiling MLDataDevicesOneHotArraysExt...
      -    741.6 ms  ✓ MLDataDevices → MLDataDevicesOneHotArraysExt
      -  1 dependency successfully precompiled in 1 seconds. 35 already precompiled.
      -Precompiling Zygote...
      -    712.4 ms  ✓ StructArrays → StructArraysGPUArraysCoreExt
      -   1059.8 ms  ✓ ZygoteRules
      -   5340.2 ms  ✓ ChainRules
      -  32833.6 ms  ✓ Zygote
      -  4 dependencies successfully precompiled in 38 seconds. 98 already precompiled.
      -Precompiling ArrayInterfaceChainRulesExt...
      -    789.6 ms  ✓ ArrayInterface → ArrayInterfaceChainRulesExt
      -  1 dependency successfully precompiled in 1 seconds. 39 already precompiled.
      -Precompiling MLDataDevicesChainRulesExt...
      -    836.9 ms  ✓ MLDataDevices → MLDataDevicesChainRulesExt
      -  1 dependency successfully precompiled in 1 seconds. 40 already precompiled.
      -Precompiling MLDataDevicesZygoteExt...
      -   1601.4 ms  ✓ MLDataDevices → MLDataDevicesZygoteExt
      -  1 dependency successfully precompiled in 2 seconds. 109 already precompiled.
      +   2084.7 ms  ✓ Lux → LuxMLUtilsExt
      +  1 dependency successfully precompiled in 2 seconds. 167 already precompiled.
       Precompiling LuxZygoteExt...
      -   2791.8 ms  ✓ Lux → LuxZygoteExt
      +   2880.7 ms  ✓ Lux → LuxZygoteExt
         1 dependency successfully precompiled in 3 seconds. 166 already precompiled.
      -Precompiling ComponentArraysZygoteExt...
      -   1592.1 ms  ✓ ComponentArrays → ComponentArraysZygoteExt
      -  1 dependency successfully precompiled in 2 seconds. 117 already precompiled.
       Precompiling ZygoteColorsExt...
      -   1803.2 ms  ✓ Zygote → ZygoteColorsExt
      +   1843.1 ms  ✓ Zygote → ZygoteColorsExt
         1 dependency successfully precompiled in 2 seconds. 105 already precompiled.

      Loading Datasets

      julia
      function load_dataset(::Type{dset}, n_train::Union{Nothing, Int},
               n_eval::Union{Nothing, Int}, batchsize::Int) where {dset}
      -    if n_train === nothing
      -        imgs, labels = dset(:train)
      +    (; features, targets) = if n_train === nothing
      +        tmp = dset(:train)
      +        tmp[1:length(tmp)]
           else
      -        imgs, labels = dset(:train)[1:n_train]
      +        dset(:train)[1:n_train]
           end
      -    x_train, y_train = reshape(imgs, 28, 28, 1, n_train), onehotbatch(labels, 0:9)
      +    x_train, y_train = reshape(features, 28, 28, 1, :), onehotbatch(targets, 0:9)
       
      -    if n_eval === nothing
      -        imgs, labels = dset(:test)
      +    (; features, targets) = if n_eval === nothing
      +        tmp = dset(:test)
      +        tmp[1:length(tmp)]
           else
      -        imgs, labels = dset(:test)[1:n_eval]
      +        dset(:test)[1:n_eval]
           end
      -    x_test, y_test = reshape(imgs, 28, 28, 1, n_eval), onehotbatch(labels, 0:9)
      +    x_test, y_test = reshape(features, 28, 28, 1, :), onehotbatch(targets, 0:9)
       
           return (
      -        DataLoader((x_train, y_train); batchsize=min(batchsize, n_train), shuffle=true),
      -        DataLoader((x_test, y_test); batchsize=min(batchsize, n_eval), shuffle=false)
      +        DataLoader(
      +            (x_train, y_train); batchsize=min(batchsize, size(x_train, 4)), shuffle=true),
      +        DataLoader(
      +            (x_test, y_test); batchsize=min(batchsize, size(x_test, 4)), shuffle=false)
           )
       end
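The rewritten loader destructures the `(; features, targets)` named tuple that MLDatasets' dataset types return and lets `reshape(..., :)` infer the batch dimension, so passing `nothing` for the sample counts no longer breaks the reshape. A minimal usage sketch of the updated function (the sample counts and batch size below are illustrative, assuming MLDatasets, MLUtils, and OneHotArrays are loaded):

```julia
using MLDatasets, MLUtils, OneHotArrays

# Illustrative call: 512 training samples, 128 eval samples, batches of 32.
train_loader, test_loader = load_dataset(MNIST, 512, 128, 32)

x, y = first(train_loader)
@assert size(x) == (28, 28, 1, 32)  # WHCN layout expected by Lux conv layers
@assert size(y) == (10, 32)         # one-hot encoded digits 0:9
```

Because the `DataLoader` batch size is clamped with `min(batchsize, size(x_train, 4))`, the same call also works when fewer samples than one batch are requested.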
       
      @@ -283,109 +203,109 @@
           return test_acc_list
       end
       
      -test_acc_list = train()
-[  1/ 50]	       MNIST	Time 88.66826s	Training Accuracy: 56.45%	Test Accuracy: 56.25%
      -[  1/ 50]	FashionMNIST	Time 0.03836s	Training Accuracy: 53.71%	Test Accuracy: 53.12%
      -[  2/ 50]	       MNIST	Time 0.08196s	Training Accuracy: 66.50%	Test Accuracy: 65.62%
      -[  2/ 50]	FashionMNIST	Time 0.03446s	Training Accuracy: 59.96%	Test Accuracy: 50.00%
      -[  3/ 50]	       MNIST	Time 0.03057s	Training Accuracy: 79.20%	Test Accuracy: 65.62%
      -[  3/ 50]	FashionMNIST	Time 0.03504s	Training Accuracy: 65.53%	Test Accuracy: 59.38%
      -[  4/ 50]	       MNIST	Time 0.03339s	Training Accuracy: 77.05%	Test Accuracy: 62.50%
      -[  4/ 50]	FashionMNIST	Time 0.05684s	Training Accuracy: 67.19%	Test Accuracy: 71.88%
      -[  5/ 50]	       MNIST	Time 0.02395s	Training Accuracy: 83.30%	Test Accuracy: 68.75%
      -[  5/ 50]	FashionMNIST	Time 0.02341s	Training Accuracy: 72.66%	Test Accuracy: 68.75%
      -[  6/ 50]	       MNIST	Time 0.02367s	Training Accuracy: 88.38%	Test Accuracy: 81.25%
      -[  6/ 50]	FashionMNIST	Time 0.02576s	Training Accuracy: 74.51%	Test Accuracy: 62.50%
      -[  7/ 50]	       MNIST	Time 0.03584s	Training Accuracy: 90.53%	Test Accuracy: 81.25%
      -[  7/ 50]	FashionMNIST	Time 0.02394s	Training Accuracy: 73.44%	Test Accuracy: 71.88%
      -[  8/ 50]	       MNIST	Time 0.02397s	Training Accuracy: 91.99%	Test Accuracy: 78.12%
      -[  8/ 50]	FashionMNIST	Time 0.02374s	Training Accuracy: 77.34%	Test Accuracy: 78.12%
      -[  9/ 50]	       MNIST	Time 0.03774s	Training Accuracy: 94.43%	Test Accuracy: 81.25%
      -[  9/ 50]	FashionMNIST	Time 0.02355s	Training Accuracy: 81.35%	Test Accuracy: 75.00%
      -[ 10/ 50]	       MNIST	Time 0.02356s	Training Accuracy: 96.29%	Test Accuracy: 81.25%
      -[ 10/ 50]	FashionMNIST	Time 0.03722s	Training Accuracy: 79.98%	Test Accuracy: 56.25%
      -[ 11/ 50]	       MNIST	Time 0.02387s	Training Accuracy: 97.46%	Test Accuracy: 84.38%
      -[ 11/ 50]	FashionMNIST	Time 0.02326s	Training Accuracy: 77.15%	Test Accuracy: 68.75%
      -[ 12/ 50]	       MNIST	Time 0.02407s	Training Accuracy: 97.46%	Test Accuracy: 84.38%
      -[ 12/ 50]	FashionMNIST	Time 0.02445s	Training Accuracy: 80.57%	Test Accuracy: 71.88%
      -[ 13/ 50]	       MNIST	Time 0.03150s	Training Accuracy: 98.63%	Test Accuracy: 84.38%
      -[ 13/ 50]	FashionMNIST	Time 0.02388s	Training Accuracy: 80.08%	Test Accuracy: 68.75%
      -[ 14/ 50]	       MNIST	Time 0.02402s	Training Accuracy: 98.93%	Test Accuracy: 84.38%
      -[ 14/ 50]	FashionMNIST	Time 0.03091s	Training Accuracy: 81.25%	Test Accuracy: 62.50%
      -[ 15/ 50]	       MNIST	Time 0.02384s	Training Accuracy: 99.51%	Test Accuracy: 84.38%
      -[ 15/ 50]	FashionMNIST	Time 0.02378s	Training Accuracy: 81.05%	Test Accuracy: 65.62%
      -[ 16/ 50]	       MNIST	Time 0.02343s	Training Accuracy: 99.71%	Test Accuracy: 84.38%
      -[ 16/ 50]	FashionMNIST	Time 0.02407s	Training Accuracy: 82.52%	Test Accuracy: 62.50%
      -[ 17/ 50]	       MNIST	Time 0.02576s	Training Accuracy: 99.90%	Test Accuracy: 84.38%
      -[ 17/ 50]	FashionMNIST	Time 0.02382s	Training Accuracy: 83.79%	Test Accuracy: 62.50%
      -[ 18/ 50]	       MNIST	Time 0.02349s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 18/ 50]	FashionMNIST	Time 0.03048s	Training Accuracy: 84.47%	Test Accuracy: 68.75%
      -[ 19/ 50]	       MNIST	Time 0.02385s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 19/ 50]	FashionMNIST	Time 0.02377s	Training Accuracy: 85.35%	Test Accuracy: 65.62%
      -[ 20/ 50]	       MNIST	Time 0.02316s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 20/ 50]	FashionMNIST	Time 0.02373s	Training Accuracy: 86.82%	Test Accuracy: 59.38%
      -[ 21/ 50]	       MNIST	Time 0.02517s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 21/ 50]	FashionMNIST	Time 0.02388s	Training Accuracy: 87.79%	Test Accuracy: 59.38%
      -[ 22/ 50]	       MNIST	Time 0.02330s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 22/ 50]	FashionMNIST	Time 0.02971s	Training Accuracy: 87.40%	Test Accuracy: 65.62%
      -[ 23/ 50]	       MNIST	Time 0.02482s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 23/ 50]	FashionMNIST	Time 0.02376s	Training Accuracy: 87.60%	Test Accuracy: 62.50%
      -[ 24/ 50]	       MNIST	Time 0.02382s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 24/ 50]	FashionMNIST	Time 0.02386s	Training Accuracy: 87.79%	Test Accuracy: 68.75%
      -[ 25/ 50]	       MNIST	Time 0.02470s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 25/ 50]	FashionMNIST	Time 0.02365s	Training Accuracy: 88.96%	Test Accuracy: 65.62%
      -[ 26/ 50]	       MNIST	Time 0.02345s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 26/ 50]	FashionMNIST	Time 0.02828s	Training Accuracy: 89.55%	Test Accuracy: 71.88%
      -[ 27/ 50]	       MNIST	Time 0.02378s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 27/ 50]	FashionMNIST	Time 0.02444s	Training Accuracy: 90.62%	Test Accuracy: 68.75%
      -[ 28/ 50]	       MNIST	Time 0.02371s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 28/ 50]	FashionMNIST	Time 0.02400s	Training Accuracy: 91.21%	Test Accuracy: 75.00%
      -[ 29/ 50]	       MNIST	Time 0.02480s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 29/ 50]	FashionMNIST	Time 0.02456s	Training Accuracy: 91.99%	Test Accuracy: 75.00%
      -[ 30/ 50]	       MNIST	Time 0.02380s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 30/ 50]	FashionMNIST	Time 0.02846s	Training Accuracy: 92.38%	Test Accuracy: 75.00%
      -[ 31/ 50]	       MNIST	Time 0.02401s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 31/ 50]	FashionMNIST	Time 0.02382s	Training Accuracy: 93.07%	Test Accuracy: 71.88%
      -[ 32/ 50]	       MNIST	Time 0.02334s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 32/ 50]	FashionMNIST	Time 0.02383s	Training Accuracy: 92.97%	Test Accuracy: 75.00%
      -[ 33/ 50]	       MNIST	Time 0.02533s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 33/ 50]	FashionMNIST	Time 0.02379s	Training Accuracy: 92.68%	Test Accuracy: 75.00%
      -[ 34/ 50]	       MNIST	Time 0.02396s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 34/ 50]	FashionMNIST	Time 0.02806s	Training Accuracy: 93.36%	Test Accuracy: 75.00%
      -[ 35/ 50]	       MNIST	Time 0.02376s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 35/ 50]	FashionMNIST	Time 0.02383s	Training Accuracy: 93.65%	Test Accuracy: 75.00%
      -[ 36/ 50]	       MNIST	Time 0.02370s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      -[ 36/ 50]	FashionMNIST	Time 0.02325s	Training Accuracy: 93.46%	Test Accuracy: 75.00%
      -[ 37/ 50]	       MNIST	Time 0.02495s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 37/ 50]	FashionMNIST	Time 0.02470s	Training Accuracy: 93.26%	Test Accuracy: 75.00%
      -[ 38/ 50]	       MNIST	Time 0.02389s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 38/ 50]	FashionMNIST	Time 0.03001s	Training Accuracy: 94.24%	Test Accuracy: 68.75%
      -[ 39/ 50]	       MNIST	Time 0.02408s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 39/ 50]	FashionMNIST	Time 0.02401s	Training Accuracy: 94.04%	Test Accuracy: 75.00%
      -[ 40/ 50]	       MNIST	Time 0.02407s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 40/ 50]	FashionMNIST	Time 0.02338s	Training Accuracy: 94.92%	Test Accuracy: 71.88%
      -[ 41/ 50]	       MNIST	Time 0.02520s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 41/ 50]	FashionMNIST	Time 0.02382s	Training Accuracy: 94.53%	Test Accuracy: 71.88%
      -[ 42/ 50]	       MNIST	Time 0.02382s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 42/ 50]	FashionMNIST	Time 0.02841s	Training Accuracy: 94.63%	Test Accuracy: 71.88%
      -[ 43/ 50]	       MNIST	Time 0.02395s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 43/ 50]	FashionMNIST	Time 0.02380s	Training Accuracy: 95.61%	Test Accuracy: 65.62%
      -[ 44/ 50]	       MNIST	Time 0.02326s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 44/ 50]	FashionMNIST	Time 0.02379s	Training Accuracy: 95.51%	Test Accuracy: 71.88%
      -[ 45/ 50]	       MNIST	Time 0.02467s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 45/ 50]	FashionMNIST	Time 0.02389s	Training Accuracy: 95.90%	Test Accuracy: 65.62%
      -[ 46/ 50]	       MNIST	Time 0.02387s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 46/ 50]	FashionMNIST	Time 0.02879s	Training Accuracy: 95.61%	Test Accuracy: 68.75%
      -[ 47/ 50]	       MNIST	Time 0.02407s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 47/ 50]	FashionMNIST	Time 0.02429s	Training Accuracy: 96.00%	Test Accuracy: 68.75%
      -[ 48/ 50]	       MNIST	Time 0.02374s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 48/ 50]	FashionMNIST	Time 0.02387s	Training Accuracy: 96.19%	Test Accuracy: 68.75%
      -[ 49/ 50]	       MNIST	Time 0.02529s	Training Accuracy: 100.00%	Test Accuracy: 87.50%
      -[ 49/ 50]	FashionMNIST	Time 0.02356s	Training Accuracy: 96.00%	Test Accuracy: 71.88%
      -[ 50/ 50]	       MNIST	Time 0.02376s	Training Accuracy: 100.00%	Test Accuracy: 90.62%
      -[ 50/ 50]	FashionMNIST	Time 0.02772s	Training Accuracy: 96.88%	Test Accuracy: 68.75%
      +test_acc_list = train()
+[  1/ 50]	       MNIST	Time 90.93891s	Training Accuracy: 58.30%	Test Accuracy: 50.00%
      +[  1/ 50]	FashionMNIST	Time 0.03589s	Training Accuracy: 52.25%	Test Accuracy: 40.62%
      +[  2/ 50]	       MNIST	Time 0.03529s	Training Accuracy: 65.82%	Test Accuracy: 59.38%
      +[  2/ 50]	FashionMNIST	Time 0.03677s	Training Accuracy: 61.43%	Test Accuracy: 53.12%
      +[  3/ 50]	       MNIST	Time 0.03784s	Training Accuracy: 78.71%	Test Accuracy: 62.50%
      +[  3/ 50]	FashionMNIST	Time 0.02366s	Training Accuracy: 63.87%	Test Accuracy: 65.62%
      +[  4/ 50]	       MNIST	Time 0.02413s	Training Accuracy: 78.91%	Test Accuracy: 59.38%
      +[  4/ 50]	FashionMNIST	Time 0.02402s	Training Accuracy: 62.70%	Test Accuracy: 50.00%
      +[  5/ 50]	       MNIST	Time 0.02468s	Training Accuracy: 83.01%	Test Accuracy: 71.88%
      +[  5/ 50]	FashionMNIST	Time 0.02561s	Training Accuracy: 66.60%	Test Accuracy: 59.38%
      +[  6/ 50]	       MNIST	Time 0.02672s	Training Accuracy: 87.40%	Test Accuracy: 71.88%
      +[  6/ 50]	FashionMNIST	Time 0.04283s	Training Accuracy: 75.39%	Test Accuracy: 56.25%
      +[  7/ 50]	       MNIST	Time 0.02878s	Training Accuracy: 90.92%	Test Accuracy: 78.12%
      +[  7/ 50]	FashionMNIST	Time 0.02569s	Training Accuracy: 77.73%	Test Accuracy: 65.62%
      +[  8/ 50]	       MNIST	Time 0.02505s	Training Accuracy: 91.99%	Test Accuracy: 78.12%
      +[  8/ 50]	FashionMNIST	Time 0.02606s	Training Accuracy: 75.68%	Test Accuracy: 71.88%
      +[  9/ 50]	       MNIST	Time 0.03860s	Training Accuracy: 95.41%	Test Accuracy: 78.12%
      +[  9/ 50]	FashionMNIST	Time 0.02483s	Training Accuracy: 80.57%	Test Accuracy: 71.88%
      +[ 10/ 50]	       MNIST	Time 0.02384s	Training Accuracy: 96.00%	Test Accuracy: 81.25%
      +[ 10/ 50]	FashionMNIST	Time 0.02516s	Training Accuracy: 80.08%	Test Accuracy: 78.12%
      +[ 11/ 50]	       MNIST	Time 0.04125s	Training Accuracy: 97.07%	Test Accuracy: 81.25%
      +[ 11/ 50]	FashionMNIST	Time 0.02846s	Training Accuracy: 82.13%	Test Accuracy: 75.00%
      +[ 12/ 50]	       MNIST	Time 0.02746s	Training Accuracy: 98.05%	Test Accuracy: 78.12%
      +[ 12/ 50]	FashionMNIST	Time 0.03960s	Training Accuracy: 83.30%	Test Accuracy: 68.75%
      +[ 13/ 50]	       MNIST	Time 0.02454s	Training Accuracy: 99.02%	Test Accuracy: 84.38%
      +[ 13/ 50]	FashionMNIST	Time 0.02544s	Training Accuracy: 84.57%	Test Accuracy: 78.12%
      +[ 14/ 50]	       MNIST	Time 0.03625s	Training Accuracy: 99.22%	Test Accuracy: 84.38%
      +[ 14/ 50]	FashionMNIST	Time 0.02589s	Training Accuracy: 85.94%	Test Accuracy: 71.88%
      +[ 15/ 50]	       MNIST	Time 0.02439s	Training Accuracy: 99.61%	Test Accuracy: 84.38%
      +[ 15/ 50]	FashionMNIST	Time 0.03808s	Training Accuracy: 86.72%	Test Accuracy: 78.12%
      +[ 16/ 50]	       MNIST	Time 0.02549s	Training Accuracy: 99.90%	Test Accuracy: 81.25%
      +[ 16/ 50]	FashionMNIST	Time 0.02385s	Training Accuracy: 87.60%	Test Accuracy: 65.62%
      +[ 17/ 50]	       MNIST	Time 0.03981s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 17/ 50]	FashionMNIST	Time 0.02388s	Training Accuracy: 89.26%	Test Accuracy: 62.50%
      +[ 18/ 50]	       MNIST	Time 0.02439s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 18/ 50]	FashionMNIST	Time 0.03753s	Training Accuracy: 88.48%	Test Accuracy: 68.75%
      +[ 19/ 50]	       MNIST	Time 0.02457s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 19/ 50]	FashionMNIST	Time 0.02341s	Training Accuracy: 89.84%	Test Accuracy: 71.88%
      +[ 20/ 50]	       MNIST	Time 0.04014s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 20/ 50]	FashionMNIST	Time 0.02331s	Training Accuracy: 88.67%	Test Accuracy: 75.00%
      +[ 21/ 50]	       MNIST	Time 0.02383s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 21/ 50]	FashionMNIST	Time 0.03570s	Training Accuracy: 89.94%	Test Accuracy: 68.75%
      +[ 22/ 50]	       MNIST	Time 0.02272s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 22/ 50]	FashionMNIST	Time 0.02405s	Training Accuracy: 90.82%	Test Accuracy: 75.00%
      +[ 23/ 50]	       MNIST	Time 0.03896s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 23/ 50]	FashionMNIST	Time 0.02609s	Training Accuracy: 91.50%	Test Accuracy: 75.00%
      +[ 24/ 50]	       MNIST	Time 0.02517s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 24/ 50]	FashionMNIST	Time 0.04117s	Training Accuracy: 90.53%	Test Accuracy: 78.12%
      +[ 25/ 50]	       MNIST	Time 0.02548s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 25/ 50]	FashionMNIST	Time 0.02606s	Training Accuracy: 89.45%	Test Accuracy: 75.00%
      +[ 26/ 50]	       MNIST	Time 0.04266s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 26/ 50]	FashionMNIST	Time 0.02453s	Training Accuracy: 88.96%	Test Accuracy: 71.88%
      +[ 27/ 50]	       MNIST	Time 0.02379s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 27/ 50]	FashionMNIST	Time 0.04063s	Training Accuracy: 89.84%	Test Accuracy: 68.75%
      +[ 28/ 50]	       MNIST	Time 0.02613s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      +[ 28/ 50]	FashionMNIST	Time 0.02591s	Training Accuracy: 90.04%	Test Accuracy: 68.75%
      +[ 29/ 50]	       MNIST	Time 0.04262s	Training Accuracy: 100.00%	Test Accuracy: 84.38%
      +[ 29/ 50]	FashionMNIST	Time 0.02624s	Training Accuracy: 89.94%	Test Accuracy: 75.00%
      +[ 30/ 50]	       MNIST	Time 0.02549s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 30/ 50]	FashionMNIST	Time 0.04256s	Training Accuracy: 90.82%	Test Accuracy: 75.00%
      +[ 31/ 50]	       MNIST	Time 0.02368s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 31/ 50]	FashionMNIST	Time 0.02326s	Training Accuracy: 92.29%	Test Accuracy: 71.88%
      +[ 32/ 50]	       MNIST	Time 0.03570s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 32/ 50]	FashionMNIST	Time 0.02291s	Training Accuracy: 92.97%	Test Accuracy: 68.75%
      +[ 33/ 50]	       MNIST	Time 0.02354s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 33/ 50]	FashionMNIST	Time 0.03821s	Training Accuracy: 93.75%	Test Accuracy: 75.00%
      +[ 34/ 50]	       MNIST	Time 0.02333s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 34/ 50]	FashionMNIST	Time 0.02589s	Training Accuracy: 93.16%	Test Accuracy: 68.75%
      +[ 35/ 50]	       MNIST	Time 0.04239s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 35/ 50]	FashionMNIST	Time 0.02489s	Training Accuracy: 94.04%	Test Accuracy: 71.88%
      +[ 36/ 50]	       MNIST	Time 0.02313s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 36/ 50]	FashionMNIST	Time 0.03971s	Training Accuracy: 94.53%	Test Accuracy: 71.88%
      +[ 37/ 50]	       MNIST	Time 0.02368s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 37/ 50]	FashionMNIST	Time 0.02640s	Training Accuracy: 94.43%	Test Accuracy: 71.88%
      +[ 38/ 50]	       MNIST	Time 0.04166s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 38/ 50]	FashionMNIST	Time 0.02454s	Training Accuracy: 95.12%	Test Accuracy: 75.00%
      +[ 39/ 50]	       MNIST	Time 0.02416s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 39/ 50]	FashionMNIST	Time 0.04181s	Training Accuracy: 95.21%	Test Accuracy: 75.00%
      +[ 40/ 50]	       MNIST	Time 0.02274s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 40/ 50]	FashionMNIST	Time 0.02501s	Training Accuracy: 95.41%	Test Accuracy: 75.00%
      +[ 41/ 50]	       MNIST	Time 0.04484s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 41/ 50]	FashionMNIST	Time 0.02362s	Training Accuracy: 95.51%	Test Accuracy: 75.00%
      +[ 42/ 50]	       MNIST	Time 0.02328s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 42/ 50]	FashionMNIST	Time 0.04095s	Training Accuracy: 96.09%	Test Accuracy: 75.00%
      +[ 43/ 50]	       MNIST	Time 0.02287s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 43/ 50]	FashionMNIST	Time 0.02328s	Training Accuracy: 96.09%	Test Accuracy: 75.00%
      +[ 44/ 50]	       MNIST	Time 0.04195s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 44/ 50]	FashionMNIST	Time 0.02327s	Training Accuracy: 96.29%	Test Accuracy: 75.00%
      +[ 45/ 50]	       MNIST	Time 0.02345s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 45/ 50]	FashionMNIST	Time 0.03968s	Training Accuracy: 96.39%	Test Accuracy: 75.00%
      +[ 46/ 50]	       MNIST	Time 0.02272s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 46/ 50]	FashionMNIST	Time 0.02340s	Training Accuracy: 96.68%	Test Accuracy: 75.00%
      +[ 47/ 50]	       MNIST	Time 0.03976s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 47/ 50]	FashionMNIST	Time 0.02335s	Training Accuracy: 96.58%	Test Accuracy: 78.12%
      +[ 48/ 50]	       MNIST	Time 0.02336s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 48/ 50]	FashionMNIST	Time 0.03924s	Training Accuracy: 96.97%	Test Accuracy: 75.00%
      +[ 49/ 50]	       MNIST	Time 0.02358s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 49/ 50]	FashionMNIST	Time 0.02355s	Training Accuracy: 96.97%	Test Accuracy: 78.12%
      +[ 50/ 50]	       MNIST	Time 0.04359s	Training Accuracy: 100.00%	Test Accuracy: 81.25%
      +[ 50/ 50]	FashionMNIST	Time 0.02275s	Training Accuracy: 97.17%	Test Accuracy: 75.00%
       
-[FINAL]	       MNIST	Training Accuracy: 100.00%	Test Accuracy: 90.62%
-[FINAL]	FashionMNIST	Training Accuracy: 96.88%	Test Accuracy: 68.75%
+[FINAL]	       MNIST	Training Accuracy: 100.00%	Test Accuracy: 81.25%
+[FINAL]	FashionMNIST	Training Accuracy: 97.17%	Test Accuracy: 75.00%

Appendix

julia
using InteractiveUtils
       InteractiveUtils.versioninfo()
       
       if @isdefined(MLDataDevices)
      @@ -398,8 +318,8 @@
               println()
               AMDGPU.versioninfo()
           end
-end
-Julia Version 1.11.2
-Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
+end
+Julia Version 1.11.3
+Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -437,15 +357,15 @@
       - CUDA_Runtime_jll: 0.15.5+0
       
       Toolchain:
      -- Julia: 1.11.2
      +- Julia: 1.11.3
       - LLVM: 16.0.6
       
       Environment:
       - JULIA_CUDA_HARD_MEMORY_LIMIT: 100%
       
       1 device:
-  0: NVIDIA A100-PCIE-40GB MIG 1g.5gb (sm_80, 3.170 GiB / 4.750 GiB available)
+  0: NVIDIA A100-PCIE-40GB MIG 1g.5gb (sm_80, 2.857 GiB / 4.750 GiB available)

This page was generated using Literate.jl.

      + \ No newline at end of file diff --git a/dev/tutorials/intermediate/4_PINN2DPDE.html b/dev/tutorials/intermediate/4_PINN2DPDE.html index 9d5d310f3b..c880f9b9a3 100644 --- a/dev/tutorials/intermediate/4_PINN2DPDE.html +++ b/dev/tutorials/intermediate/4_PINN2DPDE.html @@ -5,15 +5,15 @@ Training a PINN on 2D PDE | Lux.jl Docs - + - + - - - + + + @@ -183,84 +183,84 @@ trained_model = train_model(xyt, target_data, xyt_bc, target_bc) trained_u = Lux.testmode( StatefulLuxLayer{true}(trained_model.model.u, trained_model.ps.u, trained_model.st.u) -)
-2025-01-20 23:01:51.964035: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 4000489932546559643
      -E0120 23:01:52.335396 3951999 buffer_comparator.cc:156] Difference at 16: -nan, expected 11.6059
      -E0120 23:01:52.335702 3951999 buffer_comparator.cc:156] Difference at 17: -nan, expected 14.502
      -E0120 23:01:52.335707 3951999 buffer_comparator.cc:156] Difference at 18: -nan, expected 11.2449
      -E0120 23:01:52.335711 3951999 buffer_comparator.cc:156] Difference at 19: -nan, expected 10.0998
      -E0120 23:01:52.335715 3951999 buffer_comparator.cc:156] Difference at 20: -nan, expected 14.0222
      -E0120 23:01:52.335719 3951999 buffer_comparator.cc:156] Difference at 21: -nan, expected 10.1321
      -E0120 23:01:52.335722 3951999 buffer_comparator.cc:156] Difference at 22: -nan, expected 10.2986
      -E0120 23:01:52.335726 3951999 buffer_comparator.cc:156] Difference at 23: -nan, expected 14.1109
      -E0120 23:01:52.335730 3951999 buffer_comparator.cc:156] Difference at 24: -nan, expected 13.3463
      -E0120 23:01:52.335733 3951999 buffer_comparator.cc:156] Difference at 25: -nan, expected 12.8369
      -2025-01-20 23:01:52.335762: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.338530 3951999 buffer_comparator.cc:156] Difference at 16: -nan, expected 11.6059
      -E0120 23:01:52.338554 3951999 buffer_comparator.cc:156] Difference at 17: -nan, expected 14.502
      -E0120 23:01:52.338558 3951999 buffer_comparator.cc:156] Difference at 18: -nan, expected 11.2449
      -E0120 23:01:52.338562 3951999 buffer_comparator.cc:156] Difference at 19: -nan, expected 10.0998
      -E0120 23:01:52.338565 3951999 buffer_comparator.cc:156] Difference at 20: -nan, expected 14.0222
      -E0120 23:01:52.338569 3951999 buffer_comparator.cc:156] Difference at 21: -nan, expected 10.1321
      -E0120 23:01:52.338573 3951999 buffer_comparator.cc:156] Difference at 22: -nan, expected 10.2986
      -E0120 23:01:52.338576 3951999 buffer_comparator.cc:156] Difference at 23: -nan, expected 14.1109
      -E0120 23:01:52.338580 3951999 buffer_comparator.cc:156] Difference at 24: -nan, expected 13.3463
      -E0120 23:01:52.338584 3951999 buffer_comparator.cc:156] Difference at 25: -nan, expected 12.8369
      -2025-01-20 23:01:52.338590: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.341328 3951999 buffer_comparator.cc:156] Difference at 1024: -nan, expected 11.5293
      -E0120 23:01:52.341348 3951999 buffer_comparator.cc:156] Difference at 1025: -nan, expected 10.1983
      -E0120 23:01:52.341352 3951999 buffer_comparator.cc:156] Difference at 1026: -nan, expected 13.3385
      -E0120 23:01:52.341356 3951999 buffer_comparator.cc:156] Difference at 1027: -nan, expected 12.4705
      -E0120 23:01:52.341359 3951999 buffer_comparator.cc:156] Difference at 1028: -nan, expected 8.94387
      -E0120 23:01:52.341363 3951999 buffer_comparator.cc:156] Difference at 1029: -nan, expected 10.8997
      -E0120 23:01:52.341367 3951999 buffer_comparator.cc:156] Difference at 1030: -nan, expected 10.6486
      -E0120 23:01:52.341370 3951999 buffer_comparator.cc:156] Difference at 1031: -nan, expected 9.73507
      -E0120 23:01:52.341374 3951999 buffer_comparator.cc:156] Difference at 1032: -nan, expected 12.2806
      -E0120 23:01:52.341378 3951999 buffer_comparator.cc:156] Difference at 1033: -nan, expected 10.1883
      -2025-01-20 23:01:52.341384: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.344116 3951999 buffer_comparator.cc:156] Difference at 1040: -nan, expected 9.99799
      -E0120 23:01:52.344135 3951999 buffer_comparator.cc:156] Difference at 1041: -nan, expected 12.209
      -E0120 23:01:52.344140 3951999 buffer_comparator.cc:156] Difference at 1042: -nan, expected 9.4851
      -E0120 23:01:52.344143 3951999 buffer_comparator.cc:156] Difference at 1043: -nan, expected 8.26397
      -E0120 23:01:52.344147 3951999 buffer_comparator.cc:156] Difference at 1044: -nan, expected 11.9253
      -E0120 23:01:52.344152 3951999 buffer_comparator.cc:156] Difference at 1045: -nan, expected 8.99047
      -E0120 23:01:52.344156 3951999 buffer_comparator.cc:156] Difference at 1046: -nan, expected 8.81842
      -E0120 23:01:52.344159 3951999 buffer_comparator.cc:156] Difference at 1047: -nan, expected 12.2714
      -E0120 23:01:52.344163 3951999 buffer_comparator.cc:156] Difference at 1048: -nan, expected 11.1417
      -E0120 23:01:52.344167 3951999 buffer_comparator.cc:156] Difference at 1049: -nan, expected 10.6572
      -2025-01-20 23:01:52.344173: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.346894 3951999 buffer_comparator.cc:156] Difference at 1056: -nan, expected 10.6543
      -E0120 23:01:52.346907 3951999 buffer_comparator.cc:156] Difference at 1057: -nan, expected 11.0945
      -E0120 23:01:52.346910 3951999 buffer_comparator.cc:156] Difference at 1058: -nan, expected 11.1424
      -E0120 23:01:52.346913 3951999 buffer_comparator.cc:156] Difference at 1059: -nan, expected 12.7556
      -E0120 23:01:52.346916 3951999 buffer_comparator.cc:156] Difference at 1060: -nan, expected 12.6932
      -E0120 23:01:52.346919 3951999 buffer_comparator.cc:156] Difference at 1061: -nan, expected 10.0594
      -E0120 23:01:52.346921 3951999 buffer_comparator.cc:156] Difference at 1062: -nan, expected 12.3478
      -E0120 23:01:52.346924 3951999 buffer_comparator.cc:156] Difference at 1063: -nan, expected 10.8381
      -E0120 23:01:52.346926 3951999 buffer_comparator.cc:156] Difference at 1064: -nan, expected 10.409
      -E0120 23:01:52.346929 3951999 buffer_comparator.cc:156] Difference at 1065: -nan, expected 10.3688
      -2025-01-20 23:01:52.346934: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.349626 3951999 buffer_comparator.cc:156] Difference at 1056: -nan, expected 10.6543
      -E0120 23:01:52.349640 3951999 buffer_comparator.cc:156] Difference at 1057: -nan, expected 11.0945
      -E0120 23:01:52.349643 3951999 buffer_comparator.cc:156] Difference at 1058: -nan, expected 11.1424
      -E0120 23:01:52.349646 3951999 buffer_comparator.cc:156] Difference at 1059: -nan, expected 12.7556
      -E0120 23:01:52.349648 3951999 buffer_comparator.cc:156] Difference at 1060: -nan, expected 12.6932
      -E0120 23:01:52.349651 3951999 buffer_comparator.cc:156] Difference at 1061: -nan, expected 10.0594
      -E0120 23:01:52.349653 3951999 buffer_comparator.cc:156] Difference at 1062: -nan, expected 12.3478
      -E0120 23:01:52.349656 3951999 buffer_comparator.cc:156] Difference at 1063: -nan, expected 10.8381
      -E0120 23:01:52.349659 3951999 buffer_comparator.cc:156] Difference at 1064: -nan, expected 10.409
      -E0120 23:01:52.349661 3951999 buffer_comparator.cc:156] Difference at 1065: -nan, expected 10.3688
      -2025-01-20 23:01:52.349666: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:01:52.352365 3951999 buffer_comparator.cc:156] Difference at 1056: -nan, expected 10.6543
      -E0120 23:01:52.352384 3951999 buffer_comparator.cc:156] Difference at 1057: -nan, expected 11.0945
      -E0120 23:01:52.352387 3951999 buffer_comparator.cc:156] Difference at 1058: -nan, expected 11.1424
      -E0120 23:01:52.352389 3951999 buffer_comparator.cc:156] Difference at 1059: -nan, expected 12.7556
      -E0120 23:01:52.352392 3951999 buffer_comparator.cc:156] Difference at 1060: -nan, expected 12.6932
      -E0120 23:01:52.352395 3951999 buffer_comparator.cc:156] Difference at 1061: -nan, expected 10.0594
      -E0120 23:01:52.352397 3951999 buffer_comparator.cc:156] Difference at 1062: -nan, expected 12.3478
      -E0120 23:01:52.352400 3951999 buffer_comparator.cc:156] Difference at 1063: -nan, expected 10.8381
      -E0120 23:01:52.352402 3951999 buffer_comparator.cc:156] Difference at 1064: -nan, expected 10.409
      -E0120 23:01:52.352405 3951999 buffer_comparator.cc:156] Difference at 1065: -nan, expected 10.3688
      -2025-01-20 23:01:52.352411: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +)
      2025-01-24 04:44:46.958300: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 17932355638179910565
      +E0124 04:44:47.189670 1598851 buffer_comparator.cc:156] Difference at 16: 0, expected 11.6059
      +E0124 04:44:47.189735 1598851 buffer_comparator.cc:156] Difference at 17: 0, expected 14.502
      +E0124 04:44:47.189743 1598851 buffer_comparator.cc:156] Difference at 18: 0, expected 11.2449
      +E0124 04:44:47.189750 1598851 buffer_comparator.cc:156] Difference at 19: 0, expected 10.0998
      +E0124 04:44:47.189756 1598851 buffer_comparator.cc:156] Difference at 20: 0, expected 14.0222
      +E0124 04:44:47.189763 1598851 buffer_comparator.cc:156] Difference at 21: 0, expected 10.1321
      +E0124 04:44:47.189769 1598851 buffer_comparator.cc:156] Difference at 22: 0, expected 10.2986
      +E0124 04:44:47.189776 1598851 buffer_comparator.cc:156] Difference at 23: 0, expected 14.1109
      +E0124 04:44:47.189782 1598851 buffer_comparator.cc:156] Difference at 24: 0, expected 13.3463
      +E0124 04:44:47.189788 1598851 buffer_comparator.cc:156] Difference at 25: 0, expected 12.8369
      +2025-01-24 04:44:47.189808: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:44:47.192718 1598851 buffer_comparator.cc:156] Difference at 16: 0, expected 11.6059
      +E0124 04:44:47.192749 1598851 buffer_comparator.cc:156] Difference at 17: 0, expected 14.502
      +E0124 04:44:47.192756 1598851 buffer_comparator.cc:156] Difference at 18: 0, expected 11.2449
      +E0124 04:44:47.192762 1598851 buffer_comparator.cc:156] Difference at 19: 0, expected 10.0998
      +E0124 04:44:47.192769 1598851 buffer_comparator.cc:156] Difference at 20: 0, expected 14.0222
      +E0124 04:44:47.192775 1598851 buffer_comparator.cc:156] Difference at 21: 0, expected 10.1321
      +E0124 04:44:47.192781 1598851 buffer_comparator.cc:156] Difference at 22: 0, expected 10.2986
      +E0124 04:44:47.192788 1598851 buffer_comparator.cc:156] Difference at 23: 0, expected 14.1109
      +E0124 04:44:47.192794 1598851 buffer_comparator.cc:156] Difference at 24: 0, expected 13.3463
      +E0124 04:44:47.192800 1598851 buffer_comparator.cc:156] Difference at 25: 0, expected 12.8369
      +2025-01-24 04:44:47.192810: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:44:47.195610 1598851 buffer_comparator.cc:156] Difference at 1024: 0, expected 11.5293
      +E0124 04:44:47.195632 1598851 buffer_comparator.cc:156] Difference at 1025: 0, expected 10.1983
      +E0124 04:44:47.195636 1598851 buffer_comparator.cc:156] Difference at 1026: 0, expected 13.3385
      +E0124 04:44:47.195641 1598851 buffer_comparator.cc:156] Difference at 1027: 0, expected 12.4705
      +E0124 04:44:47.195645 1598851 buffer_comparator.cc:156] Difference at 1028: 0, expected 8.94387
      +E0124 04:44:47.195649 1598851 buffer_comparator.cc:156] Difference at 1029: 0, expected 10.8997
      +E0124 04:44:47.195653 1598851 buffer_comparator.cc:156] Difference at 1030: 0, expected 10.6486
      +E0124 04:44:47.195657 1598851 buffer_comparator.cc:156] Difference at 1031: 0, expected 9.73507
      +E0124 04:44:47.195662 1598851 buffer_comparator.cc:156] Difference at 1032: 0, expected 12.2806
      +E0124 04:44:47.195666 1598851 buffer_comparator.cc:156] Difference at 1033: 0, expected 10.1883
      +2025-01-24 04:44:47.195673: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:44:47.198396 1598851 buffer_comparator.cc:156] Difference at 1040: 0, expected 9.99799
      +E0124 04:44:47.198418 1598851 buffer_comparator.cc:156] Difference at 1041: 0, expected 12.209
      +E0124 04:44:47.198423 1598851 buffer_comparator.cc:156] Difference at 1042: 0, expected 9.4851
      +E0124 04:44:47.198427 1598851 buffer_comparator.cc:156] Difference at 1043: 0, expected 8.26397
      +E0124 04:44:47.198431 1598851 buffer_comparator.cc:156] Difference at 1044: 0, expected 11.9253
      +E0124 04:44:47.198436 1598851 buffer_comparator.cc:156] Difference at 1045: 0, expected 8.99047
      +E0124 04:44:47.198442 1598851 buffer_comparator.cc:156] Difference at 1046: 0, expected 8.81842
      +E0124 04:44:47.198446 1598851 buffer_comparator.cc:156] Difference at 1047: 0, expected 12.2714
      +E0124 04:44:47.198450 1598851 buffer_comparator.cc:156] Difference at 1048: 0, expected 11.1417
      +E0124 04:44:47.198454 1598851 buffer_comparator.cc:156] Difference at 1049: 0, expected 10.6572
      +2025-01-24 04:44:47.198461: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:44:47.201184 1598851 buffer_comparator.cc:156] Difference at 1056: 0, expected 10.6543
      +E0124 04:44:47.201209 1598851 buffer_comparator.cc:156] Difference at 1057: 0, expected 11.0945
      +E0124 04:44:47.201213 1598851 buffer_comparator.cc:156] Difference at 1058: 0, expected 11.1424
      +E0124 04:44:47.201218 1598851 buffer_comparator.cc:156] Difference at 1059: 0, expected 12.7556
      +E0124 04:44:47.201222 1598851 buffer_comparator.cc:156] Difference at 1060: 0, expected 12.6932
      +E0124 04:44:47.201226 1598851 buffer_comparator.cc:156] Difference at 1061: 0, expected 10.0594
      +E0124 04:44:47.201230 1598851 buffer_comparator.cc:156] Difference at 1062: 0, expected 12.3478
      +E0124 04:44:47.201234 1598851 buffer_comparator.cc:156] Difference at 1063: 0, expected 10.8381
      +E0124 04:44:47.201239 1598851 buffer_comparator.cc:156] Difference at 1064: 0, expected 10.409
      +E0124 04:44:47.201243 1598851 buffer_comparator.cc:156] Difference at 1065: 0, expected 10.3688
      +2025-01-24 04:44:47.201250: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:44:47.203979 1598851 buffer_comparator.cc:156] Difference at 1056: 0, expected 10.6543
      +E0124 04:44:47.203999 1598851 buffer_comparator.cc:156] Difference at 1057: 0, expected 11.0945
      +E0124 04:44:47.204003 1598851 buffer_comparator.cc:156] Difference at 1058: 0, expected 11.1424
      +E0124 04:44:47.204007 1598851 buffer_comparator.cc:156] Difference at 1059: 0, expected 12.7556
      +E0124 04:44:47.204012 1598851 buffer_comparator.cc:156] Difference at 1060: 0, expected 12.6932
      +E0124 04:44:47.204016 1598851 buffer_comparator.cc:156] Difference at 1061: 0, expected 10.0594
      +E0124 04:44:47.204020 1598851 buffer_comparator.cc:156] Difference at 1062: 0, expected 12.3478
      +E0124 04:44:47.204024 1598851 buffer_comparator.cc:156] Difference at 1063: 0, expected 10.8381
      +E0124 04:44:47.204028 1598851 buffer_comparator.cc:156] Difference at 1064: 0, expected 10.409
      +E0124 04:44:47.204033 1598851 buffer_comparator.cc:156] Difference at 1065: 0, expected 10.3688
      +2025-01-24 04:44:47.204039: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:44:47.206760 1598851 buffer_comparator.cc:156] Difference at 1056: 0, expected 10.6543
      +E0124 04:44:47.206781 1598851 buffer_comparator.cc:156] Difference at 1057: 0, expected 11.0945
      +E0124 04:44:47.206785 1598851 buffer_comparator.cc:156] Difference at 1058: 0, expected 11.1424
      +E0124 04:44:47.206790 1598851 buffer_comparator.cc:156] Difference at 1059: 0, expected 12.7556
      +E0124 04:44:47.206794 1598851 buffer_comparator.cc:156] Difference at 1060: 0, expected 12.6932
      +E0124 04:44:47.206798 1598851 buffer_comparator.cc:156] Difference at 1061: 0, expected 10.0594
      +E0124 04:44:47.206802 1598851 buffer_comparator.cc:156] Difference at 1062: 0, expected 12.3478
      +E0124 04:44:47.206807 1598851 buffer_comparator.cc:156] Difference at 1063: 0, expected 10.8381
      +E0124 04:44:47.206811 1598851 buffer_comparator.cc:156] Difference at 1064: 0, expected 10.409
      +E0124 04:44:47.206815 1598851 buffer_comparator.cc:156] Difference at 1065: 0, expected 10.3688
      +2025-01-24 04:44:47.206822: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
       Iteration: [     1/ 50000] 	 Loss: 3.158992767 (3.158992767) 	 Physics Loss: 1.982357264 (1.982357264) 	 Data Loss: 0.578243732 (0.578243732) 	 BC Loss: 0.598391712 (0.598391712)
       Iteration: [  1001/ 50000] 	 Loss: 0.034934171 (0.028678153) 	 Physics Loss: 0.000550666 (0.000427971) 	 Data Loss: 0.022407684 (0.011849238) 	 BC Loss: 0.011975820 (0.016400939)
       Iteration: [  2001/ 50000] 	 Loss: 0.029342793 (0.031756539) 	 Physics Loss: 0.001803906 (0.001986223) 	 Data Loss: 0.013016323 (0.012399685) 	 BC Loss: 0.014522564 (0.017370628)
      @@ -347,8 +347,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -366,7 +366,7 @@
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
         JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      - + \ No newline at end of file diff --git a/dev/tutorials/intermediate/5_ConvolutionalVAE.html b/dev/tutorials/intermediate/5_ConvolutionalVAE.html index 716fe8bff2..aedb69fdea 100644 --- a/dev/tutorials/intermediate/5_ConvolutionalVAE.html +++ b/dev/tutorials/intermediate/5_ConvolutionalVAE.html @@ -5,15 +5,15 @@ Convolutional VAE for MNIST using Reactant | Lux.jl Docs - + - + - - - + + + @@ -280,109 +280,109 @@ end img = main()
      ┌ Warning: `training` is set to `Val{false}()` but is being used within an autodiff call (gradient, jacobian, etc...). This might lead to incorrect results. If you are using a `Lux.jl` model, set it to training mode using `LuxCore.trainmode`.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-4/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:324
      -2025-01-20 23:18:11.917010: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 11895952635772526774
      +└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-16/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:324
      +2025-01-24 05:04:18.737046: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 12154174806817483476
       Total Trainable Parameters: 0.1493 M
      -Epoch 1, Iter 11, Loss: 44447.5625000, Throughput: 21.315431 im/s
      -Epoch 1, Train Loss: 62683.6601562, Time: 66.4955s, Throughput: 21.174372 im/s
      -Epoch 2, Iter 11, Loss: 31240.5175781, Throughput: 1604.194354 im/s
      -Epoch 2, Train Loss: 35970.2460938, Time: 0.8781s, Throughput: 1603.482196 im/s
      -Epoch 3, Iter 11, Loss: 25488.8515625, Throughput: 1647.041705 im/s
      -Epoch 3, Train Loss: 27952.8320312, Time: 0.8554s, Throughput: 1646.061570 im/s
      -Epoch 4, Iter 11, Loss: 22313.5644531, Throughput: 1692.354699 im/s
      -Epoch 4, Train Loss: 23722.8613281, Time: 0.8322s, Throughput: 1691.830117 im/s
      -Epoch 5, Iter 11, Loss: 19402.6035156, Throughput: 1711.697829 im/s
      -Epoch 5, Train Loss: 21031.9238281, Time: 0.8229s, Throughput: 1710.922240 im/s
      -Epoch 6, Iter 11, Loss: 18538.2871094, Throughput: 1669.197785 im/s
      -Epoch 6, Train Loss: 19078.5078125, Time: 0.8439s, Throughput: 1668.527162 im/s
      -Epoch 7, Iter 11, Loss: 16468.9140625, Throughput: 1752.542727 im/s
      -Epoch 7, Train Loss: 17712.4550781, Time: 0.8037s, Throughput: 1751.821146 im/s
      -Epoch 8, Iter 11, Loss: 16625.4785156, Throughput: 1705.391086 im/s
      -Epoch 8, Train Loss: 16778.6386719, Time: 0.8261s, Throughput: 1704.311769 im/s
      -Epoch 9, Iter 11, Loss: 15829.8203125, Throughput: 1735.845295 im/s
      -Epoch 9, Train Loss: 16046.4257812, Time: 0.8115s, Throughput: 1735.101199 im/s
      -Epoch 10, Iter 11, Loss: 15746.0839844, Throughput: 1788.674827 im/s
      -Epoch 10, Train Loss: 15447.8466797, Time: 0.7874s, Throughput: 1788.122411 im/s
      -Epoch 11, Iter 11, Loss: 15215.0312500, Throughput: 1759.938214 im/s
      -Epoch 11, Train Loss: 15064.4023438, Time: 0.8002s, Throughput: 1759.511911 im/s
      -Epoch 12, Iter 11, Loss: 14532.9414062, Throughput: 1778.945628 im/s
      -Epoch 12, Train Loss: 14696.0908203, Time: 0.7917s, Throughput: 1778.393313 im/s
      -Epoch 13, Iter 11, Loss: 14967.7109375, Throughput: 1797.568888 im/s
      -Epoch 13, Train Loss: 14457.2812500, Time: 0.7835s, Throughput: 1796.965581 im/s
      -Epoch 14, Iter 11, Loss: 14077.5800781, Throughput: 1789.538796 im/s
      -Epoch 14, Train Loss: 14026.6181641, Time: 0.7905s, Throughput: 1781.119643 im/s
      -Epoch 15, Iter 11, Loss: 13518.2919922, Throughput: 1743.134711 im/s
      -Epoch 15, Train Loss: 13632.0664062, Time: 0.8084s, Throughput: 1741.795939 im/s
      -Epoch 16, Iter 11, Loss: 13380.7929688, Throughput: 1819.513944 im/s
      -Epoch 16, Train Loss: 13282.1279297, Time: 0.7743s, Throughput: 1818.383929 im/s
      -Epoch 17, Iter 11, Loss: 12565.8125000, Throughput: 1825.887155 im/s
      -Epoch 17, Train Loss: 13235.3505859, Time: 0.7716s, Throughput: 1824.810108 im/s
      -Epoch 18, Iter 11, Loss: 12630.6835938, Throughput: 1754.058005 im/s
      -Epoch 18, Train Loss: 13102.6396484, Time: 0.8031s, Throughput: 1753.240960 im/s
      -Epoch 19, Iter 11, Loss: 12953.1679688, Throughput: 1817.470635 im/s
      -Epoch 19, Train Loss: 12875.8691406, Time: 0.7751s, Throughput: 1816.633136 im/s
      -Epoch 20, Iter 11, Loss: 12055.4677734, Throughput: 1825.307005 im/s
      -Epoch 20, Train Loss: 12572.4384766, Time: 0.7717s, Throughput: 1824.538367 im/s
      -Epoch 21, Iter 11, Loss: 13185.9873047, Throughput: 1810.913795 im/s
      -Epoch 21, Train Loss: 12500.9931641, Time: 0.7779s, Throughput: 1809.975813 im/s
      -Epoch 22, Iter 11, Loss: 12371.1367188, Throughput: 1812.958520 im/s
      -Epoch 22, Train Loss: 12382.7285156, Time: 0.7770s, Throughput: 1812.155757 im/s
      -Epoch 23, Iter 11, Loss: 12146.3300781, Throughput: 1825.927238 im/s
      -Epoch 23, Train Loss: 12117.9287109, Time: 0.7715s, Throughput: 1824.916684 im/s
      -Epoch 24, Iter 11, Loss: 12079.3994141, Throughput: 1401.442699 im/s
      -Epoch 24, Train Loss: 11997.0742188, Time: 1.0057s, Throughput: 1399.970944 im/s
      -Epoch 25, Iter 11, Loss: 11770.6054688, Throughput: 1620.130117 im/s
      -Epoch 25, Train Loss: 11831.6191406, Time: 0.8694s, Throughput: 1619.544968 im/s
      -Epoch 26, Iter 11, Loss: 12617.6123047, Throughput: 1716.463947 im/s
      -Epoch 26, Train Loss: 11800.9433594, Time: 0.8206s, Throughput: 1715.782232 im/s
      -Epoch 27, Iter 11, Loss: 11695.0839844, Throughput: 1838.151494 im/s
      -Epoch 27, Train Loss: 11604.6308594, Time: 0.7662s, Throughput: 1837.748226 im/s
      -Epoch 28, Iter 11, Loss: 11695.5996094, Throughput: 1806.788953 im/s
      -Epoch 28, Train Loss: 11651.4423828, Time: 0.7798s, Throughput: 1805.595737 im/s
      -Epoch 29, Iter 11, Loss: 11384.9062500, Throughput: 1784.109790 im/s
      -Epoch 29, Train Loss: 11569.6884766, Time: 0.7895s, Throughput: 1783.328058 im/s
      -Epoch 30, Iter 11, Loss: 10836.7968750, Throughput: 1764.955002 im/s
      -Epoch 30, Train Loss: 11339.8691406, Time: 0.8096s, Throughput: 1739.121482 im/s
      -Epoch 31, Iter 11, Loss: 11125.7089844, Throughput: 1698.424038 im/s
      -Epoch 31, Train Loss: 11223.2167969, Time: 0.8293s, Throughput: 1697.741932 im/s
      -Epoch 32, Iter 11, Loss: 11135.4785156, Throughput: 1667.973904 im/s
      -Epoch 32, Train Loss: 11096.8427734, Time: 0.8445s, Throughput: 1667.330154 im/s
      -Epoch 33, Iter 11, Loss: 12250.8027344, Throughput: 1685.428124 im/s
      -Epoch 33, Train Loss: 11205.4375000, Time: 0.8375s, Throughput: 1681.246007 im/s
      -Epoch 34, Iter 11, Loss: 11169.9980469, Throughput: 1724.074720 im/s
      -Epoch 34, Train Loss: 11285.1865234, Time: 0.8170s, Throughput: 1723.332131 im/s
      -Epoch 35, Iter 11, Loss: 10973.0673828, Throughput: 1673.721785 im/s
      -Epoch 35, Train Loss: 11315.8671875, Time: 0.8416s, Throughput: 1672.952260 im/s
      -Epoch 36, Iter 11, Loss: 10784.8652344, Throughput: 1689.407790 im/s
      -Epoch 36, Train Loss: 11000.4267578, Time: 0.8340s, Throughput: 1688.216841 im/s
      -Epoch 37, Iter 11, Loss: 10987.7080078, Throughput: 1660.775089 im/s
      -Epoch 37, Train Loss: 10908.0439453, Time: 0.8481s, Throughput: 1660.126616 im/s
      -Epoch 38, Iter 11, Loss: 10832.6396484, Throughput: 1662.204075 im/s
      -Epoch 38, Train Loss: 10761.0380859, Time: 0.8475s, Throughput: 1661.407709 im/s
      -Epoch 39, Iter 11, Loss: 10657.9746094, Throughput: 1650.764429 im/s
      -Epoch 39, Train Loss: 10716.2685547, Time: 0.8534s, Throughput: 1649.948099 im/s
      -Epoch 40, Iter 11, Loss: 11321.4941406, Throughput: 1726.715686 im/s
      -Epoch 40, Train Loss: 10696.7226562, Time: 0.8157s, Throughput: 1726.082812 im/s
      -Epoch 41, Iter 11, Loss: 10473.2099609, Throughput: 1808.169746 im/s
      -Epoch 41, Train Loss: 10672.8632812, Time: 0.7792s, Throughput: 1807.083633 im/s
      -Epoch 42, Iter 11, Loss: 10543.8730469, Throughput: 1697.991372 im/s
      -Epoch 42, Train Loss: 10663.1875000, Time: 0.8297s, Throughput: 1696.974060 im/s
      -Epoch 43, Iter 11, Loss: 11151.8281250, Throughput: 1742.673312 im/s
      -Epoch 43, Train Loss: 10626.8242188, Time: 0.8083s, Throughput: 1741.864781 im/s
      -Epoch 44, Iter 11, Loss: 10089.9355469, Throughput: 1790.705991 im/s
      -Epoch 44, Train Loss: 10622.2226562, Time: 0.7866s, Throughput: 1789.904907 im/s
      -Epoch 45, Iter 11, Loss: 10384.1025391, Throughput: 1799.232128 im/s
      -Epoch 45, Train Loss: 10386.0625000, Time: 0.7829s, Throughput: 1798.365896 im/s
      -Epoch 46, Iter 11, Loss: 10393.1582031, Throughput: 1808.824368 im/s
      -Epoch 46, Train Loss: 10321.5947266, Time: 0.7788s, Throughput: 1807.981533 im/s
      -Epoch 47, Iter 11, Loss: 10101.5517578, Throughput: 1813.404992 im/s
      -Epoch 47, Train Loss: 10217.5341797, Time: 0.7768s, Throughput: 1812.646342 im/s
      -Epoch 48, Iter 11, Loss: 10764.6113281, Throughput: 1812.191346 im/s
      -Epoch 48, Train Loss: 10252.5224609, Time: 0.7773s, Throughput: 1811.459271 im/s
      -Epoch 49, Iter 11, Loss: 10491.2128906, Throughput: 1725.773609 im/s
      -Epoch 49, Train Loss: 10155.1474609, Time: 0.8162s, Throughput: 1725.088512 im/s
      -Epoch 50, Iter 11, Loss: 10332.0097656, Throughput: 1723.435230 im/s
      -Epoch 50, Train Loss: 10062.9257812, Time: 0.8173s, Throughput: 1722.734901 im/s

      Appendix

      julia
      using InteractiveUtils
      +Epoch 1, Iter 11, Loss: 44148.5234375, Throughput: 21.693584 im/s
      +Epoch 1, Train Loss: 62822.9492188, Time: 65.3384s, Throughput: 21.549355 im/s
      +Epoch 2, Iter 11, Loss: 31040.7773438, Throughput: 2029.239494 im/s
      +Epoch 2, Train Loss: 36056.7773438, Time: 0.6942s, Throughput: 2028.260295 im/s
      +Epoch 3, Iter 11, Loss: 26118.6953125, Throughput: 2142.259650 im/s
      +Epoch 3, Train Loss: 27947.4707031, Time: 0.6576s, Throughput: 2141.216502 im/s
      +Epoch 4, Iter 11, Loss: 21178.0957031, Throughput: 2137.962330 im/s
      +Epoch 4, Train Loss: 23641.0703125, Time: 0.6589s, Throughput: 2136.940373 im/s
      +Epoch 5, Iter 11, Loss: 20625.9550781, Throughput: 2126.326315 im/s
      +Epoch 5, Train Loss: 20930.2343750, Time: 0.6624s, Throughput: 2125.485262 im/s
      +Epoch 6, Iter 11, Loss: 18482.9648438, Throughput: 2144.402726 im/s
      +Epoch 6, Train Loss: 18999.2207031, Time: 0.6569s, Throughput: 2143.551206 im/s
      +Epoch 7, Iter 11, Loss: 17671.8437500, Throughput: 2131.305874 im/s
      +Epoch 7, Train Loss: 17797.9199219, Time: 0.6610s, Throughput: 2130.061296 im/s
      +Epoch 8, Iter 11, Loss: 17184.2773438, Throughput: 2143.805657 im/s
      +Epoch 8, Train Loss: 16943.1210938, Time: 0.6570s, Throughput: 2142.963165 im/s
      +Epoch 9, Iter 11, Loss: 16169.7519531, Throughput: 2137.329390 im/s
      +Epoch 9, Train Loss: 16133.9785156, Time: 0.6590s, Throughput: 2136.488888 im/s
      +Epoch 10, Iter 11, Loss: 15327.4101562, Throughput: 2147.595323 im/s
      +Epoch 10, Train Loss: 15587.2353516, Time: 0.6559s, Throughput: 2146.557116 im/s
      +Epoch 11, Iter 11, Loss: 14786.2734375, Throughput: 2159.555169 im/s
      +Epoch 11, Train Loss: 15082.0771484, Time: 0.6523s, Throughput: 2158.651333 im/s
      +Epoch 12, Iter 11, Loss: 14255.7041016, Throughput: 2158.227698 im/s
      +Epoch 12, Train Loss: 14557.6679688, Time: 0.6526s, Throughput: 2157.371470 im/s
      +Epoch 13, Iter 11, Loss: 14895.6621094, Throughput: 2139.947919 im/s
      +Epoch 13, Train Loss: 14324.0869141, Time: 0.6583s, Throughput: 2138.907794 im/s
      +Epoch 14, Iter 11, Loss: 13865.3027344, Throughput: 2124.660927 im/s
      +Epoch 14, Train Loss: 14012.7285156, Time: 0.6629s, Throughput: 2123.863200 im/s
      +Epoch 15, Iter 11, Loss: 13444.6328125, Throughput: 2149.116195 im/s
      +Epoch 15, Train Loss: 13671.2695312, Time: 0.6554s, Throughput: 2148.257021 im/s
      +Epoch 16, Iter 11, Loss: 12813.3994141, Throughput: 2140.130161 im/s
      +Epoch 16, Train Loss: 13489.2880859, Time: 0.6582s, Throughput: 2139.300629 im/s
      +Epoch 17, Iter 11, Loss: 13994.4716797, Throughput: 2151.480708 im/s
      +Epoch 17, Train Loss: 13345.4775391, Time: 0.6548s, Throughput: 2150.376099 im/s
      +Epoch 18, Iter 11, Loss: 13775.0205078, Throughput: 2156.532458 im/s
      +Epoch 18, Train Loss: 13114.6132812, Time: 0.6532s, Throughput: 2155.703542 im/s
      +Epoch 19, Iter 11, Loss: 12336.1689453, Throughput: 2136.979036 im/s
      +Epoch 19, Train Loss: 12843.8837891, Time: 0.6592s, Throughput: 2136.042229 im/s
      +Epoch 20, Iter 11, Loss: 12736.4951172, Throughput: 2153.497780 im/s
      +Epoch 20, Train Loss: 12550.9990234, Time: 0.6541s, Throughput: 2152.628038 im/s
      +Epoch 21, Iter 11, Loss: 12392.1289062, Throughput: 2148.699421 im/s
      +Epoch 21, Train Loss: 12481.2226562, Time: 0.6556s, Throughput: 2147.589075 im/s
      +Epoch 22, Iter 11, Loss: 12498.3085938, Throughput: 2151.149206 im/s
      +Epoch 22, Train Loss: 12354.0312500, Time: 0.6548s, Throughput: 2150.160011 im/s
      +Epoch 23, Iter 11, Loss: 12620.2519531, Throughput: 2162.225307 im/s
      +Epoch 23, Train Loss: 12183.5683594, Time: 0.6515s, Throughput: 2161.187146 im/s
      +Epoch 24, Iter 11, Loss: 11805.5263672, Throughput: 2133.556567 im/s
      +Epoch 24, Train Loss: 12003.1621094, Time: 0.6602s, Throughput: 2132.684370 im/s
      +Epoch 25, Iter 11, Loss: 12150.3369141, Throughput: 2148.001512 im/s
      +Epoch 25, Train Loss: 11926.1621094, Time: 0.6559s, Throughput: 2146.786528 im/s
      +Epoch 26, Iter 11, Loss: 12146.6865234, Throughput: 2148.060891 im/s
      +Epoch 26, Train Loss: 11826.6103516, Time: 0.6558s, Throughput: 2147.153377 im/s
      +Epoch 27, Iter 11, Loss: 12543.7255859, Throughput: 2159.248016 im/s
      +Epoch 27, Train Loss: 11727.1728516, Time: 0.6524s, Throughput: 2158.162235 im/s
      +Epoch 28, Iter 11, Loss: 11681.7880859, Throughput: 2157.735638 im/s
      +Epoch 28, Train Loss: 11689.2412109, Time: 0.6529s, Throughput: 2156.505684 im/s
      +Epoch 29, Iter 11, Loss: 11415.7578125, Throughput: 2136.720018 im/s
      +Epoch 29, Train Loss: 11647.7333984, Time: 0.6593s, Throughput: 2135.650591 im/s
      +Epoch 30, Iter 11, Loss: 11249.3818359, Throughput: 2146.128855 im/s
      +Epoch 30, Train Loss: 11345.8574219, Time: 0.6563s, Throughput: 2145.236221 im/s
      +Epoch 31, Iter 11, Loss: 10843.5869141, Throughput: 2151.786438 im/s
      +Epoch 31, Train Loss: 11344.9736328, Time: 0.6546s, Throughput: 2150.777074 im/s
      +Epoch 32, Iter 11, Loss: 11195.3828125, Throughput: 2154.317928 im/s
      +Epoch 32, Train Loss: 11353.8515625, Time: 0.6538s, Throughput: 2153.409047 im/s
      +Epoch 33, Iter 11, Loss: 11060.8808594, Throughput: 2141.569024 im/s
      +Epoch 33, Train Loss: 11371.8496094, Time: 0.6578s, Throughput: 2140.559134 im/s
      +Epoch 34, Iter 11, Loss: 10756.1621094, Throughput: 2126.117329 im/s
      +Epoch 34, Train Loss: 11223.6240234, Time: 0.6625s, Throughput: 2125.161722 im/s
      +Epoch 35, Iter 11, Loss: 10554.7626953, Throughput: 2145.066353 im/s
      +Epoch 35, Train Loss: 11073.7353516, Time: 0.6567s, Throughput: 2144.053941 im/s
      +Epoch 36, Iter 11, Loss: 10349.1953125, Throughput: 2142.992715 im/s
      +Epoch 36, Train Loss: 10935.1337891, Time: 0.6574s, Throughput: 2141.920108 im/s
      +Epoch 37, Iter 11, Loss: 11334.4550781, Throughput: 2130.512374 im/s
      +Epoch 37, Train Loss: 10844.4248047, Time: 0.6612s, Throughput: 2129.446078 im/s
      +Epoch 38, Iter 11, Loss: 11061.8173828, Throughput: 2157.797922 im/s
      +Epoch 38, Train Loss: 10925.8564453, Time: 0.6528s, Throughput: 2156.764007 im/s
      +Epoch 39, Iter 11, Loss: 10529.3828125, Throughput: 2132.393282 im/s
      +Epoch 39, Train Loss: 10848.6083984, Time: 0.6606s, Throughput: 2131.405103 im/s
      +Epoch 40, Iter 11, Loss: 10760.2636719, Throughput: 2128.605625 im/s
      +Epoch 40, Train Loss: 10779.3212891, Time: 0.6618s, Throughput: 2127.617886 im/s
      +Epoch 41, Iter 11, Loss: 11441.4560547, Throughput: 2162.281516 im/s
      +Epoch 41, Train Loss: 10626.4257812, Time: 0.6515s, Throughput: 2161.246465 im/s
      +Epoch 42, Iter 11, Loss: 11216.9746094, Throughput: 2133.911968 im/s
      +Epoch 42, Train Loss: 10579.4189453, Time: 0.6601s, Throughput: 2132.967832 im/s
      +Epoch 43, Iter 11, Loss: 11094.8496094, Throughput: 2148.804968 im/s
      +Epoch 43, Train Loss: 10495.9335938, Time: 0.6555s, Throughput: 2147.831206 im/s
      +Epoch 44, Iter 11, Loss: 10862.9042969, Throughput: 2126.862364 im/s
      +Epoch 44, Train Loss: 10442.4248047, Time: 0.6623s, Throughput: 2125.893078 im/s
      +Epoch 45, Iter 11, Loss: 10530.5800781, Throughput: 2141.151290 im/s
      +Epoch 45, Train Loss: 10395.5947266, Time: 0.6579s, Throughput: 2140.002976 im/s
      +Epoch 46, Iter 11, Loss: 10718.9667969, Throughput: 2145.232324 im/s
      +Epoch 46, Train Loss: 10428.6806641, Time: 0.6566s, Throughput: 2144.337320 im/s
      +Epoch 47, Iter 11, Loss: 9776.3222656, Throughput: 2153.596730 im/s
      +Epoch 47, Train Loss: 10375.9951172, Time: 0.6541s, Throughput: 2152.418557 im/s
      +Epoch 48, Iter 11, Loss: 10743.1513672, Throughput: 2158.230853 im/s
      +Epoch 48, Train Loss: 10290.3066406, Time: 0.6527s, Throughput: 2157.172884 im/s
      +Epoch 49, Iter 11, Loss: 10566.3828125, Throughput: 2130.489316 im/s
      +Epoch 49, Train Loss: 10200.8916016, Time: 0.6611s, Throughput: 2129.628840 im/s
      +Epoch 50, Iter 11, Loss: 9890.2968750, Throughput: 2162.534099 im/s
      +Epoch 50, Train Loss: 10062.1943359, Time: 0.6514s, Throughput: 2161.448175 im/s

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
       
       if @isdefined(MLDataDevices)
      @@ -395,8 +395,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -414,7 +414,7 @@
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
         JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

      - + \ No newline at end of file diff --git a/dev/tutorials/intermediate/6_GCN_Cora.html b/dev/tutorials/intermediate/6_GCN_Cora.html index 26a7a49509..80328f9a34 100644 --- a/dev/tutorials/intermediate/6_GCN_Cora.html +++ b/dev/tutorials/intermediate/6_GCN_Cora.html @@ -5,15 +5,15 @@ Graph Convolutional Networks on Cora | Lux.jl Docs - + - + - - - + + + @@ -147,1190 +147,1190 @@ end main()
      ┌ Warning: `replicate` doesn't work for `TaskLocalRNG`. Returning the same `TaskLocalRNG`.
      -└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
      +└ @ LuxCore /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/lib/LuxCore/src/LuxCore.jl:18
       Total Trainable Parameters: 0.0964 M
       ┌ Warning: `training` is set to `Val{false}()` but is being used within an autodiff call (gradient, jacobian, etc...). This might lead to incorrect results. If you are using a `Lux.jl` model, set it to training mode using `LuxCore.trainmode`.
      -└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-9/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:324
      -2025-01-20 22:59:29.678566: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 3250673994701609345
      -2025-01-20 22:59:30.518900: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 108 bytes spill stores, 108 bytes spill loads
      +└ @ LuxLib.Utils /var/lib/buildkite-agent/builds/gpuci-11/julialang/lux-dot-jl/lib/LuxLib/src/utils.jl:324
      +2025-01-24 04:57:23.183688: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 6197855343288158519
      +2025-01-24 04:57:23.727855: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 24 bytes spill stores, 24 bytes spill loads
       
      -2025-01-20 22:59:30.527924: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 24 bytes spill stores, 24 bytes spill loads
      +2025-01-24 04:57:23.739295: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 108 bytes spill stores, 108 bytes spill loads
       
      -2025-01-20 22:59:30.647255: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 108 bytes spill stores, 108 bytes spill loads
      +2025-01-24 04:57:23.838790: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 328 bytes spill stores, 328 bytes spill loads
       
      -2025-01-20 22:59:30.698953: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 328 bytes spill stores, 328 bytes spill loads
      +2025-01-24 04:57:23.953902: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_35_0', 48 bytes spill stores, 48 bytes spill loads
       
      -2025-01-20 22:59:30.738672: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 376 bytes spill stores, 376 bytes spill loads
      +2025-01-24 04:57:23.989053: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 132 bytes spill stores, 132 bytes spill loads
       
      -2025-01-20 22:59:30.764969: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 856 bytes spill stores, 808 bytes spill loads
      +2025-01-24 04:57:24.022760: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 108 bytes spill stores, 108 bytes spill loads
       
      -2025-01-20 22:59:30.821027: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_35_0', 48 bytes spill stores, 48 bytes spill loads
      +2025-01-24 04:57:24.125019: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 376 bytes spill stores, 376 bytes spill loads
       
      -2025-01-20 22:59:30.833276: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 132 bytes spill stores, 132 bytes spill loads
      +2025-01-24 04:57:24.127351: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_22', 856 bytes spill stores, 808 bytes spill loads
       
      -E0120 22:59:31.077662 3944451 buffer_comparator.cc:156] Difference at 112: 513.993, expected 949.79
      -E0120 22:59:31.078796 3944451 buffer_comparator.cc:156] Difference at 113: 357.807, expected 1213.3
      -E0120 22:59:31.078805 3944451 buffer_comparator.cc:156] Difference at 114: 585.471, expected 1837.44
      -E0120 22:59:31.078812 3944451 buffer_comparator.cc:156] Difference at 115: 420.444, expected 1600.27
      -E0120 22:59:31.078818 3944451 buffer_comparator.cc:156] Difference at 116: 302.398, expected 1117.2
      -E0120 22:59:31.078825 3944451 buffer_comparator.cc:156] Difference at 117: 386.144, expected 1790.83
      -E0120 22:59:31.078832 3944451 buffer_comparator.cc:156] Difference at 118: 587.071, expected 1291.05
      -E0120 22:59:31.078839 3944451 buffer_comparator.cc:156] Difference at 119: 508.94, expected 943.242
      -E0120 22:59:31.078846 3944451 buffer_comparator.cc:156] Difference at 120: 358.918, expected 1198.86
      -E0120 22:59:31.078852 3944451 buffer_comparator.cc:156] Difference at 121: 578.094, expected 1820.15
      -2025-01-20 22:59:31.078872: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.081918 3944451 buffer_comparator.cc:156] Difference at 112: 513.993, expected 949.79
      -E0120 22:59:31.081947 3944451 buffer_comparator.cc:156] Difference at 113: 357.807, expected 1213.3
      -E0120 22:59:31.081954 3944451 buffer_comparator.cc:156] Difference at 114: 585.471, expected 1837.44
      -E0120 22:59:31.081961 3944451 buffer_comparator.cc:156] Difference at 115: 420.444, expected 1600.27
      -E0120 22:59:31.081967 3944451 buffer_comparator.cc:156] Difference at 116: 302.398, expected 1117.2
      -E0120 22:59:31.081974 3944451 buffer_comparator.cc:156] Difference at 117: 386.144, expected 1790.83
      -E0120 22:59:31.081981 3944451 buffer_comparator.cc:156] Difference at 118: 587.071, expected 1291.05
      -E0120 22:59:31.081987 3944451 buffer_comparator.cc:156] Difference at 119: 508.94, expected 943.242
      -E0120 22:59:31.081994 3944451 buffer_comparator.cc:156] Difference at 120: 358.918, expected 1198.86
      -E0120 22:59:31.082002 3944451 buffer_comparator.cc:156] Difference at 121: 578.094, expected 1820.15
      -2025-01-20 22:59:31.082013: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.084990 3944451 buffer_comparator.cc:156] Difference at 112: 513.993, expected 949.79
      -E0120 22:59:31.085004 3944451 buffer_comparator.cc:156] Difference at 113: 357.807, expected 1213.3
      -E0120 22:59:31.085007 3944451 buffer_comparator.cc:156] Difference at 114: 585.471, expected 1837.44
      -E0120 22:59:31.085010 3944451 buffer_comparator.cc:156] Difference at 115: 420.444, expected 1600.27
      -E0120 22:59:31.085013 3944451 buffer_comparator.cc:156] Difference at 116: 302.398, expected 1117.2
      -E0120 22:59:31.085016 3944451 buffer_comparator.cc:156] Difference at 117: 386.144, expected 1790.83
      -E0120 22:59:31.085019 3944451 buffer_comparator.cc:156] Difference at 118: 587.071, expected 1291.05
      -E0120 22:59:31.085022 3944451 buffer_comparator.cc:156] Difference at 119: 508.94, expected 943.242
      -E0120 22:59:31.085025 3944451 buffer_comparator.cc:156] Difference at 120: 358.918, expected 1198.86
      -E0120 22:59:31.085028 3944451 buffer_comparator.cc:156] Difference at 121: 578.094, expected 1820.15
      -2025-01-20 22:59:31.085032: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.087802 3944451 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.95
      -E0120 22:59:31.087817 3944451 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      -E0120 22:59:31.087821 3944451 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.8
      -E0120 22:59:31.087824 3944451 buffer_comparator.cc:156] Difference at 224: 1185.44, expected 942.345
      -E0120 22:59:31.087827 3944451 buffer_comparator.cc:156] Difference at 225: 1033.21, expected 1208.53
      -E0120 22:59:31.087830 3944451 buffer_comparator.cc:156] Difference at 226: 723.209, expected 1824.94
      -E0120 22:59:31.087833 3944451 buffer_comparator.cc:156] Difference at 227: 1155.3, expected 1592.15
      -E0120 22:59:31.087836 3944451 buffer_comparator.cc:156] Difference at 228: 842.032, expected 1119.85
      -E0120 22:59:31.087839 3944451 buffer_comparator.cc:156] Difference at 229: 632.011, expected 1778.8
      -E0120 22:59:31.087842 3944451 buffer_comparator.cc:156] Difference at 230: 809.938, expected 1283.28
      -2025-01-20 22:59:31.087847: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.090676 3944451 buffer_comparator.cc:156] Difference at 224: 1185.44, expected 942.345
      -E0120 22:59:31.090690 3944451 buffer_comparator.cc:156] Difference at 225: 1033.21, expected 1208.53
      -E0120 22:59:31.090693 3944451 buffer_comparator.cc:156] Difference at 226: 723.209, expected 1824.94
      -E0120 22:59:31.090696 3944451 buffer_comparator.cc:156] Difference at 227: 1155.3, expected 1592.15
      -E0120 22:59:31.090699 3944451 buffer_comparator.cc:156] Difference at 228: 842.032, expected 1119.85
      -E0120 22:59:31.090702 3944451 buffer_comparator.cc:156] Difference at 229: 632.011, expected 1778.8
      -E0120 22:59:31.090705 3944451 buffer_comparator.cc:156] Difference at 230: 809.938, expected 1283.28
      -E0120 22:59:31.090708 3944451 buffer_comparator.cc:156] Difference at 231: 1217.57, expected 935.373
      -E0120 22:59:31.090711 3944451 buffer_comparator.cc:156] Difference at 232: 1063.63, expected 1192.72
      -E0120 22:59:31.090714 3944451 buffer_comparator.cc:156] Difference at 233: 740.205, expected 1803.13
      -2025-01-20 22:59:31.090718: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.093486 3944451 buffer_comparator.cc:156] Difference at 224: 1185.44, expected 942.345
      -E0120 22:59:31.093502 3944451 buffer_comparator.cc:156] Difference at 225: 1033.21, expected 1208.53
      -E0120 22:59:31.093507 3944451 buffer_comparator.cc:156] Difference at 226: 723.209, expected 1824.94
      -E0120 22:59:31.093510 3944451 buffer_comparator.cc:156] Difference at 227: 1155.3, expected 1592.15
      -E0120 22:59:31.093513 3944451 buffer_comparator.cc:156] Difference at 228: 842.032, expected 1119.85
      -E0120 22:59:31.093516 3944451 buffer_comparator.cc:156] Difference at 229: 632.011, expected 1778.8
      -E0120 22:59:31.093519 3944451 buffer_comparator.cc:156] Difference at 230: 809.938, expected 1283.28
      -E0120 22:59:31.093522 3944451 buffer_comparator.cc:156] Difference at 231: 1217.57, expected 935.373
      -E0120 22:59:31.093525 3944451 buffer_comparator.cc:156] Difference at 232: 1063.63, expected 1192.72
      -E0120 22:59:31.093528 3944451 buffer_comparator.cc:156] Difference at 233: 740.205, expected 1803.13
      -2025-01-20 22:59:31.093533: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.096300 3944451 buffer_comparator.cc:156] Difference at 224: 1185.44, expected 942.345
      -E0120 22:59:31.096314 3944451 buffer_comparator.cc:156] Difference at 225: 1033.21, expected 1208.53
      -E0120 22:59:31.096317 3944451 buffer_comparator.cc:156] Difference at 226: 723.209, expected 1824.94
      -E0120 22:59:31.096321 3944451 buffer_comparator.cc:156] Difference at 227: 1155.3, expected 1592.15
      -E0120 22:59:31.096323 3944451 buffer_comparator.cc:156] Difference at 228: 842.032, expected 1119.85
      -E0120 22:59:31.096326 3944451 buffer_comparator.cc:156] Difference at 229: 632.011, expected 1778.8
      -E0120 22:59:31.096329 3944451 buffer_comparator.cc:156] Difference at 230: 809.938, expected 1283.28
      -E0120 22:59:31.096332 3944451 buffer_comparator.cc:156] Difference at 231: 1217.57, expected 935.373
      -E0120 22:59:31.096335 3944451 buffer_comparator.cc:156] Difference at 232: 1063.63, expected 1192.72
      -E0120 22:59:31.096338 3944451 buffer_comparator.cc:156] Difference at 233: 740.205, expected 1803.13
      -2025-01-20 22:59:31.096343: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.099157 3944451 buffer_comparator.cc:156] Difference at 448: 1213.2, expected 948.676
      -E0120 22:59:31.099170 3944451 buffer_comparator.cc:156] Difference at 449: 1055.43, expected 1198.64
      -E0120 22:59:31.099174 3944451 buffer_comparator.cc:156] Difference at 450: 735.576, expected 1813.49
      -E0120 22:59:31.099176 3944451 buffer_comparator.cc:156] Difference at 451: 1184.55, expected 1575.23
      -E0120 22:59:31.099179 3944451 buffer_comparator.cc:156] Difference at 452: 859.094, expected 1104.71
      -E0120 22:59:31.099182 3944451 buffer_comparator.cc:156] Difference at 453: 619.996, expected 1764.87
      -E0120 22:59:31.099185 3944451 buffer_comparator.cc:156] Difference at 454: 795.493, expected 1269.69
      -E0120 22:59:31.099188 3944451 buffer_comparator.cc:156] Difference at 455: 1199.74, expected 952.925
      -E0120 22:59:31.099191 3944451 buffer_comparator.cc:156] Difference at 456: 1044.81, expected 1213.59
      -E0120 22:59:31.099194 3944451 buffer_comparator.cc:156] Difference at 457: 732.124, expected 1821.28
      -2025-01-20 22:59:31.099199: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.101952 3944451 buffer_comparator.cc:156] Difference at 448: 1213.2, expected 948.676
      -E0120 22:59:31.101965 3944451 buffer_comparator.cc:156] Difference at 449: 1055.43, expected 1198.64
      -E0120 22:59:31.101968 3944451 buffer_comparator.cc:156] Difference at 450: 735.576, expected 1813.49
      -E0120 22:59:31.101971 3944451 buffer_comparator.cc:156] Difference at 451: 1184.55, expected 1575.23
      -E0120 22:59:31.101974 3944451 buffer_comparator.cc:156] Difference at 452: 859.094, expected 1104.71
      -E0120 22:59:31.101977 3944451 buffer_comparator.cc:156] Difference at 453: 619.996, expected 1764.87
      -E0120 22:59:31.101980 3944451 buffer_comparator.cc:156] Difference at 454: 795.493, expected 1269.69
      -E0120 22:59:31.101985 3944451 buffer_comparator.cc:156] Difference at 455: 1199.74, expected 952.925
      -E0120 22:59:31.101988 3944451 buffer_comparator.cc:156] Difference at 456: 1044.81, expected 1213.59
      -E0120 22:59:31.101991 3944451 buffer_comparator.cc:156] Difference at 457: 732.124, expected 1821.28
      -2025-01-20 22:59:31.101995: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.104768 3944451 buffer_comparator.cc:156] Difference at 448: 1213.2, expected 948.676
      -E0120 22:59:31.104785 3944451 buffer_comparator.cc:156] Difference at 449: 1055.43, expected 1198.64
      -E0120 22:59:31.104788 3944451 buffer_comparator.cc:156] Difference at 450: 735.576, expected 1813.49
      -E0120 22:59:31.104791 3944451 buffer_comparator.cc:156] Difference at 451: 1184.55, expected 1575.23
      -E0120 22:59:31.104794 3944451 buffer_comparator.cc:156] Difference at 452: 859.094, expected 1104.71
      -E0120 22:59:31.104797 3944451 buffer_comparator.cc:156] Difference at 453: 619.996, expected 1764.87
      -E0120 22:59:31.104800 3944451 buffer_comparator.cc:156] Difference at 454: 795.493, expected 1269.69
      -E0120 22:59:31.104803 3944451 buffer_comparator.cc:156] Difference at 455: 1199.74, expected 952.925
      -E0120 22:59:31.104805 3944451 buffer_comparator.cc:156] Difference at 456: 1044.81, expected 1213.59
      -E0120 22:59:31.104808 3944451 buffer_comparator.cc:156] Difference at 457: 732.124, expected 1821.28
      -2025-01-20 22:59:31.104813: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.107562 3944451 buffer_comparator.cc:156] Difference at 448: 1213.2, expected 948.676
      -E0120 22:59:31.107576 3944451 buffer_comparator.cc:156] Difference at 449: 1055.43, expected 1198.64
      -E0120 22:59:31.107579 3944451 buffer_comparator.cc:156] Difference at 450: 735.576, expected 1813.49
      -E0120 22:59:31.107582 3944451 buffer_comparator.cc:156] Difference at 451: 1184.55, expected 1575.23
      -E0120 22:59:31.107585 3944451 buffer_comparator.cc:156] Difference at 452: 859.094, expected 1104.71
      -E0120 22:59:31.107588 3944451 buffer_comparator.cc:156] Difference at 453: 619.996, expected 1764.87
      -E0120 22:59:31.107591 3944451 buffer_comparator.cc:156] Difference at 454: 795.493, expected 1269.69
      -E0120 22:59:31.107594 3944451 buffer_comparator.cc:156] Difference at 455: 1199.74, expected 952.925
      -E0120 22:59:31.107597 3944451 buffer_comparator.cc:156] Difference at 456: 1044.81, expected 1213.59
      -E0120 22:59:31.107600 3944451 buffer_comparator.cc:156] Difference at 457: 732.124, expected 1821.28
      -2025-01-20 22:59:31.107605: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.110365 3944451 buffer_comparator.cc:156] Difference at 448: 1213.2, expected 948.676
      -E0120 22:59:31.110379 3944451 buffer_comparator.cc:156] Difference at 449: 1055.43, expected 1198.64
      -E0120 22:59:31.110382 3944451 buffer_comparator.cc:156] Difference at 450: 735.576, expected 1813.49
      -E0120 22:59:31.110385 3944451 buffer_comparator.cc:156] Difference at 451: 1184.55, expected 1575.23
      -E0120 22:59:31.110388 3944451 buffer_comparator.cc:156] Difference at 452: 859.094, expected 1104.71
      -E0120 22:59:31.110391 3944451 buffer_comparator.cc:156] Difference at 453: 619.996, expected 1764.87
      -E0120 22:59:31.110394 3944451 buffer_comparator.cc:156] Difference at 454: 795.493, expected 1269.69
      -E0120 22:59:31.110396 3944451 buffer_comparator.cc:156] Difference at 455: 1199.74, expected 952.925
      -E0120 22:59:31.110399 3944451 buffer_comparator.cc:156] Difference at 456: 1044.81, expected 1213.59
      -E0120 22:59:31.110402 3944451 buffer_comparator.cc:156] Difference at 457: 732.124, expected 1821.28
      -2025-01-20 22:59:31.110407: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.113168 3944451 buffer_comparator.cc:156] Difference at 448: 1213.2, expected 948.676
      -E0120 22:59:31.113182 3944451 buffer_comparator.cc:156] Difference at 449: 1055.43, expected 1198.64
      -E0120 22:59:31.113185 3944451 buffer_comparator.cc:156] Difference at 450: 735.576, expected 1813.49
      -E0120 22:59:31.113188 3944451 buffer_comparator.cc:156] Difference at 451: 1184.55, expected 1575.23
      -E0120 22:59:31.113191 3944451 buffer_comparator.cc:156] Difference at 452: 859.094, expected 1104.71
      -E0120 22:59:31.113194 3944451 buffer_comparator.cc:156] Difference at 453: 619.996, expected 1764.87
      -E0120 22:59:31.113197 3944451 buffer_comparator.cc:156] Difference at 454: 795.493, expected 1269.69
      -E0120 22:59:31.113199 3944451 buffer_comparator.cc:156] Difference at 455: 1199.74, expected 952.925
      -E0120 22:59:31.113202 3944451 buffer_comparator.cc:156] Difference at 456: 1044.81, expected 1213.59
      -E0120 22:59:31.113205 3944451 buffer_comparator.cc:156] Difference at 457: 732.124, expected 1821.28
      -2025-01-20 22:59:31.113210: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.116093 3944451 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      -E0120 22:59:31.116107 3944451 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      -E0120 22:59:31.116110 3944451 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      -E0120 22:59:31.116113 3944451 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      -E0120 22:59:31.116116 3944451 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      -E0120 22:59:31.116119 3944451 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      -E0120 22:59:31.116122 3944451 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      -E0120 22:59:31.116125 3944451 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      -E0120 22:59:31.116128 3944451 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      -E0120 22:59:31.116131 3944451 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      -2025-01-20 22:59:31.116136: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.119021 3944451 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      -E0120 22:59:31.119034 3944451 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      -E0120 22:59:31.119037 3944451 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      -E0120 22:59:31.119040 3944451 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      -E0120 22:59:31.119043 3944451 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      -E0120 22:59:31.119046 3944451 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      -E0120 22:59:31.119049 3944451 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      -E0120 22:59:31.119052 3944451 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      -E0120 22:59:31.119055 3944451 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      -E0120 22:59:31.119058 3944451 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      -2025-01-20 22:59:31.119063: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.121781 3944451 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.95
      -E0120 22:59:31.121795 3944451 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      -E0120 22:59:31.121799 3944451 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.8
      -E0120 22:59:31.121802 3944451 buffer_comparator.cc:156] Difference at 266: 1047.35, expected 934.413
      -E0120 22:59:31.121807 3944451 buffer_comparator.cc:156] Difference at 270: 1246.8, expected 1101.54
      -E0120 22:59:31.121810 3944451 buffer_comparator.cc:156] Difference at 417: 1222.47, expected 1095.88
      -E0120 22:59:31.121814 3944451 buffer_comparator.cc:156] Difference at 521: 1725.84, expected 1550.38
      -E0120 22:59:31.121817 3944451 buffer_comparator.cc:156] Difference at 522: 1233.24, expected 1093.76
      -E0120 22:59:31.121820 3944451 buffer_comparator.cc:156] Difference at 620: 1247.46, expected 1121.08
      -E0120 22:59:31.121823 3944451 buffer_comparator.cc:156] Difference at 690: 1251.43, expected 1120.61
      -2025-01-20 22:59:31.121828: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.124551 3944451 buffer_comparator.cc:156] Difference at 0: 1057.27, expected 928.593
      -E0120 22:59:31.124566 3944451 buffer_comparator.cc:156] Difference at 1: 1319.15, expected 1186.89
      -E0120 22:59:31.124569 3944451 buffer_comparator.cc:156] Difference at 2: 2004.43, expected 1796.77
      -E0120 22:59:31.124572 3944451 buffer_comparator.cc:156] Difference at 3: 1745.74, expected 1565.84
      -E0120 22:59:31.124575 3944451 buffer_comparator.cc:156] Difference at 4: 1252.2, expected 1095.49
      -E0120 22:59:31.124578 3944451 buffer_comparator.cc:156] Difference at 7: 1175.57, expected 951.95
      -E0120 22:59:31.124581 3944451 buffer_comparator.cc:156] Difference at 8: 1398.75, expected 1216.24
      -E0120 22:59:31.124583 3944451 buffer_comparator.cc:156] Difference at 9: 2125.62, expected 1833.76
      -E0120 22:59:31.124586 3944451 buffer_comparator.cc:156] Difference at 10: 1878.38, expected 1592.37
      -E0120 22:59:31.124589 3944451 buffer_comparator.cc:156] Difference at 11: 1362.67, expected 1121.95
      -2025-01-20 22:59:31.124594: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.127422 3944451 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      -E0120 22:59:31.127436 3944451 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      -E0120 22:59:31.127439 3944451 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      -E0120 22:59:31.127442 3944451 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      -E0120 22:59:31.127445 3944451 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      -E0120 22:59:31.127448 3944451 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      -E0120 22:59:31.127451 3944451 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      -E0120 22:59:31.127454 3944451 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      -E0120 22:59:31.127456 3944451 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      -E0120 22:59:31.127459 3944451 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      -2025-01-20 22:59:31.127464: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.130250 3944451 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      -E0120 22:59:31.130264 3944451 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      -E0120 22:59:31.130267 3944451 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      -E0120 22:59:31.130270 3944451 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      -E0120 22:59:31.130273 3944451 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      -E0120 22:59:31.130276 3944451 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      -E0120 22:59:31.130279 3944451 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      -E0120 22:59:31.130282 3944451 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      -E0120 22:59:31.130285 3944451 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      -E0120 22:59:31.130290 3944451 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      -2025-01-20 22:59:31.130295: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.133056 3944451 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      -E0120 22:59:31.133071 3944451 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      -E0120 22:59:31.133074 3944451 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      -E0120 22:59:31.133077 3944451 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      -E0120 22:59:31.133080 3944451 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      -E0120 22:59:31.133083 3944451 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      -E0120 22:59:31.133086 3944451 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      -E0120 22:59:31.133089 3944451 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      -E0120 22:59:31.133092 3944451 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      -E0120 22:59:31.133095 3944451 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      -2025-01-20 22:59:31.133099: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.135925 3944451 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      -E0120 22:59:31.135939 3944451 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      -E0120 22:59:31.135942 3944451 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      -E0120 22:59:31.135945 3944451 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      -E0120 22:59:31.135948 3944451 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      -E0120 22:59:31.135951 3944451 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      -E0120 22:59:31.135954 3944451 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      -E0120 22:59:31.135956 3944451 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      -E0120 22:59:31.135959 3944451 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      -E0120 22:59:31.135962 3944451 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      -2025-01-20 22:59:31.135967: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.138807 3944451 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      -E0120 22:59:31.138823 3944451 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      -E0120 22:59:31.138826 3944451 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      -E0120 22:59:31.138829 3944451 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      -E0120 22:59:31.138832 3944451 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      -E0120 22:59:31.138835 3944451 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      -E0120 22:59:31.138838 3944451 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      -E0120 22:59:31.138841 3944451 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      -E0120 22:59:31.138843 3944451 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      -E0120 22:59:31.138846 3944451 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      -2025-01-20 22:59:31.138851: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.141858 3944451 buffer_comparator.cc:156] Difference at 1792: 1216.65, expected 926.778
      -E0120 22:59:31.141873 3944451 buffer_comparator.cc:156] Difference at 1793: 1058.09, expected 1190.76
      -E0120 22:59:31.141877 3944451 buffer_comparator.cc:156] Difference at 1794: 743.338, expected 1807.71
      -E0120 22:59:31.141880 3944451 buffer_comparator.cc:156] Difference at 1795: 1184.75, expected 1565.59
      -E0120 22:59:31.141883 3944451 buffer_comparator.cc:156] Difference at 1796: 852.404, expected 1101.04
      -E0120 22:59:31.141886 3944451 buffer_comparator.cc:156] Difference at 1797: 626.131, expected 1756.21
      -E0120 22:59:31.141889 3944451 buffer_comparator.cc:156] Difference at 1798: 799.781, expected 1272.34
      -E0120 22:59:31.141892 3944451 buffer_comparator.cc:156] Difference at 1799: 1209.98, expected 944.465
      -E0120 22:59:31.141895 3944451 buffer_comparator.cc:156] Difference at 1800: 1057.15, expected 1200.58
      -E0120 22:59:31.141898 3944451 buffer_comparator.cc:156] Difference at 1801: 742.39, expected 1808.36
      -2025-01-20 22:59:31.141903: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.156378 3944451 buffer_comparator.cc:156] Difference at 16: 0, expected 1111.43
      -E0120 22:59:31.156394 3944451 buffer_comparator.cc:156] Difference at 17: 0, expected 1083.84
      -E0120 22:59:31.156397 3944451 buffer_comparator.cc:156] Difference at 18: 0, expected 1092.57
      -E0120 22:59:31.156400 3944451 buffer_comparator.cc:156] Difference at 19: 0, expected 1118.75
      -E0120 22:59:31.156403 3944451 buffer_comparator.cc:156] Difference at 20: 0, expected 1088.49
      -E0120 22:59:31.156406 3944451 buffer_comparator.cc:156] Difference at 21: 0, expected 1083.52
      -E0120 22:59:31.156409 3944451 buffer_comparator.cc:156] Difference at 22: 0, expected 1097.64
      -E0120 22:59:31.156412 3944451 buffer_comparator.cc:156] Difference at 23: 0, expected 1122.75
      -E0120 22:59:31.156415 3944451 buffer_comparator.cc:156] Difference at 24: 0, expected 1084.65
      -E0120 22:59:31.156418 3944451 buffer_comparator.cc:156] Difference at 25: 0, expected 1084.58
      -2025-01-20 22:59:31.156423: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.167007 3944451 buffer_comparator.cc:156] Difference at 64: 0, expected 1106.21
      -E0120 22:59:31.167022 3944451 buffer_comparator.cc:156] Difference at 65: 0, expected 1087.83
      -E0120 22:59:31.167025 3944451 buffer_comparator.cc:156] Difference at 66: 0, expected 1090.54
      -E0120 22:59:31.167028 3944451 buffer_comparator.cc:156] Difference at 67: 0, expected 1104.23
      -E0120 22:59:31.167031 3944451 buffer_comparator.cc:156] Difference at 68: 0, expected 1104.3
      -E0120 22:59:31.167034 3944451 buffer_comparator.cc:156] Difference at 69: 0, expected 1093.45
      -E0120 22:59:31.167037 3944451 buffer_comparator.cc:156] Difference at 70: 0, expected 1091.52
      -E0120 22:59:31.167039 3944451 buffer_comparator.cc:156] Difference at 71: 0, expected 1110.4
      -E0120 22:59:31.167042 3944451 buffer_comparator.cc:156] Difference at 72: 0, expected 1106.92
      -E0120 22:59:31.167045 3944451 buffer_comparator.cc:156] Difference at 73: 0, expected 1088.44
      -2025-01-20 22:59:31.167050: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.171423 3944451 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      -E0120 22:59:31.171437 3944451 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      -E0120 22:59:31.171440 3944451 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      -E0120 22:59:31.171443 3944451 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      -E0120 22:59:31.171446 3944451 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      -E0120 22:59:31.171449 3944451 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      -E0120 22:59:31.171452 3944451 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      -E0120 22:59:31.171455 3944451 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      -E0120 22:59:31.171457 3944451 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      -E0120 22:59:31.171460 3944451 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      -2025-01-20 22:59:31.171465: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:31.234459 3944451 buffer_comparator.cc:156] Difference at 256: 0, expected 1091.26
      -E0120 22:59:31.234474 3944451 buffer_comparator.cc:156] Difference at 257: 0, expected 1117.91
      -E0120 22:59:31.234477 3944451 buffer_comparator.cc:156] Difference at 258: 0, expected 1086.11
      -E0120 22:59:31.234480 3944451 buffer_comparator.cc:156] Difference at 259: 0, expected 1095.59
      -E0120 22:59:31.234483 3944451 buffer_comparator.cc:156] Difference at 260: 0, expected 1098.42
      -E0120 22:59:31.234486 3944451 buffer_comparator.cc:156] Difference at 261: 0, expected 1113.28
      -E0120 22:59:31.234488 3944451 buffer_comparator.cc:156] Difference at 262: 0, expected 1088.03
      -E0120 22:59:31.234491 3944451 buffer_comparator.cc:156] Difference at 263: 0, expected 1093.88
      -E0120 22:59:31.234494 3944451 buffer_comparator.cc:156] Difference at 264: 0, expected 1115.18
      -E0120 22:59:31.234497 3944451 buffer_comparator.cc:156] Difference at 265: 0, expected 1104.89
      -2025-01-20 22:59:31.234501: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -2025-01-20 22:59:35.131435: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_30', 216 bytes spill stores, 216 bytes spill loads
      +E0124 04:57:24.392006 1635447 buffer_comparator.cc:156] Difference at 16: 0, expected 1111.43
      +E0124 04:57:24.392070 1635447 buffer_comparator.cc:156] Difference at 17: 0, expected 1083.84
      +E0124 04:57:24.392074 1635447 buffer_comparator.cc:156] Difference at 18: 0, expected 1092.57
      +E0124 04:57:24.392079 1635447 buffer_comparator.cc:156] Difference at 19: 0, expected 1118.75
      +E0124 04:57:24.392083 1635447 buffer_comparator.cc:156] Difference at 20: 0, expected 1088.49
      +E0124 04:57:24.392087 1635447 buffer_comparator.cc:156] Difference at 21: 0, expected 1083.52
      +E0124 04:57:24.392091 1635447 buffer_comparator.cc:156] Difference at 22: 0, expected 1097.64
      +E0124 04:57:24.392095 1635447 buffer_comparator.cc:156] Difference at 23: 0, expected 1122.75
      +E0124 04:57:24.392098 1635447 buffer_comparator.cc:156] Difference at 24: 0, expected 1084.65
      +E0124 04:57:24.392102 1635447 buffer_comparator.cc:156] Difference at 25: 0, expected 1084.58
      +2025-01-24 04:57:24.392132: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.403031 1635447 buffer_comparator.cc:156] Difference at 64: 0, expected 1106.21
      +E0124 04:57:24.403047 1635447 buffer_comparator.cc:156] Difference at 65: 0, expected 1087.83
      +E0124 04:57:24.403051 1635447 buffer_comparator.cc:156] Difference at 66: 0, expected 1090.54
      +E0124 04:57:24.403054 1635447 buffer_comparator.cc:156] Difference at 67: 0, expected 1104.23
      +E0124 04:57:24.403056 1635447 buffer_comparator.cc:156] Difference at 68: 0, expected 1104.3
      +E0124 04:57:24.403059 1635447 buffer_comparator.cc:156] Difference at 69: 0, expected 1093.45
      +E0124 04:57:24.403062 1635447 buffer_comparator.cc:156] Difference at 70: 0, expected 1091.52
      +E0124 04:57:24.403065 1635447 buffer_comparator.cc:156] Difference at 71: 0, expected 1110.4
      +E0124 04:57:24.403068 1635447 buffer_comparator.cc:156] Difference at 72: 0, expected 1106.92
      +E0124 04:57:24.403071 1635447 buffer_comparator.cc:156] Difference at 73: 0, expected 1088.44
      +2025-01-24 04:57:24.403075: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.407521 1635447 buffer_comparator.cc:156] Difference at 128: 0, expected 1099.62
      +E0124 04:57:24.407535 1635447 buffer_comparator.cc:156] Difference at 129: 0, expected 1084.13
      +E0124 04:57:24.407538 1635447 buffer_comparator.cc:156] Difference at 130: 0, expected 1112.78
      +E0124 04:57:24.407541 1635447 buffer_comparator.cc:156] Difference at 131: 0, expected 1094.63
      +E0124 04:57:24.407544 1635447 buffer_comparator.cc:156] Difference at 132: 0, expected 1088.07
      +E0124 04:57:24.407547 1635447 buffer_comparator.cc:156] Difference at 133: 0, expected 1107.65
      +E0124 04:57:24.407549 1635447 buffer_comparator.cc:156] Difference at 134: 0, expected 1101.24
      +E0124 04:57:24.407552 1635447 buffer_comparator.cc:156] Difference at 135: 0, expected 1110.56
      +E0124 04:57:24.407555 1635447 buffer_comparator.cc:156] Difference at 136: 0, expected 1080.81
      +E0124 04:57:24.407558 1635447 buffer_comparator.cc:156] Difference at 137: 0, expected 1091.15
      +2025-01-24 04:57:24.407562: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.474657 1635447 buffer_comparator.cc:156] Difference at 256: 0, expected 1091.26
      +E0124 04:57:24.474672 1635447 buffer_comparator.cc:156] Difference at 257: 0, expected 1117.91
      +E0124 04:57:24.474675 1635447 buffer_comparator.cc:156] Difference at 258: 0, expected 1086.11
      +E0124 04:57:24.474678 1635447 buffer_comparator.cc:156] Difference at 259: 0, expected 1095.59
      +E0124 04:57:24.474681 1635447 buffer_comparator.cc:156] Difference at 260: 0, expected 1098.42
      +E0124 04:57:24.474683 1635447 buffer_comparator.cc:156] Difference at 261: 0, expected 1113.28
      +E0124 04:57:24.474686 1635447 buffer_comparator.cc:156] Difference at 262: 0, expected 1088.03
      +E0124 04:57:24.474689 1635447 buffer_comparator.cc:156] Difference at 263: 0, expected 1093.88
      +E0124 04:57:24.474694 1635447 buffer_comparator.cc:156] Difference at 264: 0, expected 1115.18
      +E0124 04:57:24.474696 1635447 buffer_comparator.cc:156] Difference at 265: 0, expected 1104.89
      +2025-01-24 04:57:24.474701: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.486858 1635447 buffer_comparator.cc:156] Difference at 112: 513.993, expected 949.79
      +E0124 04:57:24.486914 1635447 buffer_comparator.cc:156] Difference at 113: 357.807, expected 1213.3
      +E0124 04:57:24.486917 1635447 buffer_comparator.cc:156] Difference at 114: 585.471, expected 1837.44
      +E0124 04:57:24.486920 1635447 buffer_comparator.cc:156] Difference at 115: 420.444, expected 1600.27
      +E0124 04:57:24.486923 1635447 buffer_comparator.cc:156] Difference at 116: 302.398, expected 1117.2
      +E0124 04:57:24.486926 1635447 buffer_comparator.cc:156] Difference at 117: 386.144, expected 1790.83
      +E0124 04:57:24.486929 1635447 buffer_comparator.cc:156] Difference at 118: 587.071, expected 1291.05
      +E0124 04:57:24.486932 1635447 buffer_comparator.cc:156] Difference at 119: 508.94, expected 943.242
      +E0124 04:57:24.486936 1635447 buffer_comparator.cc:156] Difference at 120: 358.918, expected 1198.86
      +E0124 04:57:24.486938 1635447 buffer_comparator.cc:156] Difference at 121: 578.094, expected 1820.15
      +2025-01-24 04:57:24.486948: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.495770 1635447 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.95
      +E0124 04:57:24.495785 1635447 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      +E0124 04:57:24.495788 1635447 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.8
      +E0124 04:57:24.495792 1635447 buffer_comparator.cc:156] Difference at 224: 1187.54, expected 942.345
      +E0124 04:57:24.495795 1635447 buffer_comparator.cc:156] Difference at 225: 1035.45, expected 1208.53
      +E0124 04:57:24.495798 1635447 buffer_comparator.cc:156] Difference at 226: 725.657, expected 1824.94
      +E0124 04:57:24.495801 1635447 buffer_comparator.cc:156] Difference at 227: 1158.04, expected 1592.15
      +E0124 04:57:24.495804 1635447 buffer_comparator.cc:156] Difference at 228: 847.142, expected 1119.85
      +E0124 04:57:24.495806 1635447 buffer_comparator.cc:156] Difference at 229: 635.433, expected 1778.8
      +E0124 04:57:24.495809 1635447 buffer_comparator.cc:156] Difference at 230: 811.507, expected 1283.28
      +2025-01-24 04:57:24.495814: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.498765 1635447 buffer_comparator.cc:156] Difference at 224: 1187.54, expected 942.345
      +E0124 04:57:24.498780 1635447 buffer_comparator.cc:156] Difference at 225: 1035.45, expected 1208.53
      +E0124 04:57:24.498783 1635447 buffer_comparator.cc:156] Difference at 226: 725.657, expected 1824.94
      +E0124 04:57:24.498786 1635447 buffer_comparator.cc:156] Difference at 227: 1158.04, expected 1592.15
      +E0124 04:57:24.498789 1635447 buffer_comparator.cc:156] Difference at 228: 847.142, expected 1119.85
      +E0124 04:57:24.498792 1635447 buffer_comparator.cc:156] Difference at 229: 635.433, expected 1778.8
      +E0124 04:57:24.498795 1635447 buffer_comparator.cc:156] Difference at 230: 811.507, expected 1283.28
      +E0124 04:57:24.498798 1635447 buffer_comparator.cc:156] Difference at 231: 1220.34, expected 935.373
      +E0124 04:57:24.498801 1635447 buffer_comparator.cc:156] Difference at 232: 1066.07, expected 1192.72
      +E0124 04:57:24.498804 1635447 buffer_comparator.cc:156] Difference at 233: 743.671, expected 1803.13
      +2025-01-24 04:57:24.498809: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.506769 1635447 buffer_comparator.cc:156] Difference at 448: 1215.5, expected 948.676
      +E0124 04:57:24.506783 1635447 buffer_comparator.cc:156] Difference at 449: 1058.51, expected 1198.64
      +E0124 04:57:24.506786 1635447 buffer_comparator.cc:156] Difference at 450: 738.69, expected 1813.49
      +E0124 04:57:24.506789 1635447 buffer_comparator.cc:156] Difference at 451: 1187.68, expected 1575.23
      +E0124 04:57:24.506792 1635447 buffer_comparator.cc:156] Difference at 452: 862.098, expected 1104.71
      +E0124 04:57:24.506795 1635447 buffer_comparator.cc:156] Difference at 453: 623.212, expected 1764.87
      +E0124 04:57:24.506798 1635447 buffer_comparator.cc:156] Difference at 454: 798.796, expected 1269.69
      +E0124 04:57:24.506801 1635447 buffer_comparator.cc:156] Difference at 455: 1203.21, expected 952.925
      +E0124 04:57:24.506804 1635447 buffer_comparator.cc:156] Difference at 456: 1047.72, expected 1213.59
      +E0124 04:57:24.506806 1635447 buffer_comparator.cc:156] Difference at 457: 733.884, expected 1821.28
      +2025-01-24 04:57:24.506811: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.514123 1635447 buffer_comparator.cc:156] Difference at 448: 1214.22, expected 948.676
      +E0124 04:57:24.514139 1635447 buffer_comparator.cc:156] Difference at 449: 1056.45, expected 1198.64
      +E0124 04:57:24.514142 1635447 buffer_comparator.cc:156] Difference at 450: 736.847, expected 1813.49
      +E0124 04:57:24.514145 1635447 buffer_comparator.cc:156] Difference at 451: 1184.91, expected 1575.23
      +E0124 04:57:24.514148 1635447 buffer_comparator.cc:156] Difference at 452: 859.942, expected 1104.71
      +E0124 04:57:24.514151 1635447 buffer_comparator.cc:156] Difference at 453: 620.77, expected 1764.87
      +E0124 04:57:24.514154 1635447 buffer_comparator.cc:156] Difference at 454: 796.75, expected 1269.69
      +E0124 04:57:24.514157 1635447 buffer_comparator.cc:156] Difference at 455: 1201.02, expected 952.925
      +E0124 04:57:24.514160 1635447 buffer_comparator.cc:156] Difference at 456: 1045.45, expected 1213.59
      +E0124 04:57:24.514162 1635447 buffer_comparator.cc:156] Difference at 457: 732.834, expected 1821.28
      +2025-01-24 04:57:24.514167: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.516575 1635447 buffer_comparator.cc:156] Difference at 448: 1214.22, expected 948.676
      +E0124 04:57:24.516588 1635447 buffer_comparator.cc:156] Difference at 449: 1056.45, expected 1198.64
      +E0124 04:57:24.516591 1635447 buffer_comparator.cc:156] Difference at 450: 736.847, expected 1813.49
      +E0124 04:57:24.516594 1635447 buffer_comparator.cc:156] Difference at 451: 1184.91, expected 1575.23
      +E0124 04:57:24.516597 1635447 buffer_comparator.cc:156] Difference at 452: 859.942, expected 1104.71
      +E0124 04:57:24.516599 1635447 buffer_comparator.cc:156] Difference at 453: 620.77, expected 1764.87
      +E0124 04:57:24.516602 1635447 buffer_comparator.cc:156] Difference at 454: 796.75, expected 1269.69
      +E0124 04:57:24.516605 1635447 buffer_comparator.cc:156] Difference at 455: 1201.02, expected 952.925
      +E0124 04:57:24.516608 1635447 buffer_comparator.cc:156] Difference at 456: 1045.45, expected 1213.59
      +E0124 04:57:24.516611 1635447 buffer_comparator.cc:156] Difference at 457: 732.834, expected 1821.28
      +2025-01-24 04:57:24.516616: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.519012 1635447 buffer_comparator.cc:156] Difference at 448: 1214.22, expected 948.676
      +E0124 04:57:24.519025 1635447 buffer_comparator.cc:156] Difference at 449: 1056.45, expected 1198.64
      +E0124 04:57:24.519028 1635447 buffer_comparator.cc:156] Difference at 450: 736.847, expected 1813.49
      +E0124 04:57:24.519031 1635447 buffer_comparator.cc:156] Difference at 451: 1184.91, expected 1575.23
      +E0124 04:57:24.519034 1635447 buffer_comparator.cc:156] Difference at 452: 859.942, expected 1104.71
      +E0124 04:57:24.519037 1635447 buffer_comparator.cc:156] Difference at 453: 620.77, expected 1764.87
      +E0124 04:57:24.519040 1635447 buffer_comparator.cc:156] Difference at 454: 796.75, expected 1269.69
      +E0124 04:57:24.519043 1635447 buffer_comparator.cc:156] Difference at 455: 1201.02, expected 952.925
      +E0124 04:57:24.519046 1635447 buffer_comparator.cc:156] Difference at 456: 1045.45, expected 1213.59
      +E0124 04:57:24.519049 1635447 buffer_comparator.cc:156] Difference at 457: 732.834, expected 1821.28
      +2025-01-24 04:57:24.519053: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.521568 1635447 buffer_comparator.cc:156] Difference at 896: 1199.33, expected 958.128
      +E0124 04:57:24.521580 1635447 buffer_comparator.cc:156] Difference at 897: 1048.92, expected 1218.67
      +E0124 04:57:24.521583 1635447 buffer_comparator.cc:156] Difference at 898: 734.914, expected 1826.79
      +E0124 04:57:24.521586 1635447 buffer_comparator.cc:156] Difference at 899: 1178.58, expected 1593.43
      +E0124 04:57:24.521590 1635447 buffer_comparator.cc:156] Difference at 900: 842.885, expected 1119.04
      +E0124 04:57:24.521593 1635447 buffer_comparator.cc:156] Difference at 901: 628.641, expected 1796.71
      +E0124 04:57:24.521596 1635447 buffer_comparator.cc:156] Difference at 902: 793.715, expected 1279.87
      +E0124 04:57:24.521599 1635447 buffer_comparator.cc:156] Difference at 903: 1203.49, expected 941.479
      +E0124 04:57:24.521602 1635447 buffer_comparator.cc:156] Difference at 904: 1050.86, expected 1202.97
      +E0124 04:57:24.521605 1635447 buffer_comparator.cc:156] Difference at 905: 741.148, expected 1817.41
      +2025-01-24 04:57:24.521610: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.524110 1635447 buffer_comparator.cc:156] Difference at 896: 1199.33, expected 958.128
      +E0124 04:57:24.524126 1635447 buffer_comparator.cc:156] Difference at 897: 1048.92, expected 1218.67
      +E0124 04:57:24.524129 1635447 buffer_comparator.cc:156] Difference at 898: 734.914, expected 1826.79
      +E0124 04:57:24.524132 1635447 buffer_comparator.cc:156] Difference at 899: 1178.58, expected 1593.43
      +E0124 04:57:24.524135 1635447 buffer_comparator.cc:156] Difference at 900: 842.885, expected 1119.04
      +E0124 04:57:24.524138 1635447 buffer_comparator.cc:156] Difference at 901: 628.641, expected 1796.71
      +E0124 04:57:24.524141 1635447 buffer_comparator.cc:156] Difference at 902: 793.715, expected 1279.87
      +E0124 04:57:24.524144 1635447 buffer_comparator.cc:156] Difference at 903: 1203.49, expected 941.479
      +E0124 04:57:24.524147 1635447 buffer_comparator.cc:156] Difference at 904: 1050.86, expected 1202.97
      +E0124 04:57:24.524149 1635447 buffer_comparator.cc:156] Difference at 905: 741.148, expected 1817.41
      +2025-01-24 04:57:24.524154: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.526553 1635447 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.95
      +E0124 04:57:24.526566 1635447 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      +E0124 04:57:24.526570 1635447 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.8
      +E0124 04:57:24.526574 1635447 buffer_comparator.cc:156] Difference at 266: 1047.35, expected 934.413
      +E0124 04:57:24.526577 1635447 buffer_comparator.cc:156] Difference at 270: 1246.8, expected 1101.54
      +E0124 04:57:24.526580 1635447 buffer_comparator.cc:156] Difference at 417: 1222.47, expected 1095.88
      +E0124 04:57:24.526583 1635447 buffer_comparator.cc:156] Difference at 521: 1725.84, expected 1550.38
      +E0124 04:57:24.526586 1635447 buffer_comparator.cc:156] Difference at 522: 1233.24, expected 1093.76
      +E0124 04:57:24.526590 1635447 buffer_comparator.cc:156] Difference at 620: 1247.46, expected 1121.08
      +E0124 04:57:24.526593 1635447 buffer_comparator.cc:156] Difference at 690: 1251.43, expected 1120.61
      +2025-01-24 04:57:24.526597: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.528978 1635447 buffer_comparator.cc:156] Difference at 0: 1057.27, expected 928.593
      +E0124 04:57:24.528990 1635447 buffer_comparator.cc:156] Difference at 1: 1319.15, expected 1186.89
      +E0124 04:57:24.528993 1635447 buffer_comparator.cc:156] Difference at 2: 2004.43, expected 1796.77
      +E0124 04:57:24.528996 1635447 buffer_comparator.cc:156] Difference at 3: 1745.74, expected 1565.84
      +E0124 04:57:24.528999 1635447 buffer_comparator.cc:156] Difference at 4: 1252.2, expected 1095.49
      +E0124 04:57:24.529002 1635447 buffer_comparator.cc:156] Difference at 7: 1175.57, expected 951.95
      +E0124 04:57:24.529005 1635447 buffer_comparator.cc:156] Difference at 8: 1398.75, expected 1216.24
      +E0124 04:57:24.529008 1635447 buffer_comparator.cc:156] Difference at 9: 2125.62, expected 1833.76
      +E0124 04:57:24.529011 1635447 buffer_comparator.cc:156] Difference at 10: 1878.38, expected 1592.37
      +E0124 04:57:24.529015 1635447 buffer_comparator.cc:156] Difference at 11: 1362.67, expected 1121.95
      +2025-01-24 04:57:24.529020: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.531492 1635447 buffer_comparator.cc:156] Difference at 896: 1204.66, expected 958.128
      +E0124 04:57:24.531508 1635447 buffer_comparator.cc:156] Difference at 897: 1053.28, expected 1218.67
      +E0124 04:57:24.531511 1635447 buffer_comparator.cc:156] Difference at 898: 740.998, expected 1826.79
      +E0124 04:57:24.531514 1635447 buffer_comparator.cc:156] Difference at 899: 1185.71, expected 1593.43
      +E0124 04:57:24.531517 1635447 buffer_comparator.cc:156] Difference at 900: 850.478, expected 1119.04
      +E0124 04:57:24.531520 1635447 buffer_comparator.cc:156] Difference at 901: 634.712, expected 1796.71
      +E0124 04:57:24.531523 1635447 buffer_comparator.cc:156] Difference at 902: 799.593, expected 1279.87
      +E0124 04:57:24.531526 1635447 buffer_comparator.cc:156] Difference at 903: 1208.15, expected 941.479
      +E0124 04:57:24.531529 1635447 buffer_comparator.cc:156] Difference at 904: 1055.09, expected 1202.97
      +E0124 04:57:24.531531 1635447 buffer_comparator.cc:156] Difference at 905: 746.267, expected 1817.41
      +2025-01-24 04:57:24.531536: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.533971 1635447 buffer_comparator.cc:156] Difference at 896: 1198.32, expected 958.128
      +E0124 04:57:24.533985 1635447 buffer_comparator.cc:156] Difference at 897: 1047.69, expected 1218.67
      +E0124 04:57:24.533988 1635447 buffer_comparator.cc:156] Difference at 898: 733.669, expected 1826.79
      +E0124 04:57:24.533991 1635447 buffer_comparator.cc:156] Difference at 899: 1177.34, expected 1593.43
      +E0124 04:57:24.533994 1635447 buffer_comparator.cc:156] Difference at 900: 842.502, expected 1119.04
      +E0124 04:57:24.533997 1635447 buffer_comparator.cc:156] Difference at 901: 627.594, expected 1796.71
      +E0124 04:57:24.534000 1635447 buffer_comparator.cc:156] Difference at 902: 792.637, expected 1279.87
      +E0124 04:57:24.534003 1635447 buffer_comparator.cc:156] Difference at 903: 1202.18, expected 941.479
      +E0124 04:57:24.534006 1635447 buffer_comparator.cc:156] Difference at 904: 1049.9, expected 1202.97
      +E0124 04:57:24.534009 1635447 buffer_comparator.cc:156] Difference at 905: 739.86, expected 1817.41
      +2025-01-24 04:57:24.534014: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.536420 1635447 buffer_comparator.cc:156] Difference at 896: 1199.33, expected 958.128
      +E0124 04:57:24.536433 1635447 buffer_comparator.cc:156] Difference at 897: 1048.92, expected 1218.67
      +E0124 04:57:24.536436 1635447 buffer_comparator.cc:156] Difference at 898: 734.914, expected 1826.79
      +E0124 04:57:24.536439 1635447 buffer_comparator.cc:156] Difference at 899: 1178.58, expected 1593.43
      +E0124 04:57:24.536442 1635447 buffer_comparator.cc:156] Difference at 900: 842.885, expected 1119.04
      +E0124 04:57:24.536445 1635447 buffer_comparator.cc:156] Difference at 901: 628.641, expected 1796.71
      +E0124 04:57:24.536448 1635447 buffer_comparator.cc:156] Difference at 902: 793.715, expected 1279.87
      +E0124 04:57:24.536451 1635447 buffer_comparator.cc:156] Difference at 903: 1203.49, expected 941.479
      +E0124 04:57:24.536454 1635447 buffer_comparator.cc:156] Difference at 904: 1050.86, expected 1202.97
      +E0124 04:57:24.536457 1635447 buffer_comparator.cc:156] Difference at 905: 741.148, expected 1817.41
      +2025-01-24 04:57:24.536461: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.538923 1635447 buffer_comparator.cc:156] Difference at 896: 1199.33, expected 958.128
      +E0124 04:57:24.538936 1635447 buffer_comparator.cc:156] Difference at 897: 1048.92, expected 1218.67
      +E0124 04:57:24.538941 1635447 buffer_comparator.cc:156] Difference at 898: 734.914, expected 1826.79
      +E0124 04:57:24.538944 1635447 buffer_comparator.cc:156] Difference at 899: 1178.58, expected 1593.43
      +E0124 04:57:24.538947 1635447 buffer_comparator.cc:156] Difference at 900: 842.885, expected 1119.04
      +E0124 04:57:24.538950 1635447 buffer_comparator.cc:156] Difference at 901: 628.641, expected 1796.71
      +E0124 04:57:24.538953 1635447 buffer_comparator.cc:156] Difference at 902: 793.715, expected 1279.87
      +E0124 04:57:24.538955 1635447 buffer_comparator.cc:156] Difference at 903: 1203.49, expected 941.479
      +E0124 04:57:24.538958 1635447 buffer_comparator.cc:156] Difference at 904: 1050.86, expected 1202.97
      +E0124 04:57:24.538961 1635447 buffer_comparator.cc:156] Difference at 905: 741.148, expected 1817.41
      +2025-01-24 04:57:24.538966: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.541431 1635447 buffer_comparator.cc:156] Difference at 896: 1199.33, expected 958.128
      +E0124 04:57:24.541445 1635447 buffer_comparator.cc:156] Difference at 897: 1048.92, expected 1218.67
      +E0124 04:57:24.541448 1635447 buffer_comparator.cc:156] Difference at 898: 734.914, expected 1826.79
      +E0124 04:57:24.541451 1635447 buffer_comparator.cc:156] Difference at 899: 1178.58, expected 1593.43
      +E0124 04:57:24.541454 1635447 buffer_comparator.cc:156] Difference at 900: 842.885, expected 1119.04
      +E0124 04:57:24.541457 1635447 buffer_comparator.cc:156] Difference at 901: 628.641, expected 1796.71
      +E0124 04:57:24.541460 1635447 buffer_comparator.cc:156] Difference at 902: 793.715, expected 1279.87
      +E0124 04:57:24.541463 1635447 buffer_comparator.cc:156] Difference at 903: 1203.49, expected 941.479
      +E0124 04:57:24.541466 1635447 buffer_comparator.cc:156] Difference at 904: 1050.86, expected 1202.97
      +E0124 04:57:24.541469 1635447 buffer_comparator.cc:156] Difference at 905: 741.148, expected 1817.41
      +2025-01-24 04:57:24.541474: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.544077 1635447 buffer_comparator.cc:156] Difference at 1792: 1217.57, expected 926.778
      +E0124 04:57:24.544090 1635447 buffer_comparator.cc:156] Difference at 1793: 1059.66, expected 1190.76
      +E0124 04:57:24.544093 1635447 buffer_comparator.cc:156] Difference at 1794: 744.124, expected 1807.71
      +E0124 04:57:24.544096 1635447 buffer_comparator.cc:156] Difference at 1795: 1186.2, expected 1565.59
      +E0124 04:57:24.544099 1635447 buffer_comparator.cc:156] Difference at 1796: 853.454, expected 1101.04
      +E0124 04:57:24.544102 1635447 buffer_comparator.cc:156] Difference at 1797: 626.917, expected 1756.21
      +E0124 04:57:24.544105 1635447 buffer_comparator.cc:156] Difference at 1798: 800.954, expected 1272.34
      +E0124 04:57:24.544108 1635447 buffer_comparator.cc:156] Difference at 1799: 1211.6, expected 944.465
      +E0124 04:57:24.544111 1635447 buffer_comparator.cc:156] Difference at 1800: 1058.71, expected 1200.58
      +E0124 04:57:24.544114 1635447 buffer_comparator.cc:156] Difference at 1801: 743.269, expected 1808.36
      +2025-01-24 04:57:24.544118: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.546747 1635447 buffer_comparator.cc:156] Difference at 1792: 1217.57, expected 926.778
      +E0124 04:57:24.546760 1635447 buffer_comparator.cc:156] Difference at 1793: 1059.66, expected 1190.76
      +E0124 04:57:24.546764 1635447 buffer_comparator.cc:156] Difference at 1794: 744.124, expected 1807.71
      +E0124 04:57:24.546767 1635447 buffer_comparator.cc:156] Difference at 1795: 1186.2, expected 1565.59
      +E0124 04:57:24.546770 1635447 buffer_comparator.cc:156] Difference at 1796: 853.454, expected 1101.04
      +E0124 04:57:24.546773 1635447 buffer_comparator.cc:156] Difference at 1797: 626.917, expected 1756.21
      +E0124 04:57:24.546777 1635447 buffer_comparator.cc:156] Difference at 1798: 800.954, expected 1272.34
      +E0124 04:57:24.546780 1635447 buffer_comparator.cc:156] Difference at 1799: 1211.6, expected 944.465
      +E0124 04:57:24.546783 1635447 buffer_comparator.cc:156] Difference at 1800: 1058.71, expected 1200.58
      +E0124 04:57:24.546786 1635447 buffer_comparator.cc:156] Difference at 1801: 743.269, expected 1808.36
      +2025-01-24 04:57:24.546791: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:24.549367 1635447 buffer_comparator.cc:156] Difference at 1792: 1217.57, expected 926.778
      +E0124 04:57:24.549379 1635447 buffer_comparator.cc:156] Difference at 1793: 1059.66, expected 1190.76
      +E0124 04:57:24.549382 1635447 buffer_comparator.cc:156] Difference at 1794: 744.124, expected 1807.71
      +E0124 04:57:24.549386 1635447 buffer_comparator.cc:156] Difference at 1795: 1186.2, expected 1565.59
      +E0124 04:57:24.549389 1635447 buffer_comparator.cc:156] Difference at 1796: 853.454, expected 1101.04
      +E0124 04:57:24.549392 1635447 buffer_comparator.cc:156] Difference at 1797: 626.917, expected 1756.21
      +E0124 04:57:24.549395 1635447 buffer_comparator.cc:156] Difference at 1798: 800.954, expected 1272.34
      +E0124 04:57:24.549397 1635447 buffer_comparator.cc:156] Difference at 1799: 1211.6, expected 944.465
      +E0124 04:57:24.549400 1635447 buffer_comparator.cc:156] Difference at 1800: 1058.71, expected 1200.58
      +E0124 04:57:24.549403 1635447 buffer_comparator.cc:156] Difference at 1801: 743.269, expected 1808.36
      +2025-01-24 04:57:24.549408: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +2025-01-24 04:57:28.024854: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_30', 216 bytes spill stores, 216 bytes spill loads
       
      -E0120 22:59:35.144561 3944451 buffer_comparator.cc:156] Difference at 112: 3.96357, expected 1872.36
      -E0120 22:59:35.144631 3944451 buffer_comparator.cc:156] Difference at 113: 4.08387, expected 1994.81
      -E0120 22:59:35.144634 3944451 buffer_comparator.cc:156] Difference at 114: 3.43697, expected 964.469
      -E0120 22:59:35.144638 3944451 buffer_comparator.cc:156] Difference at 115: 3.49205, expected 1581.68
      -E0120 22:59:35.144641 3944451 buffer_comparator.cc:156] Difference at 116: 4.636, expected 1238.51
      -E0120 22:59:35.144644 3944451 buffer_comparator.cc:156] Difference at 117: 3.27176, expected 1214.35
      -E0120 22:59:35.144647 3944451 buffer_comparator.cc:156] Difference at 118: 4.36485, expected 1823.85
      -E0120 22:59:35.144650 3944451 buffer_comparator.cc:156] Difference at 119: 4.08502, expected 1866.65
      -E0120 22:59:35.144653 3944451 buffer_comparator.cc:156] Difference at 120: 4.00569, expected 1990.53
      -E0120 22:59:35.144656 3944451 buffer_comparator.cc:156] Difference at 121: 3.21014, expected 963.479
      -2025-01-20 22:59:35.144669: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.147349 3944451 buffer_comparator.cc:156] Difference at 0: 2299.91, expected 1852.66
      -E0120 22:59:35.147362 3944451 buffer_comparator.cc:156] Difference at 1: 2365.05, expected 1976.49
      -E0120 22:59:35.147366 3944451 buffer_comparator.cc:156] Difference at 2: 1113.52, expected 959.045
      -E0120 22:59:35.147369 3944451 buffer_comparator.cc:156] Difference at 3: 1907.18, expected 1563.86
      -E0120 22:59:35.147372 3944451 buffer_comparator.cc:156] Difference at 4: 1468.14, expected 1234.14
      -E0120 22:59:35.147375 3944451 buffer_comparator.cc:156] Difference at 5: 1437.09, expected 1193.4
      -E0120 22:59:35.147378 3944451 buffer_comparator.cc:156] Difference at 6: 2153.91, expected 1811.25
      -E0120 22:59:35.147381 3944451 buffer_comparator.cc:156] Difference at 14: 2152.45, expected 1862.99
      -E0120 22:59:35.147384 3944451 buffer_comparator.cc:156] Difference at 15: 2224.9, expected 1998.11
      -E0120 22:59:35.147389 3944451 buffer_comparator.cc:156] Difference at 17: 1795.48, expected 1583.4
      -2025-01-20 22:59:35.147394: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.150038 3944451 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      -E0120 22:59:35.150058 3944451 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      -E0120 22:59:35.150061 3944451 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      -E0120 22:59:35.150064 3944451 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      -E0120 22:59:35.150067 3944451 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      -E0120 22:59:35.150070 3944451 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      -E0120 22:59:35.150073 3944451 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      -E0120 22:59:35.150076 3944451 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      -E0120 22:59:35.150079 3944451 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      -E0120 22:59:35.150081 3944451 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      -2025-01-20 22:59:35.150086: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.152723 3944451 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      -E0120 22:59:35.152734 3944451 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      -E0120 22:59:35.152738 3944451 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      -E0120 22:59:35.152741 3944451 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      -E0120 22:59:35.152744 3944451 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      -E0120 22:59:35.152746 3944451 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      -E0120 22:59:35.152749 3944451 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      -E0120 22:59:35.152752 3944451 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      -E0120 22:59:35.152755 3944451 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      -E0120 22:59:35.152758 3944451 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      -2025-01-20 22:59:35.152763: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.155412 3944451 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      -E0120 22:59:35.155425 3944451 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      -E0120 22:59:35.155428 3944451 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      -E0120 22:59:35.155431 3944451 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      -E0120 22:59:35.155434 3944451 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      -E0120 22:59:35.155436 3944451 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      -E0120 22:59:35.155439 3944451 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      -E0120 22:59:35.155442 3944451 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      -E0120 22:59:35.155445 3944451 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      -E0120 22:59:35.155448 3944451 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      -2025-01-20 22:59:35.155453: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.158094 3944451 buffer_comparator.cc:156] Difference at 0: 2999.66, expected 1852.66
      -E0120 22:59:35.158106 3944451 buffer_comparator.cc:156] Difference at 1: 3136.74, expected 1976.49
      -E0120 22:59:35.158112 3944451 buffer_comparator.cc:156] Difference at 2: 1562.51, expected 959.045
      -E0120 22:59:35.158115 3944451 buffer_comparator.cc:156] Difference at 3: 2497.23, expected 1563.86
      -E0120 22:59:35.158118 3944451 buffer_comparator.cc:156] Difference at 4: 1995.79, expected 1234.14
      -E0120 22:59:35.158121 3944451 buffer_comparator.cc:156] Difference at 5: 1957.18, expected 1193.4
      -E0120 22:59:35.158124 3944451 buffer_comparator.cc:156] Difference at 6: 2863.57, expected 1811.25
      -E0120 22:59:35.158127 3944451 buffer_comparator.cc:156] Difference at 7: 2789.26, expected 1869.16
      -E0120 22:59:35.158130 3944451 buffer_comparator.cc:156] Difference at 8: 2929.61, expected 1993.31
      -E0120 22:59:35.158133 3944451 buffer_comparator.cc:156] Difference at 9: 1452.92, expected 968.067
      -2025-01-20 22:59:35.158138: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.160769 3944451 buffer_comparator.cc:156] Difference at 112: 3.96357, expected 1872.36
      -E0120 22:59:35.160780 3944451 buffer_comparator.cc:156] Difference at 113: 4.08387, expected 1994.81
      -E0120 22:59:35.160783 3944451 buffer_comparator.cc:156] Difference at 114: 3.43697, expected 964.469
      -E0120 22:59:35.160786 3944451 buffer_comparator.cc:156] Difference at 115: 3.49205, expected 1581.68
      -E0120 22:59:35.160789 3944451 buffer_comparator.cc:156] Difference at 116: 4.636, expected 1238.51
      -E0120 22:59:35.160792 3944451 buffer_comparator.cc:156] Difference at 117: 3.27176, expected 1214.35
      -E0120 22:59:35.160795 3944451 buffer_comparator.cc:156] Difference at 118: 4.36485, expected 1823.85
      -E0120 22:59:35.160798 3944451 buffer_comparator.cc:156] Difference at 119: 4.08502, expected 1866.65
      -E0120 22:59:35.160801 3944451 buffer_comparator.cc:156] Difference at 120: 4.00569, expected 1990.53
      -E0120 22:59:35.160804 3944451 buffer_comparator.cc:156] Difference at 121: 3.21014, expected 963.479
      -2025-01-20 22:59:35.160808: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.163529 3944451 buffer_comparator.cc:156] Difference at 224: 5.86135, expected 1844.62
      -E0120 22:59:35.163541 3944451 buffer_comparator.cc:156] Difference at 225: 4.06782, expected 1964.03
      -E0120 22:59:35.163544 3944451 buffer_comparator.cc:156] Difference at 226: 4.13873, expected 949.238
      -E0120 22:59:35.163547 3944451 buffer_comparator.cc:156] Difference at 227: 5.5797, expected 1560.45
      -E0120 22:59:35.163550 3944451 buffer_comparator.cc:156] Difference at 228: 5.25988, expected 1215.36
      -E0120 22:59:35.163553 3944451 buffer_comparator.cc:156] Difference at 229: 5.30797, expected 1186.63
      -E0120 22:59:35.163556 3944451 buffer_comparator.cc:156] Difference at 230: 4.14199, expected 1795.53
      -E0120 22:59:35.163558 3944451 buffer_comparator.cc:156] Difference at 231: 3.45867, expected 1837.81
      -E0120 22:59:35.163561 3944451 buffer_comparator.cc:156] Difference at 232: 3.85107, expected 1966.39
      -E0120 22:59:35.163564 3944451 buffer_comparator.cc:156] Difference at 233: 3.28131, expected 950.028
      -2025-01-20 22:59:35.163569: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 22:59:35.166200 3944451 buffer_comparator.cc:156] Difference at 224: 3.22034, expected 1844.62
      -E0120 22:59:35.166212 3944451 buffer_comparator.cc:156] Difference at 225: 2.49684, expected 1964.03
      -E0120 22:59:35.166215 3944451 buffer_comparator.cc:156] Difference at 226: 1.42502, expected 949.238
      -E0120 22:59:35.166218 3944451 buffer_comparator.cc:156] Difference at 227: 2.53522, expected 1560.45
      -E0120 22:59:35.166221 3944451 buffer_comparator.cc:156] Difference at 228: 2.01038, expected 1215.36
      -E0120 22:59:35.166224 3944451 buffer_comparator.cc:156] Difference at 229: 2.60944, expected 1186.63
      -Epoch   1	Train Loss: 15.782338	Train Acc: 21.4286%	Val Loss: 7.079204	Val Acc: 23.8000%
      -Epoch   2	Train Loss: 8.093776	Train Acc: 22.1429%	Val Loss: 3.137945	Val Acc: 28.0000%
      -Epoch   3	Train Loss: 2.983344	Train Acc: 43.5714%	Val Loss: 1.921002	Val Acc: 39.6000%
      -Epoch   4	Train Loss: 1.953735	Train Acc: 58.5714%	Val Loss: 2.015625	Val Acc: 42.0000%
      -Epoch   5	Train Loss: 1.699691	Train Acc: 62.8571%	Val Loss: 1.914729	Val Acc: 44.4000%
      -Epoch   6	Train Loss: 1.365187	Train Acc: 70.7143%	Val Loss: 1.669696	Val Acc: 52.6000%
      -Epoch   7	Train Loss: 1.127103	Train Acc: 73.5714%	Val Loss: 1.538632	Val Acc: 58.6000%
      -Epoch   8	Train Loss: 0.973196	Train Acc: 74.2857%	Val Loss: 1.553579	Val Acc: 60.4000%
      -Epoch   9	Train Loss: 0.915362	Train Acc: 75.0000%	Val Loss: 1.562308	Val Acc: 59.4000%
      -Epoch  10	Train Loss: 0.901013	Train Acc: 79.2857%	Val Loss: 1.568935	Val Acc: 59.8000%
      -Epoch  11	Train Loss: 0.794685	Train Acc: 80.0000%	Val Loss: 1.644144	Val Acc: 57.6000%
      -Epoch  12	Train Loss: 0.754806	Train Acc: 80.7143%	Val Loss: 1.734306	Val Acc: 56.2000%
      -Epoch  13	Train Loss: 0.725162	Train Acc: 81.4286%	Val Loss: 1.803399	Val Acc: 57.4000%
      -Epoch  14	Train Loss: 0.691019	Train Acc: 82.8571%	Val Loss: 1.826088	Val Acc: 57.6000%
      -Epoch  15	Train Loss: 0.648507	Train Acc: 84.2857%	Val Loss: 1.802276	Val Acc: 58.0000%
      -Epoch  16	Train Loss: 0.599733	Train Acc: 86.4286%	Val Loss: 1.755903	Val Acc: 59.0000%
      -Epoch  17	Train Loss: 0.569892	Train Acc: 86.4286%	Val Loss: 1.712767	Val Acc: 59.8000%
      -Epoch  18	Train Loss: 0.556019	Train Acc: 86.4286%	Val Loss: 1.689576	Val Acc: 60.6000%
      -Epoch  19	Train Loss: 0.542816	Train Acc: 87.8571%	Val Loss: 1.693934	Val Acc: 61.0000%
      -Epoch  20	Train Loss: 0.514605	Train Acc: 87.8571%	Val Loss: 1.706551	Val Acc: 61.2000%
      -Epoch  21	Train Loss: 0.496801	Train Acc: 87.8571%	Val Loss: 1.721584	Val Acc: 61.6000%
      -Epoch  22	Train Loss: 0.483525	Train Acc: 87.8571%	Val Loss: 1.733926	Val Acc: 62.0000%
      -Epoch  23	Train Loss: 0.472684	Train Acc: 89.2857%	Val Loss: 1.741387	Val Acc: 62.8000%
      -Epoch  24	Train Loss: 0.462024	Train Acc: 90.0000%	Val Loss: 1.743580	Val Acc: 63.4000%
      -Epoch  25	Train Loss: 0.449647	Train Acc: 90.0000%	Val Loss: 1.739714	Val Acc: 64.2000%
      -Epoch  26	Train Loss: 0.435166	Train Acc: 90.0000%	Val Loss: 1.733291	Val Acc: 64.4000%
      -Epoch  27	Train Loss: 0.419638	Train Acc: 91.4286%	Val Loss: 1.727418	Val Acc: 64.8000%
      -Early Stopping at Epoch 27
      -2025-01-20 23:00:08.518435: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_35_0', 48 bytes spill stores, 48 bytes spill loads
      +E0124 04:57:28.471448 1635447 buffer_comparator.cc:156] Difference at 112: 3.96357, expected 1872.36
      +E0124 04:57:28.471523 1635447 buffer_comparator.cc:156] Difference at 113: 4.08387, expected 1994.81
      +E0124 04:57:28.471527 1635447 buffer_comparator.cc:156] Difference at 114: 3.43697, expected 964.469
      +E0124 04:57:28.471532 1635447 buffer_comparator.cc:156] Difference at 115: 3.49205, expected 1581.68
      +E0124 04:57:28.471536 1635447 buffer_comparator.cc:156] Difference at 116: 4.636, expected 1238.51
      +E0124 04:57:28.471540 1635447 buffer_comparator.cc:156] Difference at 117: 3.27176, expected 1214.35
      +E0124 04:57:28.471544 1635447 buffer_comparator.cc:156] Difference at 118: 4.36485, expected 1823.85
      +E0124 04:57:28.471548 1635447 buffer_comparator.cc:156] Difference at 119: 4.08502, expected 1866.65
      +E0124 04:57:28.471552 1635447 buffer_comparator.cc:156] Difference at 120: 4.00569, expected 1990.53
      +E0124 04:57:28.471556 1635447 buffer_comparator.cc:156] Difference at 121: 3.21014, expected 963.479
      +2025-01-24 04:57:28.471568: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.474398 1635447 buffer_comparator.cc:156] Difference at 0: 2299.91, expected 1852.66
      +E0124 04:57:28.474414 1635447 buffer_comparator.cc:156] Difference at 1: 2365.05, expected 1976.49
      +E0124 04:57:28.474418 1635447 buffer_comparator.cc:156] Difference at 2: 1113.52, expected 959.045
      +E0124 04:57:28.474423 1635447 buffer_comparator.cc:156] Difference at 3: 1907.18, expected 1563.86
      +E0124 04:57:28.474427 1635447 buffer_comparator.cc:156] Difference at 4: 1468.14, expected 1234.14
      +E0124 04:57:28.474431 1635447 buffer_comparator.cc:156] Difference at 5: 1437.09, expected 1193.4
      +E0124 04:57:28.474435 1635447 buffer_comparator.cc:156] Difference at 6: 2153.91, expected 1811.25
      +E0124 04:57:28.474439 1635447 buffer_comparator.cc:156] Difference at 14: 2152.45, expected 1862.99
      +E0124 04:57:28.474445 1635447 buffer_comparator.cc:156] Difference at 15: 2224.9, expected 1998.11
      +E0124 04:57:28.474450 1635447 buffer_comparator.cc:156] Difference at 17: 1795.48, expected 1583.4
      +2025-01-24 04:57:28.474457: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.477270 1635447 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      +E0124 04:57:28.477285 1635447 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      +E0124 04:57:28.477290 1635447 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      +E0124 04:57:28.477294 1635447 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      +E0124 04:57:28.477298 1635447 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      +E0124 04:57:28.477302 1635447 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      +E0124 04:57:28.477306 1635447 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      +E0124 04:57:28.477310 1635447 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      +E0124 04:57:28.477314 1635447 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      +E0124 04:57:28.477318 1635447 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      +2025-01-24 04:57:28.477324: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.480112 1635447 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      +E0124 04:57:28.480124 1635447 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      +E0124 04:57:28.480127 1635447 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      +E0124 04:57:28.480130 1635447 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      +E0124 04:57:28.480133 1635447 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      +E0124 04:57:28.480136 1635447 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      +E0124 04:57:28.480139 1635447 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      +E0124 04:57:28.480142 1635447 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      +E0124 04:57:28.480144 1635447 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      +E0124 04:57:28.480147 1635447 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      +2025-01-24 04:57:28.480152: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.482900 1635447 buffer_comparator.cc:156] Difference at 112: 9.61375, expected 1872.36
      +E0124 04:57:28.482911 1635447 buffer_comparator.cc:156] Difference at 113: 8.65445, expected 1994.81
      +E0124 04:57:28.482914 1635447 buffer_comparator.cc:156] Difference at 114: 6.27245, expected 964.469
      +E0124 04:57:28.482917 1635447 buffer_comparator.cc:156] Difference at 115: 6.61216, expected 1581.68
      +E0124 04:57:28.482920 1635447 buffer_comparator.cc:156] Difference at 116: 8.95425, expected 1238.51
      +E0124 04:57:28.482923 1635447 buffer_comparator.cc:156] Difference at 117: 7.46079, expected 1214.35
      +E0124 04:57:28.482926 1635447 buffer_comparator.cc:156] Difference at 118: 6.87961, expected 1823.85
      +E0124 04:57:28.482929 1635447 buffer_comparator.cc:156] Difference at 119: 8.19096, expected 1866.65
      +E0124 04:57:28.482932 1635447 buffer_comparator.cc:156] Difference at 120: 7.90554, expected 1990.53
      +E0124 04:57:28.482934 1635447 buffer_comparator.cc:156] Difference at 121: 7.49312, expected 963.479
      +2025-01-24 04:57:28.482939: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.485704 1635447 buffer_comparator.cc:156] Difference at 0: 2999.66, expected 1852.66
      +E0124 04:57:28.485717 1635447 buffer_comparator.cc:156] Difference at 1: 3136.74, expected 1976.49
      +E0124 04:57:28.485720 1635447 buffer_comparator.cc:156] Difference at 2: 1562.51, expected 959.045
      +E0124 04:57:28.485723 1635447 buffer_comparator.cc:156] Difference at 3: 2497.23, expected 1563.86
      +E0124 04:57:28.485726 1635447 buffer_comparator.cc:156] Difference at 4: 1995.79, expected 1234.14
      +E0124 04:57:28.485729 1635447 buffer_comparator.cc:156] Difference at 5: 1957.18, expected 1193.4
      +E0124 04:57:28.485732 1635447 buffer_comparator.cc:156] Difference at 6: 2863.57, expected 1811.25
      +E0124 04:57:28.485735 1635447 buffer_comparator.cc:156] Difference at 7: 2789.26, expected 1869.16
      +E0124 04:57:28.485738 1635447 buffer_comparator.cc:156] Difference at 8: 2929.61, expected 1993.31
      +E0124 04:57:28.485741 1635447 buffer_comparator.cc:156] Difference at 9: 1452.92, expected 968.067
      +2025-01-24 04:57:28.485745: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.488509 1635447 buffer_comparator.cc:156] Difference at 112: 3.96357, expected 1872.36
      +E0124 04:57:28.488524 1635447 buffer_comparator.cc:156] Difference at 113: 4.08387, expected 1994.81
      +E0124 04:57:28.488527 1635447 buffer_comparator.cc:156] Difference at 114: 3.43697, expected 964.469
      +E0124 04:57:28.488530 1635447 buffer_comparator.cc:156] Difference at 115: 3.49205, expected 1581.68
      +E0124 04:57:28.488533 1635447 buffer_comparator.cc:156] Difference at 116: 4.636, expected 1238.51
      +E0124 04:57:28.488535 1635447 buffer_comparator.cc:156] Difference at 117: 3.27176, expected 1214.35
      +E0124 04:57:28.488538 1635447 buffer_comparator.cc:156] Difference at 118: 4.36485, expected 1823.85
      +E0124 04:57:28.488541 1635447 buffer_comparator.cc:156] Difference at 119: 4.08502, expected 1866.65
      +E0124 04:57:28.488544 1635447 buffer_comparator.cc:156] Difference at 120: 4.00569, expected 1990.53
      +E0124 04:57:28.488547 1635447 buffer_comparator.cc:156] Difference at 121: 3.21014, expected 963.479
      +2025-01-24 04:57:28.488551: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.491393 1635447 buffer_comparator.cc:156] Difference at 224: 5.86135, expected 1844.62
      +E0124 04:57:28.491408 1635447 buffer_comparator.cc:156] Difference at 225: 4.06782, expected 1964.03
      +E0124 04:57:28.491411 1635447 buffer_comparator.cc:156] Difference at 226: 4.13873, expected 949.238
      +E0124 04:57:28.491414 1635447 buffer_comparator.cc:156] Difference at 227: 5.5797, expected 1560.45
      +E0124 04:57:28.491417 1635447 buffer_comparator.cc:156] Difference at 228: 5.25988, expected 1215.36
      +E0124 04:57:28.491420 1635447 buffer_comparator.cc:156] Difference at 229: 5.30797, expected 1186.63
      +E0124 04:57:28.491423 1635447 buffer_comparator.cc:156] Difference at 230: 4.14199, expected 1795.53
      +E0124 04:57:28.491426 1635447 buffer_comparator.cc:156] Difference at 231: 3.45867, expected 1837.81
      +E0124 04:57:28.491428 1635447 buffer_comparator.cc:156] Difference at 232: 3.85107, expected 1966.39
      +E0124 04:57:28.491431 1635447 buffer_comparator.cc:156] Difference at 233: 3.28131, expected 950.028
      +2025-01-24 04:57:28.491436: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:28.494189 1635447 buffer_comparator.cc:156] Difference at 224: 3.22034, expected 1844.62
      +E0124 04:57:28.494205 1635447 buffer_comparator.cc:156] Difference at 225: 2.49684, expected 1964.03
      +E0124 04:57:28.494208 1635447 buffer_comparator.cc:156] Difference at 226: 1.42502, expected 949.238
      +E0124 04:57:28.494211 1635447 buffer_comparator.cc:156] Difference at 227: 2.53522, expected 1560.45
      +E0124 04:57:28.494214 1635447 buffer_comparator.cc:156] Difference at 228: 2.01038, expected 1215.36
      +Epoch   1	Train Loss: 15.798144	Train Acc: 22.8571%	Val Loss: 7.463610	Val Acc: 24.8000%
      +Epoch   2	Train Loss: 8.541474	Train Acc: 22.1429%	Val Loss: 3.486367	Val Acc: 30.0000%
      +Epoch   3	Train Loss: 3.427734	Train Acc: 43.5714%	Val Loss: 2.192992	Val Acc: 39.8000%
      +Epoch   4	Train Loss: 2.246899	Train Acc: 56.4286%	Val Loss: 1.773665	Val Acc: 44.2000%
      +Epoch   5	Train Loss: 1.623290	Train Acc: 65.0000%	Val Loss: 1.666274	Val Acc: 49.2000%
      +Epoch   6	Train Loss: 1.454367	Train Acc: 70.7143%	Val Loss: 1.493687	Val Acc: 55.0000%
      +Epoch   7	Train Loss: 1.245771	Train Acc: 70.7143%	Val Loss: 1.406759	Val Acc: 59.2000%
      +Epoch   8	Train Loss: 1.096862	Train Acc: 73.5714%	Val Loss: 1.398893	Val Acc: 61.6000%
      +Epoch   9	Train Loss: 0.985403	Train Acc: 74.2857%	Val Loss: 1.414804	Val Acc: 63.4000%
      +Epoch  10	Train Loss: 0.951388	Train Acc: 76.4286%	Val Loss: 1.408753	Val Acc: 63.6000%
      +Epoch  11	Train Loss: 0.843768	Train Acc: 77.8571%	Val Loss: 1.405046	Val Acc: 62.8000%
      +Epoch  12	Train Loss: 0.733548	Train Acc: 77.8571%	Val Loss: 1.433094	Val Acc: 61.8000%
      +Epoch  13	Train Loss: 0.719941	Train Acc: 79.2857%	Val Loss: 1.442757	Val Acc: 63.0000%
      +Epoch  14	Train Loss: 0.655222	Train Acc: 80.7143%	Val Loss: 1.453139	Val Acc: 63.4000%
      +Epoch  15	Train Loss: 0.623899	Train Acc: 82.8571%	Val Loss: 1.455770	Val Acc: 63.8000%
      +Epoch  16	Train Loss: 0.584657	Train Acc: 83.5714%	Val Loss: 1.447179	Val Acc: 64.0000%
      +Epoch  17	Train Loss: 0.542308	Train Acc: 84.2857%	Val Loss: 1.435064	Val Acc: 64.8000%
      +Epoch  18	Train Loss: 0.512126	Train Acc: 85.0000%	Val Loss: 1.426513	Val Acc: 65.4000%
      +Epoch  19	Train Loss: 0.490881	Train Acc: 85.0000%	Val Loss: 1.425405	Val Acc: 65.2000%
      +Epoch  20	Train Loss: 0.473799	Train Acc: 86.4286%	Val Loss: 1.433259	Val Acc: 66.0000%
      +Epoch  21	Train Loss: 0.458639	Train Acc: 87.1429%	Val Loss: 1.450437	Val Acc: 65.4000%
      +Epoch  22	Train Loss: 0.439167	Train Acc: 87.1429%	Val Loss: 1.475696	Val Acc: 65.6000%
      +Epoch  23	Train Loss: 0.420514	Train Acc: 88.5714%	Val Loss: 1.507743	Val Acc: 64.6000%
      +Epoch  24	Train Loss: 0.405530	Train Acc: 89.2857%	Val Loss: 1.545164	Val Acc: 64.4000%
      +Epoch  25	Train Loss: 0.393426	Train Acc: 90.0000%	Val Loss: 1.584067	Val Acc: 63.8000%
      +Epoch  26	Train Loss: 0.383347	Train Acc: 89.2857%	Val Loss: 1.620508	Val Acc: 64.6000%
      +Epoch  27	Train Loss: 0.374202	Train Acc: 88.5714%	Val Loss: 1.651417	Val Acc: 64.2000%
      +Epoch  28	Train Loss: 0.364877	Train Acc: 88.5714%	Val Loss: 1.672666	Val Acc: 64.0000%
      +Early Stopping at Epoch 28
      +2025-01-24 04:57:59.569522: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_35_0', 48 bytes spill stores, 48 bytes spill loads
       
      -E0120 23:00:08.819090 3944451 buffer_comparator.cc:156] Difference at 112: 304.19, expected 949.794
      -E0120 23:00:08.819165 3944451 buffer_comparator.cc:156] Difference at 113: 215.63, expected 1213.31
      -E0120 23:00:08.819173 3944451 buffer_comparator.cc:156] Difference at 114: 345.751, expected 1837.45
      -E0120 23:00:08.819180 3944451 buffer_comparator.cc:156] Difference at 115: 254.606, expected 1600.28
      -E0120 23:00:08.819187 3944451 buffer_comparator.cc:156] Difference at 116: 181.189, expected 1117.21
      -E0120 23:00:08.819194 3944451 buffer_comparator.cc:156] Difference at 117: 228.353, expected 1790.84
      -E0120 23:00:08.819201 3944451 buffer_comparator.cc:156] Difference at 118: 351.051, expected 1291.05
      -E0120 23:00:08.819208 3944451 buffer_comparator.cc:156] Difference at 119: 304.235, expected 943.247
      -E0120 23:00:08.819215 3944451 buffer_comparator.cc:156] Difference at 120: 216.539, expected 1198.87
      -E0120 23:00:08.819221 3944451 buffer_comparator.cc:156] Difference at 121: 345.609, expected 1820.16
      -2025-01-20 23:00:08.819239: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.821455 3944451 buffer_comparator.cc:156] Difference at 112: 304.19, expected 949.794
      -E0120 23:00:08.821484 3944451 buffer_comparator.cc:156] Difference at 113: 215.63, expected 1213.31
      -E0120 23:00:08.821491 3944451 buffer_comparator.cc:156] Difference at 114: 345.751, expected 1837.45
      -E0120 23:00:08.821498 3944451 buffer_comparator.cc:156] Difference at 115: 254.606, expected 1600.28
      -E0120 23:00:08.821505 3944451 buffer_comparator.cc:156] Difference at 116: 181.189, expected 1117.21
      -E0120 23:00:08.821511 3944451 buffer_comparator.cc:156] Difference at 117: 228.353, expected 1790.84
      -E0120 23:00:08.821518 3944451 buffer_comparator.cc:156] Difference at 118: 351.051, expected 1291.05
      -E0120 23:00:08.821524 3944451 buffer_comparator.cc:156] Difference at 119: 304.235, expected 943.247
      -E0120 23:00:08.821531 3944451 buffer_comparator.cc:156] Difference at 120: 216.539, expected 1198.87
      -E0120 23:00:08.821538 3944451 buffer_comparator.cc:156] Difference at 121: 345.609, expected 1820.16
      -2025-01-20 23:00:08.821548: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.823854 3944451 buffer_comparator.cc:156] Difference at 112: 304.19, expected 949.794
      -E0120 23:00:08.823884 3944451 buffer_comparator.cc:156] Difference at 113: 215.63, expected 1213.31
      -E0120 23:00:08.823891 3944451 buffer_comparator.cc:156] Difference at 114: 345.751, expected 1837.45
      -E0120 23:00:08.823897 3944451 buffer_comparator.cc:156] Difference at 115: 254.606, expected 1600.28
      -E0120 23:00:08.823904 3944451 buffer_comparator.cc:156] Difference at 116: 181.189, expected 1117.21
      -E0120 23:00:08.823911 3944451 buffer_comparator.cc:156] Difference at 117: 228.353, expected 1790.84
      -E0120 23:00:08.823917 3944451 buffer_comparator.cc:156] Difference at 118: 351.051, expected 1291.05
      -E0120 23:00:08.823924 3944451 buffer_comparator.cc:156] Difference at 119: 304.235, expected 943.247
      -E0120 23:00:08.823930 3944451 buffer_comparator.cc:156] Difference at 120: 216.539, expected 1198.87
      -E0120 23:00:08.823937 3944451 buffer_comparator.cc:156] Difference at 121: 345.609, expected 1820.16
      -2025-01-20 23:00:08.823947: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.826144 3944451 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      -E0120 23:00:08.826178 3944451 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      -E0120 23:00:08.826185 3944451 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      -E0120 23:00:08.826194 3944451 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      -E0120 23:00:08.826201 3944451 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      -E0120 23:00:08.826207 3944451 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      -E0120 23:00:08.826214 3944451 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      -E0120 23:00:08.826221 3944451 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      -E0120 23:00:08.826227 3944451 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      -E0120 23:00:08.826234 3944451 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      -2025-01-20 23:00:08.826244: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.828433 3944451 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      -E0120 23:00:08.828460 3944451 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      -E0120 23:00:08.828466 3944451 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      -E0120 23:00:08.828473 3944451 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      -E0120 23:00:08.828480 3944451 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      -E0120 23:00:08.828486 3944451 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      -E0120 23:00:08.828493 3944451 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      -E0120 23:00:08.828499 3944451 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      -E0120 23:00:08.828506 3944451 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      -E0120 23:00:08.828512 3944451 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      -2025-01-20 23:00:08.828523: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.830665 3944451 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      -E0120 23:00:08.830697 3944451 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      -E0120 23:00:08.830704 3944451 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      -E0120 23:00:08.830711 3944451 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      -E0120 23:00:08.830717 3944451 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      -E0120 23:00:08.830724 3944451 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      -E0120 23:00:08.830731 3944451 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      -E0120 23:00:08.830737 3944451 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      -E0120 23:00:08.830744 3944451 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      -E0120 23:00:08.830750 3944451 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      -2025-01-20 23:00:08.830761: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.832792 3944451 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      -E0120 23:00:08.832809 3944451 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      -E0120 23:00:08.832812 3944451 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      -E0120 23:00:08.832815 3944451 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      -E0120 23:00:08.832818 3944451 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      -E0120 23:00:08.832821 3944451 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      -E0120 23:00:08.832824 3944451 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      -E0120 23:00:08.832827 3944451 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      -E0120 23:00:08.832832 3944451 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      -E0120 23:00:08.832835 3944451 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      -2025-01-20 23:00:08.832839: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.834833 3944451 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      -E0120 23:00:08.834849 3944451 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      -E0120 23:00:08.834852 3944451 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      -E0120 23:00:08.834855 3944451 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      -E0120 23:00:08.834858 3944451 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      -E0120 23:00:08.834861 3944451 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      -E0120 23:00:08.834864 3944451 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      -E0120 23:00:08.834866 3944451 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      -E0120 23:00:08.834869 3944451 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      -E0120 23:00:08.834872 3944451 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      -2025-01-20 23:00:08.834877: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.836860 3944451 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      -E0120 23:00:08.836873 3944451 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      -E0120 23:00:08.836876 3944451 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      -E0120 23:00:08.836879 3944451 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      -E0120 23:00:08.836882 3944451 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      -E0120 23:00:08.836885 3944451 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      -E0120 23:00:08.836888 3944451 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      -E0120 23:00:08.836891 3944451 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      -E0120 23:00:08.836894 3944451 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      -E0120 23:00:08.836897 3944451 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      -2025-01-20 23:00:08.836901: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.838876 3944451 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      -E0120 23:00:08.838890 3944451 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      -E0120 23:00:08.838893 3944451 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      -E0120 23:00:08.838896 3944451 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      -E0120 23:00:08.838899 3944451 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      -E0120 23:00:08.838902 3944451 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      -E0120 23:00:08.838905 3944451 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      -E0120 23:00:08.838908 3944451 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      -E0120 23:00:08.838910 3944451 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      -E0120 23:00:08.838913 3944451 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      -2025-01-20 23:00:08.838918: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.840900 3944451 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      -E0120 23:00:08.840915 3944451 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      -E0120 23:00:08.840918 3944451 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      -E0120 23:00:08.840921 3944451 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      -E0120 23:00:08.840924 3944451 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      -E0120 23:00:08.840927 3944451 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      -E0120 23:00:08.840930 3944451 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      -E0120 23:00:08.840932 3944451 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      -E0120 23:00:08.840935 3944451 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      -E0120 23:00:08.840938 3944451 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      -2025-01-20 23:00:08.840943: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.842915 3944451 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      -E0120 23:00:08.842932 3944451 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      -E0120 23:00:08.842935 3944451 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      -E0120 23:00:08.842938 3944451 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      -E0120 23:00:08.842941 3944451 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      -E0120 23:00:08.842944 3944451 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      -E0120 23:00:08.842946 3944451 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      -E0120 23:00:08.842949 3944451 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      -E0120 23:00:08.842952 3944451 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      -E0120 23:00:08.842955 3944451 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      -2025-01-20 23:00:08.842960: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.844939 3944451 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      -E0120 23:00:08.844954 3944451 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      -E0120 23:00:08.844957 3944451 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      -E0120 23:00:08.844960 3944451 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      -E0120 23:00:08.844963 3944451 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      -E0120 23:00:08.844965 3944451 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      -E0120 23:00:08.844968 3944451 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      -E0120 23:00:08.844971 3944451 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      -E0120 23:00:08.844974 3944451 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      -E0120 23:00:08.844977 3944451 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      -2025-01-20 23:00:08.844982: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.847021 3944451 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      -E0120 23:00:08.847034 3944451 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      -E0120 23:00:08.847037 3944451 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      -E0120 23:00:08.847040 3944451 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      -E0120 23:00:08.847043 3944451 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      -E0120 23:00:08.847046 3944451 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      -E0120 23:00:08.847051 3944451 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      -E0120 23:00:08.847054 3944451 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      -E0120 23:00:08.847057 3944451 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      -E0120 23:00:08.847060 3944451 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      -2025-01-20 23:00:08.847064: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.849110 3944451 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      -E0120 23:00:08.849127 3944451 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      -E0120 23:00:08.849130 3944451 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      -E0120 23:00:08.849133 3944451 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      -E0120 23:00:08.849136 3944451 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      -E0120 23:00:08.849139 3944451 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      -E0120 23:00:08.849142 3944451 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      -E0120 23:00:08.849145 3944451 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      -E0120 23:00:08.849148 3944451 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      -E0120 23:00:08.849151 3944451 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      -2025-01-20 23:00:08.849155: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.851143 3944451 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.955
      -E0120 23:00:08.851156 3944451 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      -E0120 23:00:08.851160 3944451 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.81
      -E0120 23:00:08.851164 3944451 buffer_comparator.cc:156] Difference at 266: 1047.35, expected 934.417
      -E0120 23:00:08.851167 3944451 buffer_comparator.cc:156] Difference at 270: 1246.8, expected 1101.55
      -E0120 23:00:08.851170 3944451 buffer_comparator.cc:156] Difference at 417: 1222.47, expected 1095.88
      -E0120 23:00:08.851173 3944451 buffer_comparator.cc:156] Difference at 521: 1725.84, expected 1550.38
      -E0120 23:00:08.851176 3944451 buffer_comparator.cc:156] Difference at 522: 1233.24, expected 1093.77
      -E0120 23:00:08.851179 3944451 buffer_comparator.cc:156] Difference at 620: 1247.46, expected 1121.09
      -E0120 23:00:08.851182 3944451 buffer_comparator.cc:156] Difference at 690: 1251.43, expected 1120.62
      -2025-01-20 23:00:08.851187: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.853170 3944451 buffer_comparator.cc:156] Difference at 0: 1057.27, expected 928.598
      -E0120 23:00:08.853183 3944451 buffer_comparator.cc:156] Difference at 1: 1319.15, expected 1186.9
      -E0120 23:00:08.853186 3944451 buffer_comparator.cc:156] Difference at 2: 2004.43, expected 1796.78
      -E0120 23:00:08.853189 3944451 buffer_comparator.cc:156] Difference at 3: 1745.74, expected 1565.85
      -E0120 23:00:08.853192 3944451 buffer_comparator.cc:156] Difference at 4: 1252.2, expected 1095.49
      -E0120 23:00:08.853195 3944451 buffer_comparator.cc:156] Difference at 7: 1175.57, expected 951.955
      -E0120 23:00:08.853198 3944451 buffer_comparator.cc:156] Difference at 8: 1398.75, expected 1216.24
      -E0120 23:00:08.853201 3944451 buffer_comparator.cc:156] Difference at 9: 2125.62, expected 1833.77
      -E0120 23:00:08.853204 3944451 buffer_comparator.cc:156] Difference at 10: 1878.38, expected 1592.38
      -E0120 23:00:08.853207 3944451 buffer_comparator.cc:156] Difference at 11: 1362.67, expected 1121.95
      -2025-01-20 23:00:08.853212: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.855224 3944451 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      -E0120 23:00:08.855240 3944451 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      -E0120 23:00:08.855243 3944451 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      -E0120 23:00:08.855246 3944451 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      -E0120 23:00:08.855249 3944451 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      -E0120 23:00:08.855252 3944451 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      -E0120 23:00:08.855255 3944451 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      -E0120 23:00:08.855258 3944451 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      -E0120 23:00:08.855261 3944451 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      -E0120 23:00:08.855263 3944451 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      -2025-01-20 23:00:08.855268: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.857271 3944451 buffer_comparator.cc:156] Difference at 896: 485.098, expected 958.133
      -E0120 23:00:08.857285 3944451 buffer_comparator.cc:156] Difference at 897: 732.587, expected 1218.67
      -E0120 23:00:08.857288 3944451 buffer_comparator.cc:156] Difference at 898: 635.29, expected 1826.8
      -E0120 23:00:08.857291 3944451 buffer_comparator.cc:156] Difference at 899: 446.948, expected 1593.44
      -E0120 23:00:08.857294 3944451 buffer_comparator.cc:156] Difference at 900: 712.745, expected 1119.04
      -E0120 23:00:08.857297 3944451 buffer_comparator.cc:156] Difference at 901: 516.07, expected 1796.72
      -E0120 23:00:08.857300 3944451 buffer_comparator.cc:156] Difference at 902: 373.095, expected 1279.87
      -E0120 23:00:08.857303 3944451 buffer_comparator.cc:156] Difference at 903: 483.905, expected 941.483
      -E0120 23:00:08.857306 3944451 buffer_comparator.cc:156] Difference at 904: 721.412, expected 1202.98
      -E0120 23:00:08.857309 3944451 buffer_comparator.cc:156] Difference at 905: 633.571, expected 1817.42
      -2025-01-20 23:00:08.857314: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.859309 3944451 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      -E0120 23:00:08.859323 3944451 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      -E0120 23:00:08.859326 3944451 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      -E0120 23:00:08.859329 3944451 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      -E0120 23:00:08.859332 3944451 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      -E0120 23:00:08.859335 3944451 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      -E0120 23:00:08.859338 3944451 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      -E0120 23:00:08.859341 3944451 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      -E0120 23:00:08.859343 3944451 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      -E0120 23:00:08.859346 3944451 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      -2025-01-20 23:00:08.859351: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.861349 3944451 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      -E0120 23:00:08.861365 3944451 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      -E0120 23:00:08.861368 3944451 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      -E0120 23:00:08.861371 3944451 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      -E0120 23:00:08.861375 3944451 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      -E0120 23:00:08.861378 3944451 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      -E0120 23:00:08.861381 3944451 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      -E0120 23:00:08.861384 3944451 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      -E0120 23:00:08.861387 3944451 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      -E0120 23:00:08.861390 3944451 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      -2025-01-20 23:00:08.861394: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.863415 3944451 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      -E0120 23:00:08.863428 3944451 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      -E0120 23:00:08.863431 3944451 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      -E0120 23:00:08.863434 3944451 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      -E0120 23:00:08.863437 3944451 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      -E0120 23:00:08.863440 3944451 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      -E0120 23:00:08.863443 3944451 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      -E0120 23:00:08.863446 3944451 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      -E0120 23:00:08.863449 3944451 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      -E0120 23:00:08.863452 3944451 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      -2025-01-20 23:00:08.863456: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.865569 3944451 buffer_comparator.cc:156] Difference at 1793: 1450.89, expected 1190.76
      -E0120 23:00:08.865583 3944451 buffer_comparator.cc:156] Difference at 1794: 1267.6, expected 1807.71
      -E0120 23:00:08.865586 3944451 buffer_comparator.cc:156] Difference at 1795: 881.963, expected 1565.59
      -E0120 23:00:08.865589 3944451 buffer_comparator.cc:156] Difference at 1796: 1413.49, expected 1101.05
      -E0120 23:00:08.865592 3944451 buffer_comparator.cc:156] Difference at 1797: 1005.6, expected 1756.22
      -E0120 23:00:08.865595 3944451 buffer_comparator.cc:156] Difference at 1798: 764.123, expected 1272.35
      -E0120 23:00:08.865598 3944451 buffer_comparator.cc:156] Difference at 1800: 1466.23, expected 1200.59
      -E0120 23:00:08.865601 3944451 buffer_comparator.cc:156] Difference at 1801: 1286.98, expected 1808.37
      -E0120 23:00:08.865604 3944451 buffer_comparator.cc:156] Difference at 1802: 899.199, expected 1570.73
      -E0120 23:00:08.865606 3944451 buffer_comparator.cc:156] Difference at 1803: 1441.04, expected 1102.47
      -2025-01-20 23:00:08.865611: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.867755 3944451 buffer_comparator.cc:156] Difference at 1793: 1450.89, expected 1190.76
      -E0120 23:00:08.867769 3944451 buffer_comparator.cc:156] Difference at 1794: 1267.6, expected 1807.71
      -E0120 23:00:08.867772 3944451 buffer_comparator.cc:156] Difference at 1795: 881.963, expected 1565.59
      -E0120 23:00:08.867775 3944451 buffer_comparator.cc:156] Difference at 1796: 1413.49, expected 1101.05
      -E0120 23:00:08.867778 3944451 buffer_comparator.cc:156] Difference at 1797: 1005.6, expected 1756.22
      -E0120 23:00:08.867781 3944451 buffer_comparator.cc:156] Difference at 1798: 764.123, expected 1272.35
      -E0120 23:00:08.867784 3944451 buffer_comparator.cc:156] Difference at 1800: 1466.23, expected 1200.59
      -E0120 23:00:08.867786 3944451 buffer_comparator.cc:156] Difference at 1801: 1286.98, expected 1808.37
      -E0120 23:00:08.867791 3944451 buffer_comparator.cc:156] Difference at 1802: 899.199, expected 1570.73
      -E0120 23:00:08.867794 3944451 buffer_comparator.cc:156] Difference at 1803: 1441.04, expected 1102.47
      -2025-01-20 23:00:08.867798: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:08.869885 3944451 buffer_comparator.cc:156] Difference at 1793: 1450.89, expected 1190.76
      -E0120 23:00:08.869899 3944451 buffer_comparator.cc:156] Difference at 1794: 1267.6, expected 1807.71
      -E0120 23:00:08.869902 3944451 buffer_comparator.cc:156] Difference at 1795: 881.963, expected 1565.59
      -E0120 23:00:08.869905 3944451 buffer_comparator.cc:156] Difference at 1796: 1413.49, expected 1101.05
      -E0120 23:00:08.869908 3944451 buffer_comparator.cc:156] Difference at 1797: 1005.6, expected 1756.22
      -E0120 23:00:08.869911 3944451 buffer_comparator.cc:156] Difference at 1798: 764.123, expected 1272.35
      -E0120 23:00:08.869913 3944451 buffer_comparator.cc:156] Difference at 1800: 1466.23, expected 1200.59
      -E0120 23:00:08.869916 3944451 buffer_comparator.cc:156] Difference at 1801: 1286.98, expected 1808.37
      -E0120 23:00:08.869919 3944451 buffer_comparator.cc:156] Difference at 1802: 899.199, expected 1570.73
      -E0120 23:00:08.869922 3944451 buffer_comparator.cc:156] Difference at 1803: 1441.04, expected 1102.47
      -2025-01-20 23:00:08.869927: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -2025-01-20 23:00:10.392691: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_30', 216 bytes spill stores, 216 bytes spill loads
      +E0124 04:57:59.973802 1635447 buffer_comparator.cc:156] Difference at 112: 304.19, expected 949.794
      +E0124 04:57:59.973880 1635447 buffer_comparator.cc:156] Difference at 113: 215.63, expected 1213.31
      +E0124 04:57:59.973888 1635447 buffer_comparator.cc:156] Difference at 114: 345.751, expected 1837.45
      +E0124 04:57:59.973895 1635447 buffer_comparator.cc:156] Difference at 115: 254.606, expected 1600.28
      +E0124 04:57:59.973902 1635447 buffer_comparator.cc:156] Difference at 116: 181.189, expected 1117.21
      +E0124 04:57:59.973909 1635447 buffer_comparator.cc:156] Difference at 117: 228.353, expected 1790.84
      +E0124 04:57:59.973916 1635447 buffer_comparator.cc:156] Difference at 118: 351.051, expected 1291.05
      +E0124 04:57:59.973923 1635447 buffer_comparator.cc:156] Difference at 119: 304.235, expected 943.247
      +E0124 04:57:59.973929 1635447 buffer_comparator.cc:156] Difference at 120: 216.539, expected 1198.87
      +E0124 04:57:59.973936 1635447 buffer_comparator.cc:156] Difference at 121: 345.609, expected 1820.16
      +2025-01-24 04:57:59.973954: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.976363 1635447 buffer_comparator.cc:156] Difference at 112: 304.19, expected 949.794
      +E0124 04:57:59.976393 1635447 buffer_comparator.cc:156] Difference at 113: 215.63, expected 1213.31
      +E0124 04:57:59.976400 1635447 buffer_comparator.cc:156] Difference at 114: 345.751, expected 1837.45
      +E0124 04:57:59.976406 1635447 buffer_comparator.cc:156] Difference at 115: 254.606, expected 1600.28
      +E0124 04:57:59.976413 1635447 buffer_comparator.cc:156] Difference at 116: 181.189, expected 1117.21
      +E0124 04:57:59.976420 1635447 buffer_comparator.cc:156] Difference at 117: 228.353, expected 1790.84
      +E0124 04:57:59.976426 1635447 buffer_comparator.cc:156] Difference at 118: 351.051, expected 1291.05
      +E0124 04:57:59.976433 1635447 buffer_comparator.cc:156] Difference at 119: 304.235, expected 943.247
      +E0124 04:57:59.976440 1635447 buffer_comparator.cc:156] Difference at 120: 216.539, expected 1198.87
      +E0124 04:57:59.976446 1635447 buffer_comparator.cc:156] Difference at 121: 345.609, expected 1820.16
      +2025-01-24 04:57:59.976457: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.978935 1635447 buffer_comparator.cc:156] Difference at 112: 304.19, expected 949.794
      +E0124 04:57:59.978971 1635447 buffer_comparator.cc:156] Difference at 113: 215.63, expected 1213.31
      +E0124 04:57:59.978978 1635447 buffer_comparator.cc:156] Difference at 114: 345.751, expected 1837.45
      +E0124 04:57:59.978985 1635447 buffer_comparator.cc:156] Difference at 115: 254.606, expected 1600.28
      +E0124 04:57:59.978992 1635447 buffer_comparator.cc:156] Difference at 116: 181.189, expected 1117.21
      +E0124 04:57:59.978998 1635447 buffer_comparator.cc:156] Difference at 117: 228.353, expected 1790.84
      +E0124 04:57:59.979005 1635447 buffer_comparator.cc:156] Difference at 118: 351.051, expected 1291.05
      +E0124 04:57:59.979011 1635447 buffer_comparator.cc:156] Difference at 119: 304.235, expected 943.247
      +E0124 04:57:59.979018 1635447 buffer_comparator.cc:156] Difference at 120: 216.539, expected 1198.87
      +E0124 04:57:59.979025 1635447 buffer_comparator.cc:156] Difference at 121: 345.609, expected 1820.16
      +2025-01-24 04:57:59.979035: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.981276 1635447 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      +E0124 04:57:59.981291 1635447 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      +E0124 04:57:59.981294 1635447 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      +E0124 04:57:59.981299 1635447 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      +E0124 04:57:59.981302 1635447 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      +E0124 04:57:59.981305 1635447 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      +E0124 04:57:59.981308 1635447 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      +E0124 04:57:59.981310 1635447 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      +E0124 04:57:59.981313 1635447 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      +E0124 04:57:59.981316 1635447 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      +2025-01-24 04:57:59.981321: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.983564 1635447 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      +E0124 04:57:59.983579 1635447 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      +E0124 04:57:59.983583 1635447 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      +E0124 04:57:59.983586 1635447 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      +E0124 04:57:59.983588 1635447 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      +E0124 04:57:59.983592 1635447 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      +E0124 04:57:59.983594 1635447 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      +E0124 04:57:59.983597 1635447 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      +E0124 04:57:59.983600 1635447 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      +E0124 04:57:59.983603 1635447 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      +2025-01-24 04:57:59.983608: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.985794 1635447 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      +E0124 04:57:59.985820 1635447 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      +E0124 04:57:59.985823 1635447 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      +E0124 04:57:59.985826 1635447 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      +E0124 04:57:59.985829 1635447 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      +E0124 04:57:59.985832 1635447 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      +E0124 04:57:59.985835 1635447 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      +E0124 04:57:59.985838 1635447 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      +E0124 04:57:59.985841 1635447 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      +E0124 04:57:59.985844 1635447 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      +2025-01-24 04:57:59.985848: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.988189 1635447 buffer_comparator.cc:156] Difference at 225: 1429.96, expected 1208.53
      +E0124 04:57:59.988204 1635447 buffer_comparator.cc:156] Difference at 226: 1251.98, expected 1824.95
      +E0124 04:57:59.988207 1635447 buffer_comparator.cc:156] Difference at 227: 872.927, expected 1592.16
      +E0124 04:57:59.988210 1635447 buffer_comparator.cc:156] Difference at 228: 1392.16, expected 1119.85
      +E0124 04:57:59.988213 1635447 buffer_comparator.cc:156] Difference at 229: 994.073, expected 1778.8
      +E0124 04:57:59.988216 1635447 buffer_comparator.cc:156] Difference at 230: 754.206, expected 1283.29
      +E0124 04:57:59.988219 1635447 buffer_comparator.cc:156] Difference at 232: 1451.91, expected 1192.73
      +E0124 04:57:59.988221 1635447 buffer_comparator.cc:156] Difference at 233: 1262.29, expected 1803.14
      +E0124 04:57:59.988226 1635447 buffer_comparator.cc:156] Difference at 234: 889.008, expected 1571.89
      +E0124 04:57:59.988229 1635447 buffer_comparator.cc:156] Difference at 235: 1418.52, expected 1102.22
      +2025-01-24 04:57:59.988234: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.990451 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:57:59.990466 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:57:59.990470 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:57:59.990473 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:57:59.990476 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:57:59.990479 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:57:59.990482 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:57:59.990485 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:57:59.990488 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:57:59.990491 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:57:59.990495: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.992682 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:57:59.992696 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:57:59.992699 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:57:59.992702 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:57:59.992705 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:57:59.992708 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:57:59.992711 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:57:59.992714 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:57:59.992717 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:57:59.992720 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:57:59.992724: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.994909 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:57:59.994924 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:57:59.994928 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:57:59.994931 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:57:59.994933 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:57:59.994936 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:57:59.994939 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:57:59.994942 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:57:59.994945 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:57:59.994948 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:57:59.994953: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.997149 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:57:59.997165 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:57:59.997168 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:57:59.997171 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:57:59.997174 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:57:59.997177 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:57:59.997180 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:57:59.997183 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:57:59.997186 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:57:59.997188 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:57:59.997193: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:57:59.999399 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:57:59.999414 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:57:59.999418 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:57:59.999420 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:57:59.999423 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:57:59.999426 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:57:59.999429 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:57:59.999432 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:57:59.999435 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:57:59.999438 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:57:59.999443: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.001637 1635447 buffer_comparator.cc:156] Difference at 449: 1453.3, expected 1198.64
      +E0124 04:58:00.001652 1635447 buffer_comparator.cc:156] Difference at 450: 1279.09, expected 1813.5
      +E0124 04:58:00.001656 1635447 buffer_comparator.cc:156] Difference at 451: 886.266, expected 1575.24
      +E0124 04:58:00.001659 1635447 buffer_comparator.cc:156] Difference at 452: 1425.28, expected 1104.71
      +E0124 04:58:00.001661 1635447 buffer_comparator.cc:156] Difference at 453: 1023.45, expected 1764.88
      +E0124 04:58:00.001664 1635447 buffer_comparator.cc:156] Difference at 454: 741.257, expected 1269.7
      +E0124 04:58:00.001667 1635447 buffer_comparator.cc:156] Difference at 456: 1427.19, expected 1213.59
      +E0124 04:58:00.001670 1635447 buffer_comparator.cc:156] Difference at 457: 1244.7, expected 1821.28
      +E0124 04:58:00.001673 1635447 buffer_comparator.cc:156] Difference at 458: 874.06, expected 1595.74
      +E0124 04:58:00.001676 1635447 buffer_comparator.cc:156] Difference at 459: 1393.56, expected 1114.22
      +2025-01-24 04:58:00.001681: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.003933 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.003949 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.003952 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.003955 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.003958 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.003961 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.003965 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.003968 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.003971 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.003974 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.003979: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.006239 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.006253 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.006256 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.006259 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.006262 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.006265 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.006268 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.006271 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.006274 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.006277 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.006282: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.008468 1635447 buffer_comparator.cc:156] Difference at 7: 1058.92, expected 951.955
      +E0124 04:58:00.008485 1635447 buffer_comparator.cc:156] Difference at 11: 1263.92, expected 1121.95
      +E0124 04:58:00.008489 1635447 buffer_comparator.cc:156] Difference at 179: 1223.75, expected 1098.81
      +E0124 04:58:00.008492 1635447 buffer_comparator.cc:156] Difference at 266: 1047.35, expected 934.417
      +E0124 04:58:00.008495 1635447 buffer_comparator.cc:156] Difference at 270: 1246.8, expected 1101.55
      +E0124 04:58:00.008499 1635447 buffer_comparator.cc:156] Difference at 417: 1222.47, expected 1095.88
      +E0124 04:58:00.008502 1635447 buffer_comparator.cc:156] Difference at 521: 1725.84, expected 1550.38
      +E0124 04:58:00.008505 1635447 buffer_comparator.cc:156] Difference at 522: 1233.24, expected 1093.77
      +E0124 04:58:00.008508 1635447 buffer_comparator.cc:156] Difference at 620: 1247.46, expected 1121.09
      +E0124 04:58:00.008512 1635447 buffer_comparator.cc:156] Difference at 690: 1251.43, expected 1120.62
      +2025-01-24 04:58:00.008516: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.010695 1635447 buffer_comparator.cc:156] Difference at 0: 1057.27, expected 928.598
      +E0124 04:58:00.010711 1635447 buffer_comparator.cc:156] Difference at 1: 1319.15, expected 1186.9
      +E0124 04:58:00.010714 1635447 buffer_comparator.cc:156] Difference at 2: 2004.43, expected 1796.78
      +E0124 04:58:00.010717 1635447 buffer_comparator.cc:156] Difference at 3: 1745.74, expected 1565.85
      +E0124 04:58:00.010720 1635447 buffer_comparator.cc:156] Difference at 4: 1252.2, expected 1095.49
      +E0124 04:58:00.010723 1635447 buffer_comparator.cc:156] Difference at 7: 1175.57, expected 951.955
      +E0124 04:58:00.010726 1635447 buffer_comparator.cc:156] Difference at 8: 1398.75, expected 1216.24
      +E0124 04:58:00.010729 1635447 buffer_comparator.cc:156] Difference at 9: 2125.62, expected 1833.77
      +E0124 04:58:00.010731 1635447 buffer_comparator.cc:156] Difference at 10: 1878.38, expected 1592.38
      +E0124 04:58:00.010734 1635447 buffer_comparator.cc:156] Difference at 11: 1362.67, expected 1121.95
      +2025-01-24 04:58:00.010739: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.012966 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.012981 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.012984 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.012987 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.012990 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.012993 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.012996 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.012999 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.013002 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.013005 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.013010: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.015229 1635447 buffer_comparator.cc:156] Difference at 896: 485.098, expected 958.133
      +E0124 04:58:00.015248 1635447 buffer_comparator.cc:156] Difference at 897: 732.587, expected 1218.67
      +E0124 04:58:00.015251 1635447 buffer_comparator.cc:156] Difference at 898: 635.29, expected 1826.8
      +E0124 04:58:00.015254 1635447 buffer_comparator.cc:156] Difference at 899: 446.948, expected 1593.44
      +E0124 04:58:00.015257 1635447 buffer_comparator.cc:156] Difference at 900: 712.745, expected 1119.04
      +E0124 04:58:00.015260 1635447 buffer_comparator.cc:156] Difference at 901: 516.07, expected 1796.72
      +E0124 04:58:00.015263 1635447 buffer_comparator.cc:156] Difference at 902: 373.095, expected 1279.87
      +E0124 04:58:00.015266 1635447 buffer_comparator.cc:156] Difference at 903: 483.905, expected 941.483
      +E0124 04:58:00.015269 1635447 buffer_comparator.cc:156] Difference at 904: 721.412, expected 1202.98
      +E0124 04:58:00.015272 1635447 buffer_comparator.cc:156] Difference at 905: 633.571, expected 1817.42
      +2025-01-24 04:58:00.015277: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.017492 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.017507 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.017510 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.017513 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.017516 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.017519 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.017522 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.017525 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.017528 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.017531 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.017536: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.019748 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.019764 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.019767 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.019770 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.019775 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.019778 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.019781 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.019784 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.019787 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.019789 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.019794: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.022008 1635447 buffer_comparator.cc:156] Difference at 897: 1448.31, expected 1218.67
      +E0124 04:58:00.022021 1635447 buffer_comparator.cc:156] Difference at 898: 1259.31, expected 1826.8
      +E0124 04:58:00.022024 1635447 buffer_comparator.cc:156] Difference at 899: 890.005, expected 1593.44
      +E0124 04:58:00.022027 1635447 buffer_comparator.cc:156] Difference at 900: 1414.65, expected 1119.04
      +E0124 04:58:00.022030 1635447 buffer_comparator.cc:156] Difference at 901: 1020.46, expected 1796.72
      +E0124 04:58:00.022033 1635447 buffer_comparator.cc:156] Difference at 902: 745.849, expected 1279.87
      +E0124 04:58:00.022036 1635447 buffer_comparator.cc:156] Difference at 904: 1440.8, expected 1202.98
      +E0124 04:58:00.022039 1635447 buffer_comparator.cc:156] Difference at 905: 1259.53, expected 1817.42
      +E0124 04:58:00.022042 1635447 buffer_comparator.cc:156] Difference at 906: 875.773, expected 1572.98
      +E0124 04:58:00.022045 1635447 buffer_comparator.cc:156] Difference at 907: 1408.57, expected 1117.68
      +2025-01-24 04:58:00.022055: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.024415 1635447 buffer_comparator.cc:156] Difference at 1793: 1450.89, expected 1190.76
      +E0124 04:58:00.024429 1635447 buffer_comparator.cc:156] Difference at 1794: 1267.6, expected 1807.71
      +E0124 04:58:00.024432 1635447 buffer_comparator.cc:156] Difference at 1795: 881.963, expected 1565.59
      +E0124 04:58:00.024435 1635447 buffer_comparator.cc:156] Difference at 1796: 1413.49, expected 1101.05
      +E0124 04:58:00.024438 1635447 buffer_comparator.cc:156] Difference at 1797: 1005.6, expected 1756.22
      +E0124 04:58:00.024441 1635447 buffer_comparator.cc:156] Difference at 1798: 764.123, expected 1272.35
      +E0124 04:58:00.024444 1635447 buffer_comparator.cc:156] Difference at 1800: 1466.23, expected 1200.59
      +E0124 04:58:00.024447 1635447 buffer_comparator.cc:156] Difference at 1801: 1286.98, expected 1808.37
      +E0124 04:58:00.024450 1635447 buffer_comparator.cc:156] Difference at 1802: 899.199, expected 1570.73
      +E0124 04:58:00.024453 1635447 buffer_comparator.cc:156] Difference at 1803: 1441.04, expected 1102.47
      +2025-01-24 04:58:00.024457: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.026814 1635447 buffer_comparator.cc:156] Difference at 1793: 1450.89, expected 1190.76
      +E0124 04:58:00.026829 1635447 buffer_comparator.cc:156] Difference at 1794: 1267.6, expected 1807.71
      +E0124 04:58:00.026832 1635447 buffer_comparator.cc:156] Difference at 1795: 881.963, expected 1565.59
      +E0124 04:58:00.026835 1635447 buffer_comparator.cc:156] Difference at 1796: 1413.49, expected 1101.05
      +E0124 04:58:00.026838 1635447 buffer_comparator.cc:156] Difference at 1797: 1005.6, expected 1756.22
      +E0124 04:58:00.026841 1635447 buffer_comparator.cc:156] Difference at 1798: 764.123, expected 1272.35
      +E0124 04:58:00.026844 1635447 buffer_comparator.cc:156] Difference at 1800: 1466.23, expected 1200.59
      +E0124 04:58:00.026847 1635447 buffer_comparator.cc:156] Difference at 1801: 1286.98, expected 1808.37
      +E0124 04:58:00.026851 1635447 buffer_comparator.cc:156] Difference at 1802: 899.199, expected 1570.73
      +E0124 04:58:00.026854 1635447 buffer_comparator.cc:156] Difference at 1803: 1441.04, expected 1102.47
      +2025-01-24 04:58:00.026859: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:00.029167 1635447 buffer_comparator.cc:156] Difference at 1793: 1450.89, expected 1190.76
      +E0124 04:58:00.029184 1635447 buffer_comparator.cc:156] Difference at 1794: 1267.6, expected 1807.71
      +E0124 04:58:00.029187 1635447 buffer_comparator.cc:156] Difference at 1795: 881.963, expected 1565.59
      +E0124 04:58:00.029190 1635447 buffer_comparator.cc:156] Difference at 1796: 1413.49, expected 1101.05
      +E0124 04:58:00.029193 1635447 buffer_comparator.cc:156] Difference at 1797: 1005.6, expected 1756.22
      +E0124 04:58:00.029196 1635447 buffer_comparator.cc:156] Difference at 1798: 764.123, expected 1272.35
      +E0124 04:58:00.029199 1635447 buffer_comparator.cc:156] Difference at 1800: 1466.23, expected 1200.59
      +E0124 04:58:00.029202 1635447 buffer_comparator.cc:156] Difference at 1801: 1286.98, expected 1808.37
      +E0124 04:58:00.029205 1635447 buffer_comparator.cc:156] Difference at 1802: 899.199, expected 1570.73
      +E0124 04:58:00.029208 1635447 buffer_comparator.cc:156] Difference at 1803: 1441.04, expected 1102.47
      +2025-01-24 04:58:00.029212: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +2025-01-24 04:58:01.758767: I external/xla/xla/stream_executor/cuda/subprocess_compilation.cc:346] ptxas warning : Registers are spilled to local memory in function 'gemm_fusion_dot_30', 216 bytes spill stores, 216 bytes spill loads
       
      -E0120 23:00:10.649785 3944451 buffer_comparator.cc:156] Difference at 112: 615.618, expected 769.985
      -E0120 23:00:10.649889 3944451 buffer_comparator.cc:156] Difference at 113: 621.908, expected 1100
      -E0120 23:00:10.649904 3944451 buffer_comparator.cc:156] Difference at 114: 511.482, expected 1061.37
      -E0120 23:00:10.649912 3944451 buffer_comparator.cc:156] Difference at 115: 351.151, expected 1558.2
      -E0120 23:00:10.649919 3944451 buffer_comparator.cc:156] Difference at 116: 299.162, expected 1573.39
      -E0120 23:00:10.649926 3944451 buffer_comparator.cc:156] Difference at 117: 438.626, expected 1297.47
      -E0120 23:00:10.649933 3944451 buffer_comparator.cc:156] Difference at 118: 419.047, expected 880.235
      -E0120 23:00:10.649939 3944451 buffer_comparator.cc:156] Difference at 119: 614.3, expected 764.244
      -E0120 23:00:10.649946 3944451 buffer_comparator.cc:156] Difference at 120: 624.213, expected 1089.23
      -E0120 23:00:10.649953 3944451 buffer_comparator.cc:156] Difference at 121: 511.707, expected 1044.63
      -2025-01-20 23:00:10.649972: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.653130 3944451 buffer_comparator.cc:156] Difference at 112: 615.618, expected 769.985
      -E0120 23:00:10.653156 3944451 buffer_comparator.cc:156] Difference at 113: 621.908, expected 1100
      -E0120 23:00:10.653164 3944451 buffer_comparator.cc:156] Difference at 114: 511.482, expected 1061.37
      -E0120 23:00:10.653171 3944451 buffer_comparator.cc:156] Difference at 115: 351.151, expected 1558.2
      -E0120 23:00:10.653178 3944451 buffer_comparator.cc:156] Difference at 116: 299.162, expected 1573.39
      -E0120 23:00:10.653185 3944451 buffer_comparator.cc:156] Difference at 117: 438.626, expected 1297.47
      -E0120 23:00:10.653191 3944451 buffer_comparator.cc:156] Difference at 118: 419.047, expected 880.235
      -E0120 23:00:10.653198 3944451 buffer_comparator.cc:156] Difference at 119: 614.3, expected 764.244
      -E0120 23:00:10.653205 3944451 buffer_comparator.cc:156] Difference at 120: 624.213, expected 1089.23
      -E0120 23:00:10.653211 3944451 buffer_comparator.cc:156] Difference at 121: 511.707, expected 1044.63
      -2025-01-20 23:00:10.653225: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.656355 3944451 buffer_comparator.cc:156] Difference at 112: 615.618, expected 769.985
      -E0120 23:00:10.656367 3944451 buffer_comparator.cc:156] Difference at 113: 621.908, expected 1100
      -E0120 23:00:10.656370 3944451 buffer_comparator.cc:156] Difference at 114: 511.482, expected 1061.37
      -E0120 23:00:10.656373 3944451 buffer_comparator.cc:156] Difference at 115: 351.151, expected 1558.2
      -E0120 23:00:10.656376 3944451 buffer_comparator.cc:156] Difference at 116: 299.162, expected 1573.39
      -E0120 23:00:10.656379 3944451 buffer_comparator.cc:156] Difference at 117: 438.626, expected 1297.47
      -E0120 23:00:10.656382 3944451 buffer_comparator.cc:156] Difference at 118: 419.047, expected 880.235
      -E0120 23:00:10.656385 3944451 buffer_comparator.cc:156] Difference at 119: 614.3, expected 764.244
      -E0120 23:00:10.656388 3944451 buffer_comparator.cc:156] Difference at 120: 624.213, expected 1089.23
      -E0120 23:00:10.656391 3944451 buffer_comparator.cc:156] Difference at 121: 511.707, expected 1044.63
      -2025-01-20 23:00:10.656396: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.659359 3944451 buffer_comparator.cc:156] Difference at 112: 615.618, expected 769.985
      -E0120 23:00:10.659370 3944451 buffer_comparator.cc:156] Difference at 113: 621.908, expected 1100
      -E0120 23:00:10.659374 3944451 buffer_comparator.cc:156] Difference at 114: 511.482, expected 1061.37
      -E0120 23:00:10.659377 3944451 buffer_comparator.cc:156] Difference at 115: 351.151, expected 1558.2
      -E0120 23:00:10.659380 3944451 buffer_comparator.cc:156] Difference at 116: 299.162, expected 1573.39
      -E0120 23:00:10.659383 3944451 buffer_comparator.cc:156] Difference at 117: 438.626, expected 1297.47
      -E0120 23:00:10.659386 3944451 buffer_comparator.cc:156] Difference at 118: 419.047, expected 880.235
      -E0120 23:00:10.659389 3944451 buffer_comparator.cc:156] Difference at 119: 614.3, expected 764.244
      -E0120 23:00:10.659392 3944451 buffer_comparator.cc:156] Difference at 120: 624.213, expected 1089.23
      -E0120 23:00:10.659395 3944451 buffer_comparator.cc:156] Difference at 121: 511.707, expected 1044.63
      -2025-01-20 23:00:10.659400: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.662366 3944451 buffer_comparator.cc:156] Difference at 112: 615.618, expected 769.985
      -E0120 23:00:10.662377 3944451 buffer_comparator.cc:156] Difference at 113: 621.908, expected 1100
      -E0120 23:00:10.662381 3944451 buffer_comparator.cc:156] Difference at 114: 511.482, expected 1061.37
      -E0120 23:00:10.662384 3944451 buffer_comparator.cc:156] Difference at 115: 351.151, expected 1558.2
      -E0120 23:00:10.662387 3944451 buffer_comparator.cc:156] Difference at 116: 299.162, expected 1573.39
      -E0120 23:00:10.662390 3944451 buffer_comparator.cc:156] Difference at 117: 438.626, expected 1297.47
      -E0120 23:00:10.662393 3944451 buffer_comparator.cc:156] Difference at 118: 419.047, expected 880.235
      -E0120 23:00:10.662396 3944451 buffer_comparator.cc:156] Difference at 119: 614.3, expected 764.244
      -E0120 23:00:10.662399 3944451 buffer_comparator.cc:156] Difference at 120: 624.213, expected 1089.23
      -E0120 23:00:10.662401 3944451 buffer_comparator.cc:156] Difference at 121: 511.707, expected 1044.63
      -2025-01-20 23:00:10.662406: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.665448 3944451 buffer_comparator.cc:156] Difference at 112: 615.618, expected 769.985
      -E0120 23:00:10.665459 3944451 buffer_comparator.cc:156] Difference at 113: 621.908, expected 1100
      -E0120 23:00:10.665462 3944451 buffer_comparator.cc:156] Difference at 114: 511.482, expected 1061.37
      -E0120 23:00:10.665467 3944451 buffer_comparator.cc:156] Difference at 115: 351.151, expected 1558.2
      -E0120 23:00:10.665470 3944451 buffer_comparator.cc:156] Difference at 116: 299.162, expected 1573.39
      -E0120 23:00:10.665473 3944451 buffer_comparator.cc:156] Difference at 117: 438.626, expected 1297.47
      -E0120 23:00:10.665476 3944451 buffer_comparator.cc:156] Difference at 118: 419.047, expected 880.235
      -E0120 23:00:10.665479 3944451 buffer_comparator.cc:156] Difference at 119: 614.3, expected 764.244
      -E0120 23:00:10.665482 3944451 buffer_comparator.cc:156] Difference at 120: 624.213, expected 1089.23
      -E0120 23:00:10.665485 3944451 buffer_comparator.cc:156] Difference at 121: 511.707, expected 1044.63
      -2025-01-20 23:00:10.665490: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.668396 3944451 buffer_comparator.cc:156] Difference at 224: 612.82, expected 745.838
      -E0120 23:00:10.668407 3944451 buffer_comparator.cc:156] Difference at 225: 624.08, expected 1079.1
      -E0120 23:00:10.668411 3944451 buffer_comparator.cc:156] Difference at 226: 503.822, expected 1034.99
      -E0120 23:00:10.668414 3944451 buffer_comparator.cc:156] Difference at 227: 349.836, expected 1538.8
      -E0120 23:00:10.668417 3944451 buffer_comparator.cc:156] Difference at 228: 305.595, expected 1554.44
      -E0120 23:00:10.668420 3944451 buffer_comparator.cc:156] Difference at 229: 442.133, expected 1264.82
      -E0120 23:00:10.668423 3944451 buffer_comparator.cc:156] Difference at 230: 429.287, expected 853.966
      -E0120 23:00:10.668426 3944451 buffer_comparator.cc:156] Difference at 231: 623.312, expected 756.177
      -E0120 23:00:10.668429 3944451 buffer_comparator.cc:156] Difference at 232: 633.094, expected 1076.91
      -E0120 23:00:10.668432 3944451 buffer_comparator.cc:156] Difference at 233: 515.065, expected 1029.02
      -2025-01-20 23:00:10.668437: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.671336 3944451 buffer_comparator.cc:156] Difference at 224: 1241.41, expected 745.838
      -E0120 23:00:10.671347 3944451 buffer_comparator.cc:156] Difference at 225: 1260.98, expected 1079.1
      -E0120 23:00:10.671350 3944451 buffer_comparator.cc:156] Difference at 227: 700.675, expected 1538.8
      -E0120 23:00:10.671353 3944451 buffer_comparator.cc:156] Difference at 228: 611.164, expected 1554.44
      -E0120 23:00:10.671356 3944451 buffer_comparator.cc:156] Difference at 229: 881, expected 1264.82
      -E0120 23:00:10.671360 3944451 buffer_comparator.cc:156] Difference at 231: 1239.26, expected 756.177
      -E0120 23:00:10.671363 3944451 buffer_comparator.cc:156] Difference at 232: 1261.85, expected 1076.91
      -E0120 23:00:10.671366 3944451 buffer_comparator.cc:156] Difference at 234: 698.402, expected 1533.54
      -E0120 23:00:10.671369 3944451 buffer_comparator.cc:156] Difference at 235: 603.167, expected 1551.87
      -E0120 23:00:10.671371 3944451 buffer_comparator.cc:156] Difference at 236: 859.047, expected 1264.84
      -2025-01-20 23:00:10.671376: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.674337 3944451 buffer_comparator.cc:156] Difference at 224: 1241.41, expected 745.838
      -E0120 23:00:10.674348 3944451 buffer_comparator.cc:156] Difference at 225: 1260.98, expected 1079.1
      -E0120 23:00:10.674352 3944451 buffer_comparator.cc:156] Difference at 227: 700.675, expected 1538.8
      -E0120 23:00:10.674355 3944451 buffer_comparator.cc:156] Difference at 228: 611.164, expected 1554.44
      -E0120 23:00:10.674358 3944451 buffer_comparator.cc:156] Difference at 229: 881, expected 1264.82
      -E0120 23:00:10.674361 3944451 buffer_comparator.cc:156] Difference at 231: 1239.26, expected 756.177
      -E0120 23:00:10.674364 3944451 buffer_comparator.cc:156] Difference at 232: 1261.85, expected 1076.91
      -E0120 23:00:10.674367 3944451 buffer_comparator.cc:156] Difference at 234: 698.402, expected 1533.54
      -E0120 23:00:10.674372 3944451 buffer_comparator.cc:156] Difference at 235: 603.167, expected 1551.87
      -E0120 23:00:10.674375 3944451 buffer_comparator.cc:156] Difference at 236: 859.047, expected 1264.84
      -2025-01-20 23:00:10.674380: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.677306 3944451 buffer_comparator.cc:156] Difference at 448: 1224.67, expected 770.258
      -E0120 23:00:10.677317 3944451 buffer_comparator.cc:156] Difference at 449: 1238.62, expected 1098.93
      -E0120 23:00:10.677320 3944451 buffer_comparator.cc:156] Difference at 451: 690.154, expected 1560.21
      -E0120 23:00:10.677324 3944451 buffer_comparator.cc:156] Difference at 452: 601.951, expected 1585.41
      -E0120 23:00:10.677327 3944451 buffer_comparator.cc:156] Difference at 453: 877.959, expected 1307.15
      -E0120 23:00:10.677330 3944451 buffer_comparator.cc:156] Difference at 455: 1229.45, expected 760.638
      -E0120 23:00:10.677333 3944451 buffer_comparator.cc:156] Difference at 456: 1249.76, expected 1092.67
      -E0120 23:00:10.677336 3944451 buffer_comparator.cc:156] Difference at 458: 694.593, expected 1551.71
      -E0120 23:00:10.677339 3944451 buffer_comparator.cc:156] Difference at 459: 614.473, expected 1570.73
      -E0120 23:00:10.677342 3944451 buffer_comparator.cc:156] Difference at 460: 884.496, expected 1283.32
      -2025-01-20 23:00:10.677346: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.680191 3944451 buffer_comparator.cc:156] Difference at 448: 1534.66, expected 770.258
      -E0120 23:00:10.680202 3944451 buffer_comparator.cc:156] Difference at 449: 1551.34, expected 1098.93
      -E0120 23:00:10.680206 3944451 buffer_comparator.cc:156] Difference at 450: 1275.31, expected 1056.29
      -E0120 23:00:10.680209 3944451 buffer_comparator.cc:156] Difference at 451: 865.11, expected 1560.21
      -E0120 23:00:10.680212 3944451 buffer_comparator.cc:156] Difference at 452: 753.818, expected 1585.41
      -E0120 23:00:10.680215 3944451 buffer_comparator.cc:156] Difference at 453: 1089.94, expected 1307.15
      -E0120 23:00:10.680218 3944451 buffer_comparator.cc:156] Difference at 454: 1039.51, expected 881.296
      -E0120 23:00:10.680221 3944451 buffer_comparator.cc:156] Difference at 455: 1540, expected 760.638
      -E0120 23:00:10.680224 3944451 buffer_comparator.cc:156] Difference at 456: 1558.94, expected 1092.67
      -E0120 23:00:10.680227 3944451 buffer_comparator.cc:156] Difference at 457: 1282.39, expected 1051.03
      -2025-01-20 23:00:10.680232: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.683110 3944451 buffer_comparator.cc:156] Difference at 448: 1534.66, expected 770.258
      -E0120 23:00:10.683121 3944451 buffer_comparator.cc:156] Difference at 449: 1551.34, expected 1098.93
      -E0120 23:00:10.683124 3944451 buffer_comparator.cc:156] Difference at 450: 1275.31, expected 1056.29
      -E0120 23:00:10.683127 3944451 buffer_comparator.cc:156] Difference at 451: 865.11, expected 1560.21
      -E0120 23:00:10.683131 3944451 buffer_comparator.cc:156] Difference at 452: 753.818, expected 1585.41
      -E0120 23:00:10.683134 3944451 buffer_comparator.cc:156] Difference at 453: 1089.94, expected 1307.15
      -E0120 23:00:10.683137 3944451 buffer_comparator.cc:156] Difference at 454: 1039.51, expected 881.296
      -E0120 23:00:10.683140 3944451 buffer_comparator.cc:156] Difference at 455: 1540, expected 760.638
      -E0120 23:00:10.683143 3944451 buffer_comparator.cc:156] Difference at 456: 1558.94, expected 1092.67
      -E0120 23:00:10.683145 3944451 buffer_comparator.cc:156] Difference at 457: 1282.39, expected 1051.03
      -2025-01-20 23:00:10.683150: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.686103 3944451 buffer_comparator.cc:156] Difference at 448: 1534.66, expected 770.258
      -E0120 23:00:10.686116 3944451 buffer_comparator.cc:156] Difference at 449: 1551.34, expected 1098.93
      -E0120 23:00:10.686120 3944451 buffer_comparator.cc:156] Difference at 450: 1275.31, expected 1056.29
      -E0120 23:00:10.686123 3944451 buffer_comparator.cc:156] Difference at 451: 865.11, expected 1560.21
      -E0120 23:00:10.686126 3944451 buffer_comparator.cc:156] Difference at 452: 753.818, expected 1585.41
      -E0120 23:00:10.686129 3944451 buffer_comparator.cc:156] Difference at 453: 1089.94, expected 1307.15
      -E0120 23:00:10.686132 3944451 buffer_comparator.cc:156] Difference at 454: 1039.51, expected 881.296
      -E0120 23:00:10.686135 3944451 buffer_comparator.cc:156] Difference at 455: 1540, expected 760.638
      -E0120 23:00:10.686138 3944451 buffer_comparator.cc:156] Difference at 456: 1558.94, expected 1092.67
      -E0120 23:00:10.686141 3944451 buffer_comparator.cc:156] Difference at 457: 1282.39, expected 1051.03
      -2025-01-20 23:00:10.686146: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.694973 3944451 buffer_comparator.cc:156] Difference at 896: 1533.56, expected 767.869
      -E0120 23:00:10.694985 3944451 buffer_comparator.cc:156] Difference at 897: 1566.05, expected 1090.2
      -E0120 23:00:10.694988 3944451 buffer_comparator.cc:156] Difference at 898: 1278.69, expected 1050.23
      -E0120 23:00:10.694991 3944451 buffer_comparator.cc:156] Difference at 899: 869.624, expected 1561.6
      -E0120 23:00:10.694994 3944451 buffer_comparator.cc:156] Difference at 900: 763.472, expected 1574.44
      -E0120 23:00:10.694997 3944451 buffer_comparator.cc:156] Difference at 901: 1112.54, expected 1303.84
      -E0120 23:00:10.695002 3944451 buffer_comparator.cc:156] Difference at 902: 1063.49, expected 881.498
      -E0120 23:00:10.695005 3944451 buffer_comparator.cc:156] Difference at 903: 1562.96, expected 755.455
      -E0120 23:00:10.695008 3944451 buffer_comparator.cc:156] Difference at 904: 1595.78, expected 1073.52
      -E0120 23:00:10.695011 3944451 buffer_comparator.cc:156] Difference at 905: 1307.7, expected 1034.81
      -2025-01-20 23:00:10.695016: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.703891 3944451 buffer_comparator.cc:156] Difference at 896: 610.467, expected 767.869
      -E0120 23:00:10.703903 3944451 buffer_comparator.cc:156] Difference at 897: 622.568, expected 1090.2
      -E0120 23:00:10.703906 3944451 buffer_comparator.cc:156] Difference at 898: 502.172, expected 1050.23
      -E0120 23:00:10.703909 3944451 buffer_comparator.cc:156] Difference at 899: 349.792, expected 1561.6
      -E0120 23:00:10.703912 3944451 buffer_comparator.cc:156] Difference at 900: 312.127, expected 1574.44
      -E0120 23:00:10.703915 3944451 buffer_comparator.cc:156] Difference at 901: 449.924, expected 1303.84
      -E0120 23:00:10.703918 3944451 buffer_comparator.cc:156] Difference at 902: 433.368, expected 881.498
      -E0120 23:00:10.703921 3944451 buffer_comparator.cc:156] Difference at 903: 632.775, expected 755.455
      -E0120 23:00:10.703924 3944451 buffer_comparator.cc:156] Difference at 904: 650.408, expected 1073.52
      -E0120 23:00:10.703927 3944451 buffer_comparator.cc:156] Difference at 905: 531.789, expected 1034.81
      -2025-01-20 23:00:10.703932: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.706816 3944451 buffer_comparator.cc:156] Difference at 896: 1238.44, expected 767.869
      -E0120 23:00:10.706827 3944451 buffer_comparator.cc:156] Difference at 897: 1262.77, expected 1090.2
      -E0120 23:00:10.706831 3944451 buffer_comparator.cc:156] Difference at 899: 699.535, expected 1561.6
      -E0120 23:00:10.706834 3944451 buffer_comparator.cc:156] Difference at 900: 611.836, expected 1574.44
      -E0120 23:00:10.706837 3944451 buffer_comparator.cc:156] Difference at 901: 898.399, expected 1303.84
      -E0120 23:00:10.706840 3944451 buffer_comparator.cc:156] Difference at 903: 1249.57, expected 755.455
      -E0120 23:00:10.706843 3944451 buffer_comparator.cc:156] Difference at 904: 1276.06, expected 1073.52
      -E0120 23:00:10.706846 3944451 buffer_comparator.cc:156] Difference at 906: 708.794, expected 1528.61
      -E0120 23:00:10.706849 3944451 buffer_comparator.cc:156] Difference at 907: 604.026, expected 1544.19
      -E0120 23:00:10.706852 3944451 buffer_comparator.cc:156] Difference at 908: 883.464, expected 1276.75
      -2025-01-20 23:00:10.706856: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:00:10.712907 3944451 buffer_comparator.cc:156] Difference at 1792: 1245.59, expected 748.592
      -E0120 23:00:10.712918 3944451 buffer_comparator.cc:156] Difference at 1793: 1267.42, expected 1073.49
      -E0120 23:00:10.712921 3944451 buffer_comparator.cc:156] Difference at 1795: 702.928, expected 1535.73
      -E0120 23:00:10.712924 3944451 buffer_comparator.cc:156] Difference at 1796: 600.543, expected 1559.13
      -E0120 23:00:10.712928 3944451 buffer_comparator.cc:156] Difference at 1797: 865.055, expected 1277.09
      -E0120 23:00:10.712931 3944451 buffer_comparator.cc:156] Difference at 1799: 1224.8, expected 752.412
      -E0120 23:00:10.712934 3944451 buffer_comparator.cc:156] Difference at 1800: 1234.41, expected 1077.59
      -E0120 23:00:10.712937 3944451 buffer_comparator.cc:156] Difference at 1802: 688.427, expected 1537.6
      -E0120 23:00:10.712940 3944451 buffer_comparator.cc:156] Difference at 1803: 610.636, expected 1563.06
      -E0120 23:00:10.712943 3944451 buffer_comparator.cc:156] Difference at 1804: 864.306, expected 1270.2
      -2025-01-20 23:00:10.712948: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -Test Loss: 1.548569	Test Acc: 65.5000%

      Appendix

      julia
      using InteractiveUtils
      +E0124 04:58:01.765178 1635447 buffer_comparator.cc:156] Difference at 112: 0, expected 769.985
      +E0124 04:58:01.765247 1635447 buffer_comparator.cc:156] Difference at 113: 0, expected 1100
      +E0124 04:58:01.765255 1635447 buffer_comparator.cc:156] Difference at 114: 0, expected 1061.37
      +E0124 04:58:01.765262 1635447 buffer_comparator.cc:156] Difference at 115: 0, expected 1558.2
      +E0124 04:58:01.765269 1635447 buffer_comparator.cc:156] Difference at 116: 0, expected 1573.39
      +E0124 04:58:01.765275 1635447 buffer_comparator.cc:156] Difference at 117: 0, expected 1297.47
      +E0124 04:58:01.765282 1635447 buffer_comparator.cc:156] Difference at 118: 0, expected 880.235
      +E0124 04:58:01.765288 1635447 buffer_comparator.cc:156] Difference at 119: 0, expected 764.244
      +E0124 04:58:01.765294 1635447 buffer_comparator.cc:156] Difference at 120: 0, expected 1089.23
      +E0124 04:58:01.765301 1635447 buffer_comparator.cc:156] Difference at 121: 0, expected 1044.63
      +2025-01-24 04:58:01.765315: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:01.784469 1635447 buffer_comparator.cc:156] Difference at 224: 0, expected 745.838
      +E0124 04:58:01.784482 1635447 buffer_comparator.cc:156] Difference at 225: 0, expected 1079.1
      +E0124 04:58:01.784485 1635447 buffer_comparator.cc:156] Difference at 226: 0, expected 1034.99
      +E0124 04:58:01.784488 1635447 buffer_comparator.cc:156] Difference at 227: 0, expected 1538.8
      +E0124 04:58:01.784491 1635447 buffer_comparator.cc:156] Difference at 228: 0, expected 1554.44
      +E0124 04:58:01.784494 1635447 buffer_comparator.cc:156] Difference at 229: 0, expected 1264.82
      +E0124 04:58:01.784497 1635447 buffer_comparator.cc:156] Difference at 230: 0, expected 853.966
      +E0124 04:58:01.784499 1635447 buffer_comparator.cc:156] Difference at 231: 0, expected 756.177
      +E0124 04:58:01.784502 1635447 buffer_comparator.cc:156] Difference at 232: 0, expected 1076.91
      +E0124 04:58:01.784505 1635447 buffer_comparator.cc:156] Difference at 233: 0, expected 1029.02
      +2025-01-24 04:58:01.784510: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:01.793736 1635447 buffer_comparator.cc:156] Difference at 448: 0, expected 770.258
      +E0124 04:58:01.793752 1635447 buffer_comparator.cc:156] Difference at 449: 0, expected 1098.93
      +E0124 04:58:01.793755 1635447 buffer_comparator.cc:156] Difference at 450: 0, expected 1056.29
      +E0124 04:58:01.793758 1635447 buffer_comparator.cc:156] Difference at 451: 0, expected 1560.21
      +E0124 04:58:01.793761 1635447 buffer_comparator.cc:156] Difference at 452: 0, expected 1585.41
      +E0124 04:58:01.793764 1635447 buffer_comparator.cc:156] Difference at 453: 0, expected 1307.15
      +E0124 04:58:01.793767 1635447 buffer_comparator.cc:156] Difference at 454: 0, expected 881.296
      +E0124 04:58:01.793770 1635447 buffer_comparator.cc:156] Difference at 455: 0, expected 760.638
      +E0124 04:58:01.793773 1635447 buffer_comparator.cc:156] Difference at 456: 0, expected 1092.67
      +E0124 04:58:01.793775 1635447 buffer_comparator.cc:156] Difference at 457: 0, expected 1051.03
      +2025-01-24 04:58:01.793780: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:01.812194 1635447 buffer_comparator.cc:156] Difference at 896: 0, expected 767.869
      +E0124 04:58:01.812208 1635447 buffer_comparator.cc:156] Difference at 897: 0, expected 1090.2
      +E0124 04:58:01.812211 1635447 buffer_comparator.cc:156] Difference at 898: 0, expected 1050.23
      +E0124 04:58:01.812214 1635447 buffer_comparator.cc:156] Difference at 899: 0, expected 1561.6
      +E0124 04:58:01.812217 1635447 buffer_comparator.cc:156] Difference at 900: 0, expected 1574.44
      +E0124 04:58:01.812220 1635447 buffer_comparator.cc:156] Difference at 901: 0, expected 1303.84
      +E0124 04:58:01.812223 1635447 buffer_comparator.cc:156] Difference at 902: 0, expected 881.498
      +E0124 04:58:01.812225 1635447 buffer_comparator.cc:156] Difference at 903: 0, expected 755.455
      +E0124 04:58:01.812228 1635447 buffer_comparator.cc:156] Difference at 904: 0, expected 1073.52
      +E0124 04:58:01.812231 1635447 buffer_comparator.cc:156] Difference at 905: 0, expected 1034.81
      +2025-01-24 04:58:01.812236: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:58:01.830899 1635447 buffer_comparator.cc:156] Difference at 1792: 0, expected 748.592
      +E0124 04:58:01.830913 1635447 buffer_comparator.cc:156] Difference at 1793: 0, expected 1073.49
      +E0124 04:58:01.830916 1635447 buffer_comparator.cc:156] Difference at 1794: 0, expected 1027.26
      +E0124 04:58:01.830918 1635447 buffer_comparator.cc:156] Difference at 1795: 0, expected 1535.73
      +E0124 04:58:01.830921 1635447 buffer_comparator.cc:156] Difference at 1796: 0, expected 1559.13
      +E0124 04:58:01.830924 1635447 buffer_comparator.cc:156] Difference at 1797: 0, expected 1277.09
      +E0124 04:58:01.830927 1635447 buffer_comparator.cc:156] Difference at 1798: 0, expected 859.43
      +E0124 04:58:01.830930 1635447 buffer_comparator.cc:156] Difference at 1799: 0, expected 752.412
      +E0124 04:58:01.830933 1635447 buffer_comparator.cc:156] Difference at 1800: 0, expected 1077.59
      +E0124 04:58:01.830936 1635447 buffer_comparator.cc:156] Difference at 1801: 0, expected 1037.98
      +2025-01-24 04:58:01.830940: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +Test Loss: 1.441829	Test Acc: 66.7000%

      Appendix

      julia
      using InteractiveUtils
       InteractiveUtils.versioninfo()
       
       if @isdefined(MLDataDevices)
      @@ -1343,8 +1343,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -1362,7 +1362,7 @@
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
         JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

-
+
\ No newline at end of file
diff --git a/dev/tutorials/intermediate/7_RealNVP.html b/dev/tutorials/intermediate/7_RealNVP.html
index 1ed85231ef..de55621def 100644
--- a/dev/tutorials/intermediate/7_RealNVP.html
+++ b/dev/tutorials/intermediate/7_RealNVP.html
@@ -5,15 +5,15 @@
 Normalizing Flows for Density Estimation | Lux.jl Docs
-
+
-
+
-
-
-
+
+
+
@@ -57,7 +57,7 @@
 scatter!(ax, z[1, :], z[2, :]; markersize=2)
 fig
-end

      julia
      function load_moons_dataloader(
      +end

      julia
      function load_moons_dataloader(
               args...; batchsize::Int, noise::Union{Nothing, AbstractFloat}=nothing, kwargs...
       )
           return DataLoader(
      @@ -230,36 +230,36 @@
       end
       
       trained_model = main()
      Total Trainable Parameters: 5592
      -2025-01-20 23:06:11.275993: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 10156251217167081480
      -E0120 23:06:11.804459 3990735 buffer_comparator.cc:156] Difference at 17: 22.1601, expected 25.2177
      -E0120 23:06:11.804695 3990735 buffer_comparator.cc:156] Difference at 18: 17.8166, expected 20.6638
      -E0120 23:06:11.804699 3990735 buffer_comparator.cc:156] Difference at 24: 20.8071, expected 23.8795
      -E0120 23:06:11.804702 3990735 buffer_comparator.cc:156] Difference at 25: 19.1691, expected 23.0753
      -E0120 23:06:11.804706 3990735 buffer_comparator.cc:156] Difference at 27: 16.8353, expected 20.2124
      -E0120 23:06:11.804710 3990735 buffer_comparator.cc:156] Difference at 31: 20.6599, expected 23.7059
      -2025-01-20 23:06:11.804736: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -E0120 23:06:11.821113 3990735 buffer_comparator.cc:156] Difference at 32: 0, expected 0.569339
      -E0120 23:06:11.821139 3990735 buffer_comparator.cc:156] Difference at 33: 0, expected 0.542682
      -E0120 23:06:11.821142 3990735 buffer_comparator.cc:156] Difference at 34: 0, expected 0.348735
      -E0120 23:06:11.821145 3990735 buffer_comparator.cc:156] Difference at 35: 0, expected 0.223541
      -E0120 23:06:11.821148 3990735 buffer_comparator.cc:156] Difference at 36: 0, expected 0.474634
      -E0120 23:06:11.821151 3990735 buffer_comparator.cc:156] Difference at 37: 0, expected 0.288978
      -E0120 23:06:11.821153 3990735 buffer_comparator.cc:156] Difference at 38: 0, expected 0.205903
      -E0120 23:06:11.821156 3990735 buffer_comparator.cc:156] Difference at 39: 0, expected 0.446466
      -E0120 23:06:11.821159 3990735 buffer_comparator.cc:156] Difference at 40: 0, expected 0.524228
      -E0120 23:06:11.821161 3990735 buffer_comparator.cc:156] Difference at 41: 0, expected 0.432399
      -2025-01-20 23:06:11.821168: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      -Iter: [     1/ 10000]	Training Loss: 3.863690	Throughput: 0.738206 samples/s
      -Iter: [  1000/ 10000]	Training Loss: 0.839611	Throughput: 718.088204 samples/s
      -Iter: [  2000/ 10000]	Training Loss: 0.480875	Throughput: 1414.794825 samples/s
      -Iter: [  3000/ 10000]	Training Loss: 0.532076	Throughput: 2090.809874 samples/s
      -Iter: [  4000/ 10000]	Training Loss: 0.589929	Throughput: 2727.900330 samples/s
      -Iter: [  5000/ 10000]	Training Loss: 0.469730	Throughput: 3362.438248 samples/s
      -Iter: [  6000/ 10000]	Training Loss: 0.530390	Throughput: 3979.620796 samples/s
      -Iter: [  7000/ 10000]	Training Loss: 0.512984	Throughput: 4580.390459 samples/s
      -Iter: [  8000/ 10000]	Training Loss: 0.494450	Throughput: 5125.308281 samples/s
      -Iter: [  9000/ 10000]	Training Loss: 0.435445	Throughput: 5692.110374 samples/s
      -Iter: [ 10000/ 10000]	Training Loss: 0.536155	Throughput: 6244.824544 samples/s

      Visualizing the Results

      julia
      z_stages = Matrix{Float32}[]
      +2025-01-24 04:51:54.316887: I external/xla/xla/service/llvm_ir/llvm_command_line_options.cc:50] XLA (re)initializing LLVM with options fingerprint: 13249940196448407136
      +E0124 04:51:54.645504 1609376 buffer_comparator.cc:156] Difference at 17: 22.1601, expected 25.2177
      +E0124 04:51:54.645561 1609376 buffer_comparator.cc:156] Difference at 18: 17.8166, expected 20.6638
      +E0124 04:51:54.645564 1609376 buffer_comparator.cc:156] Difference at 24: 20.8071, expected 23.8795
      +E0124 04:51:54.645567 1609376 buffer_comparator.cc:156] Difference at 25: 19.1691, expected 23.0753
      +E0124 04:51:54.645571 1609376 buffer_comparator.cc:156] Difference at 27: 16.8353, expected 20.2124
      +E0124 04:51:54.645574 1609376 buffer_comparator.cc:156] Difference at 31: 20.6599, expected 23.7059
      +2025-01-24 04:51:54.645584: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +E0124 04:51:54.658324 1609376 buffer_comparator.cc:156] Difference at 32: 0, expected 0.569339
      +E0124 04:51:54.658368 1609376 buffer_comparator.cc:156] Difference at 33: 0, expected 0.542682
      +E0124 04:51:54.658371 1609376 buffer_comparator.cc:156] Difference at 34: 0, expected 0.348735
      +E0124 04:51:54.658374 1609376 buffer_comparator.cc:156] Difference at 35: 0, expected 0.223541
      +E0124 04:51:54.658377 1609376 buffer_comparator.cc:156] Difference at 36: 0, expected 0.474634
      +E0124 04:51:54.658380 1609376 buffer_comparator.cc:156] Difference at 37: 0, expected 0.288978
      +E0124 04:51:54.658382 1609376 buffer_comparator.cc:156] Difference at 38: 0, expected 0.205903
      +E0124 04:51:54.658385 1609376 buffer_comparator.cc:156] Difference at 39: 0, expected 0.446466
      +E0124 04:51:54.658388 1609376 buffer_comparator.cc:156] Difference at 40: 0, expected 0.524228
      +E0124 04:51:54.658390 1609376 buffer_comparator.cc:156] Difference at 41: 0, expected 0.432399
      +2025-01-24 04:51:54.658397: E external/xla/xla/service/gpu/autotuning/gemm_fusion_autotuner.cc:1080] Results do not match the reference. This is likely a bug/unexpected loss of precision.
      +Iter: [     1/ 10000]	Training Loss: 3.863690	Throughput: 0.708851 samples/s
      +Iter: [  1000/ 10000]	Training Loss: 0.839611	Throughput: 692.792474 samples/s
      +Iter: [  2000/ 10000]	Training Loss: 0.480875	Throughput: 1369.441548 samples/s
      +Iter: [  3000/ 10000]	Training Loss: 0.532076	Throughput: 2017.190574 samples/s
      +Iter: [  4000/ 10000]	Training Loss: 0.589929	Throughput: 2655.728896 samples/s
      +Iter: [  5000/ 10000]	Training Loss: 0.469730	Throughput: 3285.542704 samples/s
      +Iter: [  6000/ 10000]	Training Loss: 0.530390	Throughput: 3869.442394 samples/s
      +Iter: [  7000/ 10000]	Training Loss: 0.512984	Throughput: 4467.345266 samples/s
      +Iter: [  8000/ 10000]	Training Loss: 0.494450	Throughput: 5052.929012 samples/s
      +Iter: [  9000/ 10000]	Training Loss: 0.435445	Throughput: 5621.556140 samples/s
      +Iter: [ 10000/ 10000]	Training Loss: 0.536155	Throughput: 6129.436969 samples/s

      Visualizing the Results

      julia
      z_stages = Matrix{Float32}[]
       for i in 1:(trained_model.model.n_transforms)
           z = @jit sample(Random.default_rng(), Float32, trained_model, 10_000, i)
           push!(z_stages, Array(z))
      @@ -288,8 +288,8 @@
               println()
               AMDGPU.versioninfo()
           end
      -end
      Julia Version 1.11.2
      -Commit 5e9a32e7af2 (2024-12-01 20:02 UTC)
      +end
      Julia Version 1.11.3
      +Commit d63adeda50d (2025-01-21 19:42 UTC)
       Build Info:
         Official https://julialang.org/ release
       Platform Info:
      @@ -307,7 +307,7 @@
         JULIA_CUDA_HARD_MEMORY_LIMIT = 100%
         JULIA_PKG_PRECOMPILE_AUTO = 0
         JULIA_DEBUG = Literate

      This page was generated using Literate.jl.

-
+
\ No newline at end of file